
      Installing Oracle 12c RAC on RedHat 6.6 (Part 1)

      Reference: http://www.rzrgm.cn/jyzhao/p/7494070.html

      Software environment: VMware, RedHat 6.6, Oracle 12c (linuxx64_12201_database.zip), 12c Grid (linuxx64_12201_grid_home.zip)

      I. Preparation

      Set up only the first node in the VM for now; the second node is cloned from the first and then adjusted (SID in the environment variables, network settings, and so on).

      1. Basic server configuration

      (operating system, packages, network, users, environment variables)

      1.1.1. Install the operating system on the servers

        A minimal install is enough. Disk: 35 GB; memory: 4 GB (2 GB is probably the bare minimum); swap: 8 GB.

        Disable the firewall and SELinux.

        Disable ntpd (mv /etc/ntp.conf /etc/ntp.conf_bak).

        Add four NICs: two for the public network (host-only mode, bonded) and two for the private network (carve out a VLAN, e.g. vlan1, to simulate the private network; these two NICs also serve as the dual storage paths).

      1.1.2. Check and install the RPM packages Oracle 12c needs

        Check:

      rpm -q binutils compat-libcap1 compat-libstdc++-33 gcc gcc-c++\
      e2fsprogs e2fsprogs-libs glibc glibc-devel ksh libaio-devel libaio libgcc libstdc++ libstdc++-devel \
      libxcb libX11 libXau libXi libXtst make \
      net-tools nfs-utils smartmontools sysstat
      # These cover the basics; if the installer's prerequisite check flags any other packages, install those as well.
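The check can be narrowed down to only the missing packages with a grep (an illustrative one-liner, not from the original; rpm exits non-zero when anything is absent, so the trailing || true keeps scripts going):

```shell
# Print only the packages that rpm reports as missing.
rpm -q binutils compat-libcap1 compat-libstdc++-33 gcc gcc-c++ \
    e2fsprogs e2fsprogs-libs glibc glibc-devel ksh libaio-devel libaio \
    libgcc libstdc++ libstdc++-devel libxcb libX11 libXau libXi libXtst \
    make net-tools nfs-utils smartmontools sysstat 2>/dev/null \
  | grep "is not installed" || true
```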

        Install whatever the check reports as missing (attach the install ISO in VMware and configure a local yum repository):

      [root@jydb1 ~]# mount /dev/cdrom /mnt
      [root@jydb1 ~]# cat /etc/yum.repos.d/rhel-source.repo
      [ISO]
      name=iso
      baseurl=file:///mnt
      enabled=1
      gpgcheck=0

        Install with yum:

      yum install binutils compat-libcap1 compat-libstdc++-33 \
      e2fsprogs e2fsprogs-libs glibc glibc-devel ksh libaio-devel libaio libgcc libstdc++ libstdc++-devel \
      libxcb libX11 libXau libXi libXtst make \
      net-tools nfs-utils smartmontools sysstat

        Also install the cvuqdisk package (required by the RAC/Grid prerequisite checks; it ships with the Grid installation media):

      rpm -qi cvuqdisk
      CVUQDISK_GRP=oinstall; export CVUQDISK_GRP        # the oinstall group must exist before installing; it is created in a later step, so run this afterwards
      rpm -iv cvuqdisk-1.0.10-1.rpm

      1.1.3. Configure /etc/hosts on each node

      [root@jydb1 ~]# cat /etc/hosts
      127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 jydb1.rac
      ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6 jydb1.rac
      
      #eth0 public
      192.168.137.11  jydb1
      192.168.137.12  jydb2
      
      #eth0 vip                                              
      192.168.137.21  jydb1-vip 
      192.168.137.22  jydb2-vip 
      
      #eth1 private                                             
      10.0.0.1   jydb1-priv
      10.0.0.2   jydb2-priv
      10.0.0.11  jydb1-priv2
      10.0.0.22  jydb2-priv2
      
      #scan ip
      192.168.137.137 jydb-cluster-scan
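Before the installer runs, a quick loop can confirm that every name in the file resolves (a hypothetical helper; the hostname list mirrors the /etc/hosts above):

```shell
# Every RAC name should resolve locally; getent consults /etc/hosts.
for h in jydb1 jydb2 jydb1-vip jydb2-vip \
         jydb1-priv jydb2-priv jydb1-priv2 jydb2-priv2 jydb-cluster-scan; do
    if getent hosts "$h" >/dev/null; then
        echo "$h resolves"
    else
        echo "$h MISSING from /etc/hosts"
    fi
done
```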

       

      1.1.4. Create the required users and groups on each node

      Create groups & users:

      groupadd -g 54321 oinstall  
      groupadd -g 54322 dba  
      groupadd -g 54323 oper  
      groupadd -g 54324 backupdba  
      groupadd -g 54325 dgdba  
      groupadd -g 54326 kmdba  
      groupadd -g 54327 asmdba  
      groupadd -g 54328 asmoper  
      groupadd -g 54329 asmadmin  
      groupadd -g 54330 racdba  
        
      useradd -u 54321 -g oinstall -G dba,asmdba,backupdba,dgdba,kmdba,racdba,oper oracle  
      useradd -u 54322 -g oinstall -G asmadmin,asmdba,asmoper,dba grid 

      Set the oracle and grid passwords yourself.
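A quick way to confirm the memberships match the plan (a verification sketch, not part of the original steps):

```shell
# id prints the uid, primary group, and supplementary groups; compare
# against the groupadd/useradd commands above. Both users should show
# gid=54321(oinstall); oracle should also be in dba, oper, backupdba,
# dgdba, kmdba, asmdba and racdba; grid in asmadmin, asmdba, asmoper, dba.
id oracle
id grid
```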

      1.1.5. Create the installation directories on each node (as root)

      mkdir -p /u01/app/12.2.0/grid
      mkdir -p /u01/app/grid
      mkdir -p /u01/app/oracle
      chown -R grid:oinstall /u01
      chown oracle:oinstall /u01/app/oracle
      chmod -R 775 /u01/

       

      1.1.6. Modify configuration files on each node

      Kernel parameters: vi /etc/sysctl.conf

      # Append the following to /etc/sysctl.conf:
      fs.file-max = 6815744  
      kernel.sem = 250 32000 100 128  
      kernel.shmmni = 4096  
      kernel.shmall = 1073741824  
      kernel.shmmax = 6597069766656
      kernel.panic_on_oops = 1  
      net.core.rmem_default = 262144  
      net.core.rmem_max = 4194304  
      net.core.wmem_default = 262144  
      net.core.wmem_max = 1048576  
      #net.ipv4.conf.eth3.rp_filter = 2       
      #net.ipv4.conf.eth2.rp_filter = 2
      #net.ipv4.conf.eth0.rp_filter = 1  
      fs.aio-max-nr = 1048576  
      net.ipv4.ip_local_port_range = 9000 65500 

       

      Apply the changes: sysctl -p
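The live values can then be spot-checked against the file (an illustrative loop; pick any of the parameters set above):

```shell
# sysctl -n prints just the value; each should match sysctl.conf.
for p in fs.file-max kernel.shmmax fs.aio-max-nr \
         net.ipv4.ip_local_port_range; do
    printf '%s = %s\n' "$p" "$(sysctl -n "$p")"
done
```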

      User shell limits: vi /etc/security/limits.conf

      # Append the following to /etc/security/limits.conf:
      grid soft nproc 2047
      grid hard nproc 16384
      grid soft nofile 1024
      grid hard nofile 65536
      grid soft stack 10240
      oracle soft nproc 2047
      oracle hard nproc 16384
      oracle soft nofile 1024
      oracle hard nofile 65536
      oracle soft stack 10240

      Load the pam_limits.so pluggable authentication module: vi /etc/pam.d/login

      # Add the following to /etc/pam.d/login:
      session required pam_limits.so
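After the next login, the limits can be verified per user (a sketch; ulimit reports the current shell's own limits, so run it as the user in question):

```shell
# -n open files, -u max user processes, -s stack size (KB);
# repeat with oracle in place of grid.
su - grid -c 'ulimit -n; ulimit -u; ulimit -s'
```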

       

      1.1.7. Configure user environment variables on each node

      [root@jydb1 ~]# cat /home/grid/.bash_profile

      export ORACLE_SID=+ASM1;
      export ORACLE_HOME=/u01/app/12.2.0/grid; 
      export PATH=$ORACLE_HOME/bin:$PATH;
      export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; 
      export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib 
      export DISPLAY=192.168.88.121:0.0

       

      [root@jydb1 ~]# cat /home/oracle/.bash_profile

      export ORACLE_SID=racdb1; 
      export ORACLE_HOME=/u01/app/oracle/product/12.2.0/db_1;       
      export ORACLE_HOSTNAME=jydb1;
      export PATH=$ORACLE_HOME/bin:$PATH; 
      export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; 
      export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib;
      export DISPLAY=192.168.88.121:0.0

       

      Once the steps above are complete, clone node 2; after cloning, adjust the second node's environment variables.

      1.1.8. Configure SSH mutual trust between the nodes

      After cloning the second node and confirming its network works:

      The grid user is shown here; set up the same trust for the oracle user:

      ① First, generate the grid public key on node 1
      [grid@jydb1 ~]$ ssh-keygen -t rsa -P ''    
      Generating public/private rsa key pair.
      Enter file in which to save the key (/home/grid/.ssh/id_rsa): 
      Your identification has been saved in /home/grid/.ssh/id_rsa.
      Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
      The key fingerprint is:
      b6:07:65:3f:a2:e8:75:14:33:26:c0:de:47:73:5b:95 grid@jydb1.rac
      The key's randomart image is:
      +--[ RSA 2048]----+
      |     ..        .o|
      |      ..  o . .E |
      |     . ...Bo o   |
      |      . .=.=.    |
      |        S.o o    |
      |       o = . .   |
      |      . + o      |
      |     . . o       |
      |      .          |
      +-----------------+
      Copy it over to node 2:
      [grid@jydb1 ~]$ ssh-copy-id -i .ssh/id_rsa.pub grid@10.0.0.2
      grid@10.0.0.2's password: 
      Now try logging into the machine, with "ssh 'grid@10.0.0.2'", and check in:
      
        .ssh/authorized_keys
      
      to make sure we haven't added extra keys that you weren't expecting.
      
      ② Generate a key pair on node 2 as well, and append its public key to authorized_keys
      [grid@jydb2 .ssh]$ ssh-keygen -t rsa -P ''
      ......
      [grid@jydb2 .ssh]$ cat id_rsa.pub >> authorized_keys
      [grid@jydb2 .ssh]$ scp authorized_keys grid@10.0.0.1:.ssh/
      The authenticity of host '10.0.0.1 (10.0.0.1)' can't be established.
      RSA key fingerprint is d1:21:03:35:9d:f2:a2:81:e7:e1:7b:d0:79:f4:d3:be.
      Are you sure you want to continue connecting (yes/no)? yes
      Warning: Permanently added '10.0.0.1' (RSA) to the list of known hosts.
      grid@10.0.0.1's password: 
      authorized_keys                                                                                                            100%  792     0.8KB/s   00:00
      
      ③ Verify
      [grid@jydb1 .ssh]$ ssh jydb1 date
      Fri Mar 30 08:01:20 CST 2018
      [grid@jydb1 .ssh]$ ssh jydb2 date
      Fri Mar 30 08:01:20 CST 2018
      [grid@jydb1 .ssh]$ ssh jydb1-priv date
      Fri Mar 30 08:01:20 CST 2018
      [grid@jydb2 .ssh]$ ssh jydb2-priv date
      Fri Mar 30 08:01:20 CST 2018

      On jydb2, only the node-specific values need to be changed.
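The four checks above can be wrapped in one loop per user (a convenience sketch; BatchMode makes any lingering password prompt fail immediately instead of hanging):

```shell
# Run as grid and again as oracle, on each node; every line should
# print a date with no password prompt.
for h in jydb1 jydb2 jydb1-priv jydb2-priv; do
    ssh -o BatchMode=yes -o ConnectTimeout=5 "$h" date \
        || echo "no passwordless trust to $h"
done
```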

      2. Shared storage configuration

        Add another server to simulate the storage array: configure two private addresses so the RAC clients connect over multipath, then carve up and configure the disks.

        Goal: present six shared LUNs that both hosts can see at the same time: three 1 GB disks for OCR and voting disks, one 40 GB disk for the GIMR, and the remaining disks planned for DATA and FRA.

        Note: since this is a lab environment, the focus is on what each disk is for; in production, plan DATA much larger.

       Add a 63 GB disk to the storage server.

      // LV layout (carved out in 1.2.3)
      asmdisk1         1G
      asmdisk2         1G
      asmdisk3         1G
      asmdisk4         40G
      asmdisk5         10G
      asmdisk6         10G

       

      1.2.1. Check the storage network

        The RAC nodes are the storage clients.

        In VMware, create vlan1 and put the two private NICs of each RAC node plus the storage server's two NICs into it, so the clients can reach the storage over multipath.

        Storage (server side): 10.0.0.111, 10.0.0.222

        rac-jydb1 (client): 10.0.0.1, 10.0.0.2

        rac-jydb2 (client): 10.0.0.11, 10.0.0.22

        Finally, verify the network connectivity and move on once it is fine.

      1.2.2. Install the iSCSI packages

        -- Server side
        Install scsi-target-utils with yum:

      yum install scsi-target-utils

        -- Client side
        Install iscsi-initiator-utils with yum:

      yum install iscsi-initiator-utils

      1.2.3. Simulate adding a disk to the storage

        -- On the storage server

      Add a 63 GB disk; this simulates the array gaining a real new disk.
      The new disk shows up here as /dev/sdb; put it under LVM:

      # pvcreate /dev/sdb
      Physical volume "/dev/sdb" successfully created
      
      # vgcreate vg_storage /dev/sdb
        Volume group "vg_storage" successfully created
      
      # lvcreate -L 10g -n lv_lun1 vg_storage     // size each LV according to the layout planned earlier
        Logical volume "lv_lun1" created
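The command above creates only lv_lun1; the remaining LVs can be created the same way in one loop (sizes follow the layout planned in section 2; the 10g above was just the example):

```shell
# name:size pairs taken from the planned asmdisk1-6 layout.
for spec in lv_lun1:1g lv_lun2:1g lv_lun3:1g \
            lv_lun4:40g lv_lun5:10g lv_lun6:10g; do
    name=${spec%%:*}   # part before the colon
    size=${spec#*:}    # part after the colon
    lvcreate -L "$size" -n "$name" vg_storage
done
```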

      1.2.4. Configure the iSCSI target (server side)

        The main iSCSI target configuration file is /etc/tgt/targets.conf.

        Following that naming convention, I added the configuration below:

      <target iqn.2018-03.com.cnblogs.test:alfreddisk>
          backing-store /dev/vg_storage/lv_lun1 # Becomes LUN 1
          backing-store /dev/vg_storage/lv_lun2 # Becomes LUN 2
          backing-store /dev/vg_storage/lv_lun3 # Becomes LUN 3
          backing-store /dev/vg_storage/lv_lun4 # Becomes LUN 4
          backing-store /dev/vg_storage/lv_lun5 # Becomes LUN 5
          backing-store /dev/vg_storage/lv_lun6 # Becomes LUN 6
      </target>

       

        配置完成后,就啟動服務和設置開機自啟動:

      [root@Storage ~]# service tgtd start
      Starting SCSI target daemon: [  OK  ]
      [root@Storage ~]# chkconfig tgtd on
      [root@Storage ~]# chkconfig --list|grep tgtd
      tgtd            0:off   1:off   2:on    3:on    4:on    5:on    6:off
      [root@Storage ~]# service tgtd status
      tgtd (pid 1763 1760) is running...

        然后查詢下相關的信息,比如占用的端口、LUN信息(Type:disk):

      [root@Storage ~]# netstat -tlunp |grep tgt
      tcp        0      0 0.0.0.0:3260                0.0.0.0:*                   LISTEN      1760/tgtd           
      tcp        0      0 :::3260                     :::*                        LISTEN      1760/tgtd           
      
      [root@Storage ~]# tgt-admin --show
      Target 1: iqn.2018-03.com.cnblogs.test:alfreddisk
          System information:
              Driver: iscsi
              State: ready
          I_T nexus information:
          LUN information:
              LUN: 0
                  Type: controller
                  SCSI ID: IET     00010000
                  SCSI SN: beaf10
                  Size: 0 MB, Block size: 1
                  Online: Yes
                  Removable media: No
                  Prevent removal: No
                  Readonly: No
                  Backing store type: null
                  Backing store path: None
                  Backing store flags: 
              LUN: 1
                  Type: disk
                  SCSI ID: IET     00010001
                  SCSI SN: beaf11
                  Size: 10737 MB, Block size: 512
                  Online: Yes
                  Removable media: No
                  Prevent removal: No
                  Readonly: No
                  Backing store type: rdwr
                  Backing store path: /dev/vg_storage/lv_lun1
                  Backing store flags: 
          Account information:
          ACL information:
              ALL

      1.2.5. Configure the iSCSI initiator (client side)

      Confirm the services are enabled at boot:

      #  chkconfig --list|grep scsi
      iscsi           0:off   1:off   2:off   3:on    4:on    5:on    6:off
      iscsid          0:off   1:off   2:off   3:on    4:on    5:on    6:off

      Scan the server's LUNs with iscsiadm (discover the iSCSI target):

        iscsiadm -m discovery -t sendtargets -p 10.0.1.99

      [root@jydb1 ~]# iscsiadm -m discovery -t sendtargets -p 10.0.1.99
      10.0.1.99:3260,1 iqn.2018-03.com.cnblogs.test:alfreddisk
      [root@jydb1 ~]# iscsiadm -m discovery -t sendtargets -p 10.0.2.99
      10.0.2.99:3260,1 iqn.2018-03.com.cnblogs.test:alfreddisk

      Check with iscsiadm -m node:

       [root@jydb1 ~]# iscsiadm -m node
       10.0.1.99:3260,1 iqn.2018-03.com.cnblogs.test:alfreddisk
       10.0.2.99:3260,1 iqn.2018-03.com.cnblogs.test:alfreddisk

         Look at the files under /var/lib/iscsi/nodes/:

      [root@jydb1 ~]# ll -R /var/lib/iscsi/nodes/
      /var/lib/iscsi/nodes/:
      total 4
      drw------- 4 root root 4096 Mar 29 00:59 iqn.2018-03.com.cnblogs.test:alfreddisk
      
      /var/lib/iscsi/nodes/iqn.2018-03.com.cnblogs.test:alfreddisk:
      total 8
      drw------- 2 root root 4096 Mar 29 00:59 10.0.1.99,3260,1
      drw------- 2 root root 4096 Mar 29 00:59 10.0.2.99,3260,1
      
      /var/lib/iscsi/nodes/iqn.2018-03.com.cnblogs.test:alfreddisk/10.0.1.99,3260,1:
      total 4
      -rw------- 1 root root 2049 Mar 29 00:59 default
      
      /var/lib/iscsi/nodes/iqn.2018-03.com.cnblogs.test:alfreddisk/10.0.2.99,3260,1:
      total 4
      -rw------- 1 root root 2049 Mar 29 00:59 default

      Log in to the iSCSI disks

        Based on the discovery results above, run the following to attach the shared disks:

      iscsiadm -m node -T iqn.2018-03.com.cnblogs.test:alfreddisk --login

      [root@jydb1 ~]# iscsiadm -m node  -T iqn.2018-03.com.cnblogs.test:alfreddisk --login
      Logging in to [iface: default, target: iqn.2018-03.com.cnblogs.test:alfreddisk, portal: 10.0.2.99,3260] (multiple)
      Logging in to [iface: default, target: iqn.2018-03.com.cnblogs.test:alfreddisk, portal: 10.0.1.99,3260] (multiple)
      Login to [iface: default, target: iqn.2018-03.com.cnblogs.test:alfreddisk, portal: 10.0.2.99,3260] successful.
      Login to [iface: default, target: iqn.2018-03.com.cnblogs.test:alfreddisk, portal: 10.0.1.99,3260] successful.
      Both logins succeeded.
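If the node records are not already set to automatic startup, they can be updated so the initiator logs back in at boot (standard iscsiadm usage, shown as a sketch; adjust the target name if yours differs):

```shell
# node.startup=automatic makes the iscsi service re-login at boot.
iscsiadm -m node -T iqn.2018-03.com.cnblogs.test:alfreddisk \
    --op update -n node.startup -v automatic
# Show the node records to confirm the setting took:
iscsiadm -m node -T iqn.2018-03.com.cnblogs.test:alfreddisk \
    | grep -i "node.startup" || true
```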

       

      Check the attached iSCSI disks with fdisk -l or lsblk:

      [root@jydb1 ~]# lsblk 
      NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
      sda      8:0    0   35G  0 disk 
      ├─sda1   8:1    0  200M  0 part /boot
      ├─sda2   8:2    0  7.8G  0 part [SWAP]
      └─sda3   8:3    0   27G  0 part /
      sr0     11:0    1  3.5G  0 rom  /mnt
      sdb      8:16   0    1G  0 disk 
      sdc      8:32   0    1G  0 disk 
      sdd      8:48   0    1G  0 disk 
      sde      8:64   0    1G  0 disk 
      sdf      8:80   0    1G  0 disk 
      sdg      8:96   0    1G  0 disk 
      sdi      8:128  0   40G  0 disk 
      sdk      8:160  0   10G  0 disk 
      sdm      8:192  0   10G  0 disk 
      sdj      8:144  0   10G  0 disk 
      sdh      8:112  0   40G  0 disk 
      sdl      8:176  0   10G  0 disk 

      1.2.6. Configure multipath

      Install the multipath package:

      rpm -qa |grep device-mapper-multipath
      # If nothing is found, install with yum:
      yum install -y device-mapper-multipath
      # or download and install these two rpms:
      device-mapper-multipath-libs-0.4.9-72.el6.x86_64.rpm
      device-mapper-multipath-0.4.9-72.el6.x86_64.rpm

       

      Enable it at boot:

      chkconfig multipathd on

       

      Generate the multipath configuration file:

      # Generate the multipath configuration file
      /sbin/mpathconf --enable
      
      # Show the multipath topology
      multipath -ll
      
      # Rescan the paths
      multipath -v2      # or -v3
      
      # Flush all multipath maps
      multipath -F

       

      Sample output, for reference:

       

      [root@jydb1 ~]# multipath -v3

       

      [root@jydb1 ~]# multipath -ll
      asmdisk6 (1IET     00010006) dm-5 IET,VIRTUAL-DISK           // the string in parentheses is the WWID
      size=10.0G features='0' hwhandler='0' wp=rw
      |-+- policy='round-robin 0' prio=1 status=active
      | `- 33:0:0:6 sdj 8:144 active ready running
      `-+- policy='round-robin 0' prio=1 status=enabled
        `- 34:0:0:6 sdm 8:192 active ready running
      asmdisk5 (1IET     00010005) dm-2 IET,VIRTUAL-DISK
      size=10G features='0' hwhandler='0' wp=rw
      |-+- policy='round-robin 0' prio=1 status=active
      | `- 33:0:0:5 sdh 8:112 active ready running
      `-+- policy='round-robin 0' prio=1 status=enabled
        `- 34:0:0:5 sdl 8:176 active ready running
      asmdisk4 (1IET     00010004) dm-4 IET,VIRTUAL-DISK
      size=40G features='0' hwhandler='0' wp=rw
      |-+- policy='round-robin 0' prio=1 status=active
      | `- 33:0:0:4 sdf 8:80  active ready running
      `-+- policy='round-robin 0' prio=1 status=enabled
        `- 34:0:0:4 sdk 8:160 active ready running
      asmdisk3 (1IET     00010003) dm-3 IET,VIRTUAL-DISK
      size=1.0G features='0' hwhandler='0' wp=rw
      |-+- policy='round-robin 0' prio=1 status=active
      | `- 33:0:0:3 sdd 8:48  active ready running
      `-+- policy='round-robin 0' prio=1 status=enabled
        `- 34:0:0:3 sdi 8:128 active ready running
      asmdisk2 (1IET     00010002) dm-1 IET,VIRTUAL-DISK
      size=1.0G features='0' hwhandler='0' wp=rw
      |-+- policy='round-robin 0' prio=1 status=active
      | `- 33:0:0:2 sdc 8:32  active ready running
      `-+- policy='round-robin 0' prio=1 status=enabled
        `- 34:0:0:2 sdg 8:96  active ready running
      asmdisk1 (1IET     00010001) dm-0 IET,VIRTUAL-DISK
      size=1.0G features='0' hwhandler='0' wp=rw
      |-+- policy='round-robin 0' prio=1 status=active
      | `- 33:0:0:1 sdb 8:16  active ready running
      `-+- policy='round-robin 0' prio=1 status=enabled
        `- 34:0:0:1 sde 8:64  active ready running

       

      Start the multipathd service:

      #service multipathd start

       

      Configure multipath:

      First change:
      # user_friendly_names should be set to no: the WWID is then used as the multipath alias. With yes, the
      # system instead assigns mpathn names recorded under /etc/multipath/.
      #
      # With user_friendly_names yes, a device name is unique on one node but is not guaranteed to be the
      # same on every node that uses the device: mpath1 on node 1 and mpath1 on node 2 may be different
      # LUNs, while a given LUN's WWID is identical on all servers. So set it to no and alias by WWID.
       
      defaults {
              user_friendly_names no
              path_grouping_policy failover                // failover = active/passive; multibus = active/active
      }
       
      Second addition: bind each WWID (as shown by multipath -ll) to an alias:
      multipaths {
             multipath {
                     wwid                      "1IET     00010001"
                     alias                     asmdisk1
             }
             multipath {
                     wwid                      "1IET     00010002"
                     alias                     asmdisk2
             }
             multipath {
                     wwid                      "1IET     00010003"
                     alias                     asmdisk3
             }
             multipath {
                     wwid                      "1IET     00010004"
                     alias                     asmdisk4
             }
             multipath {
                     wwid                      "1IET     00010005"
                     alias                     asmdisk5
             }
             multipath {
                     wwid                      "1IET     00010006"
                     alias                     asmdisk6
             }
      }

       

        Restart multipathd for the configuration to take effect.

      After binding, check the multipath aliases:

      [root@jydb1 ~]# cd /dev/mapper/
      [root@jydb1 mapper]# ls
      asmdisk1  asmdisk2  asmdisk3  asmdisk4  asmdisk5  asmdisk6  control
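Each alias can also be traced back to its member paths (a quick sanity check, not from the original; device names will differ per system):

```shell
# For every alias, show its multipath map; each map should list one
# active and one enabled path over the two storage NICs.
for d in /dev/mapper/asmdisk*; do
    name=$(basename "$d")
    echo "== $name"
    multipath -ll "$name"
done
```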

       

      Bind raw devices with udev

      Do the udev permission binding first; otherwise the ownership will be wrong and the installer will not see the shared disks.

        Before the change:

      [root@jydb1 ~]# ls -lh /dev/dm*
      brw-rw---- 1 root disk  253, 0 Apr  2 16:18 /dev/dm-0
      brw-rw---- 1 root disk  253, 1 Apr  2 16:18 /dev/dm-1
      brw-rw---- 1 root disk  253, 2 Apr  2 16:18 /dev/dm-2
      brw-rw---- 1 root disk  253, 3 Apr  2 16:18 /dev/dm-3
      brw-rw---- 1 root disk  253, 4 Apr  2 16:18 /dev/dm-4
      brw-rw---- 1 root disk  253, 5 Apr  2 16:18 /dev/dm-5
      crw-rw---- 1 root audio  14, 9 Apr  2 16:18 /dev/dmmidi

       

        This system is RHEL 6.6; if the multipath device permissions are changed by hand they revert to root after a few seconds, so udev must pin them.
        Find the rule template:

      [root@jyrac1 ~]# find / -name 12-*
      /usr/share/doc/device-mapper-1.02.79/12-dm-permissions.rules

       

        Based on the template, create a 12-dm-permissions.rules file under /etc/udev/rules.d/:

      vi /etc/udev/rules.d/12-dm-permissions.rules
      # MULTIPATH DEVICES
      #
      # Set permissions for all multipath devices
      ENV{DM_UUID}=="mpath-?*", OWNER:="grid", GROUP:="asmadmin", MODE:="660"          // change this line
      
      # Set permissions for first two partitions created on a multipath device (and detected by kpartx)
      # ENV{DM_UUID}=="part[1-2]-mpath-?*", OWNER:="root", GROUP:="root", MODE:="660"

       

        完成后啟動start_udev,30s后權限正常則OK

      [root@jydb1 ~]# start_udev 
      Starting udev: [ OK ]
      [root@jydb1 ~]# ls -lh /dev/dm*
      brw-rw---- 1 grid asmadmin 253, 0 Apr  2 16:25 /dev/dm-0
      brw-rw---- 1 grid asmadmin 253, 1 Apr  2 16:25 /dev/dm-1
      brw-rw---- 1 grid asmadmin 253, 2 Apr  2 16:25 /dev/dm-2
      brw-rw---- 1 grid asmadmin 253, 3 Apr  2 16:25 /dev/dm-3
      brw-rw---- 1 grid asmadmin 253, 4 Apr  2 16:25 /dev/dm-4
      brw-rw---- 1 grid asmadmin 253, 5 Apr  2 16:25 /dev/dm-5
      crw-rw---- 1 root audio     14, 9 Apr  2 16:24 /dev/dmmidi

       

      Bind the disk devices

        Look up the raw devices' major and minor numbers:

      [root@jydb1 ~]# ls -lt /dev/dm-*
      brw-rw---- 1 grid asmadmin 253, 5 Mar 29 04:00 /dev/dm-5
      brw-rw---- 1 grid asmadmin 253, 3 Mar 29 04:00 /dev/dm-3
      brw-rw---- 1 grid asmadmin 253, 2 Mar 29 04:00 /dev/dm-2
      brw-rw---- 1 grid asmadmin 253, 4 Mar 29 04:00 /dev/dm-4
      brw-rw---- 1 grid asmadmin 253, 1 Mar 29 04:00 /dev/dm-1
      brw-rw---- 1 grid asmadmin 253, 0 Mar 29 04:00 /dev/dm-0
      
      
      [root@jydb1 ~]# dmsetup ls|sort
      asmdisk1        (253:0)
      asmdisk2        (253:1)
      asmdisk3        (253:3)
      asmdisk4        (253:4)
      asmdisk5        (253:2)
      asmdisk6        (253:5)
      
      Bind the raw devices according to the mapping above:
      vi  /etc/udev/rules.d/60-raw.rules
      # Enter raw device bindings here.
      #
      # An example would be:
      #   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
      # to bind /dev/raw/raw1 to /dev/sda, or
      #   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
      # to bind /dev/raw/raw2 to the device with major 8, minor 1.
      ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="0", RUN+="/bin/raw /dev/raw/raw1 %M %m"
      ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
      ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="2", RUN+="/bin/raw /dev/raw/raw3 %M %m"
      ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="3", RUN+="/bin/raw /dev/raw/raw4 %M %m"
      ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="4", RUN+="/bin/raw /dev/raw/raw5 %M %m"
      ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="5", RUN+="/bin/raw /dev/raw/raw6 %M %m"
      
      
      ACTION=="add", KERNEL=="raw1", OWNER="grid", GROUP="asmadmin", MODE="660"
      ACTION=="add", KERNEL=="raw2", OWNER="grid", GROUP="asmadmin", MODE="660"
      ACTION=="add", KERNEL=="raw3", OWNER="grid", GROUP="asmadmin", MODE="660"
      ACTION=="add", KERNEL=="raw4", OWNER="grid", GROUP="asmadmin", MODE="660"
      ACTION=="add", KERNEL=="raw5", OWNER="grid", GROUP="asmadmin", MODE="660"
      ACTION=="add", KERNEL=="raw6", OWNER="grid", GROUP="asmadmin", MODE="660"

       完成后查看

      [root@jydb1 ~]# start_udev
      Starting udev: [ OK ]
      [root@jydb1 ~]# ll /dev/raw/raw*
      crw-rw---- 1 grid asmadmin 162, 1 May 25 05:03 /dev/raw/raw1
      crw-rw---- 1 grid asmadmin 162, 2 May 25 05:03 /dev/raw/raw2
      crw-rw---- 1 grid asmadmin 162, 3 May 25 05:03 /dev/raw/raw3
      crw-rw---- 1 grid asmadmin 162, 4 May 25 05:03 /dev/raw/raw4
      crw-rw---- 1 grid asmadmin 162, 5 May 25 05:03 /dev/raw/raw5
      crw-rw---- 1 grid asmadmin 162, 6 May 25 05:03 /dev/raw/raw6
      crw-rw---- 1 root disk 162, 0 May 25 05:03 /dev/raw/rawctl

       

      posted @ 2018-06-24 19:13  abm  views (1643)  comments (0)