
      Redis cluster solutions and a Redis Cluster implementation


      1: Redis high availability and clustering can each be achieved in several ways. High availability can be provided by Sentinel or by Redis master/slave replication plus keepalived, while clustering can be implemented through client-side sharding, a proxy layer, Redis Cluster, or Codis. Each approach has its own advantages and drawbacks; the options and a concrete implementation are described below:

      1.1: Client-side sharding:

      MySQL, memcached, and Redis can all be sharded on the client side (for MySQL the client can also implement database and table sharding). With client-side sharding, the client hashes each key and stores it on a different Redis server according to the hash value, and reads are routed to the same location. The advantages are flexibility and no single point of failure; the drawbacks are that adding a node requires changing the sharding algorithm and migrating data by hand. For caching, client-side sharding is best suited to memcached, because a cache can tolerate losing part of its data, and memcached can also be clustered to synchronize data. A minimal sketch of the idea follows.
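
      As a rough illustration only (not any particular library), the following bash sketch hashes a key, picks one of a hypothetical list of Redis servers by modulo, and writes to it with redis-cli; the server list and key names are assumptions:

      #!/bin/bash
      # Minimal client-side sharding sketch: hash the key, take it modulo the
      # number of servers, and send the command to the selected server.
      servers=(192.168.10.101:6381 192.168.10.101:6382 192.168.10.101:6383)

      shard_set() {
        local key=$1 value=$2
        # cksum prints "<checksum> <bytes>"; the checksum serves as a cheap hash
        local hash=$(printf '%s' "$key" | cksum | awk '{print $1}')
        local idx=$(( hash % ${#servers[@]} ))
        local host=${servers[$idx]%%:*} port=${servers[$idx]##*:}
        redis-cli -h "$host" -p "$port" SET "$key" "$value"
      }

      shard_set k1 v1   # the same key always maps to the same server
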
      1.2: Redis Cluster:

      Supported since Redis 3.0. Redis Cluster has no central node and can lose data in certain failure scenarios. It also shards data onto a particular Redis server by an algorithm, but the client no longer computes which server a key belongs to; instead each Redis server is assigned its hash-slot range in advance, for example Redis A handles slots 0-5000, Redis B handles slots 5001-10000, and Redis C handles slots 10001-16383. Redis Cluster requires a dedicated client that supports the cluster protocol, and well-established clients were still relatively scarce at the time of writing.

      Distributing the hash slots across different nodes makes it easy to add or remove nodes from the cluster. For example:
      If a new node D is added to the cluster, the cluster only needs to move some of the slots on nodes A, B and C over to node D.
      Similarly, to remove node A from the cluster, the cluster only needs to move all of node A's hash slots to nodes B and C, and then remove the now empty node A (one that no longer holds any hash slots).
      Because moving a hash slot from one node to another does not block either node, adding new nodes, removing existing nodes, or changing the number of slots a node holds never takes the cluster offline.
      Redis Cluster requires a purpose-built client: the plain redis module imported in Python, for example, cannot follow cluster redirections on its own, and there are not many official clients, so some client-side development may be needed. A quick way to see which slot, and therefore which node, a given key maps to is shown below.
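
      For example, once the cluster built later in this post is running, CLUSTER KEYSLOT reports the slot a key hashes to (CRC16(key) mod 16384) and therefore which master serves it; the key k1 here matches the write example in section 2.4.3:

      # redis-cli -c -h 192.168.10.101 -p 6381 CLUSTER KEYSLOT k1
      (integer) 12706    # slot 12706, served by 192.168.10.101:6383 in the cluster below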

      1.3: Proxy-based sharding:

      For example Twemproxy: the proxy performs the sharding between clients and servers, which is suitable for cache scenarios where some data loss is acceptable. It also supports memcached, and the hashing algorithm is configurable per proxy pool. The drawbacks are that Twemproxy itself becomes the bottleneck and it does not support data migration. Official GitHub address: https://github.com/twitter/twemproxy/ . A minimal configuration sketch follows.
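
      For reference, a Twemproxy pool is described in a small YAML file (nutcracker.yml); the sketch below is illustrative only, with an assumed pool name, listen address, and backend list:

      # nutcracker.yml (sketch): one pool proxying two Redis backends
      redis_pool:
        listen: 127.0.0.1:22121          # address that clients connect to
        hash: fnv1a_64                   # key hashing function
        distribution: ketama             # consistent hashing of keys to backends
        redis: true                      # speak the Redis protocol rather than memcached
        auto_eject_hosts: true
        server_retry_timeout: 2000
        server_failure_limit: 1
        servers:
          - 192.168.10.101:6379:1
          - 192.168.10.101:6380:1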

      1.4: Codis: Wandoujia's open-source solution and currently one of the more stable Redis clustering options:

      Official GitHub address of the Codis project: https://github.com/CodisLabs/codis

      Existing Redis deployments can be migrated to Codis seamlessly.

      Supports dynamic scale-out and scale-in.

      Completely transparent to applications; a business application does not know it is running against Codis.

      The proxy can use multiple CPU cores, whereas Twemproxy is limited to a single core.

      Codis is a centralized, proxy-based design: clients operate the proxy exactly as if it were a single Redis instance.

      Some commands are not supported, for example KEYS *.

      Supports group partitioning; within a group one master and several slaves can be configured, with Sentinel monitoring the master/slave pair and automatically promoting a slave when the master goes down.

      The number of proxy processes should be set to at most the number of CPU cores, never more.

      It is built on ZooKeeper, which stores the routing information describing which Redis host a key lives on, so ZooKeeper itself must also be made highly available.

      Monitoring is available through an API and a dashboard.

      TiDB is a distributed MySQL-compatible database built by the same team, GitHub address: https://github.com/pingcap/tidb

      2: Implementing Redis Cluster:

      2.1: Environment:

      Operating system: CentOS 7.2-1511

      Number of servers: 2, each running 8 Redis instances

      Redis version: 3.2.6

      2.2: Deploy the Redis Cluster and then add hosts to it dynamically:

      2.2.1: Download and install Redis on each server:

      # cd /opt
      # wget http://download.redis.io/releases/redis-3.2.6.tar.gz
      # tar xvf redis-3.2.6.tar.gz
      # ln -sv /opt/redis-3.2.6 /usr/local/redis
      # cd /usr/local/redis/
      # make && make install

      2.2.2: Each server needs to run 6 Redis instances for the cluster itself, giving 3 masters and 3 slaves, plus another 2 instances (one master, one slave) reserved for adding hosts to the cluster dynamically later on, so each server runs 8 Redis instances in total. Therefore prepare 8 different Redis configuration files:

      [root@redis1 redis]# pwd
      /usr/local/redis

      # mkdir   conf.d && cd conf.d

      # mkdir `seq 6381  6388`

      # cp /usr/local/redis/redis.conf  /usr/local/redis/conf.d/6381/

      # vim /usr/local/redis/conf.d/6381/redis.conf #mainly change the following settings, so that each instance keeps its own port, PID file, log file and data directory:

       

      bind  0.0.0.0
      port 6381
      daemonize yes
      pidfile /var/run/redis_6381.pid
      loglevel notice
      logfile "/usr/local/redis/conf.d/6381/6381.log"
      dir /usr/local/redis/conf.d/6381/
      maxmemory 512M
      appendonly yes
      appendfilename "6381.aof"
      appendfsync everysec
      cluster-enabled yes #cluster mode must be enabled, otherwise cluster creation will fail
      cluster-config-file  6381.conf #one cluster config file per instance, created and managed by the cluster itself; the file name must not collide between instances on the same host

      2.2.3: Batch-generate the Redis configuration files on each server:

      # cp /usr/local/redis/conf.d/6381/redis.conf  /opt/  #copy the config file to use as a template, then generate the remaining redis configs with sed
      # sed 's/6381/6382/g' /opt/redis.conf  >> /usr/local/redis/conf.d/6382/redis.conf
      # sed 's/6381/6383/g' /opt/redis.conf  >> /usr/local/redis/conf.d/6383/redis.conf
      # sed 's/6381/6384/g' /opt/redis.conf  >> /usr/local/redis/conf.d/6384/redis.conf
      # sed 's/6381/6385/g' /opt/redis.conf  >> /usr/local/redis/conf.d/6385/redis.conf
      # sed 's/6381/6386/g' /opt/redis.conf  >> /usr/local/redis/conf.d/6386/redis.conf
      # sed 's/6381/6387/g' /opt/redis.conf  >> /usr/local/redis/conf.d/6387/redis.conf
      # sed 's/6381/6388/g' /opt/redis.conf  >> /usr/local/redis/conf.d/6388/redis.conf
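
      Equivalently, the seven remaining files can be generated in a single loop; a small sketch assuming the same template in /opt/redis.conf:

      # for p in `seq 6382 6388`; do sed "s/6381/$p/g" /opt/redis.conf > /usr/local/redis/conf.d/$p/redis.conf; done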

      2.2.4: Start all the Redis instances on each server and verify that their ports are listening:

      # for i in `seq 6381 6388`;do  /usr/local/redis/src/redis-server /usr/local/redis/conf.d/$i/redis.conf;done
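
      To confirm that all eight instances are listening, a quick check (ss ships with CentOS 7):

      # ss -tnlp | egrep '638[1-8]'   # each of the ports 6381-6388 should show a redis-server process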

      2.2.5: Verify that redis-cli can connect to the Redis instances:

      # redis-cli  -h 192.168.10.101 -p 6388
      192.168.10.101:6388> 

      2.3: Install the Ruby management tool:

      2.3.1: Configure the Ruby gem source and install the Redis management gem:

       

      # yum install ruby rubygems -y
      # gem install redis 
      # gem sources -l  #the current source is the official overseas one, which is slow and often unreachable, so remove it and add a domestic (Taobao) mirror
      *** CURRENT SOURCES ***
      https://rubygems.org/
      
      # gem sources -r https://rubygems.org/  #remove the default overseas gem source
      # gem sources --add https://ruby.taobao.org/   #add the Taobao gem source
      # gem sources -l #verify that the source has been replaced
      *** CURRENT SOURCES ***
      https://ruby.taobao.org/
      # gem sources -u  #update the source cache
      # gem install redis #install the redis gem
      Fetching: redis-3.3.2.gem (100%)
      Successfully installed redis-3.3.2
      Parsing documentation for redis-3.3.2
      Installing ri documentation for redis-3.3.2
      1 gem installed

      2.3.2: Copy the Ruby management script:

      # cp /usr/local/redis/src/redis-trib.rb  /usr/local/bin/redis-trib

      2.3.3: Introduction to the redis-trib command:

      # redis-trib  help
      Usage: redis-trib <command> <options> <arguments ...>
      
        help            (show this help)
        del-node        host:port node_id #remove a node
        reshard         host:port #reshard hash slots
                        --timeout <arg>
                        --pipeline <arg>
                        --slots <arg>
                        --to <arg>
                        --yes
                        --from <arg>
        fix             host:port
                        --timeout <arg>
        create          host1:port1 ... hostN:portN #create a cluster
                        --replicas <arg>
        rebalance       host:port
                        --timeout <arg>
                        --simulate
                        --pipeline <arg>
                        --threshold <arg>
                        --use-empty-masters
                        --auto-weights
                        --weight <arg>
        call            host:port command arg arg .. arg
        add-node        new_host:new_port existing_host:existing_port #add a node
                        --slave
                        --master-id <arg>
        check           host:port #check a node
        import          host:port
                        --replace
                        --copy
                        --from <arg>
        set-timeout     host:port milliseconds
        info            host:port

      2.4: Deploy the Redis Cluster on a single machine:

      2.4.1: Create the Redis Cluster:

      # redis-trib  create --replicas 1  192.168.10.101:6381  192.168.10.101:6382 192.168.10.101:6383 192.168.10.101:6384 192.168.10.101:6385 192.168.10.101:6386
      >>> Creating cluster
      >>> Performing hash slots allocation on 6 nodes...
      Using 3 masters:
      192.168.10.101:6381
      192.168.10.101:6382
      192.168.10.101:6383
      Adding replica 192.168.10.101:6384 to 192.168.10.101:6381 #the first three nodes become masters, the last three become their slaves
      Adding replica 192.168.10.101:6385 to 192.168.10.101:6382
      Adding replica 192.168.10.101:6386 to 192.168.10.101:6383
      M: fc3b44c6d18abbf7191338a8a7fafdc516b6d758 192.168.10.101:6381 #master
         slots:0-5460 (5461 slots) master #slot range of this master
      M: b3f5ba5a3b1f53e358f438c923f9591055510b96 192.168.10.101:6382 #master
         slots:5461-10922 (5462 slots) master #slot range of this master
      M: 8fe3c52b352a1209dfc3b9f2f9fa1be9bc2e69a5 192.168.10.101:6383 #master
         slots:10923-16383 (5461 slots) master #slot range of this master
      S: b786bd3a567634a7087aa289714063ed6e53bb47 192.168.10.101:6384 #slave
         replicates fc3b44c6d18abbf7191338a8a7fafdc516b6d758
      S: 7d2da4da536003b2c25c8599526c1c937d071e1e 192.168.10.101:6385 #slave
         replicates b3f5ba5a3b1f53e358f438c923f9591055510b96
      S: cc76054bf257f9bbfec868b86880e22f04fab96e 192.168.10.101:6386 #slave; the last three nodes are replicas of the first three
         replicates 8fe3c52b352a1209dfc3b9f2f9fa1be9bc2e69a5
      Can I set the above configuration? (type 'yes' to accept): yes #type yes to continue, no to abort
      >>> Nodes configuration updated
      >>> Assign a different config epoch to each node
      >>> Sending CLUSTER MEET messages to join the cluster
      Waiting for the cluster to join......
      >>> Performing Cluster Check (using node 192.168.10.101:6381)
      M: fc3b44c6d18abbf7191338a8a7fafdc516b6d758 192.168.10.101:6381 #ID, IP and port of this Redis instance
         slots:0-5460 (5461 slots) master #its assigned slot range, 0-5460
         1 additional replica(s)
      S: cc76054bf257f9bbfec868b86880e22f04fab96e 192.168.10.101:6386 #IP and port of a slave
         slots: (0 slots) slave #slaves hold no slot ranges
         replicates 8fe3c52b352a1209dfc3b9f2f9fa1be9bc2e69a5
      S: b786bd3a567634a7087aa289714063ed6e53bb47 192.168.10.101:6384
         slots: (0 slots) slave
         replicates fc3b44c6d18abbf7191338a8a7fafdc516b6d758
      M: 8fe3c52b352a1209dfc3b9f2f9fa1be9bc2e69a5 192.168.10.101:6383
         slots:10923-16383 (5461 slots) master
         1 additional replica(s)
      M: b3f5ba5a3b1f53e358f438c923f9591055510b96 192.168.10.101:6382
         slots:5461-10922 (5462 slots) master
         1 additional replica(s)
      S: 7d2da4da536003b2c25c8599526c1c937d071e1e 192.168.10.101:6385
         slots: (0 slots) slave
         replicates b3f5ba5a3b1f53e358f438c923f9591055510b96
      [OK] All nodes agree about slots configuration. #configuration complete
      >>> Check for open slots...
      >>> Check slots coverage...
      [OK] All 16384 slots covered. #16384 slots in total
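
      The same consistency check can be repeated at any time against any node of the cluster, for example:

      # redis-trib check 192.168.10.101:6381   # re-runs the slot coverage and agreement checks shown above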

      2.4.2: Connect to the Redis Cluster:

       

      # redis-cli -c   -h 192.168.10.101  -p 6381  #connect to a master; the sections below come from its INFO output
      # Replication
      role:master
      connected_slaves:1
      slave0:ip=192.168.10.101,port=6384,state=online,offset=771,lag=0
      master_repl_offset:771
      repl_backlog_active:1
      repl_backlog_size:1048576
      repl_backlog_first_byte_offset:2
      repl_backlog_histlen:770
      
      # CPU
      used_cpu_sys:2.05
      used_cpu_user:0.13
      used_cpu_sys_children:0.00
      used_cpu_user_children:0.00
      
      # Cluster
      cluster_enabled:1 #cluster mode is enabled on this instance
      
      # redis-cli -c   -h 192.168.10.101  -p 6384  #INFO output of a slave instance
      # Replication #replication information
      role:slave #this instance's role is slave
      master_host:192.168.10.101 #IP address of its master
      master_port:6381
      master_link_status:up
      master_last_io_seconds_ago:8
      master_sync_in_progress:0
      slave_repl_offset:1051
      slave_priority:100
      slave_read_only:1
      connected_slaves:0
      master_repl_offset:0
      repl_backlog_active:0
      repl_backlog_size:1048576
      repl_backlog_first_byte_offset:0
      repl_backlog_histlen:0
      
      # Cluster
      cluster_enabled:1 #cluster mode enabled
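
      To check the role of every instance in one pass, a small loop over the six ports (illustrative):

      # for p in `seq 6381 6386`; do echo -n "$p: "; redis-cli -h 192.168.10.101 -p $p info replication | grep ^role; done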

      2.4.3: Write data:

      [root@redis1 6381]# redis-cli -c   -h 192.168.10.101  -p 6381
      192.168.10.101:6381> set k1 v1
      -> Redirected to slot [12706] located at 192.168.10.101:6383
      OK
      192.168.10.101:6383> set k2 v2
      -> Redirected to slot [449] located at 192.168.10.101:6381
      OK
      192.168.10.101:6381> set k3  v3 
      OK
      192.168.10.101:6381> set k4  v4
      -> Redirected to slot [8455] located at 192.168.10.101:6382
      OK
      192.168.10.101:6382> set k5  v5
      -> Redirected to slot [12582] located at 192.168.10.101:6383
      OK
      192.168.10.101:6383> set k6   v6
      -> Redirected to slot [325] located at 192.168.10.101:6381
      OK
      192.168.10.101:6381> set k7   v7
      OK
      192.168.10.101:6381> set k8   v8
      -> Redirected to slot [8331] located at 192.168.10.101:6382
      OK
      192.168.10.101:6382> set k9   v9 #writes follow a fixed rule: each key is redirected to the master that owns its hash slot (CRC16(key) mod 16384), so the keys spread across the three masters
      -> Redirected to slot [12458] located at 192.168.10.101:6383
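
      The redirection can also be seen from the other side: without the -c option redis-cli does not follow the redirect and instead returns the MOVED error (values corresponding to k1 from the session above):

      # redis-cli -h 192.168.10.101 -p 6381 get k1
      (error) MOVED 12706 192.168.10.101:6383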

      2.4.4: View cluster information:

      192.168.10.101:6383> CLUSTER INFO
      cluster_state:ok
      cluster_slots_assigned:16384
      cluster_slots_ok:16384
      cluster_slots_pfail:0
      cluster_slots_fail:0
      cluster_known_nodes:6
      cluster_size:3
      cluster_current_epoch:6
      cluster_my_epoch:3
      cluster_stats_messages_sent:2570
      cluster_stats_messages_received:2570

      2.4.5: View the master/slave relationships in the cluster (output of CLUSTER NODES, run here on 192.168.10.101:6383):

      cc76054bf257f9bbfec868b86880e22f04fab96e 192.168.10.101:6386 slave 8fe3c52b352a1209dfc3b9f2f9fa1be9bc2e69a5 0 1482903991630 6 connected
      b3f5ba5a3b1f53e358f438c923f9591055510b96 192.168.10.101:6382 master - 0 1482903990620 2 connected 5461-10922
      7d2da4da536003b2c25c8599526c1c937d071e1e 192.168.10.101:6385 slave b3f5ba5a3b1f53e358f438c923f9591055510b96 0 1482903989610 5 connected
      b786bd3a567634a7087aa289714063ed6e53bb47 192.168.10.101:6384 slave fc3b44c6d18abbf7191338a8a7fafdc516b6d758 0 1482903988600 4 connected
      fc3b44c6d18abbf7191338a8a7fafdc516b6d758 192.168.10.101:6381 master - 0 1482903989105 1 connected 0-5460
      8fe3c52b352a1209dfc3b9f2f9fa1be9bc2e69a5 192.168.10.101:6383 myself,master - 0 0 3 connected 10923-16383

      2.5: Dynamically add a Redis node and reshard:

      2.5.1: Add a Redis host to the cluster:

                              redis node to add (IP:port)    existing cluster master to join (IP:port)
      # redis-trib  add-node   192.168.10.101:6387         192.168.10.101:6381 
      [root@redis1 6388]# redis-trib  add-node   192.168.10.101:6387   192.168.10.101:6381
      >>> Adding node 192.168.10.101:6387 to cluster 192.168.10.101:6381
      >>> Performing Cluster Check (using node 192.168.10.101:6381)
      M: fc3b44c6d18abbf7191338a8a7fafdc516b6d758 192.168.10.101:6381
         slots:0-5460 (5461 slots) master
         1 additional replica(s)
      S: cc76054bf257f9bbfec868b86880e22f04fab96e 192.168.10.101:6386
         slots: (0 slots) slave
         replicates 8fe3c52b352a1209dfc3b9f2f9fa1be9bc2e69a5
      S: b786bd3a567634a7087aa289714063ed6e53bb47 192.168.10.101:6384
         slots: (0 slots) slave
         replicates fc3b44c6d18abbf7191338a8a7fafdc516b6d758
      M: 8fe3c52b352a1209dfc3b9f2f9fa1be9bc2e69a5 192.168.10.101:6383
         slots:10923-16383 (5461 slots) master
         1 additional replica(s)
      M: b3f5ba5a3b1f53e358f438c923f9591055510b96 192.168.10.101:6382
         slots:5461-10922 (5462 slots) master
         1 additional replica(s)
      S: 7d2da4da536003b2c25c8599526c1c937d071e1e 192.168.10.101:6385
         slots: (0 slots) slave
         replicates b3f5ba5a3b1f53e358f438c923f9591055510b96
      [OK] All nodes agree about slots configuration.
      >>> Check for open slots...
      >>> Check slots coverage...
      [OK] All 16384 slots covered.
      >>> Send CLUSTER MEET to node 192.168.10.101:6387 to make it join the cluster.
      [OK] New node added correctly.

      2.6: After adding a host, the new host must be resharded, otherwise it owns no slots, as shown below:

      2.6.1: Verify the slot assignment of the newly added host:

      192.168.10.101:6381> CLUSTER nodes
      cc76054bf257f9bbfec868b86880e22f04fab96e 192.168.10.101:6386 slave 8fe3c52b352a1209dfc3b9f2f9fa1be9bc2e69a5 0 1482905220043 6 connected
      b786bd3a567634a7087aa289714063ed6e53bb47 192.168.10.101:6384 slave fc3b44c6d18abbf7191338a8a7fafdc516b6d758 0 1482905221559 4 connected
      45df2831eaba9b2d0108a38e6def32e76b12e027 192.168.10.101:6387 master - 0 1482905220548 0 connected #this new Redis master owns no slots, so no data will be assigned to it
      fc3b44c6d18abbf7191338a8a7fafdc516b6d758 192.168.10.101:6381 myself,master - 0 0 1 connected 0-5460
      8fe3c52b352a1209dfc3b9f2f9fa1be9bc2e69a5 192.168.10.101:6383 master - 0 1482905217518 3 connected 10923-16383
      b3f5ba5a3b1f53e358f438c923f9591055510b96 192.168.10.101:6382 master - 0 1482905222568 2 connected 5461-10922
      7d2da4da536003b2c25c8599526c1c937d071e1e 192.168.10.101:6385 slave b3f5ba5a3b1f53e358f438c923f9591055510b96 0 1482905222063 5 connected

      2.6.2: Reshard slots to the newly added Redis instance:

      # redis-trib reshard 192.168.10.101:6387  #the host added to the redis cluster in the previous step
      >>> Performing Cluster Check (using node 192.168.10.101:6387)
      M: 141580829002b23dbff5a5d7609eaa5e9ec4710b 192.168.10.101:6387
         slots: (0 slots) master
         0 additional replica(s)
      S: 4a968803993de79e660193c28d034af68a750a97 192.168.10.101:6384
         slots: (0 slots) slave
         replicates a6c2425025a04039185d33997092f1738d43614c
      M: f331d11d3b1409c30c7f7473d9f9e72634b673fe 192.168.10.101:6383
         slots:10923-16383 (5461 slots) master
         1 additional replica(s)
      S: 6143ab6e475612877da739f61933873b1594e290 192.168.10.101:6386
         slots: (0 slots) slave
         replicates f331d11d3b1409c30c7f7473d9f9e72634b673fe
      S: 0ef2996f7b2e8a77eb16f7d94829b9906d5a532f 192.168.10.101:6385
         slots: (0 slots) slave
         replicates ffcba6782faa729d0e794c8a0ff3241b068b39e3
      M: ffcba6782faa729d0e794c8a0ff3241b068b39e3 192.168.10.101:6382
         slots:5461-10922 (5462 slots) master
         1 additional replica(s)
      M: a6c2425025a04039185d33997092f1738d43614c 192.168.10.101:6381
         slots:0-5460 (5461 slots) master
         1 additional replica(s)
      [OK] All nodes agree about slots configuration.
      >>> Check for open slots...
      >>> Check slots coverage...
      [OK] All 16384 slots covered.
      How many slots do you want to move (from 1 to 16384)? 4000  #how many slots to allocate to 192.168.10.101:6387
      What is the receiving node ID? 141580829002b23dbff5a5d7609eaa5e9ec4710b  #the ID of 192.168.10.101:6387
      Please enter all the source node IDs.
        Type 'all' to use all the nodes as source nodes for the hash slots.
        Type 'done' once you entered all the source nodes IDs.
      Source node #1:all #which source nodes to take slots from; 'all' selects slots automatically from every master. The same mechanism is used when removing a host from the cluster: move all of its slots to the other Redis masters first
      
      Ready to move 4000 slots.
        Source nodes:
          M: f331d11d3b1409c30c7f7473d9f9e72634b673fe 192.168.10.101:6383
         slots:10923-16383 (5461 slots) master
         1 additional replica(s)
          M: ffcba6782faa729d0e794c8a0ff3241b068b39e3 192.168.10.101:6382
         slots:5461-10922 (5462 slots) master
         1 additional replica(s)
          M: a6c2425025a04039185d33997092f1738d43614c 192.168.10.101:6381
         slots:0-5460 (5461 slots) master
         1 additional replica(s)
        Destination node:
          M: 141580829002b23dbff5a5d7609eaa5e9ec4710b 192.168.10.101:6387
         slots: (0 slots) master
         0 additional replica(s)
        Resharding plan:
          Moving slot 5461 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
          Moving slot 5462 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
          Moving slot 5463 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
          Moving slot 5464 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
          Moving slot 5465 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
          Moving slot 5466 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
          Moving slot 5467 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
          Moving slot 5468 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
          Moving slot 5469 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
          Moving slot 5470 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
          Moving slot 5471 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
          Moving slot 5472 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
          Moving slot 5473 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
          Moving slot 5474 from ffcba6782faa729d0e794c8a0ff3241b068b39e3
      	..........
      	Do you want to proceed with the proposed reshard plan (yes/no)? yes #confirm
      Moving slot 5461 from 192.168.10.101:6382 to 192.168.10.101:6387:  #slots from each existing master begin moving to the new host
      Moving slot 5462 from 192.168.10.101:6382 to 192.168.10.101:6387: 
      Moving slot 5463 from 192.168.10.101:6382 to 192.168.10.101:6387: 
      Moving slot 5464 from 192.168.10.101:6382 to 192.168.10.101:6387: 
      Moving slot 5465 from 192.168.10.101:6382 to 192.168.10.101:6387:
      ..........
      Moving slot 10973 from 192.168.10.101:6383 to 192.168.10.101:6387:
      Moving slot 10974 from 192.168.10.101:6383 to 192.168.10.101:6387:
      ..........
      Moving slot 684 from 192.168.10.101:6381 to 192.168.10.101:6387:
      Moving slot 685 from 192.168.10.101:6381 to 192.168.10.101:6387:
      ..........

      2.6.3: Check the Redis Cluster state after resharding:

      [root@redis1 ~]# redis-cli   -c -h 192.168.10.101 -p 6387
      192.168.10.101:6387> CLUSTER NODES
      4a968803993de79e660193c28d034af68a750a97 192.168.10.101:6384 slave a6c2425025a04039185d33997092f1738d43614c 0 1484327687435 1 connected
      141580829002b23dbff5a5d7609eaa5e9ec4710b 192.168.10.101:6387 myself,master - 0 0 7 connected 0-1332 5461-6794 10923-12255 #the new node now owns slots
      f331d11d3b1409c30c7f7473d9f9e72634b673fe 192.168.10.101:6383 master - 0 1484327686426 3 connected 12256-16383
      6143ab6e475612877da739f61933873b1594e290 192.168.10.101:6386 slave f331d11d3b1409c30c7f7473d9f9e72634b673fe 0 1484327684411 3 connected
      0ef2996f7b2e8a77eb16f7d94829b9906d5a532f 192.168.10.101:6385 slave ffcba6782faa729d0e794c8a0ff3241b068b39e3 0 1484327688442 2 connected
      ffcba6782faa729d0e794c8a0ff3241b068b39e3 192.168.10.101:6382 master - 0 1484327689449 2 connected 6795-10922
      a6c2425025a04039185d33997092f1738d43614c 192.168.10.101:6381 master - 0 1484327683405 1 connected 1333-5460
      192.168.10.101:6387> CLUSTER INFO
      cluster_state:ok
      cluster_slots_assigned:16384
      cluster_slots_ok:16384
      cluster_slots_pfail:0
      cluster_slots_fail:0
      cluster_known_nodes:7
      cluster_size:4
      cluster_current_epoch:7
      cluster_my_epoch:7
      cluster_stats_messages_sent:5000
      cluster_stats_messages_received:4987
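
      Existing keys stay reachable after the reshard; redis-cli -c simply follows the updated MOVED redirections. A quick sanity check, assuming the keys written in 2.4.3 are still present in this cluster:

      # redis-cli -c -h 192.168.10.101 -p 6387 get k1   # redirected to whichever master now owns slot 12706
      "v1"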

      2.6.4: Add a Redis slave for the new node:

      [root@redis1 conf.d]# redis-trib  add-node  192.168.10.101:6388  192.168.10.101:6387  #format: new_node_IP:port  existing_node_IP:port
      >>> Adding node 192.168.10.101:6388 to cluster 192.168.10.101:6387
      >>> Performing Cluster Check (using node 192.168.10.101:6387)
      M: 141580829002b23dbff5a5d7609eaa5e9ec4710b 192.168.10.101:6387
         slots:0-1332,5461-6794,10923-12255 (4000 slots) master
         0 additional replica(s)
      S: 4a968803993de79e660193c28d034af68a750a97 192.168.10.101:6384
         slots: (0 slots) slave
         replicates a6c2425025a04039185d33997092f1738d43614c
      M: f331d11d3b1409c30c7f7473d9f9e72634b673fe 192.168.10.101:6383
         slots:12256-16383 (4128 slots) master
         1 additional replica(s)
      S: 6143ab6e475612877da739f61933873b1594e290 192.168.10.101:6386
         slots: (0 slots) slave
         replicates f331d11d3b1409c30c7f7473d9f9e72634b673fe
      S: 0ef2996f7b2e8a77eb16f7d94829b9906d5a532f 192.168.10.101:6385
         slots: (0 slots) slave
         replicates ffcba6782faa729d0e794c8a0ff3241b068b39e3
      M: ffcba6782faa729d0e794c8a0ff3241b068b39e3 192.168.10.101:6382
         slots:6795-10922 (4128 slots) master
         1 additional replica(s)
      M: a6c2425025a04039185d33997092f1738d43614c 192.168.10.101:6381
         slots:1333-5460 (4128 slots) master
         1 additional replica(s)
      [OK] All nodes agree about slots configuration.
      >>> Check for open slots...
      >>> Check slots coverage...
      [OK] All 16384 slots covered.
      >>> Send CLUSTER MEET to node 192.168.10.101:6388 to make it join the cluster.
      [OK] New node added correctly.

      2.6.5: Log in to the newly added node and make it a slave:

      [root@redis1 redis]# redis-cli  -c -h 192.168.10.101 -p 6388 #log in to the newly added node
      192.168.10.101:6388> CLUSTER NODES
      4a968803993de79e660193c28d034af68a750a97 192.168.10.101:6384 slave a6c2425025a04039185d33997092f1738d43614c 0 1484328517178 1 connected
      8e40a0498f3645e812e463198e1b68ec54d5fdce 192.168.10.101:6388 myself,master - 0 0 0 connected #the new node joins as a master by default
      a6c2425025a04039185d33997092f1738d43614c 192.168.10.101:6381 master - 0 1484328518189 1 connected 1333-5460
      f331d11d3b1409c30c7f7473d9f9e72634b673fe 192.168.10.101:6383 master - 0 1484328515158 3 connected 12256-16383
      6143ab6e475612877da739f61933873b1594e290 192.168.10.101:6386 slave f331d11d3b1409c30c7f7473d9f9e72634b673fe 0 1484328519197 3 connected
      ffcba6782faa729d0e794c8a0ff3241b068b39e3 192.168.10.101:6382 master - 0 1484328516170 2 connected 6795-10922
      0ef2996f7b2e8a77eb16f7d94829b9906d5a532f 192.168.10.101:6385 slave ffcba6782faa729d0e794c8a0ff3241b068b39e3 0 1484328520207 2 connected
      141580829002b23dbff5a5d7609eaa5e9ec4710b 192.168.10.101:6387 master - 0 1484328517683 7 connected 0-1332 5461-6794 10923-12255
      192.168.10.101:6388> CLUSTER REPLICATE 141580829002b23dbff5a5d7609eaa5e9ec4710b #make it a slave; command format: CLUSTER REPLICATE MASTERID
      OK
      192.168.10.101:6388> CLUSTER NODES #check the node states again
      4a968803993de79e660193c28d034af68a750a97 192.168.10.101:6384 slave a6c2425025a04039185d33997092f1738d43614c 0 1484328550457 1 connected
      8e40a0498f3645e812e463198e1b68ec54d5fdce 192.168.10.101:6388 myself,slave 141580829002b23dbff5a5d7609eaa5e9ec4710b 0 0 0 connected #it has now become a slave
      a6c2425025a04039185d33997092f1738d43614c 192.168.10.101:6381 master - 0 1484328550964 1 connected 1333-5460
      f331d11d3b1409c30c7f7473d9f9e72634b673fe 192.168.10.101:6383 master - 0 1484328549952 3 connected 12256-16383
      6143ab6e475612877da739f61933873b1594e290 192.168.10.101:6386 slave f331d11d3b1409c30c7f7473d9f9e72634b673fe 0 1484328549449 3 connected
      ffcba6782faa729d0e794c8a0ff3241b068b39e3 192.168.10.101:6382 master - 0 1484328551467 2 connected 6795-10922
      0ef2996f7b2e8a77eb16f7d94829b9906d5a532f 192.168.10.101:6385 slave ffcba6782faa729d0e794c8a0ff3241b068b39e3 0 1484328546423 2 connected
      141580829002b23dbff5a5d7609eaa5e9ec4710b 192.168.10.101:6387 master - 0 1484328548440 7 connected 0-1332 5461-6794 10923-12255
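
      The replication link can also be confirmed from the new slave itself; an illustrative check:

      # redis-cli -h 192.168.10.101 -p 6388 info replication | egrep 'role|master_host|master_port'
      role:slave
      master_host:192.168.10.101
      master_port:6387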

      2.7: Removing nodes from the Redis Cluster, which applies to scenarios such as failed hardware or architecture changes. Suppose the two Redis servers added above need to be removed:

      2.7.1: First migrate all slots on the master node to the other master nodes:

      [root@redis1 redis]# redis-trib   reshard 192.168.10.101:6381 
      >>> Performing Cluster Check (using node 192.168.10.101:6381)
      M: a6c2425025a04039185d33997092f1738d43614c 192.168.10.101:6381
         slots:1333-5460 (4128 slots) master
         1 additional replica(s)
      S: 8e40a0498f3645e812e463198e1b68ec54d5fdce 192.168.10.101:6388
         slots: (0 slots) slave
         replicates 141580829002b23dbff5a5d7609eaa5e9ec4710b
      M: ffcba6782faa729d0e794c8a0ff3241b068b39e3 192.168.10.101:6382
         slots:6795-10922 (4128 slots) master
         1 additional replica(s)
      S: 6143ab6e475612877da739f61933873b1594e290 192.168.10.101:6386
         slots: (0 slots) slave
         replicates f331d11d3b1409c30c7f7473d9f9e72634b673fe
      M: f331d11d3b1409c30c7f7473d9f9e72634b673fe 192.168.10.101:6383
         slots:12256-16383 (4128 slots) master
         1 additional replica(s)
      S: 0ef2996f7b2e8a77eb16f7d94829b9906d5a532f 192.168.10.101:6385
         slots: (0 slots) slave
         replicates ffcba6782faa729d0e794c8a0ff3241b068b39e3
      S: 4a968803993de79e660193c28d034af68a750a97 192.168.10.101:6384
         slots: (0 slots) slave
         replicates a6c2425025a04039185d33997092f1738d43614c
      M: 141580829002b23dbff5a5d7609eaa5e9ec4710b 192.168.10.101:6387 
         slots:0-1332,5461-6794,10923-12255 (4000 slots) master
         1 additional replica(s)
      [OK] All nodes agree about slots configuration.
      >>> Check for open slots...
      >>> Check slots coverage...
      [OK] All 16384 slots covered.
      How many slots do you want to move (from 1 to 16384)? 4000 #migrate as many slots as the node being removed currently holds (shown on its M: line above; slaves hold none)
      What is the receiving node ID? a6c2425025a04039185d33997092f1738d43614c  #ID of the destination server that will receive the slots
      Please enter all the source node IDs.
        Type 'all' to use all the nodes as source nodes for the hash slots.
        Type 'done' once you entered all the source nodes IDs.
      Source node #1:141580829002b23dbff5a5d7609eaa5e9ec4710b #the server whose slots are being migrated away, i.e. the source server
      Source node #2:done
          Moving slot 12252 from 141580829002b23dbff5a5d7609eaa5e9ec4710b
          Moving slot 12253 from 141580829002b23dbff5a5d7609eaa5e9ec4710b
          Moving slot 12254 from 141580829002b23dbff5a5d7609eaa5e9ec4710b
          Moving slot 12255 from 141580829002b23dbff5a5d7609eaa5e9ec4710b
      Do you want to proceed with the proposed reshard plan (yes/no)? yes
      Moving slot 12248 from 192.168.10.101:6387 to 192.168.10.101:6381: 
      Moving slot 12249 from 192.168.10.101:6387 to 192.168.10.101:6381: 
      Moving slot 12250 from 192.168.10.101:6387 to 192.168.10.101:6381: 
      Moving slot 12251 from 192.168.10.101:6387 to 192.168.10.101:6381: 
      Moving slot 12252 from 192.168.10.101:6387 to 192.168.10.101:6381: 
      Moving slot 12253 from 192.168.10.101:6387 to 192.168.10.101:6381: 
      Moving slot 12254 from 192.168.10.101:6387 to 192.168.10.101:6381: 
      Moving slot 12255 from 192.168.10.101:6387 to 192.168.10.101:6381: 
      ..........  #migration in progress
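
      Before deleting the master it is worth confirming that it no longer holds any slots or keys; two quick checks (illustrative):

      # redis-cli -h 192.168.10.101 -p 6387 cluster nodes | grep myself   # the slot list at the end of its line should now be empty
      # redis-cli -h 192.168.10.101 -p 6387 dbsize                        # should be (integer) 0 once every slot has been migrated away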

      2.7.2: Remove the servers:

       

      [root@redis1 redis]# redis-cli  -c -h 192.168.10.101 -p 6388 #first connect to the cluster and look up the ID of the host to be removed
      192.168.10.101:6388> CLUSTER NODES
      4a968803993de79e660193c28d034af68a750a97 192.168.10.101:6384 slave a6c2425025a04039185d33997092f1738d43614c 0 1484329347313 8 connected
      8e40a0498f3645e812e463198e1b68ec54d5fdce 192.168.10.101:6388 myself,slave a6c2425025a04039185d33997092f1738d43614c 0 0 0 connected
      a6c2425025a04039185d33997092f1738d43614c 192.168.10.101:6381 master - 0 1484329344289 8 connected 0-6794 10923-12255
      f331d11d3b1409c30c7f7473d9f9e72634b673fe 192.168.10.101:6383 master - 0 1484329346305 3 connected 12256-16383
      6143ab6e475612877da739f61933873b1594e290 192.168.10.101:6386 slave f331d11d3b1409c30c7f7473d9f9e72634b673fe 0 1484329350343 3 connected
      ffcba6782faa729d0e794c8a0ff3241b068b39e3 192.168.10.101:6382 master - 0 1484329347313 2 connected 6795-10922
      0ef2996f7b2e8a77eb16f7d94829b9906d5a532f 192.168.10.101:6385 slave ffcba6782faa729d0e794c8a0ff3241b068b39e3 0 1484329348323 2 connected
      141580829002b23dbff5a5d7609eaa5e9ec4710b 192.168.10.101:6387 master - 0 1484329349333 7 connected #this is the ID and IP of the host to be removed; a master must have all of its slots migrated away first, as in the previous step, otherwise its data will be lost
      192.168.10.101:6388> 
      [root@redis1 redis]# redis-trib  del-node 192.168.10.101:6387 141580829002b23dbff5a5d7609eaa5e9ec4710b #remove the host; format: redis-trib del-node IP:PORT NODE_ID
      >>> Removing node 141580829002b23dbff5a5d7609eaa5e9ec4710b from cluster 192.168.10.101:6387
      >>> Sending CLUSTER FORGET messages to the cluster...
      >>> SHUTDOWN the node.
      [root@redis1 redis]# redis-cli  -c -h 192.168.10.101 -p 6388 #check again to confirm the removal succeeded; hosts in slave state can be removed directly, only masters need their slots migrated first
      192.168.10.101:6388> CLUSTER  nodes
      4a968803993de79e660193c28d034af68a750a97 192.168.10.101:6384 slave a6c2425025a04039185d33997092f1738d43614c 0 1484329378619 8 connected
      8e40a0498f3645e812e463198e1b68ec54d5fdce 192.168.10.101:6388 myself,slave a6c2425025a04039185d33997092f1738d43614c 0 0 0 connected
      a6c2425025a04039185d33997092f1738d43614c 192.168.10.101:6381 master - 0 1484329378113 8 connected 0-6794 10923-12255
      f331d11d3b1409c30c7f7473d9f9e72634b673fe 192.168.10.101:6383 master - 0 1484329376600 3 connected 12256-16383
      6143ab6e475612877da739f61933873b1594e290 192.168.10.101:6386 slave f331d11d3b1409c30c7f7473d9f9e72634b673fe 0 1484329375590 3 connected
      ffcba6782faa729d0e794c8a0ff3241b068b39e3 192.168.10.101:6382 master - 0 1484329377610 2 connected 6795-10922
      0ef2996f7b2e8a77eb16f7d94829b9906d5a532f 192.168.10.101:6385 slave ffcba6782faa729d0e794c8a0ff3241b068b39e3 0 1484329379627 2 connected
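
      Since the goal in 2.7 was to remove both of the newly added servers, the slave on port 6388 can now be deleted directly with the same command, using its ID from the CLUSTER NODES output above (no slot migration is needed for a slave; redis-trib shuts the node down after removing it):

      # redis-trib del-node 192.168.10.101:6388 8e40a0498f3645e812e463198e1b68ec54d5fdce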

       
