
      Setting up ClickHouse: standalone and cluster deployments

       

      Official documentation: https://clickhouse.com/docs/zh/install/redhat

      The version deployed here is 25.7.8.71; GitHub download page: https://github.com/ClickHouse/ClickHouse/releases

      Part 1: Standalone deployment

      Deploy from the tgz packages.

      mkdir -p /opt/clickhouse && cd /opt/clickhouse

      1. Download the clickhouse-common-static package, then extract it and run the install script

      wget https://github.com/ClickHouse/ClickHouse/releases/download/v25.7.8.71-stable/clickhouse-common-static-25.7.8.71-amd64.tgz
      tar zxvf clickhouse-common-static-25.7.8.71-amd64.tgz
      clickhouse-common-static-25.7.8.71/install/doinst.sh

      2. Download the clickhouse-common-static-dbg package, then extract it and run the install script

      wget https://github.com/ClickHouse/ClickHouse/releases/download/v25.7.8.71-stable/clickhouse-common-static-dbg-25.7.8.71-amd64.tgz
      tar zxvf clickhouse-common-static-dbg-25.7.8.71-amd64.tgz
      clickhouse-common-static-dbg-25.7.8.71/install/doinst.sh

      3. Download the clickhouse-server package, then extract it and run the install script. You will be prompted to set a password for the default user; here the password sykj1234 is used.

      wget https://github.com/ClickHouse/ClickHouse/releases/download/v25.7.8.71-stable/clickhouse-server-25.7.8.71-amd64.tgz
      tar zxvf clickhouse-server-25.7.8.71-amd64.tgz
      clickhouse-server-25.7.8.71/install/doinst.sh

      4. Download the clickhouse-client package, then extract it and run the install script

      wget https://github.com/ClickHouse/ClickHouse/releases/download/v25.7.8.71-stable/clickhouse-client-25.7.8.71-amd64.tgz
      tar zxvf clickhouse-client-25.7.8.71-amd64.tgz
      clickhouse-client-25.7.8.71/install/doinst.sh
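
      Optionally, confirm that the packages installed correctly before moving on; both commands should report version 25.7.8.71:

      clickhouse-server --version
      clickhouse-client --version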

      5. Modify the configuration

      vim /etc/clickhouse-server/config.xml
      <!-- Find the <listen_host> setting and change the listen address -->
          <listen_host>0.0.0.0</listen_host>
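
      If the host runs firewalld (typical on the RHEL-family systems the official guide targets), the ClickHouse ports must also be opened; this is an environment assumption, not part of the original steps:

      # Skip these commands if the firewall is disabled
      firewall-cmd --permanent --add-port=8123/tcp    # HTTP interface
      firewall-cmd --permanent --add-port=9000/tcp    # native TCP interface
      firewall-cmd --reload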

      6. Start clickhouse-server

      systemctl start clickhouse-server
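
      To verify the standalone server is running, check the service status and run a trivial query; the password is the one chosen during the clickhouse-server install step:

      systemctl status clickhouse-server
      clickhouse-client --password sykj1234 --query "SELECT version()"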

       

      Part 2: Cluster deployment

      This deployment uses 3 nodes with 3 shards and 1 replica, with clickhouse-keeper replacing ZooKeeper. Server information:

      Server IP        server_id
      192.168.18.12    1
      192.168.18.13    2
      192.168.18.14    3

      1. Install ClickHouse on each node following the standalone steps above

       

      2. Configure clickhouse-keeper on each node. The main changes are server_id, listen_host, and raft_configuration; the raft_configuration block is identical on all three nodes, while server_id must match each node's ID from the table above.

       vim /etc/clickhouse-keeper/keeper_config.xml
          # Add or uncomment the listen_host setting
          <listen_host>0.0.0.0</listen_host>
      
          <keeper_server>
                  <tcp_port>9181</tcp_port>
      
                  <!-- Must be unique among all keeper servers -->
                  <server_id>1</server_id>
      
                  <log_storage_path>/var/lib/clickhouse/coordination/logs</log_storage_path>
                  <snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>
      
                  <coordination_settings>
                      <operation_timeout_ms>10000</operation_timeout_ms>
                      <min_session_timeout_ms>10000</min_session_timeout_ms>
                      <session_timeout_ms>100000</session_timeout_ms>
                      <raft_logs_level>information</raft_logs_level>
                      <compress_logs>false</compress_logs>
                  </coordination_settings>

                  <!-- Enable sanity hostname checks for cluster configuration (e.g. if localhost is used with remote endpoints) -->
                  <hostname_checks_enabled>true</hostname_checks_enabled>
                  <raft_configuration>
                      <server>
                          <id>1</id>
      
                          <!-- Internal port and hostname -->
                          <hostname>192.168.18.12</hostname>
                          <port>9234</port>
                      </server>
      
                      <!-- Add more servers here -->
                      <server>
                          <id>2</id>
      
                          <!-- Internal port and hostname -->
                          <hostname>192.168.18.13</hostname>
                          <port>9234</port>
                      </server>
      
                      <server>
                          <id>3</id>
      
                          <!-- Internal port and hostname -->
                          <hostname>192.168.18.14</hostname>
                          <port>9234</port>
                      </server>
      
      
                  </raft_configuration>
          </keeper_server>

       

      3. Start clickhouse-keeper on each node

      systemctl start clickhouse-keeper
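
      To check that the Keeper nodes have formed a quorum, you can use Keeper's built-in four-letter-word commands on port 9181 (this assumes the nc utility is installed):

      # 'ruok' should return imok; 'mntr' shows the node role (leader/follower) and follower count
      echo ruok | nc 192.168.18.12 9181
      echo mntr | nc 192.168.18.12 9181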

       

      4. Configure clickhouse-server on each node

      Point clickhouse-server at the clickhouse-keeper ensemble.

      vim /etc/clickhouse-server/config.xml
      # Find the zookeeper section and list the keeper nodes
      
          <zookeeper>
              <node>
                  <host>192.168.18.12</host>
                  <port>9181</port>
              </node>
              <node>
                  <host>192.168.18.13</host>
                  <port>9181</port>
              </node>
              <node>
                  <host>192.168.18.14</host>
                  <port>9181</port>
              </node>
          </zookeeper>

       

      5. Configure remote_servers; my_cluster is a user-defined cluster name

          <remote_servers>
              <!-- Cluster definition: 3 shards, 1 replica per shard -->
              <my_cluster>
                  <!-- Inter-server per-cluster secret for Distributed queries
                       default: no secret (no authentication will be performed)
      
                       If set, then Distributed queries will be validated on shards, so at least:
                       - such cluster should exist on the shard,
                       - such cluster should have the same secret.
      
                       And also (and which is more important), the initial_user will
                       be used as current user for the query.
      
                       Right now the protocol is pretty simple, and it only takes into account:
                       - cluster name
                       - query
      
                       Also, it will be nice if the following will be implemented:
                       - source hostname (see interserver_http_host), but then it will depend on DNS,
                         it can use IP address instead, but then you need to get correct on the initiator node.
                       - target hostname / ip address (same notes as for source hostname)
                       - time-based security tokens
                  -->
                  <!-- <secret></secret> -->
      
                  <shard>
                      <!-- Optional. Whether to write data to just one of the replicas. Default: false (write data to all replicas). -->
                      <!-- <internal_replication>false</internal_replication> -->
                      <!-- Optional. Shard weight when writing data. Default: 1. -->
                      <!-- <weight>1</weight> -->
                      <replica>
                          <host>192.168.18.12</host>
                          <port>9000</port>
                          <user>default</user>
                          <password>sykj1234</password>
                          <!-- Optional. Priority of the replica for load_balancing. Default: 1 (less value has more priority). -->
                          <!-- <priority>1</priority> -->
                          <!-- Use SSL? Default: no -->
                          <!-- <secure>0</secure> -->
                          <!-- Optional. Bind to specific host before connecting to use a specific network. -->
                          <!-- <bind_host>10.0.0.1</bind_host> -->
                      </replica>
      
                  </shard>
                  <shard>
                      <!-- Optional. Whether to write data to just one of the replicas. Default: false (write data to all replicas). -->
                      <!-- <internal_replication>false</internal_replication> -->
                      <!-- Optional. Shard weight when writing data. Default: 1. -->
                      <!-- <weight>1</weight> -->
                      <replica>
                          <host>192.168.18.13</host>
                          <port>9000</port>
                          <user>default</user>
                          <password>sykj1234</password>
      
                          <!-- Optional. Priority of the replica for load_balancing. Default: 1 (less value has more priority). -->
                          <!-- <priority>1</priority> -->
                          <!-- Use SSL? Default: no -->
                          <!-- <secure>0</secure> -->
                          <!-- Optional. Bind to specific host before connecting to use a specific network. -->
                          <!-- <bind_host>10.0.0.1</bind_host> -->
                      </replica>
      
                  </shard>
                  <shard>
                      <!-- Optional. Whether to write data to just one of the replicas. Default: false (write data to all replicas). -->
                      <!-- <internal_replication>false</internal_replication> -->
                      <!-- Optional. Shard weight when writing data. Default: 1. -->
                      <!-- <weight>1</weight> -->
                      <replica>
                          <host>192.168.18.14</host>
                          <port>9000</port>
                          <user>default</user>
                          <password>sykj1234</password>
      
                         <!-- Optional. Priority of the replica for load_balancing. Default: 1 (less value has more priority). -->
                          <!-- <priority>1</priority> -->
                          <!-- Use SSL? Default: no -->
                          <!-- <secure>0</secure> -->
                          <!-- Optional. Bind to specific host before connecting to use a specific network. -->
                          <!-- <bind_host>10.0.0.1</bind_host> -->
                      </replica>
      
                  </shard>
      
      
      
              </my_cluster>
          </remote_servers>

       

      6. Start clickhouse-server on each node

      systemctl start clickhouse-server
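
      Once each server is up, you can confirm it has connected to Keeper by querying the system.zookeeper table (the WHERE path filter is required by that table); an exception here usually points to a problem in the zookeeper section or in Keeper itself:

      clickhouse-client --password sykj1234 --query "SELECT name FROM system.zookeeper WHERE path = '/'"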

       

      7. Verify the cluster

      Run the following on any one of the 3 nodes.

      Log in to ClickHouse (the --password flag prompts for the default user's password set during installation):

      clickhouse-client --password

      View cluster information:

      show clusters;
      select * from system.clusters;
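
      For a more focused view of the shard layout, you can select just the relevant columns from system.clusters (these are standard columns of that system table):

      select cluster, shard_num, replica_num, host_name, port, is_local
      from system.clusters
      where cluster = 'my_cluster';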

       

      8. Insert data to verify sharding

      Run the following on any one of the 3 nodes.

      Create the database:

      create database ruby_db01 on cluster my_cluster

      Create the local storage table:

      create table ruby_db01.c_mgtree_l on cluster my_cluster (
      id Int,
      label String,
      dt Date
      )engine = MergeTree()
      partition by toYYYYMMDD(dt)
      order by id

      Create the distributed table:

      CREATE TABLE ruby_db01.c_mgtree_c ON CLUSTER my_cluster
      (
          `id` Int,
          `label` String,
          `dt` Date
      )ENGINE = Distributed('my_cluster', 'ruby_db01', 'c_mgtree_l', rand())
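
      As an optional sanity check that the ON CLUSTER DDL created both tables on every node, you can query system.tables across the cluster with the clusterAllReplicas table function (a standard ClickHouse table function; the columns used below exist in system.tables):

      select hostName() as host, name
      from clusterAllReplicas('my_cluster', system.tables)
      where database = 'ruby_db01';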

      Insert data:

      INSERT INTO ruby_db01.c_mgtree_c (id, label, dt) VALUES (1, '蘋果', '2025-10-19 17:42:01')
      
      INSERT INTO ruby_db01.c_mgtree_c (id, label, dt) VALUES (2, '香蕉', '2025-10-19 17:42:01')
      
      INSERT INTO ruby_db01.c_mgtree_c (id, label, dt) VALUES (3, '橙子', '2025-10-19 17:42:01')
      
      INSERT INTO ruby_db01.c_mgtree_c (id, label, dt) VALUES (4, '葡萄', '2025-10-19 17:42:01')
      
      INSERT INTO ruby_db01.c_mgtree_c (id, label, dt) VALUES (5, '西瓜', '2025-10-19 17:42:01')
      
      INSERT INTO ruby_db01.c_mgtree_c (id, label, dt) VALUES (6, '哈密瓜', '2025-10-19 17:42:01')
      
      INSERT INTO ruby_db01.c_mgtree_l (id, label, dt) VALUES (7, '番茄', '2025-10-19 17:42:01')

      Then check the data on each of the 3 nodes. Note that row 7 was inserted directly into the local table c_mgtree_l, so it exists only on the node where that INSERT was run.

      # Querying the distributed table returns all rows
      select * from ruby_db01.c_mgtree_c
      
      # Querying the local table returns only the rows stored on this node
      select * from ruby_db01.c_mgtree_l
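
      As an optional check of how the rows were spread, you can count the local table on each node from the shell (same client options as earlier). The exact per-node counts depend on the rand() sharding key, but across the three nodes they should add up to the 6 rows routed through the distributed table plus the 1 row inserted directly into one node's local table:

      clickhouse-client --password sykj1234 --query "SELECT count() FROM ruby_db01.c_mgtree_l"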

       
