
      Installing OpenStack (Train) on CentOS 7

      Installation reference: https://docs.openstack.org/zh_CN/install-guide/

      Role                      Specs                                      IP
      Controller (controller)   CPU 4+, MEM 6G+, DISK 50G+                 172.173.10.110 (management net)
                                                                           10.1.1.10 (external net)
      Compute (compute)         CPU 4+, MEM 6G+, DISK 50G+                 172.173.10.111 (management net)
                                                                           10.1.1.11 (external net)
      Storage (cinder)          CPU 4+, MEM 6G+, DISK1 50G+, DISK2 50G+    172.173.10.112 (management net)

      1. Environment initialization

      1.1 Configure static IPs and disable NetworkManager

      # static IP configuration omitted here
      systemctl disable NetworkManager --now
      

      1.2 Hostname resolution

      cat <<EOF>> /etc/hosts
      172.173.10.110 controller
      172.173.10.111 compute
      172.173.10.112 cinder
      EOF
      
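The resolver matches the hostname field of these entries line by line. A minimal self-contained sketch of that lookup, using a scratch copy so it does not touch the real /etc/hosts (`lookup` is a hypothetical helper for illustration):

```shell
# Scratch copy of the mappings above; the real file is /etc/hosts
cat <<'EOF' > /tmp/hosts.demo
172.173.10.110 controller
172.173.10.111 compute
172.173.10.112 cinder
EOF

# Return the IP recorded for a hostname
lookup() { awk -v h="$1" '$2 == h {print $1}' /tmp/hosts.demo; }

lookup controller   # prints 172.173.10.110
```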

      1.3 Disable the firewall and SELinux

      systemctl disable firewalld --now && setenforce 0 && sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
      

      1.4 Configure time synchronization

      sed -i '/^server [0-3]\.centos\.pool\.ntp\.org iburst/d' /etc/chrony.conf
      sed -i '3i server ntp.aliyun.com iburst' /etc/chrony.conf
      systemctl restart chronyd && systemctl enable chronyd
      

      1.5 Configure yum repositories

      rm -rf /etc/yum.repos.d/*
      curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.huaweicloud.com/repository/conf/CentOS-7-anon.repo
      cat <<EOF> /etc/yum.repos.d/CentOS-OpenStack-train.repo
      [openstack-train]
      name=openstack-train
      baseurl=https://mirrors.huaweicloud.com/centos-vault/7.9.2009/cloud/x86_64/openstack-train/
      gpgcheck=0
      enabled=1
      [qemu-kvm]
      name=qemu-kvm
      baseurl=https://mirrors.huaweicloud.com/centos-vault/7.9.2009/virt/x86_64/kvm-common/
      gpgcheck=0
      enabled=1
      EOF
      yum clean all && yum makecache
      

      2. Installation

      2.1 Install the base OpenStack client on all nodes

      yum install -y python-openstackclient
      

      2.2 Install base packages on the compute node

      [root@compute ~]# yum install qemu-kvm libvirt bridge-utils -y
      [root@compute ~]# ln -sv /usr/libexec/qemu-kvm /usr/bin/
      

      3. Supporting services

      3.1 Database

      Install MariaDB on the controller node (it can also be installed on a dedicated node, or even as a database cluster):

      yum install mariadb mariadb-server python2-PyMySQL -y
      

      Add a drop-in configuration file:

      cat >/etc/my.cnf.d/openstack.cnf<<'EOF'
      [mysqld]
      bind-address = 0.0.0.0
      default-storage-engine = innodb
      innodb_file_per_table = on
      max_connections = 4096
      collation-server = utf8_general_ci
      character-set-server = utf8
      EOF
      

      Start the database:

      systemctl enable mariadb --now
      

      Run the initial hardening script:

      mysql_secure_installation
      

      It is recommended to leave the root password unset for now; root can only log in locally anyway.

      3.2 Message queue

      Purposes of the RabbitMQ message queue:

      - inter-component communication
      - asynchronous message delivery

      1. Install RabbitMQ on the controller node:

      yum install erlang socat rabbitmq-server -y
      

      2. Start the service:

      systemctl enable rabbitmq-server --now
      

      3. Create the openstack user and grant it permissions:

      rabbitmqctl add_user openstack guojie.com
      rabbitmqctl set_user_tags openstack administrator
      rabbitmqctl set_permissions openstack ".*" ".*" ".*"
      

      3.3 Memcached deployment

      Purpose of memcached:

      - memcached caches the authentication tokens issued to the various OpenStack services.

      1. Install the packages on the controller node:

      yum install memcached python-memcached -y
      

      2. Configure the memcached listen address:

      sed -i 's#127.0.0.1#0.0.0.0#g' /etc/sysconfig/memcached
      

      3. Start the service:

      systemctl enable memcached --now
      

      4. Identity service (Keystone)

      4.1 Configure the database

      Before installing and configuring the Identity service, you must create a database.
      1. Connect to the database server as root using the database client:

      mysql -u root -p
      

      2. Create the keystone database:

      CREATE DATABASE keystone;
      GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'guojie.com';
      FLUSH PRIVILEGES;
      QUIT;
      

      4.2 Install the Keystone components

      Install the Keystone packages on the controller node.
      1. Run the following command to install them:

      yum install -y openstack-keystone httpd mod_wsgi
      

      Keystone runs under httpd.

      httpd needs the mod_wsgi module to run Python-based applications.

      2. Edit the /etc/keystone/keystone.conf file and complete the following configuration:

      cp /etc/keystone/keystone.conf{,.bak}
      grep -Ev '^$|#' /etc/keystone/keystone.conf.bak > /etc/keystone/keystone.conf
      sed -i '/^\[database\]/a connection = mysql+pymysql://keystone:guojie.com@controller/keystone' /etc/keystone/keystone.conf
      sed -i '/^\[token\]/a provider = fernet' /etc/keystone/keystone.conf
      sed -i '/^\[DEFAULT\]/a transport_url = rabbit://openstack:guojie.com@controller:5672' /etc/keystone/keystone.conf
      sed -i '/^\[DEFAULT\]/a log_file = /var/log/keystone/keystone.log' /etc/keystone/keystone.conf
      
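The `/^\[section\]/a` sed address used above appends a line immediately after a matching section header; it only fires when the header exists, which is why the file is first stripped of comments and blanks so each header appears exactly once. A self-contained sketch of the technique on a scratch file (path and values are illustrative only):

```shell
# Minimal stand-in for a stripped-down keystone.conf
cat <<'EOF' > /tmp/keystone.demo.conf
[DEFAULT]
[database]
[token]
EOF

# Append each key right after its section header, as in the commands above
sed -i '/^\[database\]/a connection = mysql+pymysql://keystone:guojie.com@controller/keystone' /tmp/keystone.demo.conf
sed -i '/^\[token\]/a provider = fernet' /tmp/keystone.demo.conf

# Each key now sits directly under its section
grep -A1 '^\[token\]' /tmp/keystone.demo.conf
```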

      3. Populate the database:

      su -s /bin/sh -c "keystone-manage db_sync" keystone
      

      4. Initialize the Fernet key repositories:

      keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
      keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
      

      5. Bootstrap the Identity service:

      keystone-manage bootstrap --bootstrap-password guojie.com \
        --bootstrap-admin-url http://controller:35357/v3/ \
        --bootstrap-internal-url http://controller:5000/v3/ \
        --bootstrap-public-url http://controller:5000/v3/ \
        --bootstrap-region-id RegionOne
      

      guojie.com is the OpenStack admin password I chose.

      4.3 Configure the Apache HTTP server

      1. Edit the /etc/httpd/conf/httpd.conf file and configure the ServerName option to reference the controller node:

      sed -i 's/^#ServerName www.example.com:80/ServerName controller:80/' /etc/httpd/conf/httpd.conf
      ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
      

      2. Start the service:

      systemctl enable httpd --now 
      

      4.4 Create a domain, project, user, and role

      1. Configure the admin account:

      cat << EOF > ~/.admin-openrc
      export OS_PROJECT_DOMAIN_NAME=Default
      export OS_USER_DOMAIN_NAME=Default
      export OS_PROJECT_NAME=admin
      export OS_USERNAME=admin
      export OS_PASSWORD=guojie.com
      export OS_AUTH_URL=http://controller:5000/v3
      export OS_IDENTITY_API_VERSION=3
      export OS_IMAGE_API_VERSION=2
      EOF
      

      2. Create domains, projects, users, and roles in turn; this requires python-openstackclient, installed with:

      yum -y install python-openstackclient
      

      3. Load the environment variables:

      source ~/.admin-openrc
      
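Sourcing the file simply exports those variables into the current shell, where every subsequent `openstack` command reads them from the environment. A self-contained check of that mechanism with a scratch file and placeholder values:

```shell
# Scratch credentials file with the same layout as ~/.admin-openrc
cat <<'EOF' > /tmp/demo-openrc
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
EOF

. /tmp/demo-openrc   # same effect as 'source /tmp/demo-openrc' in bash

# The client would pick these up from the environment
echo "$OS_USERNAME / $OS_AUTH_URL"   # prints: admin / http://controller:5000/v3
```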

      Create the service project (the default domain was already created during keystone-manage bootstrap):

      openstack domain create --description "An Example Domain" example
      openstack project create --domain default --description "Service Project" service
      

      5. Image service (Glance)

      The Image service enables users to discover, register, and retrieve virtual machine images. It provides a REST API that lets you query VM image metadata and retrieve the actual images.
      Images made available through the Image service can be stored in a variety of locations, from simple filesystems to object storage systems such as OpenStack Object Storage.

      Reference: OpenStack Docs: Install and configure (Red Hat)

      5.1 Configure the database

      Before installing and configuring the Image service, you must create the database, service credentials, and API endpoints.
      1. To create the database, complete these steps:
      Connect to the database server as root using the database client:

      mysql -uroot -p
      

      Create the glance database:

      CREATE DATABASE glance;
      GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'guojie.com';
      GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'guojie.com';
      FLUSH PRIVILEGES;
      QUIT;
      

      5.2 Permissions

      1. Create the glance user and grant it the admin role:

      source .admin-openrc
      openstack user create --domain default --password guojie.com glance
      openstack role add --project service --user glance admin
      

      2. Create the glance service:

      openstack service create --name glance --description "OpenStack Image" image
      

      3. Create the API endpoints (URL access) for the glance service:

      openstack endpoint create --region RegionOne image public http://controller:9292
      openstack endpoint create --region RegionOne image internal http://controller:9292
      openstack endpoint create --region RegionOne image admin http://controller:9292
      

      5.3 Install and configure Glance

      1. Install:

      yum -y install openstack-glance
      

      2. Back up the configuration file, then edit it:

      cp /etc/glance/glance-api.conf{,.bak}
      grep -Ev '^#|^$' /etc/glance/glance-api.conf.bak > /etc/glance/glance-api.conf
      vim /etc/glance/glance-api.conf
      
      [DEFAULT]
      log_file = /var/log/glance/glance-api.log
      
      [database]
      connection = mysql+pymysql://glance:guojie.com@controller/glance
      
      [glance_store]
      stores = file,http
      default_store = file
      filesystem_store_datadir = /var/lib/glance/images/
      
      [keystone_authtoken]
      www_authenticate_uri  = http://controller:5000
      auth_url = http://controller:5000
      memcached_servers = controller:11211
      auth_type = password
      project_domain_name = Default
      user_domain_name = Default
      project_name = service
      username = glance
      password = guojie.com
      
      [paste_deploy]
      flavor = keystone
      
      

      Complete configuration:

      [root@controller ~]# cat /etc/glance/glance-api.conf
      [DEFAULT]
      log_file = /var/log/glance/glance-api.log
      [cinder]
      [cors]
      [database]
      connection = mysql+pymysql://glance:guojie.com@controller/glance
      [file]
      [glance.store.http.store]
      [glance.store.rbd.store]
      [glance.store.sheepdog.store]
      [glance.store.swift.store]
      [glance.store.vmware_datastore.store]
      [glance_store]
      stores = file,http
      default_store = file
      filesystem_store_datadir = /var/lib/glance/images/
      [image_format]
      [keystone_authtoken]
      www_authenticate_uri  = http://controller:5000
      auth_url = http://controller:5000
      memcached_servers = controller:11211
      auth_type = password
      project_domain_name = Default
      user_domain_name = Default
      project_name = service
      username = glance
      password = guojie.com
      [oslo_concurrency]
      [oslo_messaging_amqp]
      [oslo_messaging_kafka]
      [oslo_messaging_notifications]
      [oslo_messaging_rabbit]
      [oslo_middleware]
      [oslo_policy]
      [paste_deploy]
      flavor = keystone
      [profiler]
      [store_type_location_strategy]
      [task]
      [taskflow_executor]
      

      3. Populate the database:

      su -s /bin/sh -c "glance-manage db_sync" glance
      

      5.4 Start the service

      systemctl enable openstack-glance-api --now 
      

      5.5 Verify

      1. Download a test image:

      wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
      

      2. Upload the image:

      openstack image create --disk-format qcow2 --container-format bare --file cirros-0.3.5-x86_64-disk.img --public cirros
      

      --public makes the image available to all projects.

      3. Verify that the image was uploaded:

      [root@controller ~]#  openstack image list
      +--------------------------------------+--------+--------+
      | ID                                   | Name   | Status |
      +--------------------------------------+--------+--------+
      | 03a823ea-6883-4a4b-9629-1b4839f0644a | cirros | active |
      +--------------------------------------+--------+--------+
      

      6. Compute service (Nova)

      Reference: OpenStack Docs: Install and configure controller node for Red Hat Enterprise Linux and CentOS

      6.1 Deploy Nova on the controller node

      6.1.1 Configure the database

      [root@controller ~]# mysql -uroot -p
      

      Create the nova_api, nova, and nova_cell0 databases:

      MariaDB [(none)]> CREATE DATABASE nova_api;
      MariaDB [(none)]> CREATE DATABASE nova;
      MariaDB [(none)]> CREATE DATABASE nova_cell0;
      

      Grant privileges on the databases to the nova user:

      MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'guojie.com';
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'guojie.com';
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'guojie.com';
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'guojie.com';
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'guojie.com';
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'guojie.com';
      # Flush privileges
      MariaDB [(none)]> FLUSH PRIVILEGES;
      MariaDB [(none)]> QUIT;
      
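The six GRANT statements above differ only in database name and host, so they can also be generated with a loop and piped into `mysql -uroot -p`. A sketch (`gen_nova_grants` is a hypothetical helper name; guojie.com is the password used throughout this guide):

```shell
# Emit the GRANT statements for the three nova databases and both hosts
gen_nova_grants() {
  for db in nova_api nova nova_cell0; do
    for host in localhost '%'; do
      printf "GRANT ALL PRIVILEGES ON %s.* TO 'nova'@'%s' IDENTIFIED BY 'guojie.com';\n" "$db" "$host"
    done
  done
  echo "FLUSH PRIVILEGES;"
}

gen_nova_grants
# gen_nova_grants | mysql -uroot -p   # would apply them on the controller
```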

      Verify:

      [root@controller ~]# mysql -h controller -u nova -pguojie.com -e 'show databases'
      +--------------------+
      | Database           |
      +--------------------+
      | information_schema |
      | nova               |
      | nova_api           |
      | nova_cell0         |
      +--------------------+
      

      6.1.2 Permissions

      Load the admin credentials to access admin-only CLI commands:

      [root@controller ~]# source ~/.admin-openrc
      

      1. Create the nova user:

      [root@controller ~]# openstack user create --domain default --password guojie.com nova
      +---------------------+----------------------------------+
      | Field               | Value                            |
      +---------------------+----------------------------------+
      | domain_id           | default                          |
      | enabled             | True                             |
      | id                  | 31f7b758bfe64f16b47d3f934b8ff94b |
      | name                | nova                             |
      | options             | {}                               |
      | password_expires_at | None                             |
      +---------------------+----------------------------------+
      
      # Verify
      [root@controller ~]# openstack user list
      +----------------------------------+--------+
      | ID                               | Name   |
      +----------------------------------+--------+
      | 281ca4a010a44d56bc3ad29ccadf15d8 | glance |
      | 31f7b758bfe64f16b47d3f934b8ff94b | nova   |
      | 4093e7a9f5454322ba9987581b564fe4 | admin  |
      | e05800abc0c64c3ea73db2557dda4cb7 | demo   |
      +----------------------------------+--------+
      

      2. Add the nova user to the admin role in the service project:

      [root@controller ~]# openstack role add --project service --user nova admin
      

      3. Create the nova service:

      [root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute
      +-------------+----------------------------------+
      | Field       | Value                            |
      +-------------+----------------------------------+
      | description | OpenStack Compute                |
      | enabled     | True                             |
      | id          | 5628e23741b1491697d811c84bfefd1c |
      | name        | nova                             |
      | type        | compute                          |
      +-------------+----------------------------------+
      
      # Verify
      [root@controller ~]# openstack service list
      +----------------------------------+----------+----------+
      | ID                               | Name     | Type     |
      +----------------------------------+----------+----------+
      | 5628e23741b1491697d811c84bfefd1c | nova     | compute  |
      | dcf75ac097884c1cba3bbab762a2d971 | keystone | identity |
      | fd8bef823fd141e0bf47cbd01115a8f1 | glance   | image    |
      +----------------------------------+----------+----------+
      

      4. Create the API endpoint records for the nova service:

      [root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | b8f00538c287433faee862766a97e408 |
      | interface    | public                           |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 5628e23741b1491697d811c84bfefd1c |
      | service_name | nova                             |
      | service_type | compute                          |
      | url          | http://controller:8774/v2.1      |
      +--------------+----------------------------------+
      
      [root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 5158c5701fd4487aae75c5edea040761 |
      | interface    | internal                         |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 5628e23741b1491697d811c84bfefd1c |
      | service_name | nova                             |
      | service_type | compute                          |
      | url          | http://controller:8774/v2.1      |
      +--------------+----------------------------------+
        
      [root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 06ae3fb2885647c697e7a316842be102 |
      | interface    | admin                            |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 5628e23741b1491697d811c84bfefd1c |
      | service_name | nova                             |
      | service_type | compute                          |
      | url          | http://controller:8774/v2.1      |
      +--------------+----------------------------------+
      

      Verify:

      [root@controller ~]# openstack endpoint list
      


      5. Create the placement user, which is used for resource tracking:

      [root@controller ~]# openstack user create --domain default --password guojie.com placement
      +---------------------+----------------------------------+
      | Field               | Value                            |
      +---------------------+----------------------------------+
      | domain_id           | default                          |
      | enabled             | True                             |
      | id                  | 4ff73c3f796f424d94ad92de74132525 |
      | name                | placement                        |
      | options             | {}                               |
      | password_expires_at | None                             |
      +---------------------+----------------------------------+
      
      # Verify
      [root@controller ~]# openstack user list
      +----------------------------------+-----------+
      | ID                               | Name      |
      +----------------------------------+-----------+
      | 281ca4a010a44d56bc3ad29ccadf15d8 | glance    |
      | 31f7b758bfe64f16b47d3f934b8ff94b | nova      |
      | 4093e7a9f5454322ba9987581b564fe4 | admin     |
      | 4ff73c3f796f424d94ad92de74132525 | placement |
      | e05800abc0c64c3ea73db2557dda4cb7 | demo      |
      +----------------------------------+-----------+
      
      6. Add the placement user to the admin role in the service project:

      [root@controller ~]# openstack role add --project service --user placement admin
      

      7. Create the placement service:

      [root@controller ~]# openstack service create --name placement --description "Placement API" placement
      +-------------+----------------------------------+
      | Field       | Value                            |
      +-------------+----------------------------------+
      | description | Placement API                    |
      | enabled     | True                             |
      | id          | 99141222efcb43a8891505d6b367e226 |
      | name        | placement                        |
      | type        | placement                        |
      +-------------+----------------------------------+
      
      # Verify
      [root@controller ~]# openstack service list
      +----------------------------------+-----------+-----------+
      | ID                               | Name      | Type      |
      +----------------------------------+-----------+-----------+
      | 5628e23741b1491697d811c84bfefd1c | nova      | compute   |
      | 99141222efcb43a8891505d6b367e226 | placement | placement |
      | dcf75ac097884c1cba3bbab762a2d971 | keystone  | identity  |
      | fd8bef823fd141e0bf47cbd01115a8f1 | glance    | image     |
      +----------------------------------+-----------+-----------+
      

      8. Create the API endpoint records for the placement service:

      [root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 3874c5c7858a4533bae3256d0ef19b9e |
      | interface    | public                           |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 99141222efcb43a8891505d6b367e226 |
      | service_name | placement                        |
      | service_type | placement                        |
      | url          | http://controller:8778           |
      +--------------+----------------------------------+
      
      [root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | be709881b529471b9323495481e9b305 |
      | interface    | internal                         |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 99141222efcb43a8891505d6b367e226 |
      | service_name | placement                        |
      | service_type | placement                        |
      | url          | http://controller:8778           |
      +--------------+----------------------------------+
      
      [root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778
      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 7648b38754704013ace5d4a115cc8b6d |
      | interface    | admin                            |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 99141222efcb43a8891505d6b367e226 |
      | service_name | placement                        |
      | service_type | placement                        |
      | url          | http://controller:8778           |
      +--------------+----------------------------------+
      

      Verify:

      [root@controller ~]# openstack endpoint list
      


      6.1.3 Install and configure Nova

      1. Install the Nova components on the controller node:

      [root@controller ~]# yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api
      

      2. Back up the configuration files:

      [root@controller ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
      [root@controller ~]# cp /etc/httpd/conf.d/00-nova-placement-api.conf /etc/httpd/conf.d/00-nova-placement-api.conf.bak
      

      3. Edit the nova.conf configuration file.

      There is a lot to change; it may be easier to copy the finished configuration shown later in this section.

      2753 enabled_apis=osapi_compute,metadata
      
      3479 connection=mysql+pymysql://nova:guojie.com@controller/nova_api
      
      4453 connection=mysql+pymysql://nova:guojie.com@controller/nova
      
      3130 transport_url=rabbit://openstack:guojie.com@controller
      
      3193 auth_strategy=keystone
      
      
      5771 [keystone_authtoken]  # section header already present; no change needed
      5772 auth_uri = http://controller:5000
      5773 auth_url = http://controller:35357
      5774 memcached_servers = controller:11211
      5775 auth_type = password
      5776 project_domain_name = default
      5777 user_domain_name = default
      5778 project_name = service
      5779 username = nova
      5780 password = guojie.com  # the nova password from the permissions step above
      
      1817 use_neutron=true
      
      2479 firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
      
      9896 enabled=true
      
      9918 vncserver_listen=172.173.10.110     # the controller node IP
      
      9929 vncserver_proxyclient_address=172.173.10.110	# the controller node IP
      
      5067 api_servers=http://controller:9292
      
      7488 lock_path=/var/lib/nova/tmp
      
      8303 [placement]  # section header already present; no change needed
      8304 os_region_name = RegionOne
      8305 project_domain_name = Default
      8306 project_name = service
      8307 auth_type = password
      8308 user_domain_name = Default
      8309 auth_url = http://controller:35357/v3
      8310 username = placement
      8311 password = guojie.com  # the placement user's password from the previous section
      

      Verify:

      [root@controller ~]# egrep -v '^#|^$' /etc/nova/nova.conf
      [DEFAULT]
      use_neutron=true
      firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
      enabled_apis=osapi_compute,metadata
      transport_url=rabbit://openstack:guojie.com@controller
      [api]
      auth_strategy=keystone
      [api_database]
      connection=mysql+pymysql://nova:guojie.com@controller/nova_api
      [barbican]
      [cache]
      [cells]
      [cinder]
      [compute]
      [conductor]
      [console]
      [consoleauth]
      [cors]
      [crypto]
      [database]
      connection=mysql+pymysql://nova:guojie.com@controller/nova
      [ephemeral_storage_encryption]
      [filter_scheduler]
      [glance]
      api_servers=http://controller:9292
      [guestfs]
      [healthcheck]
      [hyperv]
      [ironic]
      [key_manager]
      [keystone]
      [keystone_authtoken]
      auth_uri = http://controller:5000
      auth_url = http://controller:35357
      memcached_servers = controller:11211
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      project_name = service
      username = nova
      password = guojie.com
      [libvirt]
      [matchmaker_redis]
      [metrics]
      [mks]
      [neutron]
      [notifications]
      [osapi_v21]
      [oslo_concurrency]
      lock_path=/var/lib/nova/tmp
      [oslo_messaging_amqp]
      [oslo_messaging_kafka]
      [oslo_messaging_notifications]
      [oslo_messaging_rabbit]
      [oslo_messaging_zmq]
      [oslo_middleware]
      [oslo_policy]
      [pci]
      [placement]
      os_region_name = RegionOne
      project_domain_name = Default
      project_name = service
      auth_type = password
      user_domain_name = Default
      auth_url = http://controller:35357/v3
      username = placement
      password = guojie.com
      [quota]
      [rdp]
      [remote_debug]
      [scheduler]
      [serial_console]
      [service_user]
      [spice]
      [trusted_computing]
      [upgrade_levels]
      [vendordata_dynamic_auth]
      [vmware]
      [vnc]
      enabled=true
      vncserver_listen=172.173.10.110
      vncserver_proxyclient_address=172.173.10.110
      [workarounds]
      [wsgi]
      [xenserver]
      [xvp]
      

      4. Configure the 00-nova-placement-api.conf file.

      Add the following content inside the <VirtualHost> block, before the closing </VirtualHost> tag:

      <Directory /usr/bin>
         <IfVersion >= 2.4>
            Require all granted
         </IfVersion>
         <IfVersion < 2.4>
            Order allow,deny
            Allow from all
         </IfVersion>
      </Directory>
      

      The result:

      [root@controller ~]# cat /etc/httpd/conf.d/00-nova-placement-api.conf
      Listen 8778
      
      <VirtualHost *:8778>
        WSGIProcessGroup nova-placement-api
        WSGIApplicationGroup %{GLOBAL}
        WSGIPassAuthorization On
        WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova group=nova
        WSGIScriptAlias / /usr/bin/nova-placement-api
        <IfVersion >= 2.4>
          ErrorLogFormat "%M"
        </IfVersion>
        ErrorLog /var/log/nova/nova-placement-api.log
        #SSLEngine On
        #SSLCertificateFile ...
        #SSLCertificateKeyFile ...
       <Directory /usr/bin>
          <IfVersion >= 2.4>
             Require all granted
          </IfVersion>
          <IfVersion < 2.4>
             Order allow,deny
             Allow from all
          </IfVersion>
       </Directory>
      </VirtualHost>
      
      Alias /nova-placement-api /usr/bin/nova-placement-api
      <Location /nova-placement-api>
        SetHandler wsgi-script
        Options +ExecCGI
        WSGIProcessGroup nova-placement-api
        WSGIApplicationGroup %{GLOBAL}
        WSGIPassAuthorization On
      </Location>
      

      6.1.4 Restart the Apache service

      [root@controller ~]# systemctl restart httpd
      

      6.1.5 Populate the Nova databases

      # Populate the nova_api database
      [root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
      
      # Register the cell0 database
      [root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
      
      # Create the cell1 cell
      [root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
      3c837947-854a-4d61-9af0-722cd9cbebc0
      
      # Sync the remaining tables into the nova database (nova and nova_cell0 contain related table data)
      [root@controller ~]#  su -s /bin/sh -c "nova-manage db sync" nova   ## warnings can be ignored
      

      Verify:

      [root@controller ~]# nova-manage cell_v2 list_cells
      


      [root@controller ~]# mysql -hcontroller -unova -pguojie.com -e 'use nova;show tables;' |wc -l
      111
      [root@controller ~]# mysql -hcontroller -unova -pguojie.com -e 'use nova_api;show tables;' |wc -l
      33
      [root@controller ~]# mysql -hcontroller -unova -pguojie.com -e 'use nova_cell0;show tables;' |wc -l
      111
      

      6.1.6 Start the services

      [root@controller ~]# systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service --now
      

      Verify:

      [root@controller ~]# openstack catalog list
      


      6.2 Deploy Nova on the compute node

      Reference: OpenStack Docs: Install and configure a compute node for Red Hat Enterprise Linux and CentOS. All of the following steps run on the compute node.

      6.2.1 Install and configure

      1. Install the packages:

      [root@compute ~]# yum -y install openstack-nova-compute sysfsutils
      

      2. Back up the configuration file:

      [root@compute ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
      

      3. Edit the configuration file:

      # It is easiest to copy the controller node's configuration file and modify it
      [root@compute ~]# scp root@controller:/etc/nova/nova.conf /etc/nova/nova.conf
      root@controller's password: 
      nova.conf                                                                                          100%  345KB  85.9MB/s   00:00
      
      # Modify the following places:
      1. A few parameters under [vnc] differ:
      vncserver_listen=0.0.0.0  # accept VNC console connections on any address
      vncserver_proxyclient_address  # set to the compute node's management-network IP
      novncproxy_base_url = http://172.173.10.110:6080/vnc_auto.html  # the console proxy URL; use the controller node's IP, not its hostname (the hostname did not work in testing)
      
      2. Under the [libvirt] section, add virt_type=qemu.
      kvm cannot be used because this cloud is itself built inside KVM guests, where cat /proc/cpuinfo |egrep 'vmx|svm' finds nothing; in production on physical servers, set virt_type=kvm instead.
      

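The decision rule just described (qemu when the CPU exposes no vmx/svm flags, kvm otherwise) can be written as a tiny helper; a sketch with a hypothetical function name:

```shell
# Pick virt_type from cpuinfo-style text: kvm needs hardware virtualization
# flags (vmx for Intel, svm for AMD); otherwise fall back to qemu
pick_virt_type() {
  if printf '%s\n' "$1" | grep -Eq 'vmx|svm'; then
    echo kvm
  else
    echo qemu
  fi
}

pick_virt_type "flags : fpu vmx sse"   # prints kvm
pick_virt_type "flags : fpu sse"       # prints qemu
# On a real host: pick_virt_type "$(cat /proc/cpuinfo)"
```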
      Final result:

      [root@compute ~]# egrep -v '^#|^$' /etc/nova/nova.conf
      [DEFAULT]
      use_neutron=true
      firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
      enabled_apis=osapi_compute,metadata
      transport_url=rabbit://openstack:guojie.com@controller
      [api]
      auth_strategy=keystone
      [api_database]
      connection=mysql+pymysql://nova:guojie.com@controller/nova_api
      [barbican]
      [cache]
      [cells]
      [cinder]
      [compute]
      [conductor]
      [console]
      [consoleauth]
      [cors]
      [crypto]
      [database]
      connection=mysql+pymysql://nova:guojie.com@controller/nova
      [ephemeral_storage_encryption]
      [filter_scheduler]
      [glance]
      api_servers=http://controller:9292
      [guestfs]
      [healthcheck]
      [hyperv]
      [ironic]
      [key_manager]
      [keystone]
      [keystone_authtoken]
      auth_uri = http://controller:5000
      auth_url = http://controller:35357
      memcached_servers = controller:11211
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      project_name = service
      username = nova
      password = guojie.com
      [libvirt]
      virt_type=qemu
      [matchmaker_redis]
      [metrics]
      [mks]
      [neutron]
      [notifications]
      [osapi_v21]
      [oslo_concurrency]
      lock_path=/var/lib/nova/tmp
      [oslo_messaging_amqp]
      [oslo_messaging_kafka]
      [oslo_messaging_notifications]
      [oslo_messaging_rabbit]
      [oslo_messaging_zmq]
      [oslo_middleware]
      [oslo_policy]
      [pci]
      [placement]
      os_region_name = RegionOne
      project_domain_name = Default
      project_name = service
      auth_type = password
      user_domain_name = Default
      auth_url = http://controller:35357/v3
      username = placement
      password = guojie.com
      [quota]
      [rdp]
      [remote_debug]
      [scheduler]
      [serial_console]
      [service_user]
      [spice]
      [trusted_computing]
      [upgrade_levels]
      [vendordata_dynamic_auth]
      [vmware]
      [vnc]
      enabled=true
      vncserver_listen=0.0.0.0
      vncserver_proxyclient_address=172.173.10.111
      novncproxy_base_url = http://172.173.10.110:6080/vnc_auto.html
      [workarounds]
      [wsgi]
      [xenserver]
      [xvp]
      

      Start the services:

      [root@compute ~]# systemctl enable libvirtd.service openstack-nova-compute.service --now
      

      6.2.2 Add the compute node on the controller

      1. Check the services

      [root@controller ~]# openstack compute service list
      

      (screenshot: openstack compute service list output)

      Once the compute node's service is running, its state on the controller should show UP; if it does not, check the nova logs and the configuration.

      2. Register the new compute node in the nova database

      [root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
      Found 2 cell mappings.
      Getting computes from cell 'cell1': 3c837947-854a-4d61-9af0-722cd9cbebc0
      Checking host mapping for compute host 'compute': d9a0c826-ab7f-4b06-8059-0c23cba6adbf
      Creating host mapping for compute host 'compute': d9a0c826-ab7f-4b06-8059-0c23cba6adbf
      Found 1 unmapped computes in cell: 3c837947-854a-4d61-9af0-722cd9cbebc0
      Skipping cell0 since it does not contain hosts.
      

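      Instead of running discover_hosts manually each time a compute node is added, nova can discover new hosts periodically. A possible nova.conf fragment on the controller (interval in seconds; the option is documented, the value 300 is just an example):

      ```ini
      [scheduler]
      discover_hosts_in_cells_interval = 300
      ```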
      3. Verify that all APIs are working

      [root@controller ~]# nova-status upgrade check
      +--------------------------+
      | Upgrade Check Results    |
      +--------------------------+
      | Check: Cells v2          |
      | Result: Success          |
      | Details: None            |
      +--------------------------+
      | Check: Placement API     |
      | Result: Success          |
      | Details: None            |
      +--------------------------+
      | Check: Resource Providers|
      | Result: Success          |
      | Details: None            |
      +--------------------------+
      

      7. Networking component: neutron

      7.1 Deploying neutron on the controller node

      Reference: OpenStack Docs: Install and configure controller node

      7.1.1 Database configuration

      Log in to the database:

      [root@controller ~]# mysql -uroot -p
      

      Create the neutron database:

      MariaDB [(none)]> CREATE DATABASE neutron;
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'guojie.com';
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'guojie.com';
      MariaDB [(none)]> FLUSH PRIVILEGES;
      MariaDB [(none)]> QUIT;
      

      Verification:

      [root@controller ~]# mysql -h controller -u neutron -pguojie.com -e 'show databases';
      +--------------------+
      | Database           |
      +--------------------+
      | information_schema |
      | neutron            |
      +--------------------+
      

      7.1.2 Identity configuration

      1. Create the neutron user

      [root@controller ~]# source admin-openrc.sh 
      [root@controller ~]# openstack user create --domain  default --password guojie.com neutron
      +---------------------+----------------------------------+
      | Field               | Value                            |
      +---------------------+----------------------------------+
      | domain_id           | default                          |
      | enabled             | True                             |
      | id                  | be3796e423e0417d8f71f7fc640e5b48 |
      | name                | neutron                          |
      | options             | {}                               |
      | password_expires_at | None                             |
      +---------------------+----------------------------------+
      
      # verify
      [root@controller ~]# openstack user list
      +----------------------------------+-----------+
      | ID                               | Name      |
      +----------------------------------+-----------+
      | 281ca4a010a44d56bc3ad29ccadf15d8 | glance    |
      | 31f7b758bfe64f16b47d3f934b8ff94b | nova      |
      | 4093e7a9f5454322ba9987581b564fe4 | admin     |
      | 4ff73c3f796f424d94ad92de74132525 | placement |
      | be3796e423e0417d8f71f7fc640e5b48 | neutron   |
      | e05800abc0c64c3ea73db2557dda4cb7 | demo      |
      +----------------------------------+-----------+
      

      2. Add the neutron user to the service project with the admin role

      [root@controller ~]# openstack role add --project service --user neutron admin
      

      3. Create the neutron service

      [root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network
      +-------------+----------------------------------+
      | Field       | Value                            |
      +-------------+----------------------------------+
      | description | OpenStack Networking             |
      | enabled     | True                             |
      | id          | 327d2f586001475ea8a3d12cca191c25 |
      | name        | neutron                          |
      | type        | network                          |
      +-------------+----------------------------------+
      
      # verify
      [root@controller ~]# openstack service list
      +----------------------------------+-----------+-----------+
      | ID                               | Name      | Type      |
      +----------------------------------+-----------+-----------+
      | 327d2f586001475ea8a3d12cca191c25 | neutron   | network   |
      | 5628e23741b1491697d811c84bfefd1c | nova      | compute   |
      | 99141222efcb43a8891505d6b367e226 | placement | placement |
      | dcf75ac097884c1cba3bbab762a2d971 | keystone  | identity  |
      | fd8bef823fd141e0bf47cbd01115a8f1 | glance    | image     |
      +----------------------------------+-----------+-----------+
      

      4. Create the API endpoint records for the neutron service

      [root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696
      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 95bdbf04b5224511885280d24f2eb340 |
      | interface    | public                           |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 327d2f586001475ea8a3d12cca191c25 |
      | service_name | neutron                          |
      | service_type | network                          |
      | url          | http://controller:9696           |
      +--------------+----------------------------------+
      
      [root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696
      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 19e4d54254f244a6afb502b3098d9ae9 |
      | interface    | internal                         |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 327d2f586001475ea8a3d12cca191c25 |
      | service_name | neutron                          |
      | service_type | network                          |
      | url          | http://controller:9696           |
      +--------------+----------------------------------+
      
      [root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696
      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 2ee270dd60d749159ca4f6ae27796dc4 |
      | interface    | admin                            |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 327d2f586001475ea8a3d12cca191c25 |
      | service_name | neutron                          |
      | service_type | network                          |
      | url          | http://controller:9696           |
      +--------------+----------------------------------+
      
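      The three endpoint-create calls differ only in the interface name, so they can be generated with a loop. A sketch that only prints the commands (pipe the output to sh to actually run them):

      ```shell
      # Print the three endpoint-create commands; they differ only by interface.
      for iface in public internal admin; do
        echo "openstack endpoint create --region RegionOne network ${iface} http://controller:9696"
      done
      ```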

      Verification:

      [root@controller ~]# openstack endpoint list
      

      (screenshot: openstack endpoint list output)

      7.1.3 Package installation and configuration

      We choose networking option 2 (self-service networks); reference: OpenStack Docs: Networking Option 2: Self-service networks

      1. Install the neutron packages on the controller node

      [root@controller ~]# yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
      

      2. Back up the configuration files

      [root@controller ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
      [root@controller ~]# cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
      [root@controller ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
      

      3. Configure neutron.conf

      27 auth_strategy = keystone
      
      30 core_plugin = ml2
      
      33 service_plugins = router
      
      85 allow_overlapping_ips = True
      
      98 notify_nova_on_port_status_changes = true
      
      102 notify_nova_on_port_data_changes = true
      
      553 transport_url = rabbit://openstack:guojie.com@controller # note: the password here is the rabbitmq password
      
      560 rpc_backend = rabbit
      
      710 connection = mysql+pymysql://neutron:guojie.com@controller/neutron
      
      794 [keystone_authtoken]   # section header, already present
      795 auth_uri = http://controller:5000
      796 auth_url = http://controller:35357
      797 memcached_servers = controller:11211
      798 auth_type = password
      799 project_domain_name = default
      800 user_domain_name = default
      801 project_name = service
      802 username = neutron
      803 password = guojie.com ## the password set in the identity configuration section above
      
      1022 [nova]
      1023 auth_url = http://controller:35357
      1024 auth_type = password
      1025 project_domain_name = default
      1026 user_domain_name = default
      1027 region_name = RegionOne
      1028 project_name = service
      1029 username = nova
      1030 password = guojie.com  ## the nova password set in section 6.1.2
      
      1141 lock_path = /var/lib/neutron/tmp
      

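      Edits like these can also be scripted instead of made by hand in vi. A minimal sketch; ini_set is a hypothetical helper (the crudini tool from the base repos does the same job) and only handles the simple "key = value" lines used in this guide:

      ```shell
      # Sketch: set "key = value" inside a given [section] of an ini-style file.
      # ini_set is a hypothetical helper; crudini provides the same functionality.
      ini_set() {
        # $1: file  $2: section  $3: key  $4: value
        awk -v s="[$2]" -v k="$3" -v v="$4" '
          $0 == s { print; print k " = " v; insec = 1; next }
          insec && $0 ~ "^" k " *=" { next }   # drop any old value in the section
          /^\[/ { insec = 0 }
          { print }
        ' "$1" > "$1.tmp" && mv "$1.tmp" "$1"
      }

      # Example:
      #   ini_set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
      ```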
      Verify the result:

      [root@controller ~]# egrep -v '^#|^$' /etc/neutron/neutron.conf
      [DEFAULT]
      auth_strategy = keystone
      core_plugin = ml2
      service_plugins = router
      allow_overlapping_ips = True
      notify_nova_on_port_status_changes = true
      notify_nova_on_port_data_changes = true
      transport_url = rabbit://openstack:guojie.com@controller
      rpc_backend = rabbit
      [agent]
      [cors]
      [database]
      connection = mysql+pymysql://neutron:guojie.com@controller/neutron
      [keystone_authtoken]
      auth_uri = http://controller:5000
      auth_url = http://controller:35357
      memcached_servers = controller:11211
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      project_name = service
      username = neutron
      password = guojie.com
      [matchmaker_redis]
      [nova]
      auth_url = http://controller:35357
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      region_name = RegionOne
      project_name = service
      username = nova
      password = guojie.com
      [oslo_concurrency]
      lock_path = /var/lib/neutron/tmp
      [oslo_messaging_amqp]
      [oslo_messaging_kafka]
      [oslo_messaging_notifications]
      [oslo_messaging_rabbit]
      [oslo_messaging_zmq]
      [oslo_middleware]
      [oslo_policy]
      [quotas]
      [ssl]
      

      4. Configure the Modular Layer 2 (ML2) plug-in file ml2_conf.ini

      132 type_drivers = flat,vlan,vxlan
      
      137 tenant_network_types = vxlan
      
      141 mechanism_drivers = linuxbridge,l2population
      
      146 extension_drivers = port_security
      
      182 flat_networks = provider
      
      235 vni_ranges = 1:1000  ## allows 1000 tunnel networks (note: an identically named parameter also appears around line 193; setting it in the wrong section prevents self-service private networks from being created)
      
      259 enable_ipset = true
      
      

      Check the configuration:

      [root@controller ~]# egrep -v '^$|^#' /etc/neutron/plugins/ml2/ml2_conf.ini
      [DEFAULT]
      [l2pop]
      [ml2]
      type_drivers = flat,vlan,vxlan
      tenant_network_types = vxlan
      mechanism_drivers = linuxbridge,l2population
      extension_drivers = port_security
      [ml2_type_flat]
      flat_networks = provider
      [ml2_type_geneve]
      [ml2_type_gre]
      [ml2_type_vlan]
      [ml2_type_vxlan]
      vni_ranges = 1:1000
      [securitygroup]
      enable_ipset = true
      

      5. Configure linuxbridge_agent.ini

      142 physical_interface_mappings = provider:eth1 ## note: eth1 is the NIC carrying the external (provider) network
      
      175 enable_vxlan = true
      
      196 local_ip = 172.173.10.110 ## the management-network IP of this node
      
      220 l2_population = true
      
      155 firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
      
      160 enable_security_group = true
      

      Verification:

      [root@controller ~]# egrep -v '^$|^#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
      [DEFAULT]
      [agent]
      [linux_bridge]
      physical_interface_mappings = provider:eth1
      [securitygroup]
      firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
      enable_security_group = true
      [vxlan]
      enable_vxlan = true
      local_ip = 172.173.10.110
      l2_population = true
      
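      One prerequisite worth calling out: the official install guide requires the br_netfilter kernel module and the bridge netfilter sysctls on every node running the linuxbridge agent (controller and compute here), or security-group filtering will not work. A possible setup; the file paths below are conventional, not mandated:

      ```ini
      # /etc/modules-load.d/br_netfilter.conf
      br_netfilter

      # /etc/sysctl.d/99-bridge-nf.conf
      net.bridge.bridge-nf-call-iptables = 1
      net.bridge.bridge-nf-call-ip6tables = 1

      # apply immediately:
      #   modprobe br_netfilter && sysctl --system
      ```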

      6. Configure l3_agent.ini

      [root@controller ~]# vi /etc/neutron/l3_agent.ini
      # change line 16 as follows
      16 interface_driver = linuxbridge
      
      # check
      [root@controller ~]# egrep -v '^$|^#' /etc/neutron/l3_agent.ini 
      [DEFAULT]
      interface_driver = linuxbridge
      [agent]
      [ovs]
      

      7. Configure dhcp_agent.ini

      [root@controller ~]# vi /etc/neutron/dhcp_agent.ini
      # change the following settings
      16 interface_driver = linuxbridge
      37 dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
      46 enable_isolated_metadata = true
      
      # check the configuration
      [root@controller ~]# egrep -v '^$|^#' /etc/neutron/dhcp_agent.ini 
      [DEFAULT]
      interface_driver = linuxbridge
      dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
      enable_isolated_metadata = true
      [agent]
      [ovs]
      

      8. Configure metadata_agent.ini

      [root@controller ~]# vi /etc/neutron/metadata_agent.ini
      # change the following settings
      23 nova_metadata_host = controller
      35 metadata_proxy_shared_secret = metadata_daniel
      
      # note: metadata_daniel is just an arbitrary string; it must match metadata_proxy_shared_secret in the nova config file
      
      Check:
      [root@controller ~]# egrep -v '^$|^#' /etc/neutron/metadata_agent.ini 
      [DEFAULT]
      nova_metadata_host = controller
      metadata_proxy_shared_secret = metadata_daniel
      [agent]
      [cache]
      
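      Since a mismatched shared secret is a common cause of broken instance metadata, the two files can be compared mechanically once both are configured. A sketch; get_secret is a hypothetical helper:

      ```shell
      # Sketch: extract metadata_proxy_shared_secret from a config file so the
      # neutron and nova copies can be compared. get_secret is a hypothetical helper.
      get_secret() {
        # $1: config file path
        awk -F'=' '/^ *metadata_proxy_shared_secret *=/ { gsub(/ /, "", $2); print $2; exit }' "$1"
      }

      # On the controller:
      #   [ "$(get_secret /etc/neutron/metadata_agent.ini)" = "$(get_secret /etc/nova/nova.conf)" ] \
      #     && echo "secrets match" || echo "secret MISMATCH"
      ```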

      9. Add the following to nova.conf

      Add the following under the [neutron] section:

      [root@controller ~]# vi /etc/nova/nova.conf
      [neutron]
      url = http://controller:9696
      auth_url = http://controller:35357
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      region_name = RegionOne
      project_name = service
      username = neutron
      password = guojie.com     # change to the password from the neutron identity setup
      service_metadata_proxy = true
      metadata_proxy_shared_secret = metadata_daniel
      

      Check:

      [root@controller ~]# egrep -v '^$|^#' /etc/nova/nova.conf
      [DEFAULT]
      use_neutron=true
      firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
      enabled_apis=osapi_compute,metadata
      transport_url=rabbit://openstack:guojie.com@controller
      [api]
      auth_strategy=keystone
      [api_database]
      connection=mysql+pymysql://nova:guojie.com@controller/nova_api
      [barbican]
      [cache]
      [cells]
      [cinder]
      [compute]
      [conductor]
      [console]
      [consoleauth]
      [cors]
      [crypto]
      [database]
      connection=mysql+pymysql://nova:guojie.com@controller/nova
      [ephemeral_storage_encryption]
      [filter_scheduler]
      [glance]
      api_servers=http://controller:9292
      [guestfs]
      [healthcheck]
      [hyperv]
      [ironic]
      [key_manager]
      [keystone]
      [keystone_authtoken]
      auth_uri = http://controller:5000
      auth_url = http://controller:35357
      memcached_servers = controller:11211
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      project_name = service
      username = nova
      password = guojie.com
      [libvirt]
      [matchmaker_redis]
      [metrics]
      [mks]
      [neutron]
      url = http://controller:9696
      auth_url = http://controller:35357
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      region_name = RegionOne
      project_name = service
      username = neutron
      password = guojie.com
      service_metadata_proxy = true
      metadata_proxy_shared_secret = metadata_daniel
      [notifications]
      [osapi_v21]
      [oslo_concurrency]
      lock_path=/var/lib/nova/tmp
      [oslo_messaging_amqp]
      [oslo_messaging_kafka]
      [oslo_messaging_notifications]
      [oslo_messaging_rabbit]
      [oslo_messaging_zmq]
      [oslo_middleware]
      [oslo_policy]
      [pci]
      [placement]
      os_region_name = RegionOne
      project_domain_name = Default
      project_name = service
      auth_type = password
      user_domain_name = Default
      auth_url = http://controller:35357/v3
      username = placement
      password = guojie.com
      [quota]
      [rdp]
      [remote_debug]
      [scheduler]
      [serial_console]
      [service_user]
      [spice]
      [trusted_computing]
      [upgrade_levels]
      [vendordata_dynamic_auth]
      [vmware]
      [vnc]
      enabled=true
      vncserver_listen=172.173.10.110
      vncserver_proxyclient_address=172.173.10.110
      novncproxy_base_url = http://172.173.10.110:6080/vnc_auto.html
      [workarounds]
      [wsgi]
      [xenserver]
      [xvp]
      

      10. The networking service init scripts expect /etc/neutron/plugin.ini to point at the ml2_conf.ini configuration file, so create a symlink

      [root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
      

      11. Populate the database

      [root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
      
      7.1.4 Start the services

      Restart the nova service:

      [root@controller ~]# systemctl restart openstack-nova-api.service
      

      Enable and start the neutron services:

      [root@controller ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service --now
      

      7.2 Deploying neutron on the compute node

      Reference: OpenStack Docs: Install and configure compute node

      7.2.1 Installation and configuration

      1. Install the packages on the compute node

      [root@compute ~]# yum install openstack-neutron-linuxbridge ebtables ipset -y
      

      2. Back up the configuration files

      [root@compute ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
      [root@compute ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
      

      3. Configure neutron.conf

      27 auth_strategy = keystone
      
      553 transport_url = rabbit://openstack:guojie.com@controller
      
      794 [keystone_authtoken]   # section header, already present
      795 auth_uri = http://controller:5000
      796 auth_url = http://controller:35357
      797 memcached_servers = controller:11211
      798 auth_type = password
      799 project_domain_name = default
      800 user_domain_name = default
      801 project_name = service
      802 username = neutron
      803 password = guojie.com
      
      1134 lock_path = /var/lib/neutron/tmp
      

      Check the configuration:

      [root@compute ~]# egrep -v '^$|^#' /etc/neutron/neutron.conf
      [DEFAULT]
      auth_strategy = keystone
      transport_url = rabbit://openstack:guojie.com@controller
      [agent]
      [cors]
      [database]
      [keystone_authtoken]
      auth_uri = http://controller:5000
      auth_url = http://controller:35357
      memcached_servers = controller:11211
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      project_name = service
      username = neutron
      password = guojie.com
      [matchmaker_redis]
      [nova]
      [oslo_concurrency]
      lock_path = /var/lib/neutron/tmp
      [oslo_messaging_amqp]
      [oslo_messaging_kafka]
      [oslo_messaging_notifications]
      [oslo_messaging_rabbit]
      [oslo_messaging_zmq]
      [oslo_middleware]
      [oslo_policy]
      [quotas]
      [ssl]
      

      4. Still networking option 2

      Reference: OpenStack Docs: Networking Option 2: Self-service networks

      Configure linuxbridge_agent.ini:

      [root@compute ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
      # change the following settings
      142 physical_interface_mappings = provider:eth1 # the NIC carrying the external network
      
      175 enable_vxlan = true
      
      196 local_ip = 172.173.10.111 # this node's management-network IP (important)
      
      220 l2_population = true
      
      155 firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
      
      160 enable_security_group = true
      
      

      Check the configuration:

      [root@compute ~]# egrep -v '^$|^#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
      [DEFAULT]
      [agent]
      [linux_bridge]
      physical_interface_mappings = provider:eth1
      [securitygroup]
      firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
      enable_security_group = true
      [vxlan]
      enable_vxlan = true
      local_ip = 172.173.10.111
      l2_population = true
      

      5. Configure nova.conf

      Add the following under the [neutron] section:

      7185 [neutron]
      7186 url = http://controller:9696
      7187 auth_url = http://controller:35357
      7188 auth_type = password
      7189 project_domain_name = default
      7190 user_domain_name = default
      7191 region_name = RegionOne
      7192 project_name = service
      7193 username = neutron
      7194 password = guojie.com
      

      Verify the configuration:

      [root@compute ~]# egrep -v '^$|^#' /etc/nova/nova.conf
      [DEFAULT]
      use_neutron=true
      firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
      enabled_apis=osapi_compute,metadata
      transport_url=rabbit://openstack:guojie.com@controller
      [api]
      auth_strategy=keystone
      [api_database]
      connection=mysql+pymysql://nova:guojie.com@controller/nova_api
      [barbican]
      [cache]
      [cells]
      [cinder]
      [compute]
      [conductor]
      [console]
      [consoleauth]
      [cors]
      [crypto]
      [database]
      connection=mysql+pymysql://nova:guojie.com@controller/nova
      [ephemeral_storage_encryption]
      [filter_scheduler]
      [glance]
      api_servers=http://controller:9292
      [guestfs]
      [healthcheck]
      [hyperv]
      [ironic]
      [key_manager]
      [keystone]
      [keystone_authtoken]
      auth_uri = http://controller:5000
      auth_url = http://controller:35357
      memcached_servers = controller:11211
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      project_name = service
      username = nova
      password = guojie.com
      [libvirt]
      virt_type=qemu
      [matchmaker_redis]
      [metrics]
      [mks]
      [neutron]
      url = http://controller:9696
      auth_url = http://controller:35357
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      region_name = RegionOne
      project_name = service
      username = neutron
      password = guojie.com
      [notifications]
      [osapi_v21]
      [oslo_concurrency]
      lock_path=/var/lib/nova/tmp
      [oslo_messaging_amqp]
      [oslo_messaging_kafka]
      [oslo_messaging_notifications]
      [oslo_messaging_rabbit]
      [oslo_messaging_zmq]
      [oslo_middleware]
      [oslo_policy]
      [pci]
      [placement]
      os_region_name = RegionOne
      project_domain_name = Default
      project_name = service
      auth_type = password
      user_domain_name = Default
      auth_url = http://controller:35357/v3
      username = placement
      password = guojie.com
      [quota]
      [rdp]
      [remote_debug]
      [scheduler]
      [serial_console]
      [service_user]
      [spice]
      [trusted_computing]
      [upgrade_levels]
      [vendordata_dynamic_auth]
      [vmware]
      [vnc]
      enabled=true
      vncserver_listen=172.173.10.110
      vncserver_proxyclient_address=172.173.10.111
      [workarounds]
      [wsgi]
      [xenserver]
      [xvp]
      

      7.2.2 Start the services

      [root@compute ~]# systemctl restart openstack-nova-compute.service
      [root@compute ~]# systemctl enable neutron-linuxbridge-agent.service --now
      

      7.2.3 Verify on the controller node

      [root@controller ~]# source admin-openrc.sh
      [root@controller ~]# openstack network agent list
      

      (screenshot: openstack network agent list output)

      8. Dashboard component: horizon

      Reference: OpenStack Docs: Install and configure for Red Hat Enterprise Linux and CentOS

      8.1 Installation and configuration

      1. Install the package on the controller node

      [root@controller neutron]# yum -y install openstack-dashboard
      

      2. Back up the configuration file

      [root@controller ~]# cp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.bak
      

      3. Configure the local_settings file

      
      38 ALLOWED_HOSTS = ['*',]
      
      64 OPENSTACK_API_VERSIONS = {
      65     "data-processing": 1.1,
      66     "identity": 3,
      67     "image": 2,
      68     "volume": 2,
      69     "compute": 2,
      70 }
      
      
      75 OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
      
      97 OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'
      
      153 SESSION_ENGINE = 'django.contrib.sessions.backends.cache'  ## not present by default; add this line yourself
      154 CACHES = {
      155     'default': {
      156         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
      157         'LOCATION': 'controller:11211', ## store sessions in memcached on the controller
      158     },
      159 }
      
      # after configuring the above, comment out the original local-memory cache block below
      161 #CACHES = {
      162 #    'default': {
      163 #        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
      164 #    },
      165 #}
      
      184 OPENSTACK_HOST = "controller"
      185 OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST  # use v3, not v2.0
      186 OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"  # default role
      
      # enable everything (set all to True); we are using networking option 2
      313 OPENSTACK_NEUTRON_NETWORK = {
      314     'enable_router': True,
      315     'enable_quotas': True,
      316     'enable_ipv6': True,
      317     'enable_distributed_router': True,
      318     'enable_ha_router': True,
      319     'enable_fip_topology_check': True,
      
      453 TIME_ZONE = "Asia/Shanghai"  # set the time zone to Shanghai
      

      4. Configure the dashboard's httpd drop-in file

      [root@controller ~]# vi /etc/httpd/conf.d/openstack-dashboard.conf
      

      Add the following on line 4:

      4 WSGIApplicationGroup %{GLOBAL}
      

      Check:

      [root@controller ~]# cat /etc/httpd/conf.d/openstack-dashboard.conf 
      WSGIDaemonProcess dashboard
      WSGIProcessGroup dashboard
      WSGISocketPrefix run/wsgi
      WSGIApplicationGroup %{GLOBAL}
      WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
      Alias /dashboard/static /usr/share/openstack-dashboard/static
      
      <Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
        Options All
        AllowOverride All
        Require all granted
      </Directory>
      
      <Directory /usr/share/openstack-dashboard/static>
        Options All
        AllowOverride All
        Require all granted
      </Directory>
      

      The addition on line 4 is not in the official CentOS docs, although the Ubuntu docs include it. Add it here; without it the dashboard will not be accessible later.

      8.2 Start the services

      [root@controller ~]# systemctl restart httpd memcached
      

      Log in to verify:

      http://<IP address>/dashboard/auth/login/?next=/dashboard/

      (screenshot: horizon login page)

      Domain: default

      Username: admin

      Password: guojie.com

      9. Block storage component: cinder

      Reference: https://docs.openstack.org/cinder/pike/install/

      9.1 Deploying cinder on the controller node

      OpenStack Docs: Install and configure controller node

      9.1.1 Database configuration

      [root@controller ~]# mysql -uroot -p
      MariaDB [(none)]> CREATE DATABASE cinder;
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'guojie.com';
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'guojie.com';
      MariaDB [(none)]> FLUSH PRIVILEGES;
      MariaDB [(none)]> QUIT;
      
      # verify
      [root@controller ~]# mysql -h controller -ucinder -pguojie.com -e 'show databases';
      +--------------------+
      | Database           |
      +--------------------+
      | cinder             |
      | information_schema |
      +--------------------+
      

      9.1.2 Identity configuration

      1. Create the user

      [root@controller ~]# source admin-openrc.sh 
      [root@controller ~]# openstack user create --domain default --password guojie.com cinder
      +---------------------+----------------------------------+
      | Field               | Value                            |
      +---------------------+----------------------------------+
      | domain_id           | default                          |
      | enabled             | True                             |
      | id                  | 7f943f4a425840c98749a23eefa0ad69 |
      | name                | cinder                           |
      | options             | {}                               |
      | password_expires_at | None                             |
      +---------------------+----------------------------------+
      
      # verify
      [root@controller ~]# openstack user list
      +----------------------------------+-----------+
      | ID                               | Name      |
      +----------------------------------+-----------+
      | 281ca4a010a44d56bc3ad29ccadf15d8 | glance    |
      | 31f7b758bfe64f16b47d3f934b8ff94b | nova      |
      | 4093e7a9f5454322ba9987581b564fe4 | admin     |
      | 4ff73c3f796f424d94ad92de74132525 | placement |
      | 7f943f4a425840c98749a23eefa0ad69 | cinder    |
      | be3796e423e0417d8f71f7fc640e5b48 | neutron   |
      | e05800abc0c64c3ea73db2557dda4cb7 | demo      |
      +----------------------------------+-----------+
      

      2. Add the cinder user to the service project and grant it the admin role

      [root@controller ~]# openstack role add --project service --user cinder admin
      

      3. Create the cinderv2 and cinderv3 services

      [root@controller ~]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
      +-------------+----------------------------------+
      | Field       | Value                            |
      +-------------+----------------------------------+
      | description | OpenStack Block Storage          |
      | enabled     | True                             |
      | id          | b84b5a11e32a4d95a5ed2a5107defbe3 |
      | name        | cinderv2                         |
      | type        | volumev2                         |
      +-------------+----------------------------------+
      [root@controller ~]# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
      +-------------+----------------------------------+
      | Field       | Value                            |
      +-------------+----------------------------------+
      | description | OpenStack Block Storage          |
      | enabled     | True                             |
      | id          | b798707278a74512bc9df7be2e9dee17 |
      | name        | cinderv3                         |
      | type        | volumev3                         |
      +-------------+----------------------------------+
      


      4. Create the endpoint records for cinder

      [root@controller ~]# openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 16afcec244344e95a2da6d328264c18e         |
      | interface    | public                                   |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | b84b5a11e32a4d95a5ed2a5107defbe3         |
      | service_name | cinderv2                                 |
      | service_type | volumev2                                 |
      | url          | http://controller:8776/v2/%(project_id)s |
      +--------------+------------------------------------------+
      [root@controller ~]# openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | bcf936a4a99c46efbaebd3ec05e52827         |
      | interface    | internal                                 |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | b84b5a11e32a4d95a5ed2a5107defbe3         |
      | service_name | cinderv2                                 |
      | service_type | volumev2                                 |
      | url          | http://controller:8776/v2/%(project_id)s |
      +--------------+------------------------------------------+
      [root@controller ~]# openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 35994ff3bdb64c67bf4137526cc2b38c         |
      | interface    | admin                                    |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | b84b5a11e32a4d95a5ed2a5107defbe3         |
      | service_name | cinderv2                                 |
      | service_type | volumev2                                 |
      | url          | http://controller:8776/v2/%(project_id)s |
      +--------------+------------------------------------------+
      
      
      
      [root@controller ~]# openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | b3e31a89b29d444ba82e88a3a9a45167         |
      | interface    | public                                   |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | b798707278a74512bc9df7be2e9dee17         |
      | service_name | cinderv3                                 |
      | service_type | volumev3                                 |
      | url          | http://controller:8776/v3/%(project_id)s |
      +--------------+------------------------------------------+
      [root@controller ~]# openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 5d400b096e2a42f49e558e9c07ffaaea         |
      | interface    | internal                                 |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | b798707278a74512bc9df7be2e9dee17         |
      | service_name | cinderv3                                 |
      | service_type | volumev3                                 |
      | url          | http://controller:8776/v3/%(project_id)s |
      +--------------+------------------------------------------+
      [root@controller ~]# openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | bf214a429d104a6aa829544a1dfaeb39         |
      | interface    | admin                                    |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | b798707278a74512bc9df7be2e9dee17         |
      | service_name | cinderv3                                 |
      | service_type | volumev3                                 |
      | url          | http://controller:8776/v3/%(project_id)s |
      +--------------+------------------------------------------+
      

      Verify: `openstack endpoint list` should now show the six new volumev2/volumev3 endpoints.
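      The six `openstack endpoint create` calls differ only in the API version and the interface, so they can be generated with a nested loop. A sketch that only prints the commands (pipe the output to `sh` to actually run them):

      ```shell
      # Print the six cinder endpoint-create commands (pipe to sh to execute).
      for svc in volumev2 volumev3; do
        ver=${svc#volume}                      # strips "volume" -> "v2" or "v3"
        for iface in public internal admin; do
          echo "openstack endpoint create --region RegionOne $svc $iface http://controller:8776/$ver/%\\(project_id\\)s"
        done
      done
      ```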

      9.1.3 Package installation and configuration

      1. Install the openstack-cinder package on the controller node

      [root@controller ~]# yum -y install openstack-cinder
      

      2. Back up the configuration file

      [root@controller ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
      

      3. Edit cinder.conf

      [root@controller ~]# vi /etc/cinder/cinder.conf
      # Change the following settings (the leading numbers are line numbers in the stock file)
      283 my_ip = 172.173.10.110
      
      288 glance_api_servers = http://controller:9292  # not in the official guide; add it so cinder can reach glance
      
      400 auth_strategy = keystone
      
      1212 transport_url = rabbit://openstack:guojie.com@controller
      
      1219 rpc_backend = rabbit
      
      3782 connection = mysql+pymysql://cinder:guojie.com@controller/cinder
      
      4009 [keystone_authtoken]  # section exists by default; only the password below needs changing
      4010 auth_uri = http://controller:5000
      4011 auth_url = http://controller:35357
      4012 memcached_servers = controller:11211
      4013 auth_type = password
      4014 project_domain_name = default
      4015 user_domain_name = default
      4016 project_name = service
      4017 username = cinder
      4018 password = guojie.com  # the password created for the cinder user in 9.1.2
      
      
      4297 lock_path = /var/lib/cinder/tmp
      

      Verify:

      [root@controller ~]# egrep -v '^#|^$' /etc/cinder/cinder.conf
      [DEFAULT]
      my_ip = 172.173.10.110
      glance_api_servers = http://controller:9292
      auth_strategy = keystone
      transport_url = rabbit://openstack:guojie.com@controller
      rpc_backend = rabbit
      [backend]
      [backend_defaults]
      [barbican]
      [brcd_fabric_example]
      [cisco_fabric_example]
      [coordination]
      [cors]
      [database]
      connection = mysql+pymysql://cinder:guojie.com@controller/cinder
      [fc-zone-manager]
      [healthcheck]
      [key_manager]
      [keystone_authtoken]
      auth_uri = http://controller:5000
      auth_url = http://controller:35357
      memcached_servers = controller:11211
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      project_name = service
      username = cinder
      password = guojie.com
      [matchmaker_redis]
      [nova]
      [oslo_concurrency]
      lock_path = /var/lib/cinder/tmp
      [oslo_messaging_amqp]
      [oslo_messaging_kafka]
      [oslo_messaging_notifications]
      [oslo_messaging_rabbit]
      [oslo_messaging_zmq]
      [oslo_middleware]
      [oslo_policy]
      [oslo_reports]
      [oslo_versionedobjects]
      [profiler]
      [ssl]
      

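      As a quick sanity check that all of the edits actually landed, the key options can be grepped for in one pass. A minimal sketch (the helper name `check_cinder_conf` and its option list are mine, not part of any OpenStack tooling):

      ```shell
      check_cinder_conf() {
        # Report whether each required option is present in the given cinder.conf.
        local conf=${1:-/etc/cinder/cinder.conf}
        local opt
        for opt in my_ip glance_api_servers auth_strategy transport_url connection lock_path; do
          grep -q "^${opt}[[:space:]]*=" "$conf" || { echo "MISSING: $opt"; return 1; }
        done
        echo "cinder.conf looks complete"
      }
      ```

      Run `check_cinder_conf` (or pass an alternate path) after editing; it stops at the first missing option.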
      4. Edit nova.conf

      [root@controller ~]# vi /etc/nova/nova.conf
      # find the [cinder] section and add os_region_name = RegionOne under it
      [cinder]
      os_region_name = RegionOne
      
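      Hand-editing is fine for one node, but the same change can also be scripted. A pure-shell sketch (the `ini_set` helper is illustrative only; in real deployments tools such as crudini or `openstack-config` from the openstack-utils package do this more robustly):

      ```shell
      ini_set() {
        # Print FILE with "key = value" inserted directly after the [section] header.
        # Usage: ini_set /etc/nova/nova.conf cinder os_region_name RegionOne
        awk -v sec="[$2]" -v kv="$3 = $4" '
          { print }
          $0 == sec { print kv }   # emit the new option right below the section header
        ' "$1"
      }
      ```

      `ini_set /etc/nova/nova.conf cinder os_region_name RegionOne > /tmp/nova.conf.new` writes a modified copy; review it before moving it into place.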

      5. Restart the openstack-nova-api service

      [root@controller ~]# systemctl restart openstack-nova-api.service
      

      6. Populate the database

      [root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
      Option "logdir" from group "DEFAULT" is deprecated. Use option "log-dir" from group "DEFAULT".
      [root@controller ~]# mysql -h controller -u cinder -pguojie.com -e 'use cinder;show tables' |wc -l
      36
      

      9.1.4 Start the services

      Start the services on the controller node:

      [root@controller ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service --now
      

      Verify:

      [root@controller ~]# netstat -ntlup |grep :8776
      tcp        0      0 0.0.0.0:8776            0.0.0.0:*               LISTEN      21454/python2
      [root@controller ~]# openstack volume service list
      +------------------+------------+------+---------+-------+----------------------------+
      | Binary           | Host       | Zone | Status  | State | Updated At                 |
      +------------------+------------+------+---------+-------+----------------------------+
      | cinder-scheduler | controller | nova | enabled | up    | 2025-05-30T01:56:06.000000 |
      +------------------+------------+------+---------+-------+----------------------------+
      

      9.2 Deploy cinder on the storage node

      For this demo an extra disk is attached to the storage node:

      [root@cindre ~]# lsblk 
      NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
      fd0      2:0    1    4K  0 disk 
      sda      8:0    0   50G  0 disk 
      ├─sda1   8:1    0    1G  0 part /boot
      └─sda2   8:2    0   49G  0 part /
      sdb      8:16   0   50G  0 disk 
      sr0     11:0    1 1024M  0 rom
      
      # the new disk shows up as sdb
      

      Reference: OpenStack Docs: Install and configure a storage node

      9.2.1 Installation and configuration

      1. Install the LVM packages on the storage node

      [root@cindre ~]# yum -y install lvm2 device-mapper-persistent-data
      

      2. Enable and start the LVM metadata service

      [root@cindre ~]# systemctl enable lvm2-lvmetad.service --now
      

      3. Create the LVM physical volume and volume group

      [root@cindre ~]# pvcreate /dev/sdb
        Physical volume "/dev/sdb" successfully created.
      [root@cindre ~]# vgcreate cinder_lvm /dev/sdb
        Volume group "cinder_lvm" successfully created
      

      Verify (if the OS was installed with LVM partitioning you will see additional PVs/VGs here; this system used standard partitions, so only sdb appears):

      [root@cindre ~]# pvs
        PV         VG         Fmt  Attr PSize   PFree  
        /dev/sdb   cinder_lvm lvm2 a--  <50.00g <50.00g
      [root@cindre ~]# vgs
        VG         #PV #LV #SN Attr   VSize   VFree  
        cinder_lvm   1   0   0 wz--n- <50.00g <50.00g
      

      4. Configure the LVM device filter

      [root@cindre ~]# vi /etc/lvm/lvm.conf
      # Insert the following filter at line 142: accept sdb and reject every other device, so the OS disks are left untouched.
      142         filter = [ "a/sdb/", "r/.*/"]
      

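      The filter value is an ordered list of accept (`a`) and reject (`r`) regexes; the first pattern that matches a device wins. A toy helper (mine, purely illustrative) that builds such a line for one or more accepted devices:

      ```shell
      lvm_filter() {
        # Build an lvm.conf filter accepting the given devices and rejecting all others.
        # Example: lvm_filter sdb  ->  filter = [ "a/sdb/", "r/.*/" ]
        local out='filter = [' dev
        for dev in "$@"; do
          out="$out \"a/$dev/\","
        done
        echo "$out \"r/.*/\" ]"
      }
      ```

      Note that if the OS disk itself sits on LVM, it must also be accepted, e.g. `lvm_filter sda sdb`.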

      5. Install the cinder packages

      [root@cindre ~]# yum install openstack-cinder targetcli python-keystone -y
      

      6. Edit cinder.conf

      [root@cindre ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
      
      283 my_ip = 172.173.10.112  # use the storage node's management IP here
      
      288 glance_api_servers = http://controller:9292
      
      400 auth_strategy = keystone
      
      404 enabled_backends = lvm
      
      1212 transport_url = rabbit://openstack:guojie.com@controller
      
      1219 rpc_backend = rabbit
      
      3782 connection = mysql+pymysql://cinder:guojie.com@controller/cinder
      
      4009 [keystone_authtoken] # section exists by default; only the password below needs changing
      4010 auth_uri = http://controller:5000
      4011 auth_url = http://controller:35357
      4012 memcached_servers = controller:11211
      4013 auth_type = password
      4014 project_domain_name = default
      4015 user_domain_name = default
      4016 project_name = service
      4017 username = cinder
      4018 password = guojie.com  # the cinder user's password from 9.1.2
      
      4297 lock_path = /var/lib/cinder/tmp
      
      # Append the following section at the end of the file
      5174 [lvm]
      5175 volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
      5176 volume_group = cinder_lvm  # must match the volume group created earlier
      5177 iscsi_protocol = iscsi
      5178 iscsi_helper = lioadm
      

      Verify the configuration:

      [root@cindre ~]# egrep -v '^$|^#' /etc/cinder/cinder.conf
      [DEFAULT]
      my_ip = 172.173.10.112
      glance_api_servers = http://controller:9292
      auth_strategy = keystone
      enabled_backends = lvm
      transport_url = rabbit://openstack:guojie.com@controller
      rpc_backend = rabbit
      [backend]
      [backend_defaults]
      [barbican]
      [brcd_fabric_example]
      [cisco_fabric_example]
      [coordination]
      [cors]
      [database]
      connection = mysql+pymysql://cinder:guojie.com@controller/cinder
      [fc-zone-manager]
      [healthcheck]
      [key_manager]
      [keystone_authtoken]
      auth_uri = http://controller:5000
      auth_url = http://controller:35357
      memcached_servers = controller:11211
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      project_name = service
      username = cinder
      password = guojie.com
      [matchmaker_redis]
      [nova]
      [oslo_concurrency]
      lock_path = /var/lib/cinder/tmp
      [oslo_messaging_amqp]
      [oslo_messaging_kafka]
      [oslo_messaging_notifications]
      [oslo_messaging_rabbit]
      [oslo_messaging_zmq]
      [oslo_middleware]
      [oslo_policy]
      [oslo_reports]
      [oslo_versionedobjects]
      [profiler]
      [ssl]
      [lvm]
      volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
      volume_group = cinder_lvm
      iscsi_protocol = iscsi
      iscsi_helper = lioadm
      
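      To confirm the appended `[lvm]` stanza survived the edit, the options inside just that section can be counted. A small sketch (helper name and logic are mine):

      ```shell
      check_lvm_backend() {
        # Count the expected options inside the [lvm] section of a cinder.conf.
        local conf=${1:-/etc/cinder/cinder.conf}
        awk '/^\[lvm\]/ {s=1; next} /^\[/ {s=0} s' "$conf" |
          grep -Ec '^(volume_driver|volume_group|iscsi_protocol|iscsi_helper)[[:space:]]*='
      }
      ```

      It should print `4`; anything less means an option is missing or landed outside the section.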

      9.2.2 Start the services

      1. Start the services on the cinder storage node

      [root@cindre ~]# systemctl enable openstack-cinder-volume.service target.service --now
      

      2. Verify from the controller node

      [root@controller ~]# openstack volume service list
      


      Once this completes, a Volumes panel appears in the dashboard; if it does not show up, log out and log back in.


      10. Basic usage of the cloud platform

      Reference: "Launch an instance" in the Installation Guide

      10.1 Create the network

      [root@controller ~]# openstack network list
      [root@controller ~]# openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider
      +---------------------------+--------------------------------------+
      | Field                     | Value                                |
      +---------------------------+--------------------------------------+
      | admin_state_up            | UP                                   |
      | availability_zone_hints   |                                      |
      | availability_zones        |                                      |
      | created_at                | 2025-05-30T02:48:55Z                 |
      | description               |                                      |
      | dns_domain                | None                                 |
      | id                        | cbd39bfd-28f2-455c-9b85-20cf78263797 |
      | ipv4_address_scope        | None                                 |
      | ipv6_address_scope        | None                                 |
      | is_default                | False                                |
      | is_vlan_transparent       | None                                 |
      | mtu                       | 1500                                 |
      | name                      | provider                             |
      | port_security_enabled     | True                                 |
      | project_id                | fbe4fead10f94b8187e7661246c0f5e6     |
      | provider:network_type     | flat                                 |
      | provider:physical_network | provider                             |
      | provider:segmentation_id  | None                                 |
      | qos_policy_id             | None                                 |
      | revision_number           | 3                                    |
      | router:external           | External                             |
      | segments                  | None                                 |
      | shared                    | True                                 |
      | status                    | ACTIVE                               |
      | subnets                   |                                      |
      | tags                      |                                      |
      | updated_at                | 2025-05-30T02:48:55Z                 |
      +---------------------------+--------------------------------------+
      [root@controller ~]# openstack network list
      +--------------------------------------+----------+---------+
      | ID                                   | Name     | Subnets |
      +--------------------------------------+----------+---------+
      | cbd39bfd-28f2-455c-9b85-20cf78263797 | provider |         |
      +--------------------------------------+----------+---------+
      

      10.2 Create the subnet

      The subnet range corresponds to the network on the eth1 interface:

      [root@controller ~]# openstack subnet create --network provider --allocation-pool start=10.1.1.100,end=10.1.1.250 --dns-nameserver 223.5.5.5 --gateway 10.1.1.254 --subnet-range 10.1.1.0/24 provider
      +-------------------------+--------------------------------------+
      | Field                   | Value                                |
      +-------------------------+--------------------------------------+
      | allocation_pools        | 10.1.1.100-10.1.1.250                |
      | cidr                    | 10.1.1.0/24                          |
      | created_at              | 2025-05-30T02:53:55Z                 |
      | description             |                                      |
      | dns_nameservers         | 223.5.5.5                            |
      | enable_dhcp             | True                                 |
      | gateway_ip              | 10.1.1.254                           |
      | host_routes             |                                      |
      | id                      | e705a12f-aeb2-4414-aafe-1a676a8c87f0 |
      | ip_version              | 4                                    |
      | ipv6_address_mode       | None                                 |
      | ipv6_ra_mode            | None                                 |
      | name                    | provider                             |
      | network_id              | cbd39bfd-28f2-455c-9b85-20cf78263797 |
      | project_id              | fbe4fead10f94b8187e7661246c0f5e6     |
      | revision_number         | 0                                    |
      | segment_id              | None                                 |
      | service_types           |                                      |
      | subnetpool_id           | None                                 |
      | tags                    |                                      |
      | updated_at              | 2025-05-30T02:53:55Z                 |
      | use_default_subnet_pool | None                                 |
      +-------------------------+--------------------------------------+
      

      Verify:

      [root@controller ~]# openstack network list
      +--------------------------------------+----------+--------------------------------------+
      | ID                                   | Name     | Subnets                              |
      +--------------------------------------+----------+--------------------------------------+
      | cbd39bfd-28f2-455c-9b85-20cf78263797 | provider | e705a12f-aeb2-4414-aafe-1a676a8c87f0 |
      +--------------------------------------+----------+--------------------------------------+
      [root@controller ~]# openstack subnet list
      +--------------------------------------+----------+--------------------------------------+-------------+
      | ID                                   | Name     | Network                              | Subnet      |
      +--------------------------------------+----------+--------------------------------------+-------------+
      | e705a12f-aeb2-4414-aafe-1a676a8c87f0 | provider | cbd39bfd-28f2-455c-9b85-20cf78263797 | 10.1.1.0/24 |
      +--------------------------------------+----------+--------------------------------------+-------------+
      


      10.3 Create a flavor

      [root@controller ~]# openstack flavor list
      
      [root@controller ~]# openstack flavor create --id 0 --vcpus 1 --ram 512 --disk 1 m1.nano
      +----------------------------+---------+
      | Field                      | Value   |
      +----------------------------+---------+
      | OS-FLV-DISABLED:disabled   | False   |
      | OS-FLV-EXT-DATA:ephemeral  | 0       |
      | disk                       | 1       |
      | id                         | 0       |
      | name                       | m1.nano |
      | os-flavor-access:is_public | True    |
      | properties                 |         |
      | ram                        | 512     |
      | rxtx_factor                | 1.0     |
      | swap                       |         |
      | vcpus                      | 1       |
      +----------------------------+---------+
      [root@controller ~]# openstack flavor list
      +----+---------+-----+------+-----------+-------+-----------+
      | ID | Name    | RAM | Disk | Ephemeral | VCPUs | Is Public |
      +----+---------+-----+------+-----------+-------+-----------+
      | 0  | m1.nano | 512 |    1 |         0 |     1 | True      |
      +----+---------+-----+------+-----------+-------+-----------+
      

      10.4 Launch an instance

      In day-to-day operation instances should not be managed as the admin user; we use it here only for a quick test.

      1. Check the image, flavor, and network information

      [root@controller ~]# openstack image list
      +--------------------------------------+-----------------+--------+
      | ID                                   | Name            | Status |
      +--------------------------------------+-----------------+--------+
      | 03a823ea-6883-4a4b-9629-1b4839f0644a | cirros          | active |
      +--------------------------------------+-----------------+--------+
      [root@controller ~]# openstack network list
      +--------------------------------------+----------+--------------------------------------+
      | ID                                   | Name     | Subnets                              |
      +--------------------------------------+----------+--------------------------------------+
      | cbd39bfd-28f2-455c-9b85-20cf78263797 | provider | e705a12f-aeb2-4414-aafe-1a676a8c87f0 |
      +--------------------------------------+----------+--------------------------------------+
      [root@controller ~]# openstack flavor list
      +----+---------+-----+------+-----------+-------+-----------+
      | ID | Name    | RAM | Disk | Ephemeral | VCPUs | Is Public |
      +----+---------+-----+------+-----------+-------+-----------+
      | 0  | m1.nano | 512 |    1 |         0 |     1 | True      |
      +----+---------+-----+------+-----------+-------+-----------+
      

      2. Create the instance

      [root@controller ~]# openstack server create --flavor m1.nano  --image cirros --nic net-id=cbd39bfd-28f2-455c-9b85-20cf78263797 vm01
      +-------------------------------------+-----------------------------------------------+
      | Field                               | Value                                         |
      +-------------------------------------+-----------------------------------------------+
      | OS-DCF:diskConfig                   | MANUAL                                        |
      | OS-EXT-AZ:availability_zone         |                                               |
      | OS-EXT-SRV-ATTR:host                | None                                          |
      | OS-EXT-SRV-ATTR:hypervisor_hostname | None                                          |
      | OS-EXT-SRV-ATTR:instance_name       |                                               |
      | OS-EXT-STS:power_state              | NOSTATE                                       |
      | OS-EXT-STS:task_state               | scheduling                                    |
      | OS-EXT-STS:vm_state                 | building                                      |
      | OS-SRV-USG:launched_at              | None                                          |
      | OS-SRV-USG:terminated_at            | None                                          |
      | accessIPv4                          |                                               |
      | accessIPv6                          |                                               |
      | addresses                           |                                               |
      | adminPass                           | wJ5piBzSJPEh                                  |
      | config_drive                        |                                               |
      | created                             | 2025-05-30T03:05:07Z                          |
      | flavor                              | m1.nano (0)                                   |
      | hostId                              |                                               |
      | id                                  | ce549054-5159-4ec4-8f4b-8195e8879c71          |
      | image                               | cirros (03a823ea-6883-4a4b-9629-1b4839f0644a) |
      | key_name                            | None                                          |
      | name                                | vm01                                          |
      | progress                            | 0                                             |
      | project_id                          | fbe4fead10f94b8187e7661246c0f5e6              |
      | properties                          |                                               |
      | security_groups                     | name='default'                                |
      | status                              | BUILD                                         |
      | updated                             | 2025-05-30T03:05:07Z                          |
      | user_id                             | 4093e7a9f5454322ba9987581b564fe4              |
      | volumes_attached                    |                                               |
      +-------------------------------------+-----------------------------------------------+
      
      [root@controller ~]# openstack server list
      +--------------------------------------+------+--------+---------------------+--------+---------+
      | ID                                   | Name | Status |      Networks       | Image  | Flavor  |
      +--------------------------------------+------+--------+---------------------+--------+---------+
      | ce549054-5159-4ec4-8f4b-8195e8879c71 | vm01 | ACTIVE | provider=10.1.1.113 | cirros | m1.nano |
      +--------------------------------------+------+--------+---------------------+--------+---------+
      
      [root@controller ~]# openstack console url show vm01
      +-------+-------------------------------------------------------------------------------------+
      | Field | Value                                                                               |
      +-------+-------------------------------------------------------------------------------------+
      | type  | novnc                                                                               |
      | url   | http://172.173.10.110:6080/vnc_auto.html?token=43a88a22-f763-4216-ac00-ef52812d3348 |
      +-------+-------------------------------------------------------------------------------------+
      

      Open the URL in a browser to check that the VM is working.


      Cleanup:

      [root@controller ~]# openstack server stop vm01  # stop the instance
      [root@controller ~]# openstack server delete vm01  # delete the instance
      [root@controller ~]# openstack volume list  # list all volumes
      +--------------------------------------+------+-----------+------+-------------+
      | ID                                   | Name | Status    | Size | Attached to |
      +--------------------------------------+------+-----------+------+-------------+
      | 36da451a-969c-4856-b340-6176daf19d42 |      | available |    1 |             |
      +--------------------------------------+------+-----------+------+-------------+
      [root@controller ~]# openstack volume delete 36da451a-969c-4856-b340-6176daf19d42  # delete the volume
      
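      With several leftover volumes, deleting them one by one gets tedious. The IDs from `openstack volume list -f value -c ID` can be turned into delete commands with a tiny filter (a sketch; it only prints the commands, pipe them to `sh` to execute):

      ```shell
      print_volume_deletes() {
        # Read volume IDs from stdin, one per line, and print a delete command for each.
        local id
        while read -r id; do
          [ -n "$id" ] && echo "openstack volume delete $id"
        done
        return 0
      }
      ```

      Usage: `openstack volume list -f value -c ID | print_volume_deletes | sh` runs the deletions.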

      10.4.2 Create a CentOS 7 VM from a generic cloud image

      1. Download a generic image: CentOS Cloud Images

      Here we download the latest build:

      [root@controller ~]# wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-2211.qcow2
      

      2. Upload the image to Glance

      [root@controller ~]# source admin-openrc.sh
      
      # upload
      [root@controller ~]# glance image-create --name "CentOS-7-x86_64" --file CentOS-7-x86_64-GenericCloud-2211.qcow2 --disk-format qcow2 --min-ram 2048 --min-disk 20 --container-format bare --visibility public
      +------------------+--------------------------------------+
      | Property         | Value                                |
      +------------------+--------------------------------------+
      | checksum         | bc0d063116620ed1745fcd0c6e28afa9     |
      | container_format | bare                                 |
      | created_at       | 2025-06-11T07:41:16Z                 |
      | disk_format      | qcow2                                |
      | id               | c9fdecf2-c9b7-46e4-89b9-56251c884518 |
      | min_disk         | 20                                   |
      | min_ram          | 2048                                 |
      | name             | CentOS-7-x86_64                      |
      | owner            | acf4ba7bf0054f23840e9863120b2a2e     |
      | protected        | False                                |
      | size             | 902889472                            |
      | status           | active                               |
      | tags             | []                                   |
      | updated_at       | 2025-06-11T07:41:20Z                 |
      | virtual_size     | None                                 |
      | visibility       | public                               |
      +------------------+--------------------------------------+
      
# verify
      [root@controller ~]# openstack image list
      +--------------------------------------+-----------------+--------+
      | ID                                   | Name            | Status |
      +--------------------------------------+-----------------+--------+
      | 71081af8-b4fe-4d01-9ba3-b86efd3cbe74 | CentOS-7-x86_64 | active |
      +--------------------------------------+-----------------+--------+
      
# While we are at it, set these image properties; otherwise the VM may hang at boot with: Booting from Hard Disk…GRUB
[root@controller ~]# openstack image set --property hw_disk_bus=ide --property hw_vif_model=e1000 <image_uuid>
      
      

3. Create a flavor:

[root@controller ~]# openstack flavor create --id 0 --vcpus 1 --ram 2048 --disk 30 linux  # ID 0 may conflict with the flavor created for the earlier test image; pick a different ID if so
      +----------------------------+-------+
      | Field                      | Value |
      +----------------------------+-------+
      | OS-FLV-DISABLED:disabled   | False |
      | OS-FLV-EXT-DATA:ephemeral  | 0     |
      | disk                       | 30    |
      | id                         | 0     |
      | name                       | linux |
      | os-flavor-access:is_public | True  |
      | properties                 |       |
      | ram                        | 2048  |
      | rxtx_factor                | 1.0   |
      | swap                       |       |
      | vcpus                      | 1     |
      +----------------------------+-------+
      

4. The downloaded cloud image disables password login by default, so some extra setup is needed so that the password is initialized automatically when the VM boots; only then can we log in.

By default, the OpenStack Dashboard's feature for setting an instance password is disabled; you need to enable it.

On the controller node:

      [root@controller ~]# vi /usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py
      

(screenshot: in local_settings.py, set "can_set_password": True under OPENSTACK_HYPERVISOR_FEATURES)

# restart Apache after the change
      [root@controller ~]# systemctl restart httpd.service
      

Edit nova.conf on every compute node and add inject_password=True under the [libvirt] section:

      [root@compute ~]# vi /etc/nova/nova.conf
      

(screenshot: nova.conf with inject_password=True added under [libvirt])

Restart the nova services on each compute node:

      [root@compute ~]# openstack-service restart nova
      

After that, password injection works.
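Rather than editing nova.conf by hand on every compute node, the change can be scripted. A rough sketch (the `set_inject_password` helper is illustrative and assumes the file already has a `[libvirt]` section; `crudini` or `openstack-config` would be more robust):

```shell
# Append inject_password = True right after the [libvirt] section header,
# skipping the edit if an inject_password line already exists.
set_inject_password() {
    conf="$1"
    grep -q '^inject_password' "$conf" || \
        sed -i '/^\[libvirt\]/a inject_password = True' "$conf"
}

# Demo on a scratch copy rather than the live /etc/nova/nova.conf:
scratch=$(mktemp)
printf '[DEFAULT]\ndebug = False\n[libvirt]\nvirt_type = kvm\n' > "$scratch"
set_inject_password "$scratch"
grep -A1 '^\[libvirt\]' "$scratch"
rm -f "$scratch"
```

Pointed at /etc/nova/nova.conf on each compute node (e.g. over ssh in a loop), this keeps the nodes consistent.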

5. Create the VM; here we use the web UI as an example.

(screenshots: Launch Instance wizard steps in the dashboard)

Customization script:

#!/bin/bash
# set the root password at first boot so we can log in
passwd root<<EOF
Admin@123
Admin@123
EOF
      

Also tick the Configuration Drive option; once that is done, create the instance.
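Because the GenericCloud image ships with cloud-init, an alternative to the shell script is to paste a `#cloud-config` snippet as the customization data instead (a sketch; the password value is the same illustrative Admin@123):

```yaml
#cloud-config
# set the root password and allow password login over SSH
ssh_pwauth: true
disable_root: false
chpasswd:
  list: |
    root:Admin@123
  expire: false
```

With `ssh_pwauth: true`, cloud-init rewrites the SSH daemon configuration itself, which avoids the manual sshd_config edit shown in section 11.3.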

Wait for the VM to boot, then test that it works:

(screenshot: logging in to the new VM)

11. Security Groups

11.1 Create a security group

(screenshots: creating a security group and its rules in the dashboard)
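The same security group can also be created from the CLI; a sketch, where the group name `web-sg` and the rules (SSH and ICMP from anywhere) are illustrative:

```
[root@controller ~]# openstack security group create web-sg --description "allow ssh and icmp"
[root@controller ~]# openstack security group rule create --protocol tcp --dst-port 22 --remote-ip 0.0.0.0/0 web-sg
[root@controller ~]# openstack security group rule create --protocol icmp web-sg
```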

11.2 Apply the security group

(screenshots: applying the security group to the instance)
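The group can likewise be attached from the CLI (the instance name `vm01` and group name `web-sg` here are illustrative):

```
[root@controller ~]# openstack server add security group vm01 web-sg
[root@controller ~]# openstack server show vm01 -c security_groups
```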

(screenshot: SSH connection attempt rejected)

An SSH attempt from outside shows that remote login is refused.

11.3 Configure the VM to allow remote login

Log in through the VNC console in the browser and edit the SSH daemon configuration (/etc/ssh/sshd_config), changing the following two settings:

PermitRootLogin yes    # uncomment and set to yes
...
PasswordAuthentication yes    # uncomment and set to yes
      
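The two sshd_config edits can also be scripted instead of done by hand. A sketch (the `enable_ssh_password` helper is illustrative; the demo runs on a scratch file rather than the live /etc/ssh/sshd_config):

```shell
# Enable password and root SSH login by rewriting sshd_config in place,
# handling both commented and uncommented forms of each setting.
enable_ssh_password() {
    conf="$1"
    sed -i -e 's/^#\?PasswordAuthentication .*/PasswordAuthentication yes/' \
           -e 's/^#\?PermitRootLogin .*/PermitRootLogin yes/' "$conf"
}

# Demo on a scratch copy; point it at /etc/ssh/sshd_config inside the VM.
demo=$(mktemp)
printf '#PermitRootLogin prohibit-password\nPasswordAuthentication no\n' > "$demo"
enable_ssh_password "$demo"
cat "$demo"
rm -f "$demo"
```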

Restart the SSH service:

      [root@vm01 ~]# systemctl restart sshd
      

Test:

(screenshot: successful remote SSH login)

OK, no problems.

posted @ 2025-05-30 11:31  國杰響當當