A Summary of Problems Encountered in OpenStack Operations
1. Cold migration and resize
# 1. Set up passwordless SSH trust for the nova user between compute nodes
usermod -s /bin/bash nova
echo "NOVA_PASS"|passwd --stdin nova
su - nova
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id nova@compute01
ssh-copy-id nova@compute02
# 2. Allow resize and migration on the same host
openstack-config --set /etc/nova/nova.conf DEFAULT allow_resize_to_same_host true
openstack-config --set /etc/nova/nova.conf DEFAULT allow_migrate_to_same_host true
openstack-config --set /etc/nova/nova.conf DEFAULT resize_confirm_window 1
openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_default_filters \
RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
# 3. Restart the compute service on each compute node
systemctl restart openstack-nova-compute.service
2. Automatically restore VM state after a compute host reboots
openstack-config --set /etc/nova/nova.conf DEFAULT resume_guests_state_on_host_boot true
systemctl restart openstack-nova-compute.service
3.Build of instance aborted: Volume did not finish being created even after we waited 191 seconds or 61 attempts. And its status is downloading.
# Fix: nova.conf has a parameter controlling volume-creation retries, block_device_allocate_retries, which can be raised to extend the wait. Its default is 60, which matches the "61 attempts" in the failure message. Setting it higher, e.g. 180, keeps Nova from timing out while the volume is still being created. Restart the Nova services after changing it for the setting to take effect.
openstack-config --set /etc/nova/nova.conf DEFAULT block_device_allocate_retries 180
systemctl restart openstack-nova-compute.service
4. Nova scheduler: Host has more disk space than database expected.
# Fix: not enough capacity; configure overcommit ratios by adjusting cpu_allocation_ratio, ram_allocation_ratio and disk_allocation_ratio.
openstack-config --set /etc/nova/nova.conf DEFAULT cpu_allocation_ratio 4
openstack-config --set /etc/nova/nova.conf DEFAULT ram_allocation_ratio 1.5
openstack-config --set /etc/nova/nova.conf DEFAULT disk_allocation_ratio 2
openstack-config --set /etc/nova/nova.conf DEFAULT reserved_host_memory_mb 2048
openstack-config --set /etc/nova/nova.conf DEFAULT reserved_host_disk_mb 20480
systemctl restart openstack-nova-compute.service
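As a rough capacity check (a sketch only; Nova's internal accounting of reserved memory differs in detail), schedulable RAM after overcommit can be estimated as total × ratio minus the reserved amount. The host size below is an assumed example:

```shell
# Back-of-envelope estimate of schedulable RAM on one compute node.
total_mb=65536            # assumed: a 64 GiB host
ratio_tenths=15           # ram_allocation_ratio 1.5, kept integer as tenths
reserved_mb=2048          # reserved_host_memory_mb from above
schedulable_mb=$(( total_mb * ratio_tenths / 10 - reserved_mb ))
echo "schedulable RAM: ${schedulable_mb} MB"
```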
# Hyper-threading checks, for reference:
# 1. Number of logical CPUs:
grep -c processor /proc/cpuinfo
# 2. Number of physical CPUs (sockets):
grep 'physical id' /proc/cpuinfo |sort -u|wc -l
# 3. 'siblings' is the number of logical CPUs per physical CPU:
grep 'siblings' /proc/cpuinfo
# 4. 'cpu cores' is the number of cores per physical CPU:
grep 'cpu cores' /proc/cpuinfo
# If siblings equals cpu cores, hyper-threading is unsupported or disabled.
# If siblings is twice cpu cores, hyper-threading is supported and enabled.
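The siblings/cores comparison above can be scripted; this sketch runs the same logic against a canned /proc/cpuinfo fragment so it can be tried anywhere:

```shell
# Decide hyper-threading status from siblings vs cpu cores.
# The variable fakes one CPU's /proc/cpuinfo entries for illustration.
cpuinfo='physical id : 0
siblings    : 8
cpu cores   : 4'
siblings=$(printf '%s\n' "$cpuinfo" | awk -F: '/siblings/ {gsub(/ /,"",$2); print $2; exit}')
cores=$(printf '%s\n' "$cpuinfo" | awk -F: '/cpu cores/ {gsub(/ /,"",$2); print $2; exit}')
if [ "$siblings" -eq "$cores" ]; then
  ht_status="off or unsupported"
elif [ "$siblings" -eq $((cores * 2)) ]; then
  ht_status="on"
else
  ht_status="unknown"
fi
echo "hyper-threading: $ht_status"
```

On a real host, replace the canned variable with `cat /proc/cpuinfo`.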
5.Failed to allocate the network(s), not rescheduling.
# Fix: caused by a timeout. Note that vif_plugging_is_fatal=false masks Neutron port-plugging failures rather than fixing them, so use it with care.
openstack-config --set /etc/nova/nova.conf DEFAULT vif_plugging_is_fatal false
openstack-config --set /etc/nova/nova.conf DEFAULT vif_plugging_timeout 0
systemctl restart openstack-nova-compute.service
6.AMQPLAIN login refused: user 'openstack' - invalid credentials.
# Fix: usually a wrong RabbitMQ username or password. Check the cell_mappings table in the nova_api database for the transport URL that holds the RabbitMQ credentials.
mysql -uroot -p
MariaDB [(none)]> use nova_api;
MariaDB [nova_api]> select transport_url from cell_mappings where name="cell1";
MariaDB [nova_api]> \q
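The user and password can be cut out of the transport_url with plain shell parameter expansion; the URL below is a made-up example in the usual rabbit:// format:

```shell
# Example transport_url in the format stored in nova_api.cell_mappings.
transport_url="rabbit://openstack:RABBIT_PASS@controller01:5672/"
creds=${transport_url#rabbit://}   # strip scheme -> openstack:RABBIT_PASS@controller01:5672/
creds=${creds%%@*}                 # keep user:pass -> openstack:RABBIT_PASS
rabbit_user=${creds%%:*}
rabbit_pass=${creds#*:}
echo "user=$rabbit_user pass=$rabbit_pass"
```

Compare the result against the credentials configured in RabbitMQ (`rabbitmqctl list_users`).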
7. Allowing a VM's bridged-NIC MAC/IP through port security (same principle as allowing a Keepalived VIP)
neutron port-list
neutron port-show b9d47bd7-04e7-4bba-8c6a-7bcae212407f
neutron port-update b9d47bd7-04e7-4bba-8c6a-7bcae212407f --allowed-address-pairs ip_address=172.18.1.0/24,mac_address=fa:16:3e:aa:15:a0
neutron port-update --no-allowed-address-pairs b9d47bd7-04e7-4bba-8c6a-7bcae212407f
8.UnicodeEncodeError: 'ascii' codec can't encode characters in position 257-260: ordinal not in range(128)
# Fix: live migration fails and the compute node's nova-compute log shows: UnicodeEncodeError: 'ascii' codec can't encode characters in position 257-260: ordinal not in range(128). The cause is that Python 2.7's default encoding is not utf-8.
cat >/usr/lib/python2.7/site-packages/sitecustomize.py<<EOF
import sys
reload(sys)
sys.setdefaultencoding('utf8')
EOF
systemctl restart openstack-nova-compute.service
9. Repairing incorrect VM states
openstack server list --all-projects --host compute01 |awk '{print $2}' |grep -Ev '^$|ID' |xargs -n1 nova reset-state --active
openstack server list --all-projects --host compute01 |awk '{print $2}' |grep -Ev '^$|ID' |xargs -n1 openstack server stop
openstack server list --all-projects --host compute01 |awk '{print $2}' |grep -Ev '^$|ID' |xargs -n1 openstack server start
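The `awk | grep` ID extraction used three times above can be factored into a small function; the sketch below exercises only the text processing, against a canned table, so no live cloud is needed:

```shell
# Pull instance IDs out of `openstack server list` table output.
extract_ids() {
  awk '{print $2}' | grep -Ev '^$|ID'
}
# Canned example table in the CLI's default output format (made-up ID).
sample='+----------+------+
|    ID    | Name |
+----------+------+
| 6f3f86fe | vm1  |
+----------+------+'
ids=$(printf '%s\n' "$sample" | extract_ids)
echo "$ids"
# Against a real cloud (host name is an example):
#   openstack server list --all-projects --host compute01 | extract_ids
```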
10. Ceph exclusive-lock keeping VMs from starting
for i in $(rbd ls -p volumes); do rbd feature disable volumes/$i exclusive-lock object-map fast-diff deep-flatten; done
for i in $(rbd ls -p vms); do rbd feature disable vms/$i exclusive-lock object-map fast-diff deep-flatten; done
for i in $(rbd ls -p volumesSSD); do rbd feature disable volumesSSD/$i exclusive-lock object-map fast-diff deep-flatten; done
openstack server list --all-projects --host compute01 |awk '{print $2}' |grep -Ev '^$|ID' |xargs -n1 openstack server reboot --hard
# References
https://blog.csdn.net/xiaoquqi/article/details/119338817
https://blog.csdn.net/weixin_40579389/article/details/120875351
https://blog.51cto.com/u_13788458/2756828
https://www.136.la/nginx/show-162487.html
https://www.likecs.com/show-278361.html
11. Resetting RabbitMQ / RabbitMQ fails to start
# On controller01, controller02 and controller03: wipe the data and restart the service
systemctl stop rabbitmq-server.service
rm -rf /var/lib/rabbitmq/mnesia/*
systemctl restart rabbitmq-server.service
# On controller02 and controller03: join the cluster
rabbitmqctl stop_app
rabbitmqctl join_cluster --ram rabbit@controller01
rabbitmqctl start_app
# On controller01: enable the web UI plugin, create the user, grant permissions, check cluster status
rabbitmq-plugins enable rabbitmq_management
rabbitmqctl add_user openstack RABBIT_PASS
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
rabbitmqctl cluster_status
rabbitmqctl list_queues
12. Deleting instances stuck in BUILD state from the database
# Cause: dirty database records left by message-queue failures during the mapping or scheduling phase
DELETE FROM nova.instance_extra WHERE instance_extra.instance_uuid = '$UUID';
DELETE FROM nova.instance_faults WHERE instance_faults.instance_uuid = '$UUID';
DELETE FROM nova.instance_id_mappings WHERE instance_id_mappings.uuid = '$UUID';
DELETE FROM nova.instance_info_caches WHERE instance_info_caches.instance_uuid = '$UUID';
DELETE FROM nova.instance_system_metadata WHERE instance_system_metadata.instance_uuid = '$UUID';
DELETE FROM nova.security_group_instance_association WHERE security_group_instance_association.instance_uuid = '$UUID';
DELETE FROM nova.block_device_mapping WHERE block_device_mapping.instance_uuid = '$UUID';
DELETE FROM nova.fixed_ips WHERE fixed_ips.instance_uuid = '$UUID';
DELETE FROM nova.instance_actions_events WHERE instance_actions_events.action_id in (SELECT id from nova.instance_actions where instance_actions.instance_uuid = '$UUID');
DELETE FROM nova.instance_actions WHERE instance_actions.instance_uuid = '$UUID';
DELETE FROM nova.virtual_interfaces WHERE virtual_interfaces.instance_uuid = '$UUID';
DELETE FROM nova.instances WHERE instances.uuid = '$UUID';
# DELETE FROM nova_api.build_requests WHERE request_spec_id = '$UUID';
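Rather than pasting the UUID into each statement by hand, the statements can be rendered from a shell variable and reviewed before being piped into mysql. The UUID and the two statements shown are illustrative; the full list is above:

```shell
# Render cleanup SQL for one stuck instance. UUID is a made-up example.
UUID="6f3f86fe-0000-4000-8000-000000000000"
sql=$(cat <<EOF
DELETE FROM nova.instance_extra WHERE instance_uuid = '$UUID';
DELETE FROM nova.instances WHERE uuid = '$UUID';
EOF
)
printf '%s\n' "$sql"
# Review the output, then apply with: printf '%s\n' "$sql" | mysql -uroot -p
```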
13. A volume cannot be detached or deleted
# Find the stuck volume and reset its state to available
openstack volume list --all-projects |grep dffd4456-29b5-41e4-b3b7-7b00a1b3a313
cinder reset-state dffd4456-29b5-41e4-b3b7-7b00a1b3a313 --state available
# Step A: get the API endpoint
cinder --debug show dffd4456-29b5-41e4-b3b7-7b00a1b3a313
# The endpoint is: http://172.28.8.20:8776/v3/5145855bb46c4f129073172fb982660e/volumes/dffd4456-29b5-41e4-b3b7-7b00a1b3a313
# Step B: get a token
openstack token issue
# The token is: gAAAAABial5wnniQjH-iM8Y10H1li5r0GzyzEJXo4iSuDHYc4S82cuunjyKmFCZJZw3uLzEvtFGGMZ77QMkAZMKNWyq1NVFY3Lr9QgZXrh6PetBWAMCN4YMt7fLDt-IUXKx-1dWFvIZLwVvpC8Ky4S9vuMTMRT7NTM3WwkJtDE5bPLgaRixuZXc
# Step C: query the attachment id for the volume in the database
mysql -uroot -p
>use cinder
>select * from volume_attachment where volume_id='dffd4456-29b5-41e4-b3b7-7b00a1b3a313';
# The attachment id is:
8edfc42e-eb4e-4405-b0c4-f35cf2c00bfe
# Combine the values from steps A, B and C into a request that detaches the volume
curl -g -i \
-X POST http://172.28.8.20:8776/v3/5145855bb46c4f129073172fb982660e/volumes/dffd4456-29b5-41e4-b3b7-7b00a1b3a313/action \
-H "User-Agent: python-cinderclient" \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "X-Auth-Token: gAAAAABial5wnniQjH-iM8Y10H1li5r0GzyzEJXo4iSuDHYc4S82cuunjyKmFCZJZw3uLzEvtFGGMZ77QMkAZMKNWyq1NVFY3Lr9QgZXrh6PetBWAMCN4YMt7fLDt-IUXKx-1dWFvIZLwVvpC8Ky4S9vuMTMRT7NTM3WwkJtDE5bPLgaRixuZXc" \
-d '{"os-detach": {"attachment_id": "8edfc42e-eb4e-4405-b0c4-f35cf2c00bfe"}}'
# After the detach succeeds, delete the volume
openstack volume delete dffd4456-29b5-41e4-b3b7-7b00a1b3a313
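The three pieces from steps A, B and C can also be assembled programmatically; this sketch only prints the resulting command for review instead of sending it (the endpoint and attachment id are the example values above, and the token is a placeholder):

```shell
# Assemble the os-detach call from endpoint, token and attachment id.
endpoint="http://172.28.8.20:8776/v3/5145855bb46c4f129073172fb982660e/volumes/dffd4456-29b5-41e4-b3b7-7b00a1b3a313"
token="EXAMPLE_TOKEN"   # placeholder; use the value from `openstack token issue`
attachment_id="8edfc42e-eb4e-4405-b0c4-f35cf2c00bfe"
body="{\"os-detach\": {\"attachment_id\": \"$attachment_id\"}}"
cmd="curl -g -i -X POST $endpoint/action -H 'Content-Type: application/json' -H \"X-Auth-Token: $token\" -d '$body'"
printf '%s\n' "$cmd"    # review, then run with: eval "$cmd"
```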
14. VMware Workstation
# vmware-hgfsclient: list shared folders
# vmhgfs-fuse: mount a shared folder
vmhgfs-fuse .host:/OpenStack /mnt -o subtype=vmhgfs-fuse,allow_other
echo ".host:/OpenStack /mnt/hgfs fuse.vmhgfs-fuse allow_other,defaults 0 0" >>/etc/fstab
# To make DHCP hand out fresh IPs, delete the lease files on both the server and the client
find / -type f -name "dhclient-*.lease" -exec rm -f {} \;
del C:\ProgramData\VMware\vmnetdhcp.leases
15. Growing a partition with growpart and extending the logical volume
# If the physical disk backing a logical volume is partitioned and the disk is grown dynamically,
# there are two ways to extend the LV: (1) create a new partition and add it to the volume group,
# or (2) grow the last partition and then extend the LV. The second approach:
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum install cloud-utils-growpart xfsprogs -y
# Grow partition N; here '2' means the 2nd partition of sda
growpart /dev/sda 2
partprobe
# Error: unexpected output in sfdisk --version [sfdisk, from util-linux 2.23.2]
# Fix:
export LC_ALL=en_US.UTF-8
# Error: no tools available to resize disk with 'gpt'
# FAILED: failed to get a resizer for id ''
# Fix:
yum install -y gdisk
pvresize /dev/sda2
lvextend -l +100%FREE /dev/centos/root
xfs_growfs /dev/centos/root
# resize2fs /dev/centos/root
# btrfs filesystem resize max /
16. Repairing a virtual disk
List the VM's disks and load the nbd module
virsh domblklist <instance_UUID>
rmmod nbd
modprobe nbd max_part=16
Case 1: the virtual disk is a qcow2 image file
qemu-img check <path_to_qcow2_file>
qemu-nbd -c /dev/nbd0 <path_to_qcow2_file>
lsblk -f
xfs_repair /dev/nbd0p1
qemu-nbd -d /dev/nbd0
Case 2: the virtual disk is a Ceph RBD image
mon_host=$(grep mon_host /etc/ceph/ceph.conf | awk -F= '{print $2}' | tr -d ' ' | awk -F, '{print $1}')
echo $mon_host
qemu-nbd -c /dev/nbd0 -f raw rbd:<pool_name>/<image_name>:mon_host=1.1.1.1:id=cinder:keyring=/etc/ceph/ceph.client.cinder.keyring
lsblk -f
xfs_repair /dev/nbd0p1
qemu-nbd -d /dev/nbd0
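The mon_host extraction in case 2 is plain text processing, so it can be verified against a canned ceph.conf fragment (the addresses below are made up):

```shell
# Same pipeline as above, fed a fake ceph.conf fragment for illustration.
conf='[global]
mon_host = 10.0.0.1,10.0.0.2,10.0.0.3'
mon_host=$(printf '%s\n' "$conf" | grep mon_host | awk -F= '{print $2}' | tr -d ' ' | awk -F, '{print $1}')
echo "$mon_host"
```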
Case 3: to work on the guest's files, mount its filesystem with guestmount
yum install libguestfs-tools -y
virsh destroy <instance_UUID>
guestmount -d instance-xxx -m /dev/sda1 /mnt
umount /mnt
17. No output on the console connection to a VM from the compute node
# Configure the kernel boot parameters in grub
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
# Enable and start the serial-console login service
systemctl enable serial-getty@ttyS0.service
systemctl start serial-getty@ttyS0.service
On the difference between configuring a Linux serial console via systemd (systemctl) and via grub:
https://jiruiwu.pixnet.net/blog/post/357336576 (paste into the browser address bar to open)
18. Keyboard input does not work in the VNC console
If the VM's XML contains the following element
<input type='keyboard' bus='usb'>
<address type='usb' bus='0' port='2'/>
</input>
then keyboard input in the VNC session does not work; removing the element fixes it.
The code that generates this element lives in: nova.virt.libvirt.driver.LibvirtDriver._guest_add_keyboard_device
19. Fixing OpenStack Windows VMs that only see two CPUs
Option 1: edit the VM's XML.
<cpu mode='host-model' check='partial'>
<topology sockets='2' cores='2' threads='1'/>
</cpu>
Option 2: set CPU-topology metadata on the flavor associated with the VM.
openstack flavor set <FLAVOR_NAME> \
--property hw:cpu_sockets=2 \
--property hw:cpu_cores=2 \
--property hw:cpu_threads=1
Option 3: set image metadata (has no effect on already-built VMs).
glance image-create \
--name Windows2012R2 \
--file Windows2012R2.raw \
--disk-format raw \
--container-format bare \
--visibility public \
--protected True \
--property hw_qemu_guest_agent=yes \
--property os_type=windows \
--property os_distro=windows \
--property os_version=2008 \
--property hw_vif_multiqueue_enabled=true \
--property ctcm_enabled=true \
--property hw_cpu_sockets=2 \
--property hw_cpu_cores=2 \
--property hw_cpu_threads=1 \
--progress
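Whichever option is used, the topology has to multiply out to the flavor's vCPU count, or the guest will not see the expected CPUs. A quick sanity check with the example values:

```shell
# sockets * cores * threads must equal the flavor's vcpus.
sockets=2; cores=2; threads=1
vcpus=$((sockets * cores * threads))
echo "topology yields $vcpus vCPUs"
```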
20. VM CPU time stolen: top shows st at 100%
Fix: pin vCPUs on the compute node. Binding the VM's vCPUs to specific physical cores reduces CPU contention, improves VM performance and reduces jitter.
For example, pin the VM's first vCPU to host core 3 and the second to host core 4. The pinned VM's XML then contains:
<cputune>
<vcpupin vcpu='0' cpuset='3'/>
<vcpupin vcpu='1' cpuset='4'/>
</cputune>
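The <cputune> block can also be generated from a list of host cores rather than hand-edited; a sketch, using the example core numbers above:

```shell
# Render a <cputune> pinning block: vCPU index -> host core.
pins="3 4"          # host cores for vcpu 0, 1, ... (example values)
vcpu=0
xml="<cputune>"
for core in $pins; do
  xml="$xml
  <vcpupin vcpu='$vcpu' cpuset='$core'/>"
  vcpu=$((vcpu + 1))
done
xml="$xml
</cputune>"
printf '%s\n' "$xml"
```

The output can be pasted into the domain XML via `virsh edit`, or applied live with `virsh vcpupin`.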
Author: wanghongwei
License: this work is released under CC BY-NC-ND 4.0; contact the author for permission before commercial reuse, and include a link to the original plus this notice for non-commercial reuse.
