
OpenStack instance creation stuck at scheduling, takes a long time to fail with ERROR


Can anyone help, please? I am unable to create an instance.

Build Environment

Base hypervisor is ESXi (single node)

Two VMs created with:

OS: CentOS 8 Stream with latest updates
OpenStack Yoga
Network: flat (bridged)
Hypervisor: QEMU on the compute node

Controller node: 8 vCPUs ("Expose hardware assisted virtualization to the guest OS" enabled), 50 GB RAM, 50 GB disk for the OS, and a 400 GB vdisk for the Cinder volume (thick provisioned, eagerly zeroed)

Compute node: 8 vCPUs ("Expose hardware assisted virtualization to the guest OS" enabled), 12 GB RAM, 50 GB vdisk (thick provisioned, eagerly zeroed)
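Since hardware-assisted virtualization is exposed to the VMs and the compute node runs QEMU, one quick sanity check (my own suggestion, assuming a Linux guest; not necessarily related to the error) is to confirm the virtualization flag actually reached the compute node:

# On the compute node: a non-zero count means VT-x/AMD-V is visible inside the VM
egrep -c '(vmx|svm)' /proc/cpuinfo
# /dev/kvm should exist when hardware virtualization is exposed
ls -l /dev/kvm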

Both nodes' DNS names resolve via the DNS server, and both nodes are pingable.

Creating an instance from the dashboard and from the command line gives the same result.
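For reference, the CLI attempt looks roughly like this (the image, flavor, network, and instance names here are placeholders, not my exact values):

openstack server create --image cirros --flavor m1.tiny --network flat-net test-vm
# status stays at BUILD / scheduling for a long time, then flips to ERROR:
openstack server show test-vm -c status -c fault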

ERROR

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 441, in get
    return self._queues[msg_id].get(block=True, timeout=timeout)
  File "/usr/lib/python3.6/site-packages/eventlet/queue.py", line 322, in get
    return waiter.wait()
  File "/usr/lib/python3.6/site-packages/eventlet/queue.py", line 141, in wait
    return get_hub().switch()
  File "/usr/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 313, in switch
    return self.greenlet.switch()
queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 1549, in schedule_and_build_instances
    instance_uuids, return_alternates=True)
  File "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 910, in _schedule_instances
    return_alternates=return_alternates)
  File "/usr/lib/python3.6/site-packages/nova/scheduler/client/query.py", line 42, in select_destinations
    instance_uuids, return_objects, return_alternates)
  File "/usr/lib/python3.6/site-packages/nova/scheduler/rpcapi.py", line 160, in select_destinations
    return cctxt.call(ctxt, 'select_destinations', **msg_args)
  File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/client.py", line 192, in call
    retry=self.retry, transport_options=self.transport_options)
  File "/usr/lib/python3.6/site-packages/oslo_messaging/transport.py", line 128, in _send
    transport_options=transport_options)
  File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 691, in send
    transport_options=transport_options)
  File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 679, in _send
    call_monitor_timeout)
  File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 567, in wait
    message = self.waiters.get(msg_id, timeout=timeout)
  File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 445, in get
    'to message ID %s' % msg_id)
oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply to message ID c4216d4a20dd47b5860e15b52d5e99c2
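The MessagingTimeout above means nova-conductor's RPC call to nova-scheduler never got a reply over the message bus before the RPC timeout expired, which usually points at RabbitMQ itself or at a service that is not consuming its queue. A minimal first check (assuming the stock layout with nova.conf under /etc/nova/):

# On the controller: is the broker running and are queues being consumed?
systemctl status rabbitmq-server
rabbitmqctl list_queues name messages consumers
# Do all services point at the same broker host and credentials?
grep transport_url /etc/nova/nova.conf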

From the controller node:

journalctl -f SYSLOG_IDENTIFIER=nova-scheduler | grep -E "DEBUG|WARNING|ERROR"

instances from host 'nova-compute01'. sync_instance_info /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:957
May 04 08:22:21 openstack.ad.local nova-scheduler[3142]: 2022-05-04 08:22:21.777 3142 DEBUG oslo_concurrency.lockutils [req-2402ad44-ecfa-4b7a-a8c1-8aff6b4d8c20 - - - - -] Lock "host_instance" "released" by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: held 0.001s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:405
May 04 08:22:21 openstack.ad.local nova-scheduler[3144]: 2022-05-04 08:22:21.777 3144 DEBUG oslo_concurrency.lockutils [req-2402ad44-ecfa-4b7a-a8c1-8aff6b4d8c20 - - - - -] Lock "host_instance" "released" by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: held 0.001s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:405
May 04 08:22:21 openstack.ad.local nova-scheduler[3139]: 2022-05-04 08:22:21.777 3139 DEBUG oslo_concurrency.lockutils [req-2402ad44-ecfa-4b7a-a8c1-8aff6b4d8c20 - - - - -] Lock "host_instance" "released" by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: held 0.001s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:405
May 04 08:22:21 openstack.ad.local nova-scheduler[3137]: 2022-05-04 08:22:21.777 3137 DEBUG nova.scheduler.host_manager [req-2402ad44-ecfa-4b7a-a8c1-8aff6b4d8c20 - - - - -] Successfully synced instances from host 'nova-compute01'. sync_instance_info /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:957
May 04 08:22:21 openstack.ad.local nova-scheduler[3138]: 2022-05-04 08:22:21.777 3138 DEBUG nova.scheduler.host_manager [req-2402ad44-ecfa-4b7a-a8c1-8aff6b4d8c20 - - - - -] Successfully synced instances from host 'nova-compute01'. sync_instance_info /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:957
May 04 08:22:21 openstack.ad.local nova-scheduler[3136]: 2022-05-04 08:22:21.777 3136 DEBUG oslo_concurrency.lockutils [req-2402ad44-ecfa-4b7a-a8c1-8aff6b4d8c20 - - - - -] Lock "host_instance" acquired by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:390
May 04 08:22:21 openstack.ad.local nova-scheduler[3143]: 2022-05-04 08:22:21.777 3143 DEBUG oslo_concurrency.lockutils [req-2402ad44-ecfa-4b7a-a8c1-8aff6b4d8c20 - - - - -] Lock "host_instance" "released" by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: held 0.001s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:405
May 04 08:22:21 openstack.ad.local nova-scheduler[3137]: 2022-05-04 08:22:21.777 3137 DEBUG oslo_concurrency.lockutils [req-2402ad44-ecfa-4b7a-a8c1-8aff6b4d8c20 - - - - -] Lock "host_instance" "released" by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: held 0.001s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:405
May 04 08:22:21 openstack.ad.local nova-scheduler[3138]: 2022-05-04 08:22:21.777 3138 DEBUG oslo_concurrency.lockutils [req-2402ad44-ecfa-4b7a-a8c1-8aff6b4d8c20 - - - - -] Lock "host_instance" "released" by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: held 0.001s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:405
May 04 08:22:21 openstack.ad.local nova-scheduler[3136]: 2022-05-04 08:22:21.777 3136 DEBUG nova.scheduler.host_manager [req-2402ad44-ecfa-4b7a-a8c1-8aff6b4d8c20 - - - - -] Successfully synced instances from host 'nova-compute01'. sync_instance_info /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:957
May 04 08:22:21 openstack.ad.local nova-scheduler[3136]: 2022-05-04 08:22:21.777 3136 DEBUG oslo_concurrency.lockutils [req-2402ad44-ecfa-4b7a-a8c1-8aff6b4d8c20 - - - - -] Lock "host_instance" "released" by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: held 0.001s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:405

From the compute node:

journalctl -f SYSLOG_IDENTIFIER=nova-compute | grep -E "DEBUG|WARNING|ERROR"

May 04 08:26:21 nova-compute01 nova-compute[2137]: 2022-05-04 08:26:21.661 2137 DEBUG oslo_concurrency.lockutils [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.031s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:405
May 04 08:26:26 nova-compute01 nova-compute[2137]: 2022-05-04 08:26:26.657 2137 DEBUG oslo_service.periodic_task [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:211
May 04 08:26:37 nova-compute01 nova-compute[2137]: 2022-05-04 08:26:37.567 2137 DEBUG oslo_service.periodic_task [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:211
May 04 08:26:37 nova-compute01 nova-compute[2137]: 2022-05-04 08:26:37.567 2137 DEBUG nova.compute.manager [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.6/site-packages/nova/compute/manager.py:9444
May 04 08:26:37 nova-compute01 nova-compute[2137]: 2022-05-04 08:26:37.567 2137 DEBUG nova.compute.manager [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.6/site-packages/nova/compute/manager.py:9448
May 04 08:26:37 nova-compute01 nova-compute[2137]: 2022-05-04 08:26:37.573 2137 DEBUG nova.compute.manager [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.6/site-packages/nova/compute/manager.py:9530
May 04 08:27:04 nova-compute01 nova-compute[2137]: 2022-05-04 08:27:04.566 2137 DEBUG oslo_service.periodic_task [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:211
May 04 08:27:07 nova-compute01 nova-compute[2137]: 2022-05-04 08:27:07.568 2137 DEBUG oslo_service.periodic_task [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:211
May 04 08:27:07 nova-compute01 nova-compute[2137]: 2022-05-04 08:27:07.568 2137 DEBUG nova.compute.manager [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.6/site-packages/nova/compute/manager.py:9444
May 04 08:27:07 nova-compute01 nova-compute[2137]: 2022-05-04 08:27:07.569 2137 DEBUG nova.compute.manager [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.6/site-packages/nova/compute/manager.py:9448
May 04 08:27:07 nova-compute01 nova-compute[2137]: 2022-05-04 08:27:07.574 2137 DEBUG nova.compute.manager [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.6/site-packages/nova/compute/manager.py:9530
May 04 08:27:08 nova-compute01 nova-compute[2137]: 2022-05-04 08:27:08.568 2137 DEBUG oslo_service.periodic_task [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:211
May 04 08:27:09 nova-compute01 nova-compute[2137]: 2022-05-04 08:27:09.568 2137 DEBUG oslo_service.periodic_task [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:211

Is RabbitMQ working? The timeout message could be related to that. Are the network services up? `openstack network agent list` and `openstack compute service list` could give us some more information. What does `nova-conductor.log` contain, any hints?
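For reference, the checks suggested above (the log path assumes the usual /var/log/nova location; journald works as well):

openstack network agent list
openstack compute service list
# Conductor log on the controller:
tail -f /var/log/nova/nova-conductor.log
journalctl -f SYSLOG_IDENTIFIER=nova-conductor | grep -E "WARNING|ERROR"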

