
OpenStack: create instance stuck at "scheduling" and takes a long time before failing with ERROR


Can anyone help, please? I am unable to create an instance.

Build Environment

Base hypervisor is ESXi (single node).

Two VMs created with:

OS: CentOS 8 Stream with the latest updates
OpenStack Yoga
Network: flat, bridged
Hypervisor: QEMU on the compute node

Controller node: 8 vCPUs ("Expose hardware assisted virtualization to the guest OS" enabled), 50 GB RAM, 50 GB disk for the OS and a 400 GB vdisk for the Cinder volume (thick provisioned, eagerly zeroed)

Compute node: 8 vCPUs ("Expose hardware assisted virtualization to the guest OS" enabled), 12 GB RAM, 50 GB vdisk (thick provisioned, eagerly zeroed)

Both nodes' DNS names are resolvable from the DNS server, and both nodes are pingable.

Creating an instance from the dashboard and from the command line gives the same result.
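For reference, the CLI attempt is along the lines of the sketch below; the flavor, image, network, and server names are placeholders, not the exact values used in this environment.

```bash
# Sketch of the CLI attempt (flavor/image/network/server names are placeholders)
openstack server create \
  --flavor m1.small \
  --image cirros \
  --network flat-net \
  test-vm

# The instance then sits with task_state "scheduling" before going to ERROR:
openstack server show test-vm -c status -c fault
```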

ERROR

```
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 441, in get
    return self._queues[msg_id].get(block=True, timeout=timeout)
  File "/usr/lib/python3.6/site-packages/eventlet/queue.py", line 322, in get
    return waiter.wait()
  File "/usr/lib/python3.6/site-packages/eventlet/queue.py", line 141, in wait
    return get_hub().switch()
  File "/usr/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 313, in switch
    return self.greenlet.switch()
queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 1549, in schedule_and_build_instances
    instance_uuids, return_alternates=True)
  File "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 910, in _schedule_instances
    return_alternates=return_alternates)
  File "/usr/lib/python3.6/site-packages/nova/scheduler/client/query.py", line 42, in select_destinations
    instance_uuids, return_objects, return_alternates)
  File "/usr/lib/python3.6/site-packages/nova/scheduler/rpcapi.py", line 160, in select_destinations
    return cctxt.call(ctxt, 'select_destinations', **msg_args)
  File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/client.py", line 192, in call
    retry=self.retry, transport_options=self.transport_options)
  File "/usr/lib/python3.6/site-packages/oslo_messaging/transport.py", line 128, in _send
    transport_options=transport_options)
  File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 691, in send
    transport_options=transport_options)
  File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 679, in _send
    call_monitor_timeout)
  File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 567, in wait
    message = self.waiters.get(msg_id, timeout=timeout)
  File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 445, in get
    'to message ID %s' % msg_id)
oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply to message ID c4216d4a20dd47b5860e15b52d5e99c2
```
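The MessagingTimeout at the end of the traceback means nova-conductor's RPC call to nova-scheduler (select_destinations) never received a reply over the message bus before rpc_response_timeout expired. Assuming the default RabbitMQ transport, a first sanity check on the controller could look like the sketch below (the broker URL shown is only an example):

```bash
# Sanity-check the message bus the nova services use (RabbitMQ assumed)
grep ^transport_url /etc/nova/nova.conf   # e.g. transport_url = rabbit://openstack:***@controller:5672/
rabbitmqctl status                        # broker up and reachable?
rabbitmqctl list_queues name messages consumers | grep scheduler   # scheduler queues should have consumers
```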

From the controller node:

```bash
journalctl -f SYSLOG_IDENTIFIER=nova-scheduler | grep -E "DEBUG|WARNING|ERROR"
```

```
instances from host 'nova-compute01'. sync_instance_info /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:957
May 04 08:22:21 openstack.ad.local nova-scheduler[3142]: 2022-05-04 08:22:21.777 3142 DEBUG oslo_concurrency.lockutils [req-2402ad44-ecfa-4b7a-a8c1-8aff6b4d8c20 - - - - -] Lock "host_instance" "released" by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: held 0.001s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:405
May 04 08:22:21 openstack.ad.local nova-scheduler[3144]: 2022-05-04 08:22:21.777 3144 DEBUG oslo_concurrency.lockutils [req-2402ad44-ecfa-4b7a-a8c1-8aff6b4d8c20 - - - - -] Lock "host_instance" "released" by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: held 0.001s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:405
May 04 08:22:21 openstack.ad.local nova-scheduler[3139]: 2022-05-04 08:22:21.777 3139 DEBUG oslo_concurrency.lockutils [req-2402ad44-ecfa-4b7a-a8c1-8aff6b4d8c20 - - - - -] Lock "host_instance" "released" by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: held 0.001s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:405
May 04 08:22:21 openstack.ad.local nova-scheduler[3137]: 2022-05-04 08:22:21.777 3137 DEBUG nova.scheduler.host_manager [req-2402ad44-ecfa-4b7a-a8c1-8aff6b4d8c20 - - - - -] Successfully synced instances from host 'nova-compute01'. sync_instance_info /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:957
May 04 08:22:21 openstack.ad.local nova-scheduler[3138]: 2022-05-04 08:22:21.777 3138 DEBUG nova.scheduler.host_manager [req-2402ad44-ecfa-4b7a-a8c1-8aff6b4d8c20 - - - - -] Successfully synced instances from host 'nova-compute01'. sync_instance_info /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:957
May 04 08:22:21 openstack.ad.local nova-scheduler[3136]: 2022-05-04 08:22:21.777 3136 DEBUG oslo_concurrency.lockutils [req-2402ad44-ecfa-4b7a-a8c1-8aff6b4d8c20 - - - - -] Lock "host_instance" acquired by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:390
May 04 08:22:21 openstack.ad.local nova-scheduler[3143]: 2022-05-04 08:22:21.777 3143 DEBUG oslo_concurrency.lockutils [req-2402ad44-ecfa-4b7a-a8c1-8aff6b4d8c20 - - - - -] Lock "host_instance" "released" by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: held 0.001s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:405
May 04 08:22:21 openstack.ad.local nova-scheduler[3137]: 2022-05-04 08:22:21.777 3137 DEBUG oslo_concurrency.lockutils [req-2402ad44-ecfa-4b7a-a8c1-8aff6b4d8c20 - - - - -] Lock "host_instance" "released" by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: held 0.001s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:405
May 04 08:22:21 openstack.ad.local nova-scheduler[3138]: 2022-05-04 08:22:21.777 3138 DEBUG oslo_concurrency.lockutils [req-2402ad44-ecfa-4b7a-a8c1-8aff6b4d8c20 - - - - -] Lock "host_instance" "released" by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: held 0.001s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:405
May 04 08:22:21 openstack.ad.local nova-scheduler[3136]: 2022-05-04 08:22:21.777 3136 DEBUG nova.scheduler.host_manager [req-2402ad44-ecfa-4b7a-a8c1-8aff6b4d8c20 - - - - -] Successfully synced instances from host 'nova-compute01'. sync_instance_info /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:957
May 04 08:22:21 openstack.ad.local nova-scheduler[3136]: 2022-05-04 08:22:21.777 3136 DEBUG oslo_concurrency.lockutils [req-2402ad44-ecfa-4b7a-a8c1-8aff6b4d8c20 - - - - -] Lock "host_instance" "released" by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: held 0.001s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:405
```

From the compute node:

```bash
journalctl -f SYSLOG_IDENTIFIER=nova-compute | grep -E "DEBUG|WARNING|ERROR"
```

```
May 04 08:26:21 nova-compute01 nova-compute[2137]: 2022-05-04 08:26:21.661 2137 DEBUG oslo_concurrency.lockutils [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.031s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:405
May 04 08:26:26 nova-compute01 nova-compute[2137]: 2022-05-04 08:26:26.657 2137 DEBUG oslo_service.periodic_task [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:211
May 04 08:26:37 nova-compute01 nova-compute[2137]: 2022-05-04 08:26:37.567 2137 DEBUG oslo_service.periodic_task [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:211
May 04 08:26:37 nova-compute01 nova-compute[2137]: 2022-05-04 08:26:37.567 2137 DEBUG nova.compute.manager [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.6/site-packages/nova/compute/manager.py:9444
May 04 08:26:37 nova-compute01 nova-compute[2137]: 2022-05-04 08:26:37.567 2137 DEBUG nova.compute.manager [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.6/site-packages/nova/compute/manager.py:9448
May 04 08:26:37 nova-compute01 nova-compute[2137]: 2022-05-04 08:26:37.573 2137 DEBUG nova.compute.manager [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.6/site-packages/nova/compute/manager.py:9530
May 04 08:27:04 nova-compute01 nova-compute[2137]: 2022-05-04 08:27:04.566 2137 DEBUG oslo_service.periodic_task [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:211
May 04 08:27:07 nova-compute01 nova-compute[2137]: 2022-05-04 08:27:07.568 2137 DEBUG oslo_service.periodic_task [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:211
May 04 08:27:07 nova-compute01 nova-compute[2137]: 2022-05-04 08:27:07.568 2137 DEBUG nova.compute.manager [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.6/site-packages/nova/compute/manager.py:9444
May 04 08:27:07 nova-compute01 nova-compute[2137]: 2022-05-04 08:27:07.569 2137 DEBUG nova.compute.manager [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.6/site-packages/nova/compute/manager.py:9448
May 04 08:27:07 nova-compute01 nova-compute[2137]: 2022-05-04 08:27:07.574 2137 DEBUG nova.compute.manager [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.6/site-packages/nova/compute/manager.py:9530
May 04 08:27:08 nova-compute01 nova-compute[2137]: 2022-05-04 08:27:08.568 2137 DEBUG oslo_service.periodic_task [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:211
May 04 08:27:09 nova-compute01 nova-compute[2137]: 2022-05-04 08:27:09.568 2137 DEBUG oslo_service.periodic_task [req-584789e4-2699-48bc-81f6-1fa6f3e2045e - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:211
```

Is RabbitMQ up and running? The timeout message could be related to that. Are the network services up and running? `openstack network agent list` and `openstack compute service list` could give us some more information. What does `nova-conductor.log` contain; any hints?
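If it helps, those checks could be run from the controller along these lines (the conductor log path assumes a default RPM-based install):

```bash
# Service and agent health
openstack compute service list   # nova-scheduler, nova-conductor, nova-compute should be "up"
openstack network agent list     # neutron agents should be alive

# Broker status and conductor-side errors
systemctl status rabbitmq-server
grep -E "ERROR|Timed out" /var/log/nova/nova-conductor.log | tail -n 20
```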

