OpenStack oslo.messaging exception in nova-conductor

I have set up OpenStack Yoga on Ubuntu 22.04 and gone through each verification step after the install, all of which worked fine. I have 1x controller and 1x compute node. On my controller I keep seeing this message:

==> /var/log/nova/nova-conductor.log <==
2022-11-28 08:35:58.338 76768 WARNING oslo_messaging._drivers.amqpdriver [req-9a1d29ba-756a-4a94-bef1-7c1caba6fb8d - - - - -] reply_284f5c12afcb4d0cb6504c70a01b458f doesn't exist, drop reply to 3357a913567c464fb48f7cfb47768a13: oslo_messaging.exceptions.MessageUndeliverable
2022-11-28 08:35:58.340 76768 ERROR oslo_messaging._drivers.amqpdriver [req-9a1d29ba-756a-4a94-bef1-7c1caba6fb8d - - - - -] The reply 3357a913567c464fb48f7cfb47768a13 failed to send after 60 seconds due to a missing queue (reply_284f5c12afcb4d0cb6504c70a01b458f). Abandoning...: oslo_messaging.exceptions.MessageUndeliverable

I am not sure how to troubleshoot this error.
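
From the error, the reply queue named in the log (reply_284f5c12afcb4d0cb6504c70a01b458f) no longer exists when the conductor tries to send its reply. As a first check, I can see which reply queues exist at all (the queue name is taken from the log above):

$ sudo rabbitmqctl list_queues name messages consumers | grep reply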

This is my nova.conf:

$ sudo egrep -v '^#|^$' /etc/nova/nova.conf
[DEFAULT]
log_dir = /var/log/nova
lock_path = /var/lock/nova
state_path = /var/lib/nova
my_ip = 10.0.0.154
transport_url = rabbit://openstack:openstack@controller1:5672/
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller1/nova_api
[barbican]
[barbican_service_user]
[cache]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[cyborg]
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller1/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller1:9292
[guestfs]
[healthcheck]
[hyperv]
[image_cache]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
www_authenticate_uri = http://controller1:5000/
auth_url = http://controller1:5000/
memcached_servers = controller1:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova
[libvirt]
[metrics]
[mks]
[neutron]
auth_url = http://controller1:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
[notifications]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller1:5000/v3
username = placement
password = placement
[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
[workarounds]
[wsgi]
[zvm]
[cells]
enable = False
[os_region_name]
openstack = 
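
The [oslo_messaging_rabbit] section is empty, so the oslo.messaging defaults apply. For illustration only, this is where heartbeat tuning would live if it were needed; the values below are the documented defaults, not something I have set:

[oslo_messaging_rabbit]
# documented oslo.messaging defaults, shown only to illustrate where
# heartbeat tuning would go; not applied on this system
heartbeat_timeout_threshold = 60
heartbeat_rate = 2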

Here is the RabbitMQ status:

$ sudo rabbitmqctl cluster_status
Cluster status of node rabbit@controller1 ...
Basics

Cluster name: rabbit@controller1

Disk Nodes

rabbit@controller1

Running Nodes

rabbit@controller1

Versions

rabbit@controller1: RabbitMQ 3.9.13 on Erlang 24.2.1

Maintenance status

Node: rabbit@controller1, status: not under maintenance

Alarms

(none)

Network Partitions

(none)

Listeners

Node: rabbit@controller1, interface: [::], port: 25672, protocol: clustering, purpose: inter-node and CLI tool communication
Node: rabbit@controller1, interface: [::], port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0

Feature flags

Flag: implicit_default_bindings, state: enabled
Flag: maintenance_mode_status, state: enabled
Flag: quorum_queue, state: enabled
Flag: stream_queue, state: enabled
Flag: user_limits, state: enabled
Flag: virtual_host_metadata, state: enabled

Here are the policies:

$ sudo rabbitmqctl list_policies
Listing policies for vhost "/" ...
  

Here are the permissions:

$ sudo rabbitmqctl list_permissions
Listing permissions for vhost "/" ...
user    configure   write   read
guest   .*  .*  .*
openstack   .*  .*  .*
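
If more detail is needed, the open connections and active consumers can be listed as well (standard rabbitmqctl subcommands):

$ sudo rabbitmqctl list_connections user peer_host state
$ sudo rabbitmqctl list_consumers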

I stopped and then started nova-conductor; the following is what gets logged:

$ tail -f /var/log/rabbitmq/rabbit*.log /var/log/nova/nova-*.log

https://pastebin.com/mCaM7S5a

This is the log after restarting RabbitMQ:

https://pastebin.com/uVrStC3M

There is no active firewall; the services are running on the same server.

$ sudo ufw status
Status: inactive
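
AppArmor (enabled by default on Ubuntu 22.04) also came up in the comments; its status can be checked with:

$ sudo aa-status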

eblock: Is RabbitMQ up and running (`rabbitmqctl cluster_status`)? What does RabbitMQ log when nova-conductor tries to connect? What do the policies look like? `rabbitmqctl list_policies`, `rabbitmqctl list_permissions`.

shorif2000: @eblock I added that extra output and the logs.

eblock: It tries to connect automatically, as you can see in your output (`user 'openstack' authenticated and granted access to vhost '/'`). Is there any firewall or AppArmor blocking traffic? I would probably try restarting RabbitMQ and deleting all queues, just to be sure you have a fresh cluster: stop RabbitMQ, drop the mnesia tables and start it back up. Then restart the nova services to be safe, and then see if there are any reply queues: `control01:~ # rabbitmqctl list_queues -p openstack | grep reply`
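
Roughly like this (paths assume the default Ubuntu rabbitmq-server package; the user and password are taken from the transport_url in nova.conf above):

$ sudo systemctl stop rabbitmq-server
$ sudo rm -rf /var/lib/rabbitmq/mnesia   # wipes ALL queues, users and vhosts
$ sudo systemctl start rabbitmq-server
$ sudo rabbitmqctl add_user openstack openstack
$ sudo rabbitmqctl set_permissions openstack ".*" ".*" ".*"
$ sudo systemctl restart nova-api nova-scheduler nova-conductor
$ sudo rabbitmqctl list_queues name | grep reply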

shorif2000: @eblock If I delete the queues, do they get created automatically? Is there a series of steps I need to redo?

eblock: They are created automatically. Before you restart/delete anything, do you see reply queues right now?

shorif2000: I managed to resolve it by reinstalling nova and neutron without OVN.