CRITICAL keystonemiddleware.auth_token [-] Unable to validate token: Failed to fetch token data from identity server

cn flag

I am building an OpenStack high-availability deployment (Yoga on Ubuntu 22.04) with an SSL configuration. I was able to get the other services working over https (except Neutron, Cinder and the dashboard), but Nova throws the error below in /var/log/nova/nova-api.log:

CRITICAL keystonemiddleware.auth_token [-] Unable to validate token: Failed to fetch token data from identity server: keystonemiddleware.auth_token._exceptions.ServiceError: Failed to fetch token data from identity server

When I run the command below to get a token for the user "nova", I do get a token:

openstack --os-auth-url https://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name service --os-username nova --os-password <nova-passwd> token issue

controller is the virtual hostname in front of the three controllers. All nodes (controllers and compute nodes) are listed in the /etc/hosts file.
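For illustration, the relevant /etc/hosts entries look roughly like this (the addresses are examples that match the ones used elsewhere in this post; the compute entries are placeholders):

# virtual hostname pointing at the load-balancer VIP
192.168.120.10  controller
# individual controller nodes
192.168.120.11  controller1
192.168.120.12  controller2
192.168.120.13  controller3
# compute nodes (placeholder addresses)
192.168.120.21  compute1
192.168.120.22  compute2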

My configuration is as follows:

admin-openrc

export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=<admin-password>
export OS_AUTH_URL=https://controller:5000/v3
#export OS_SERVICE_TOKEN=
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

/etc/nova/nova.conf

[keystone_authtoken]
www_authenticate_uri = https://controller:5000
auth_url = https://controller:5000
memcached_servers = 192.168.120.11:11211,192.168.120.12:11211,192.168.120.13:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = <nova-passwd>

Your assistance is highly appreciated. Please let me know if you require more info.

Thank you

us flag
I'm not sure if it's actually required, but your `www_authenticate_uri` and `auth_url` are missing the `/v3` path, which I have in all my config files. If you compare it with neutron/glance/cinder etc., does the config differ?
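For reference, that would look like this in `[keystone_authtoken]` (a sketch only, I can't say for sure the suffix is required in your setup):

[keystone_authtoken]
www_authenticate_uri = https://controller:5000/v3
auth_url = https://controller:5000/v3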
cn flag

Thank you in advance for your assistance. I discovered that when I use the HAProxy config below, everything works fine:

frontend glance-api-front
        bind 192.168.120.10:9292
        default_backend glance-api-back

backend glance-api-back
        balance source
        option  tcpka
        option  httpchk
#       option  tcplog
        server controller1 192.168.120.11:9292 check inter 2000 rise 2 fall 5
        server controller2 192.168.120.12:9292 check backup inter 2000 rise 2 fall 5
        server controller3 192.168.120.13:9292 check backup inter 2000 rise 2 fall 5
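The same pattern applies to the other API endpoints; for example, a sketch for nova-api on its default port 8774 (not my literal config) would look like this:

frontend nova-api-front
        bind 192.168.120.10:8774
        default_backend nova-api-back

backend nova-api-back
        balance source
        option  tcpka
        option  httpchk
        server controller1 192.168.120.11:8774 check inter 2000 rise 2 fall 5
        server controller2 192.168.120.12:8774 check backup inter 2000 rise 2 fall 5
        server controller3 192.168.120.13:8774 check backup inter 2000 rise 2 fall 5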

but when I simulate a failure of the active controller node, I get the error below:

"An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-8d4979ac-c0f0-4900-94b8-814b855c5853)"

I am not sure how to configure HA to fail over to the backup controller nodes.

Thank you

us flag
You didn't really post an answer; I'd recommend putting that information into your initial question. So how do you manage HA? Apparently you're using HAProxy for the OpenStack services, but who manages the virtual IP (192.168.120.10)? If you stop the active controller node, does the virtual IP "move" to a different host automatically? We use pacemaker to manage that IP: if one control node fails or is shut down for maintenance, pacemaker moves that resource to a different control node and the APIs remain responsive.
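As a minimal sketch (assuming the pcs CLI and the ocf:heartbeat:IPaddr2 agent; resource name and netmask are examples), such a VIP resource could be created with:

# cluster-managed virtual IP that fails over between control nodes
pcs resource create vip_openstack ocf:heartbeat:IPaddr2 \
    ip=192.168.120.10 cidr_netmask=24 op monitor interval=30s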
cn flag
Thank you eblock, and apologies for adding a new question in the answer section; I should have added it to the initial question. I was able to fix the error by setting up the Fernet and credential key repositories on the 2nd and 3rd controllers: I copied the fernet and credential key folders from the first controller to the other two.
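Roughly, the copy step was the following (assuming the default key locations under /etc/keystone; hostnames and ownership may need adjusting):

# on the first controller: copy the key repositories to the other controllers
rsync -a /etc/keystone/fernet-keys/ controller2:/etc/keystone/fernet-keys/
rsync -a /etc/keystone/credential-keys/ controller2:/etc/keystone/credential-keys/
rsync -a /etc/keystone/fernet-keys/ controller3:/etc/keystone/fernet-keys/
rsync -a /etc/keystone/credential-keys/ controller3:/etc/keystone/credential-keys/
# make sure keystone owns the keys on the target nodes
ssh controller2 "chown -R keystone:keystone /etc/keystone/fernet-keys /etc/keystone/credential-keys"
ssh controller3 "chown -R keystone:keystone /etc/keystone/fernet-keys /etc/keystone/credential-keys"

With identical keys, keystone on the backup controllers can validate the same Fernet tokens.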