
Openstack Victoria SPICE configuration not working


I am running OpenStack Victoria on Ubuntu 20.04 with a controller node and four compute nodes. I am following the instructions on configuring SPICE here. I will admit I found out which packages to install from other sites, as the linked instructions do not actually list any software that needs to be installed.

  • On the compute nodes, I installed nova-spiceproxy
  • On the controller, I installed nova-spiceproxy and spice-html5 (install commands sketched below)
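
For reference, the installs on Ubuntu 20.04 look roughly like this (just a sketch of what is described above; these are the Ubuntu package names, other distributions will differ):

# on the controller, which runs the HTML5 proxy and serves the client files
apt install nova-spiceproxy spice-html5

# on each compute node
apt install nova-spiceproxy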

I followed the instructions in the link on configuring /etc/nova/nova.conf on the controller and all compute nodes.

In a nutshell, when you click on console in the dashboard, it simply says:

Something went wrong!

An unexpected error has occurred. Try refreshing the page. If that doesn't help, contact your local administrator.

I have parsed every log in /var/log on the compute and controller nodes and found nothing that would indicate it is failing.

On the compute node, I can run ps aux and see that SPICE is enabled for the instance I am trying to connect to:

/usr/bin/qemu-system-x86_64 -name guest=instance-00000076,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-42-instance-00000076/master-key.aes -machine pc-i440fx-4.2,accel=kvm,usb=off,dump-guest-core=off -cpu Broadwell-IBRS,vme=on,ss=on,vmx=on,f16c=on,rdrand=on,hypervisor=on,arat=on,tsc-adjust=on,umip=on,md-clear=on,stibp=on,arch-capabilities=on,ssbd=on,xsaveopt=on,pdpe1gb=on,abm=on,ibpb=on,amd-stibp=on,amd-ssbd=on,skip-l1dfl-vmentry=on,pschange-mc-no=on -m 2000 -overcommit mem-lock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 09a30478-01b3-4692-ba54-a7290f92a5c9 -smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=22.2.0,serial=09a30478-01b3-4692-ba54-a7290f92a5c9,uuid=09a30478-01b3-4692-ba54-a7290f92a5c9,family=Virtual Machine -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=39,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -blockdev {"driver":"file","filename":"/var/lib/nova/instances/_base/a7e6bc0f2d1c0963744ee4633bda2b725c84cdce","node-name":"libvirt-4-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-4-format","read-only":true,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-4-storage"} -blockdev {"driver":"file","filename":"/var/lib/nova/instances/09a30478-01b3-4692-ba54-a7290f92a5c9/disk","node-name":"libvirt-2-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-2-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":"libvirt-2-storage","backing":"libvirt-4-format"} -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=libvirt-2-format,id=virtio-disk0,bootindex=1,write-cache=on -blockdev {"driver":"file","filename":"/var/lib/nova/instances/_base/swap_100","node-name":"libvirt-3-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-3-format","read-only":true,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-3-storage"} -blockdev {"driver":"file","filename":"/var/lib/nova/instances/09a30478-01b3-4692-ba54-a7290f92a5c9/disk.swap","node-name":"libvirt-1-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":"libvirt-1-storage","backing":"libvirt-3-format"} -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=libvirt-1-format,id=virtio-disk1,write-cache=on -netdev tap,fd=43,id=hostnet0,vhost=on,vhostfd=44 -device virtio-net-pci,host_mtu=1450,netdev=hostnet0,id=net0,mac=fa:16:3e:2f:86:de,bus=pci.0,addr=0x3 -add-fd set=3,fd=46 -chardev pty,id=charserial0,logfile=/dev/fdset/3,logappend=on -device isa-serial,chardev=charserial0,id=serial0 -chardev spicevmc,id=charchannel0,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0 -spice port=5908,addr=0.0.0.0,disable-ticketing,seamless-migration=on -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -object rng-random,id=objrng0,filename=/dev/urandom -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x8 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
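
For what it's worth, one generic sanity check (not from the docs) is to confirm the ports are reachable: the qemu command line above shows SPICE listening on port 5908, and the HTML5 proxy should be on 6082 on the controller. Here <compute-node-ip> is a placeholder:

ss -tlnp | grep 5908              # on the compute node: is qemu listening on the SPICE port?
nc -zv <compute-node-ip> 5908     # from the controller: can it reach that port?
ss -tlnp | grep 6082              # on the controller: is the HTML5 proxy listening?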

I am really not sure where to go from here. I have also tried VNC and noVNC, to no avail.

My relevant config items in /etc/nova/nova.conf from a compute node are:

[DEFAULT]
vnc_enabled = false
# yes, I know it is redundant, I am following the docs...

[vnc]
enabled = false

[spice]
enabled = true
agent_enabled = true
html5proxy_base_url = http://10.131.39.40:6082/spice_auto.html
# the above IP is the IP of the controller.
server_listen = 0.0.0.0
server_proxyclient_address = 10.131.29.42
# the above IP is the IP of the compute node I pulled this config from
html5proxy_host = 0.0.0.0
html5proxy_port = 6082

Back on the controller node, in /etc/nova/nova.conf I have:

[spice]
enabled = true
agent_enabled = false
html5proxy_host = 0.0.0.0
html5proxy_port = 6082
keymap = en_us
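
For completeness, one way to take the dashboard out of the picture would be to ask the API for the console URL directly (a generic check; the exact flags depend on the client versions installed):

openstack console url show --spice <instance-name-or-id>
# or, with the older nova client
nova get-spice-console <instance-name-or-id> spice-html5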

I also have VNC disabled on the controller, as on the compute nodes. I will admit further that I have tried at least 20 iterations of this, such as leaving off the proxy ports, or briefly assuming that server_proxyclient_address should be the controller node, but none of them work. I do restart nova on the controller and all compute nodes after every change.

Any ideas or directions would be appreciated. I wish there were logs; perhaps I am missing them, or need to turn on some kind of debugging.
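
In case it helps, this is the kind of debugging I would try to turn on; it is a sketch assuming the Ubuntu service names nova-spiceproxy and nova-compute, and the usual Apache log location for Horizon:

# in /etc/nova/nova.conf on the controller and compute nodes
[DEFAULT]
debug = true

# restart the services and follow their journals
systemctl restart nova-spiceproxy    # controller
systemctl restart nova-compute       # compute nodes
journalctl -u nova-spiceproxy -f

# dashboard-side errors often only show up in the Apache/Horizon logs
tail -f /var/log/apache2/error.log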


