Short answer: don't bother doing that. Simply give your guest as many vCPUs as it needs and let the Linux host manage them.
Long answer: dynamically adding and removing vCPUs is supported via the virsh setvcpus command. However, its use is cumbersome and impractical for the general case.
The first issue is that you can't add more vCPUs than the maximum configured in your domain XML file. Take, for example, a guest configured with 1 current/maximum vCPU. Running the command virsh setvcpus MyGuest 2 returns the following error:
error: invalid argument: requested vcpus is greater than max allowable vcpus for the live domain: 2 > 1
So you need to raise your maximum vCPU count before adding any more CPUs at runtime. However, changing the maximum vCPU count requires a full domain restart (i.e. shutdown, config edit, start).
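For reference, the whole procedure can be driven from the host; the following is only a sketch, reusing the MyGuest name and the 4 max / 1 current vCPU counts from the example below:

virsh shutdown MyGuest
virsh setvcpus MyGuest 4 --maximum --config
virsh setvcpus MyGuest 1 --config
virsh start MyGuest

After that, the persistent domain XML should contain a line similar to <vcpu placement='static' current='1'>4</vcpu>, which you could also set by hand with virsh edit MyGuest.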
That said, one can be tempted to "oversubscribe" the maximum vCPUs while leaving the current vCPUs at a more sane level (i.e. set 4 max vCPUs but with only 1 currently active). Doing that is indeed sufficient to manage CPUs at runtime, as now virsh setvcpus MyGuest 2 returns without errors. Peeking into the guest, you can see the newly added CPU:
# dmesg
[ 39.814166] CPU1 has been hot-added
# lscpu
CPU(s): 2
On-line CPU(s) list: 0
Off-line CPU(s) list: 1
However, the newly added CPU is offline, meaning that the guest will not use it. You have to enable the new CPU from inside the guest via chcpu -e 1.
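If chcpu is not available in the guest, the same can be done via sysfs; this is just a sketch, assuming the hot-added CPU shows up as cpu1 as in the dmesg output above:

# echo 1 > /sys/devices/system/cpu/cpu1/online
# lscpu

lscpu should now list both CPUs as on-line.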
Alternatively, you can pass the --guest parameter to your virsh command, running virsh setvcpus MyGuest 2 --guest, but this requires a working connection to the qemu-agent in the guest; otherwise you will receive the following error:
error: argument unsupported: QEMU guest agent is not configured
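For the --guest flag to work, the qemu-guest-agent service must be running inside the guest and the domain needs an agent channel in its XML. A typical channel definition (a generic sketch of the usual libvirt snippet, not something taken from the guest above) looks like this:

<channel type='unix'>
  <source mode='bind'/>
  <target type='virtio' name='org.qemu.guest_agent.0'/>
</channel>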
Removing a CPU is easier, as running virsh setvcpus MyGuest 1 will offline and remove the vCPU from your guest even without an agent connection.
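Finally, you can double-check the result from the host side with virsh vcpucount, which reports the maximum and current vCPU counts for both the live domain and its persistent config; a quick sketch, again reusing the MyGuest name:

virsh setvcpus MyGuest 1
virsh vcpucount MyGuest

Remember that, without the --config flag, setvcpus changes the running domain only, so the new count will not survive a guest restart.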