Score:0

Strange behaviour with Hyper-V Server 2019 and vCPU


I'm not sure when it started, but I believe it wasn't always the case.

Dell R550 running Windows Server 2019 Standard with Hyper-V, in a two-node cluster. 2 x Intel Xeon Silver 4309Y (8C/16T each), so Windows shows 16C/32T. I started creating new Server 2019 Standard guest VMs, and things seemed to operate normally. I also copied over and imported some existing VMs in place (configuration version 5.0, originally from Server 2012 / 2012 R2 hosts); the guests run a mix of Server 2012, 2012 R2, and 2019. Everything was running fine, or so I thought.

I then started noticing that any guest VM on this Server 2019 host would only operate on half the cores assigned to it in Hyper-V Manager.

From some research online, it could be the VM configuration version, so I upgraded them all to v9.0, but the problem persists. It could be the per-VM HwThreadCountPerCore setting, but they're all set to 0. I also looked at NUMA in case this was some capacity issue: 2 NUMA nodes due to 2 sockets, NumaSpanningEnabled is true, and the other settings look normal to me.
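
For reference, the per-VM knobs mentioned here can be checked from PowerShell on the host. A minimal sketch; the VM name is a placeholder:

# vCPU count and SMT setting per VM (HwThreadCountPerCore 0 = inherit from host)
Get-VMProcessor -VMName "MyGuestVM" | Select-Object VMName, Count, HwThreadCountPerCore
# Configuration version per VM (Update-VMVersion performs the one-way upgrade)
Get-VM | Select-Object Name, Version
# NUMA spanning on the host
Get-VMHost | Select-Object NumaSpanningEnabled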

If a VM has 4 vCPU assigned, Task Manager in that guest shows 2 cores; if it has 2 assigned, it shows 1. Device Manager in the same guest lists 4 processors, yet Task Manager still shows 2 cores. A Server 2012 VM I checked seems to operate on all of its assigned cores. I have no Server 2012 R2 VMs, and most VMs are 2019. I changed a VM from 4 vCPU to 2 and watched the cores in the guest drop from 2 to 1, then go back up to 2 when set to 4.

Is it the side-channel attack mitigations? From my checking, all the settings we have in place should be fine:

  • Hyper-V Host with hyperthreading: FeatureSettingsOverride = 72, FeatureSettingsOverrideMask = 3, MinVmVersionForCpuBasedMitigations = "1.0"
  • Hyper-V Guest: FeatureSettingsOverride = 8264, FeatureSettingsOverrideMask = 3
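
These overrides live in the registry; a quick way to confirm what a machine is actually running with (paths as documented in Microsoft's KB4072698 guidance):

# Side-channel mitigation overrides (host and guest)
$mm = 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management'
Get-ItemProperty -Path $mm -Name FeatureSettingsOverride, FeatureSettingsOverrideMask
# Host only: minimum VM configuration version that receives CPU-based mitigations
$virt = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization'
Get-ItemProperty -Path $virt -Name MinVmVersionForCpuBasedMitigations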

Output of Get-SpeculationControlSettings from both host and guest:

For more information about the output below, please refer to https://support.microsoft.com/help/4074629
Speculation control settings for CVE-2017-5715 [branch target injection]
  Hardware support for branch target injection mitigation is present: True
  Windows OS support for branch target injection mitigation is present: True
  Windows OS support for branch target injection mitigation is enabled: True
Speculation control settings for CVE-2017-5754 [rogue data cache load]
  Hardware is vulnerable to rogue data cache load: False
  Hardware requires kernel VA shadowing: True
  Windows OS support for kernel VA shadow is present: True
  Windows OS support for kernel VA shadow is enabled: True
  Windows OS support for PCID performance optimization is enabled: True [not required for security]
Speculation control settings for CVE-2018-3639 [speculative store bypass]
  Hardware is vulnerable to speculative store bypass: True
  Hardware support for speculative store bypass disable is present: True
  Windows OS support for speculative store bypass disable is present: True
  Windows OS support for speculative store bypass disable is enabled system-wide: True
Speculation control settings for CVE-2018-3620 [L1 terminal fault]
  Hardware is vulnerable to L1 terminal fault: False
Speculation control settings for MDS [microarchitectural data sampling]
  Windows OS support for MDS mitigation is present: True
  Hardware is vulnerable to MDS: False
Speculation control settings for SBDR [shared buffers data read]
  Windows OS support for SBDR mitigation is present: True
  Hardware is vulnerable to SBDR: True
  Windows OS support for SBDR mitigation is enabled: True
Speculation control settings for FBSDP [fill buffer stale data propagator]
  Windows OS support for FBSDP mitigation is present: True
  Hardware is vulnerable to FBSDP: True
  Windows OS support for FBSDP mitigation is enabled: True
Speculation control settings for PSDP [primary stale data propagator]
  Windows OS support for PSDP mitigation is present: True
  Hardware is vulnerable to PSDP: True
  Windows OS support for PSDP mitigation is enabled: True

BTIHardwarePresent                  : True
BTIWindowsSupportPresent            : True
BTIWindowsSupportEnabled            : True
BTIDisabledBySystemPolicy           : False
BTIDisabledByNoHardwareSupport      : False
BTIKernelRetpolineEnabled           : False
BTIKernelImportOptimizationEnabled  : True
RdclHardwareProtectedReported       : True
RdclHardwareProtected               : True
KVAShadowRequired                   : True
KVAShadowWindowsSupportPresent      : True
KVAShadowWindowsSupportEnabled      : True
KVAShadowPcidEnabled                : True
SSBDWindowsSupportPresent           : True
SSBDHardwareVulnerable              : True
SSBDHardwarePresent                 : True
SSBDWindowsSupportEnabledSystemWide : True
L1TFHardwareVulnerable              : False
L1TFWindowsSupportPresent           : True
L1TFWindowsSupportEnabled           : False
L1TFInvalidPteBit                   : 0
L1DFlushSupported                   : True
HvL1tfStatusAvailable               : False
HvL1tfProcessorNotAffected          : False
MDSWindowsSupportPresent            : True
MDSHardwareVulnerable               : False
MDSWindowsSupportEnabled            : True
FBClearWindowsSupportPresent        : True
SBDRSSDPHardwareVulnerable          : True
FBSDPHardwareVulnerable             : True
PSDPHardwareVulnerable              : True
FBClearWindowsSupportEnabled        : True
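
For anyone wanting to reproduce this check, the output above comes from the SpeculationControl module on the PowerShell Gallery:

# Install and run Microsoft's speculation-control checker
Install-Module -Name SpeculationControl -Scope CurrentUser
Get-SpeculationControlSettings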

For a Server 2019 VM that has 4 vCPU assigned and is showing 2 virtual processors in Task Manager, the NUMA configuration page shows:

  • Processors: 4
  • NUMA nodes: 1
  • Sockets: 1
  • Hardware threads per core: 2

(The per-VM HwThreadCountPerCore setting itself is 0, meaning it inherits from the host.) In the guest, querying the CPU topology with wmic shows:

wmic cpu get NumberOfCores,NumberOfLogicalProcessors /Format:List
NumberOfCores=2
NumberOfLogicalProcessors=2
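
For what it's worth, the same counts can be pulled with CIM (wmic is deprecated on newer builds):

Get-CimInstance -ClassName Win32_Processor | Select-Object NumberOfCores, NumberOfLogicalProcessors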

Also, Hyper-V is currently running in Core Scheduler mode.
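
For reference, the scheduler type can be confirmed from the hypervisor's boot event and switched with bcdedit, per Microsoft's scheduler-types documentation:

# The scheduler type is logged at boot as Hyper-V-Hypervisor event ID 2
Get-WinEvent -FilterHashtable @{ProviderName='Microsoft-Windows-Hyper-V-Hypervisor'; Id=2} -MaxEvents 1 |
    Format-List TimeCreated, Message
# Change it (takes effect after a host reboot)
bcdedit /set hypervisorschedulertype Core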

I'm stuck on what else it could be. I'd appreciate any help.

`Is it Side Channel Attack Mitigations?` What is the output of the SpeculationControl settings PowerShell script on the host and guests? This is usually an issue when the settings are enabled but the hardware has not been updated.

TheManInOz: Thanks. When I look at the results, it all looks OK, with all areas protected. I have added the output for both host and guest to the OP.

TheManInOz: I have been reading this article (https://learn.microsoft.com/en-us/windows-server/virtualization/hyper-v/manage/manage-hyper-v-scheduler-types). Am I running on the Core Scheduler, and is my issue summed up by this sentence: "Guest VPs are constrained to run on underlying physical core pairs, isolating a VM to processor core boundaries, thus reducing vulnerability to side-channel snooping attacks from malicious VMs."?

TheManInOz: After spending more time reading and troubleshooting, my theory so far: a guest VM with FeatureSettingsOverride of 8264 (the value for when Hyper-Threading is not present), running on a Server 2019 Hyper-V host with the new Core Scheduler, now has SMT exposed to it, so Hyper-Threading is taking place inside the guest and the value should instead be 72. I tested changing this value, and all assigned cores then appeared for that guest. I'm going to change this value in my policy to affect all computers and continue monitoring.
Score:0

Providing this answer from my own findings and testing. This seems to be the resolution for my environment.

As mentioned above, our side-channel attack mitigation for VMs has historically been to set the registry values to the Hyper-Threading-disabled value (8264), since Hyper-V VMs never exposed HT.

Starting with Server 2016, however, Microsoft introduced the Core Scheduler (the default on Server 2019), which shares SMT (Hyper-Threading) with guests, meaning each VM now operates with SMT. Since 8264 includes the "disable Hyper-Threading" setting, the guest OS stops scheduling on half of its logical processors.

Once we changed our side-channel attack mitigation value to the Hyper-Threading-enabled value (FeatureSettingsOverride = 72) and rebooted the VM, all the allocated vCPUs returned.
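
We pushed the change via Group Policy; the direct registry equivalent inside a guest would look like the sketch below (72 and 3 are the Hyper-Threading-enabled values from Microsoft's guidance; a reboot is required):

# In the guest: replace the HT-disabled value (8264) with the HT-enabled value (72)
$mm = 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management'
Set-ItemProperty -Path $mm -Name FeatureSettingsOverride -Value 72 -Type DWord
Set-ItemProperty -Path $mm -Name FeatureSettingsOverrideMask -Value 3 -Type DWord
# Reboot the VM for the new settings to take effect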

Has anyone seen this before, where the value of FeatureSettingsOverride affects how many cores are presented and used?
