
Nested ESXi (ESXi running in VMs) setup - distributed switch kills all inter-host communication


On all of the distributed port groups I have relaxed every security policy: promiscuous mode, MAC address changes, and forged transmits are all set to Accept.
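For reference, this is roughly how that policy change can be applied in bulk with PowerCLI. This is only a sketch: it assumes the VMware.PowerCLI module is installed, and the vCenter and switch names ("vcenter.example.local", "DSwitch") are placeholders, not the ones from this environment.

```powershell
# Rough PowerCLI sketch (placeholder names) - sets promiscuous mode,
# MAC address changes and forged transmits to Accept on every port group
# of the distributed switch, which nested ESXi generally requires.
Import-Module VMware.PowerCLI
Connect-VIServer -Server "vcenter.example.local"    # placeholder vCenter

Get-VDSwitch -Name "DSwitch" |
    Get-VDPortgroup |
    Get-VDSecurityPolicy |
    Set-VDSecurityPolicy -AllowPromiscuous $true -MacChanges $true -ForgedTransmits $true
```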

I've built a setup with three virtualized (nested) ESXi hosts. Their management vmkernels were on a standard vSwitch and everything worked like a champ: they could ping each other, and they could ping out to my physical default gateway. No problems.

I then migrated their management vmkernels to a distributed switch that all three hosts share. They can still ping out to the default gateway, but they can no longer ping each other. I'm still working at it, but I'm at a complete loss as to how they can reach the default gateway yet not reach one another.
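For context, a rough PowerCLI sketch of that migration step looks something like the following. The host, NIC, switch, and port-group names are placeholders, not the ones actually used here; the point is simply that the uplink and vmk0 move in one transaction so the host never loses its only management path.

```powershell
# Placeholder names throughout - adjust to the actual environment.
$vmhost = Get-VMHost -Name "nested-esxi-01.example.local"
$vds    = Get-VDSwitch -Name "DSwitch"
$pg     = Get-VDPortgroup -VDSwitch $vds -Name "DPG-Management"

# Join the host to the distributed switch if it is not already a member.
Add-VDSwitchVMHost -VDSwitch $vds -VMHost $vmhost

# Move an uplink and the management vmkernel (vmk0) together in one operation.
$pnic = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic0"
$vmk  = Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel -Name "vmk0"
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds `
    -VMHostPhysicalNic $pnic -VMHostVirtualNic $vmk -VirtualNicPortgroup $pg
```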

Answer:

The DVS had been deployed by the vSAN quick-configure workflow. I still have no idea why that switch misbehaved, but I made my own distributed switch manually, migrated the vmkernel NICs to it, and it works great. Everything connected immediately.
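A sketch of that manual rebuild in PowerCLI is below; the datacenter, switch, and port-group names are placeholders, and the vmkernel migration afterwards is the same step shown earlier.

```powershell
# Placeholder names - create a fresh distributed switch and management
# port group, then attach every host in the datacenter to it.
$dc  = Get-Datacenter -Name "Lab"
$vds = New-VDSwitch -Name "DSwitch-Manual" -Location $dc -NumUplinkPorts 2
New-VDPortgroup -VDSwitch $vds -Name "DPG-Management" | Out-Null

Get-VMHost -Location $dc | ForEach-Object {
    Add-VDSwitchVMHost -VDSwitch $vds -VMHost $_
}
```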

I went line by line comparing the switch VMware's quick-configure workflow auto-created against mine, trying to figure out the difference. I haven't yet used PowerCLI to pull the more detailed settings and compare them, but so far I have found nothing.
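One way to pull those settings for a side-by-side diff is sketched below. The switch names "DSwitch-Auto" and "DSwitch-Manual" are placeholders for the auto-created and manually built switches; the export paths are likewise illustrative.

```powershell
# Dump the security policy of every port group on both switches so the
# output can be compared directly.
foreach ($name in "DSwitch-Auto", "DSwitch-Manual") {
    Write-Host "== $name =="
    Get-VDSwitch -Name $name | Get-VDPortgroup | ForEach-Object {
        $pol = Get-VDSecurityPolicy -VDPortgroup $_
        "{0}: Promiscuous={1} MacChanges={2} ForgedTransmits={3}" -f `
            $_.Name, $pol.AllowPromiscuous, $pol.MacChanges, $pol.ForgedTransmits
    }
}

# Export-VDSwitch captures each switch's full configuration as a backup
# bundle, which can then be unpacked and diffed offline.
Export-VDSwitch -VDSwitch (Get-VDSwitch -Name "DSwitch-Auto")   -Destination "C:\dvs\auto.zip"
Export-VDSwitch -VDSwitch (Get-VDSwitch -Name "DSwitch-Manual") -Destination "C:\dvs\manual.zip"
```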

I have no idea why the auto-built switch doesn't work, but remaking it manually resolved the problem.


