Score:-1

How to set a machine's state to poweron with the community.vmware.vmware_guest_powerstate task?

uz flag

I'm pretty new to Ansible, so I might have configured things wrong.
I have a Docker container running the Ansible service on CentOS 8,
and an Ansible Git repository that includes the Ansible files.

My goal is to automatically revert each lab in vCenter Server to a specific snapshot (a lab is composed of 8 VMs: 5 Windows Server 2016 and 3 Windows 10; the DC includes a policy that enables WinRM on those machines). But first I'm trying two simpler operations: power off the lab's machines when they're turned on, and power on the lab's machines when they're turned off.

So, with the help of the ansible-roles-explained-with-examples guide, I:

  • Created a role named vcenter with the ansible-galaxy init command (see directory tree below)
  • Created some vCenter task files inside the tasks folder (see directory tree below). Here is an example of the poweroff.yml and poweron.yml task files:
- name: Set the state of a virtual machine to poweroff
  community.vmware.vmware_guest_powerstate:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    folder: "/{{ datacenter_name }}/vm/{{ folder }}"
    name: "{{ ansible_hostname }}"
    # name: "{{ guest_name }}"
    validate_certs: no
    state: powered-off
    force: yes
  delegate_to: localhost
  register: deploy
- name: Set the state of a virtual machine to poweron using MoID
  community.vmware.vmware_guest_powerstate:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    folder: "/{{ datacenter_name }}/vm/{{ folder }}"
    name: "{{ ansible_hostname }}"
    # moid: vm-42
    validate_certs: no
    state: powered-on
  delegate_to: localhost
  register: deploy
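Note that `ansible_hostname` is a fact gathered from the guest itself, so it is only defined after a successful connection to the VM. A variant of the power-on task that needs no facts — a sketch only, assuming the VM names in vCenter match the inventory host names — can use the always-available `inventory_hostname` magic variable instead:

```yaml
# Sketch: assumes the vCenter VM name equals the inventory host name
# (use inventory_hostname_short if the vCenter name has no domain suffix)
- name: Set the state of a virtual machine to poweron (no facts needed)
  community.vmware.vmware_guest_powerstate:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    folder: "/{{ datacenter_name }}/vm/{{ folder }}"
    name: "{{ inventory_hostname }}"
    validate_certs: no
    state: powered-on
  delegate_to: localhost
```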
  • Supplied the vCenter credentials in the vcenter\vars\main.yml file, like this:
# vars file for vcenter
vcenter_hostname: vcenter.foo.com
vcenter_username: [email protected]
vcenter_password: f#0$o#1$0o
datacenter_name: FOO_Fighters
# datastore_name: 
cluster_name: FOO
folder: '/FOO/PRODUCT/DOMAIN.COM/' 
  • Included the tasks in the tasks\main.yml file with the import_tasks key, like this:
---
# tasks file for roles/vcenter
- import_tasks: poweroff.yml
# - import_tasks: poweron.yml
# - import_tasks: revert.yml
# - import_tasks: shutdown.yml
  • Created an all.yml inside the group_vars folder in the inventories directory (I don't know if this is the professional way to do it) that includes all the WinRM details, like this:
---
#WinRM Protocol Details
ansible_user: DOMAIN\user
ansible_password: f#0$o#1$0o
ansible_connection: winrm
ansible_port: 5985
ansible_winrm_scheme: http
ansible_winrm_server_cert_validation: ignore
ansible_winrm_transport: ntlm
ansible_winrm_read_timeout_sec: 60
ansible_winrm_operation_timeout_sec: 58
  • Created a revert_lab.yml playbook that includes the role, like this:
---
- name: revert an onpremis lab
  hosts: all
  roles:
  - vcenter

My ansible.cfg is like this:

[defaults]
inventory = /ansible/inventories
roles_path = ./roles:..~/ansible/roles

I executed the playbook successfully to power off all the machines in the lab, then I "turned on" the poweron task in the role, like this:

---
# tasks file for roles/vcenter
# - import_tasks: poweroff.yml
- import_tasks: poweron.yml
# - import_tasks: revert.yml
# - import_tasks: shutdown.yml

Now that all the lab's machines are shut down, executing the playbook gives the following error:

PLAY [revert vmware vcenter lab] *************************************************
TASK [Gathering Facts] ***********************************************************
fatal: [vm1.domain.com]: UNREACHABLE! => {"changed": false, "msg": "ntlm: 
HTTPConnectionPool(host='vm1.domain.com', port=5985): Max retries exceeded with url: /wsman (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fb7ae4908d0>: Failed to establish a new connection: [Errno 111] Connection refused',))", "unreachable": true}
fatal: [vm2.domain.com]: UNREACHABLE! => {"changed": false, "msg": "ntlm: HTTPConnectionPool(host='vm2.domain.com', port=5985): Max retries exceeded with url: /wsman (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fb7ae487b00>: Failed to establish a new connection: [Errno 111] Connection refused',))", "unreachable": true}
fatal: [vm3.domain.com]: UNREACHABLE! => {"changed": false, "msg": "ntlm: HTTPConnectionPool(host='vm3.domain.com', port=5985): Max retries exceeded with url: /wsman (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fb7ae48acc0>: Failed to establish a new connection: [Errno 111] Connection refused',))", "unreachable": true}
fatal: [vm4.domain.com]: UNREACHABLE! => {"changed": false, "msg": "ntlm: HTTPConnectionPool(host='vm4.domain.com', port=5985): Max retries exceeded with url: /wsman (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fb7ae48de80>: Failed to establish a new connection: [Errno 111] Connection refused',))", "unreachable": true}
fatal: [vm5.domain.com]: UNREACHABLE! => {"changed": false, "msg": "ntlm: 
HTTPConnectionPool(host='vm5.domain.com', port=5985): Max retries exceeded with url: /wsman (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fb7ae41f080>: Failed to establish a new connection: [Errno 111] Connection refused',))", "unreachable": true}
fatal: [vm6.domain.com]: UNREACHABLE! => {"changed": false, "msg": "ntlm: HTTPConnectionPool(host='vm6.domain.com', port=5985): Max retries exceeded with url: /wsman (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fb7ae41d7f0>: Failed to establish a new connection: [Errno 111] Connection refused',))", "unreachable": true}
fatal: [vm7.domain.com]: UNREACHABLE! => {"changed": false, "msg": "ntlm: HTTPConnectionPool(host='vm7.domain.com', port=5985): Max retries exceeded with url: /wsman (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fb7ae428048>: Failed to establish a new connection: [Errno 111] Connection refused',))", "unreachable": true}
fatal: [vm8.domain.com]: UNREACHABLE! => {"changed": false, "msg": "ntlm: HTTPConnectionPool(host='vm8.domain.com', port=5985): Max retries exceeded with url: /wsman (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fb7ae425588>: Failed to establish a new connection: [Errno 111] Connection refused',))", "unreachable": true}

PLAY RECAP ***********************************************************************
vm1.domain.com    : ok=0    changed=0    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0
vm2.domain.com    : ok=0    changed=0    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0
vm3.domain.com    : ok=0    changed=0    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0
vm4.domain.com    : ok=0    changed=0    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0
vm5.domain.com   : ok=0    changed=0    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0
vm6.domain.com   : ok=0    changed=0    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0
vm7.domain.com     : ok=0    changed=0    unreachable=1    failed=0    skipped=0   rescued=0    ignored=0
vm8.domain.com     : ok=0    changed=0    unreachable=1    failed=0    skipped=0   rescued=0    ignored=0

Why does the poweroff task work fine while the poweron task doesn't? How can I fix this issue?

My repository:

C:.
├───ansible
│   │   ansible.cfg
│   ├───inventories
│   │   └───test
│   │       ├───cloud
│   │       └───onpremis
│   │           └───domain.com
│   │               │   lab_j.yml
│   │               │   lab_r.yml
│   │               └───group_vars
│   │                       all.yml
│   ├───playbooks
│   │       revert_lab.yml
│   └───roles
│       └───vcenter
│           ├───tasks
│           │       main.yml
│           │       poweroff.yml
│           │       poweron.yml
│           │       revert.yml
│           │       shutdown.yml
│           └───vars
│                   main.yml

My inventory lab_r.yml (partial schema):

---
all:
  children:
    root:
      children:
        center:
          children:
            appservers:
              hosts:
                vm1.domain.com:
            qservers:
              hosts:
                vm2.domain.com:
            dbservers:
              hosts:
                vm3.domain.com:
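As discussed in the comments below, a per-host `vm_name` variable was also tried; in this layout it would sit under each FQDN. This is a hypothetical sketch — the values must match the VM names as shown in vCenter:

```yaml
            appservers:
              hosts:
                vm1.domain.com:
                  vm_name: VM1   # assumed vCenter display name
```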
in flag
Is the machine you are starting the Ansible playbooks from a Windows machine set up for WinRM? My guess would be that Ansible tries to connect with WinRM to localhost, since you configured this in `all.yml`, which fails. localhost should be configured with `ansible_connection: local`. This should be the default when localhost is not explicitly specified in the inventory, but who knows ...
Zeitounator avatar
fr flag
You don't have to declare `localhost` in your inventory: it is [implicit](https://docs.ansible.com/ansible/latest/inventory/implicit_localhost.html) and you usually want it to stay that way so it does not match the `all` group target. Meanwhile, as reported in the doc link, it still reads vars from `group_vars/all.yml` which is the problem as reported by @GeraldSchneider above. Just move the file to `group_vars/center.yml` so that values are applied to the relevant group only.
uz flag
@Zeitounator, please clarify: why doesn't Ansible read the ```roles\vcenter\vars\main.yml``` file that includes the vCenter access details?
uz flag
@GeraldSchneider Changing to ```ansible_connection: local``` and running the playbook gives the following results: in TASK [Gathering Facts] all the machines are recognized, then performing the task shows the following error for every machine in the list: ```"msg": "Unable to set power state for non-existing virtual machine : 'ansible'"```
Zeitounator avatar
fr flag
Your VMs are off. You need to turn off fact gathering in your play in that case (with `gather_facts: false`), else Ansible tries to connect to them to get info before you play the task that turns them on.
uz flag
@Zeitounator, I set the play with ```gather_facts: false```. Still the same results - the VMs stay shut down. Changing ```ansible_connection``` to ```local``` results in: ```"msg": "The task includes an option with an undefined variable. The error was: 'ansible_hostname'...```
Zeitounator avatar
fr flag
It's not the same result... now you get an error because you are trying to use a variable that is defined only when you gather facts... which is not possible because your vm is off. You have to find the solution to get the machine name without having to connect to the vm.
uz flag
@Zeitounator, I tried. In the ```lab_r.yml``` file I added an additional variable ```vm_name``` with the names of the machines, like this: ```vm_name: VM1```, under each FQDN field (as you can see in the example in the post). In the ```poweron.yml``` task I changed the name to ```name: "{{ vm_name }}"```. I set the play with ```gather_facts: no```. After execution I got: ```"msg": "Unable to set power state for non-existing virtual machine : 'VM1'"```
Zeitounator avatar
fr flag
You seem to be more interested in chatting about your debugging session than trying to get your problem fixed. Are you expecting people reading this to know which name your vm has inside your vmware installation? No one can guess from your actual post since you did not tell us how you created the vms and from which variables. Use a name that actually exists and it will be powered on. Good luck.
Score:0
uz flag

The problem was solved by:

  • Setting the folder key value to "/{{ datacenter_name }}/"
  • Adding the poweron task alongside an additional task such as revert - meaning the poweron task worked for me only as part of a sequence of tasks

Unfortunately, the poweron task didn't work for me as a standalone task.
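Putting the fixes together, a minimal power-on play would look roughly like the sketch below. Assumptions: the vCenter credentials are loaded from the role's vars file, a `vm_name` host variable holds each VM's name in vCenter (falling back to the inventory name), and `gather_facts` is false because the VMs are off:

```yaml
---
- name: Power on the lab VMs
  hosts: all
  gather_facts: false                      # VMs are off; facts cannot be gathered
  vars_files:
    - ../roles/vcenter/vars/main.yml       # assumed path to the vCenter credentials
  tasks:
    - name: Set the state of a virtual machine to poweron
      community.vmware.vmware_guest_powerstate:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        folder: "/{{ datacenter_name }}/"  # widened folder, per the fix above
        name: "{{ vm_name | default(inventory_hostname) }}"
        validate_certs: no
        state: powered-on
      delegate_to: localhost
```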
