Unraid VM Snapshot Automation with Ansible: Part 2 - Restoring Snapshots

Ansible Playbooks to Restore Unraid VM Disks From Local or Remote Snapshots

Intro

Hello again, and welcome to the second post in my Unraid snapshot automation series!

In my first post, we explored how to use Ansible to automate the creation of VM snapshots on Unraid, simplifying the backup process for home lab setups or even more advanced environments. Now, it's time to complete the picture by diving into snapshot restoration. In this post, I'll show you how to leverage those snapshots we created earlier to quickly and efficiently roll back VMs to a previous state.

Whether you're testing, troubleshooting, or simply maintaining a reliable baseline for your VMs, automated snapshot restoration will save you time and effort. Like before, this is designed with the home lab community in mind, but the process can easily be adapted for other Linux-based systems.

The first post can be found here:
https://thenerdylyonsden.hashnode.dev/unraid-vm-snapshot-automation-with-ansible-part-1

Let’s get started!

Scenario and Requirements

This section largely mirrors the previous post. I'll be using the snapshot files created earlier—both the remote .img and local .tar files. The setup remains the same: I'll use the Ubuntu Ansible host, the Unraid server for local snapshots, and the Synology DiskStation for remote storage. For local restores, the Unraid server will act as both the source and destination. No additional packages or configurations are required on any of the systems.

Let's Automate!

Overview and Setup

Let's review the playbook directory structure from the previous post. It looks like this:


├── README.md
├── create-snapshot-pb.yml
├── defaults
│   └── inventory.yml
├── files
│   ├── backup-playbook-old.yml
│   └── snapshot-creation-unused.yml
├── handlers
├── meta
├── restore-from-local-tar-pb.yml
├── restore-from-snapshot-pb.yml
├── tasks
│   ├── shutdown-vm.yml
│   └── start-vm.yml
├── templates
├── tests
│   ├── debug-tests-pb.yml
│   └── simple-debugs.yml
└── vars
    ├── snapshot-creation-vars.yml
    └── snapshot-restore-vars.yml

Most of this was covered in the previous post. I'll go over the new files here:

  • vars/snapshot-restore-vars.yml Similar to the create file, this file is where users specify the list of VMs and their corresponding disks for snapshot restoration. It primarily consists of a dictionary outlining the VMs and the disks to be restored. Additionally, it includes variables for configuring the connection to the destination NAS device.

  • restore-from-snapshot-pb.yml This playbook manages the restoration process from the remote snapshot repository and is composed of three plays. The first play serves two functions: it verifies the targeted Unraid VMs and disks, and builds additional data structures along with dynamic host groups. The second play locates the correct snapshots, transfers them to the Unraid server, and handles file comparison, VM shutdown, and replacing the original disk with the snapshot. The third play restarts the VMs once all other tasks are completed.

  • restore-from-local-tar-pb.yml Same structure as above, but it does everything locally on the Unraid server, using .tar files instead of remote snapshots.

Inventory - defaults/inventory.yml

Covered in Part 1. Shown here again for reference:

---
nodes:
  hosts:
    diskstation:
      ansible_host: "{{ lookup('env', 'DISKSTATION_IP_ADDRESS') }}"
      ansible_user: "{{ lookup('env', 'DISKSTATION_USER') }}"
      ansible_password: "{{ lookup('env', 'DISKSTATION_PASS') }}"
    unraid:
      ansible_host: "{{ lookup('env', 'UNRAID_IP_ADDRESS') }}"
      ansible_user: "{{ lookup('env', 'UNRAID_USER') }}"
      ansible_password: "{{ lookup('env', 'UNRAID_PASS') }}"

Defines the connection variables for the unraid and diskstation hosts.
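
Since everything here is pulled from environment variables, make sure those variables are exported on the Ansible host before running the playbooks. A quick sketch with placeholder values (substitute your own hosts and credentials):

export UNRAID_IP_ADDRESS="192.168.1.10"
export UNRAID_USER="root"
export UNRAID_PASS="your-unraid-password"
export DISKSTATION_IP_ADDRESS="192.168.1.20"
export DISKSTATION_USER="unraid"
export DISKSTATION_PASS="your-diskstation-password"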

Variables - vars/snapshot-restore-vars.yml

Much like the snapshot creation automation, this playbook relies on a single variable file that serves as the primary point of interaction for the user. In this file, you’ll list the VMs, specify the disks to be restored for each VM, provide the path to the existing disk .img file, and indicate the snapshot you wish to restore from. If a snapshot name is not specified, the playbook will automatically search for and restore from the most recent snapshot associated with the disk.

---
snapshot_repository_base_directory: volume1/Home\ Media/Backup
repository_user: unraid

snapshot_restore_list:
  - vm_name: Rocky9-TESTNode
    disks_to_restore:
      - vm_disk_to_restore: vdisk1.img
        vm_disk_directory: /mnt/cache/domains
        snapshot_to_restore_from: test-snapshot
      - vm_disk_to_restore: vdisk2.img
        vm_disk_directory: /mnt/disk1/domains
  - vm_name: Rocky9-LabNode3
    disks_to_restore:
      - vm_disk_to_restore: vdisk1.img
        vm_disk_directory: /mnt/nvme_cache/domains
        snapshot_to_restore_from: kubernetes-baseline

Let's examine this file. It's similar to the one used for creation, though I used slightly more descriptive key names this time:

  • snapshot_restore_list - the main data structure for defining your list of VMs and disks. Within it there are two main variables: vm_name and disks_to_restore.

  • vm_name - defines the name of your VM. This must match the name of the VM as it appears within the Unraid system itself.

  • disks_to_restore - a per-VM list of the disks that will be restored. Each entry requires two variables, vm_disk_to_restore and vm_disk_directory, with snapshot_to_restore_from as an optional third.

  • vm_disk_to_restore - the existing .img file name for that VM disk, e.g. vdisk1.img

  • vm_disk_directory - the absolute root directory path where the per-VM files are stored. An example of a full path to an .img file within Unraid would be: /mnt/cache/domains/Rocky9-TESTNode/vdisk1.img

  • snapshot_to_restore_from - is an optional attribute that allows the user to specify the name of the snapshot for restoration. If this attribute is not provided, the playbook will automatically search for and use the latest snapshot that matches the disk.

  • snapshot_repository_base_directory and repository_user are used within the playbook's rsync task. These variables offer flexibility, allowing the user to specify their own remote user and target destination for the rsync operation. They are used only if the snapshots were sent to a remote location upon creation.

Following the provided example, you can define your VMs, disk names, locations, and restoration snapshot names before running the playbook.

Playbooks

Two distinct playbooks were created to manage disk restoration. The restore-from-snapshot-pb.yml playbook handles restoration from the remote repository (DiskStation) using rsync. Meanwhile, local restoration is managed by restore-from-local-tar-pb.yml. Combining these processes proved to be too complex and unwieldy, so it was simpler and more manageable to build, test, and understand them separately.

NOTE: Snapshot restoration is much trickier to automate than creation. There are a lot more tasks/conditionals related to error handling in these playbooks.

restore-from-snapshot-pb.yml

Restore Snapshot Preparation Play

- name: Restore Snapshot Preparation
  hosts: unraid
  gather_facts: no
  vars_files:
    - ./vars/snapshot-restore-vars.yml

  tasks:
    - name: Retrieve List of All Existing VMs on UnRAID Hypervisor
      shell: virsh list --all | tail -n +3 | awk '{ print $2}'
      register: hypervisor_existing_vm_list

    - name: Generate VM and Disk Lists for Validated VMs in User Inputted Data
      set_fact: 
        vms_map: "{{ snapshot_restore_list | map(attribute='vm_name') }}"
        disks_map: "{{ snapshot_restore_list | map(attribute='disks_to_restore') }}"
      when: item.vm_name in hypervisor_existing_vm_list.stdout_lines
      with_items: "{{ snapshot_restore_list }}"

    - name: Build Data Structure for Snapshot Restoration
      set_fact: 
        snapshot_data_map: "{{ dict(vms_map | zip(disks_map)) | dict2items(key_name='vm_name', value_name='disks_to_restore') | subelements('disks_to_restore') }}"

    - name: Verify Snapshot Data is Available for Restoration
      assert:
        that:
          - snapshot_data_map
        fail_msg: "Restore operation failed. Not enough data to proceed."

    - name: Dynamically Create Host Group for Disks to be Restored
      ansible.builtin.add_host:
        name: "{{ item[0]['vm_name'] }}-{{ item[1]['vm_disk_to_restore'][:-4] }}"
        groups: disks
        vm_name: "{{ item[0]['vm_name'] }}"
        disk_name: "{{ item[1]['vm_disk_to_restore'] }}"
        source_directory: "{{ item[1]['vm_disk_directory'] }}"
        snapshot_to_restore_from: "{{ item[1]['snapshot_to_restore_from'] | default('latest') }}"
      loop: "{{ snapshot_data_map }}"

Purpose:
This play prepares for the restoration of VM snapshots on an Unraid hypervisor. It gathers information about existing VMs, validates user input, structures the data for restoration, and dynamically creates host groups for managing the restore process.

Hosts:
Targets the unraid host.

Variables File:
Loads additional variables from ./vars/snapshot-restore-vars.yml, mainly the user's modified snapshot_restore_list.

Tasks:

  1. Retrieve List of All Existing VMs on Unraid Hypervisor:
    Executes a shell command to list all VMs on the Unraid hypervisor and registers the result. It extracts the VM names using virsh and formats the output for further use (sample output shown just below this list).

  2. Generate VM and Disk Lists for Validated VMs in User Inputted Data:
    Constructs lists of VM names and disks to restore from the user input data, but only for VMs that exist on the hypervisor. This task only runs if the user-provided data matches at least one existing VM name; otherwise the playbook fails due to lack of data.

    • vms_map: List of VM names.

    • disks_map: List of disks to restore.

These lists are then used to create the larger snapshot_data_map.
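
To make task 1 concrete, here is roughly what that shell pipeline sees and produces (sample output; your VM names and states will differ):

root@unraid:~# virsh list --all
 Id   Name              State
------------------------------------
 1    Rocky9-TESTNode   running
 -    Rocky9-LabNode3   shut off

root@unraid:~# virsh list --all | tail -n +3 | awk '{ print $2 }'
Rocky9-TESTNode
Rocky9-LabNode3

The tail -n +3 skips the two header lines, and awk grabs the second column, which is the VM name.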

  3. Build Data Structure for Snapshot Restoration:
    Creates a nested data structure that maps each VM to its corresponding disks to restore, preparing it for subsequent tasks.

    • snapshot_data_map: Merges the VM and disk maps into a more structured data format, making it easier to access and manage the VM/disk information programmatically. My goal was to keep the inventory files simple for users to understand and modify. However, this approach didn’t work well with the looping logic I needed, so I created this new data map for better flexibility and control (see the sketch after this list).

  4. Verify Snapshot Data is Available for Restoration:

    Checks that snapshot_data_map has been populated correctly and ensures that there is enough data to proceed with the restoration. If not, it triggers a failure message to indicate insufficient data and halts the playbook.

  5. Dynamically Create Host Group for Disks to be Restored:

    Creates dynamic host entries for each disk that needs to be restored. Each host is added to the disks group with relevant information about the VM, disk, and optional snapshot name.
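
To show what the preparation play actually builds, here is a sketch of snapshot_data_map for one VM with two disks, using values from the example vars file (the repeated disks_to_restore list is trimmed for readability):

snapshot_data_map:
  - - vm_name: Rocky9-TESTNode
      disks_to_restore: [ ... ]
    - vm_disk_to_restore: vdisk1.img
      vm_disk_directory: /mnt/cache/domains
      snapshot_to_restore_from: test-snapshot
  - - vm_name: Rocky9-TESTNode
      disks_to_restore: [ ... ]
    - vm_disk_to_restore: vdisk2.img
      vm_disk_directory: /mnt/disk1/domains

Each element is a [vm, disk] pair produced by subelements, which is why the add_host task references item[0]['vm_name'] and item[1]['vm_disk_to_restore'].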

Disk Restore From Snapshot Play

- name: Disk Restore From Snapshot
  hosts: disks
  gather_facts: no
  vars_files:
    - ./vars/snapshot-restore-vars.yml

  tasks:

    - name: Find files in the VM folder containing the target VM disk name
      find:
        paths: "/{{ snapshot_repository_base_directory | regex_replace('\\\\', '')}}/{{ vm_name }}/"
        patterns: "*{{ disk_name[:-4] }}*"
        recurse: yes
      register: found_files 
      delegate_to: diskstation

    - name: Ensure that files were found
      assert:
        that:
          - found_files.matched > 0
        fail_msg: "No files found matching disk {{ disk_name[:-4] }} for VM {{ vm_name }}."

    - name: Create a file list from the target VM folder with only file names
      set_fact: 
        file_list: "{{ found_files.files | map(attribute='path') | map('regex_replace','^.*/(.*)$','\\1') | list }}"

    - name: Stitch together full snapshot name. Replace dashes and remove special characters
      set_fact: 
        full_snapshot_name: "{{ disk_name[:-4] }}.{{ snapshot_to_restore_from | regex_replace('\\-', '_') | regex_replace('\\W', '') }}.img"

    - name: Find and set correct snapshot if file found in snapshot folder
      set_fact: 
        found_snapshot: "{{ full_snapshot_name }}"
      when: full_snapshot_name in file_list

    - name: Find and set snapshot to latest if undefined or error handle block
      block:

        - name: Sort found files by modification time (newest first) - LATEST Block
          set_fact:
            sorted_files: "{{ found_files.files | sort(attribute='mtime', reverse=True) | map(attribute='path') | map('regex_replace','^.*/(.*)$','\\1') | list  }}"

        - name: Find and set correct snapshot for newest found .img file - LATEST Block
          set_fact: 
            found_snapshot: "{{ sorted_files | first }}"

      when: found_snapshot is undefined or found_snapshot == None  

    - name: Ensure that the desired snapshot file was found
      assert:
        that:
          - found_snapshot is defined and found_snapshot != None
        fail_msg: "The snapshot to restore was not found. May not exist or user date was entered incorrectly."
        success_msg: "Snapshot found! Will begin restore process NOW."

    - name: Transfer snapshots to VM hypervisor server via rsync
      command: rsync {{ repository_user }}@{{ hostvars['diskstation']['ansible_host'] }}:/{{ snapshot_repository_base_directory }}/{{ vm_name }}/{{ found_snapshot }} {{ found_snapshot }}
      args:
        chdir: "{{ source_directory }}/{{ vm_name }}"
      delegate_to: unraid

    - name: Get attributes of original stored snapshot .img file
      stat:
        path: "/{{ snapshot_repository_base_directory | regex_replace('\\\\', '')}}/{{ vm_name }}/{{ found_snapshot }}"
        get_checksum: false
      register: file1
      delegate_to: diskstation

    - name: Get attributes of newly transferred snapshot .img file
      stat:
        path: "{{ source_directory }}/{{ vm_name }}/{{ found_snapshot }}"
        get_checksum: false
      register: file2
      delegate_to: unraid

    - name: Ensure original and transferred file sizes are the same
      assert:
        that:
          - file1.stat.size == file2.stat.size
        fail_msg: "Files failed size comparison post transfer. Aborting operation for {{ inventory_hostname }}"
        success_msg: File size comparison passed.

    - name: Shutdown VM(s)
      include_tasks: ./tasks/shutdown-vm.yml
      loop: "{{ hostvars['unraid']['vms_map'] }}"

    - name: Delete {{ disk_name }} for VM {{ vm_name }}
      ansible.builtin.file:
        path: "{{ source_directory }}/{{ vm_name }}/{{ disk_name }}"
        state: absent
      delegate_to: unraid

    - name: Rename snapshot to proper disk name
      command: mv {{ found_snapshot }} {{ disk_name }}
      args:
        chdir: "{{ source_directory }}/{{ vm_name }}"
      delegate_to: unraid

Purpose:
This play facilitates the restoration of VM disk snapshots on an Unraid server. It searches for the required snapshot, validates the snapshot file, and transfers it back to the hypervisor for restoration, ensuring the integrity of the restored disk.

Hosts:
Targets the disks host group.

Variables File:
Loads additional variables from ./vars/snapshot-restore-vars.yml, mainly the user's modified snapshot_restore_list.

Tasks:

  1. Find Files in the VM Folder Containing the Target VM Disk Name:
    Searches the snapshot repository for files that match the target VM disk name (e.g., vdisk1, vdisk2) and recursively lists them. The result is stored in the found_files variable.

  2. Ensure That Files Were Found:

    Verifies that at least one file matching the disk name was found. If no files are found, it produces a failure message and the playbook fails for that host disk.

  3. Create a File List From the Target VM Folder with only File Names:

    Extracts and stores only the file names from the found_files list.

  4. Stitch Together Full Snapshot Name:

    Constructs the full snapshot name by combining the disk name, the user-provided snapshot name (if available), and “.img”. It also replaces dashes with underscores and removes any special characters (see the illustration after this list).

  5. Find and Set the Correct Snapshot if File Found in Snapshot Folder:

    If the constructed snapshot name is found in the list of files, it sets found_snapshot to this name.

  6. Find and Set Snapshot to Latest if Undefined or Error Handling (Block):

    If no specific snapshot is found or defined, this block sorts the found files by modification time (newest first) and sets the snapshot to the latest available one.

  7. Ensure the Desired Snapshot File Was Found:

    Confirms that a snapshot was found and is ready for restoration. If not, it fails with an error message and the playbook fails for that host disk.

  8. Transfer Snapshots to VM Hypervisor Server via rsync:

    Uses rsync to transfer the found snapshot from the remote DiskStation to the Unraid server, where the VM is located. Changes into the correct disk directory prior to transfer.

  9. Get Attributes of the Snapshot Files and Compare Size:

    The next three tasks retrieve the attributes of the DiskStation snapshot and the newly transferred snapshot on the Unraid server, then compare the file sizes of the original and transferred snapshots to ensure the transfer was successful. The playbook fails for the host disk if the sizes are not equal.

  10. Shutdown VMs:

    Shuts down the VMs in preparation for the restoration process by calling a separate task file (/tasks/shutdown-vm.yml). For more details on the shutdown tasks, refer to the previous post.

  11. Delete the Original Disk for the VM:

    Deletes the original disk file for the VM so that the snapshot file can be properly renamed to the correct disk name.

  12. Rename Snapshot to Proper Disk Name:

    Renames the restored snapshot file to match the original disk file name, completing the restoration process.
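
As referenced in task 4 above, here is a minimal, illustration-only debug task showing how the full snapshot name gets stitched together (disk and snapshot names hardcoded for the example):

- name: Show how the full snapshot name is stitched together (illustration only)
  ansible.builtin.debug:
    msg: "{{ 'vdisk1.img'[:-4] }}.{{ 'test-snapshot' | regex_replace('\\-', '_') | regex_replace('\\W', '') }}.img"

This prints vdisk1.test_snapshot.img, matching the file names the creation playbook produced in Part 1.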

Restart Affected VMs Play

- name: Restart Affected VMs
  hosts: unraid
  gather_facts: no
  vars_files:
    - ./vars/snapshot-restore-vars.yml

  tasks:
    - name: Start VM(s) back up
      include_tasks: ./tasks/start-vm.yml
      loop: "{{ snapshot_restore_list }}"

Purpose:
This play’s only purpose is to start the targeted VMs after the restore process has completed for all disks.

Hosts:
Targets the unraid host.

Variables File:
Loads additional variables from ./vars/snapshot-restore-vars.yml, mainly the user's modified snapshot_restore_list.

Tasks:

  1. Start VM(s) Back Up:
    Starts up the VMs once the restoration process has completed by calling a separate task file (/tasks/start-vm.yml). For more details on the startup tasks, refer to the previous post.

NOTE: This was intentionally made a separate play at the end of the playbook to ensure all disk restore operations are completed beforehand. By looping over the VMs using the snapshot_restore_list variable, only one start command per VM is sent, reducing the chance of errors.

restore-from-local-tar-pb.yml

NOTE: This playbook is quite similar to the restore-from-snapshot-pb.yml playbook, but it focuses on local restoration using the .tar files. All tasks are executed either on the Ansible host or the Unraid server. In this breakdown, I'll only highlight the key task differences from the previous playbook.

Restore Snapshot Preparation Play

Exactly the same as in the restore-from-snapshot-pb.yml playbook. Nothing new to cover here.

Disk Restore From TAR file Play

    - name: Find files in the VM folder containing the target VM disk name
      find:
        paths: "/{{ source_directory | regex_replace('\\\\', '')}}/{{ vm_name }}/"
        patterns: "*{{ disk_name[:-4] }}*"
        recurse: yes
      register: found_files 
      delegate_to: unraid

    - name: Filter files matching patterns for .tar files
      set_fact:
        matched_tar_files: "{{ found_files.files | selectattr('path', 'search', '.*\\.(tar)$') | list }}"

Tasks:

  1. Find Files in the VM Folder Containing the Target VM Disk Name:

    Similar to the other playbook. This task searches through the VM's directory to locate any files that match the target disk name, regardless of file type (e.g., .img, .tar).

  2. Filter Files Matching Patterns for .tar Files:

    After locating files in the previous task, this task filters out only the .tar files from the list of found files. It uses set_fact to store the list in the variable matched_tar_files (see the sketch just below).
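
As a quick sketch of what that filter leaves behind, assuming the find task returned both an .img file and a .tar snapshot for the disk (paths illustrative):

- name: Show which .tar files survived the filter (illustration only)
  ansible.builtin.debug:
    msg: "{{ matched_tar_files | map(attribute='path') | list }}"

With vdisk2.img and vdisk2.test_snapshot.tar in the folder, this would print only ['/mnt/disk1/domains/Rocky9-TESTNode/vdisk2.test_snapshot.tar'].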

Everything is the same until the unzip task (below).

    - name: Unzip .tar file
      command: tar -xf {{ found_snapshot }}
      args:
        chdir: "{{ source_directory }}/{{ vm_name }}"
      delegate_to: unraid

Pretty straightforward here. This simply extracts the correct snapshot .tar file back to a usable .img file.

The remaining tasks follow the same process as the restore-from-snapshot-pb.yml playbook. They gather the attributes of both the original and newly unzipped files, verify that their sizes match, shut down the required VMs, delete the original disk file, rename the snapshot to the appropriate disk name, and finally, restart the VMs.

Restoring in Action (Running the Playbook)

Like the create playbook in the previous post, these playbooks are very simple to run. Run them from the root of the playbook directory:

ansible-playbook restore-from-snapshot-pb.yml -i defaults/inventory.yml
ansible-playbook restore-from-local-tar-pb.yml -i defaults/inventory.yml

Below are the results of successful playbook runs, tested using a single 2GB disk for both local and remote restores. One run uses a static snapshot name, while the other demonstrates the process of finding the 'latest' snapshot when the name is not defined.

Restore from snapshot w/ finding the latest (omitting Python version warnings):

PLAY [Restore Snapshot Preparation] ******************************************************************************************************************

TASK [Retrieve List of All Existing VMs on UnRAID Hypervisor] ****************************************************************************************
changed: [unraid]

TASK [Generate VM and Disk Lists for Validated VMs in User Inputted Data] ****************************************************************************
ok: [unraid] => (item={'vm_name': 'Rocky9-TESTNode', 'disks_to_restore': [{'vm_disk_to_restore': 'vdisk2.img', 'vm_disk_directory': '/mnt/disk1/domains'}]})

TASK [Build Data Structure for Snapshot Restoration] *************************************************************************************************
ok: [unraid]

TASK [Verify Snapshot Data is Available for Restoration] *********************************************************************************************
ok: [unraid] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [Dynamically Create Host Group for Disks to be Restored] ****************************************************************************************
changed: [unraid] => (item=[{'vm_name': 'Rocky9-TESTNode', 'disks_to_restore': [{'vm_disk_to_restore': 'vdisk2.img', 'vm_disk_directory': '/mnt/disk1/domains'}]}, {'vm_disk_to_restore': 'vdisk2.img', 'vm_disk_directory': '/mnt/disk1/domains'}])

PLAY [Disk Restore From Snapshot] ********************************************************************************************************************

TASK [Find files in the VM folder containing the target VM disk name] ********************************************************************************
ok: [Rocky9-TESTNode-vdisk2 -> diskstation({{ lookup('env', 'DISKSTATION_IP_ADDRESS') }})]

TASK [Ensure that files were found] ******************************************************************************************************************
ok: [Rocky9-TESTNode-vdisk2] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [Create a file list from the target VM folder with only file names] *****************************************************************************
ok: [Rocky9-TESTNode-vdisk2]

TASK [Stitch together full snapshot name. Replace dashes and remove special characters] **************************************************************
ok: [Rocky9-TESTNode-vdisk2]

TASK [Find and set correct snapshot if file found in snapshot folder] ********************************************************************************
skipping: [Rocky9-TESTNode-vdisk2]

TASK [Sort found files by modification time (newest first) - LATEST Block] ***************************************************************************
ok: [Rocky9-TESTNode-vdisk2]

TASK [Find and set correct snapshot for newest found .img file - LATEST Block] ***********************************************************************
ok: [Rocky9-TESTNode-vdisk2]

TASK [Ensure that the desired snapshot file was found] ***********************************************************************************************
ok: [Rocky9-TESTNode-vdisk2] => {
    "changed": false,
    "msg": "Snapshot found! Will begin restore process NOW."
}

TASK [Transfer snapshots to VM hypervisor server via rsync] ******************************************************************************************
changed: [Rocky9-TESTNode-vdisk2 -> unraid({{ lookup('env', 'UNRAID_IP_ADDRESS') }})]

TASK [Get attributes of original stored snapshot .img file] ******************************************************************************************
ok: [Rocky9-TESTNode-vdisk2 -> diskstation({{ lookup('env', 'DISKSTATION_IP_ADDRESS') }})]

TASK [Get attributes of newly transferred snapshot .img file] ****************************************************************************************
ok: [Rocky9-TESTNode-vdisk2 -> unraid({{ lookup('env', 'UNRAID_IP_ADDRESS') }})]

TASK [Ensure original and transferred file sizes are the same] ***************************************************************************************
ok: [Rocky9-TESTNode-vdisk2] => {
    "changed": false,
    "msg": "File size comparison passed."
}

TASK [Shutdown VM(s)] ********************************************************************************************************************************
included: /mnt/c/Dev/Git/unraid-vm-snapshots/tasks/shutdown-vm.yml for Rocky9-TESTNode-vdisk2 => (item=Rocky9-TESTNode)

TASK [Shutdown VM - Rocky9-TESTNode] *****************************************************************************************************************
changed: [Rocky9-TESTNode-vdisk2 -> unraid({{ lookup('env', 'UNRAID_IP_ADDRESS') }})]

TASK [Get VM status - Rocky9-TESTNode] ***************************************************************************************************************
FAILED - RETRYING: [Rocky9-TESTNode-vdisk2 -> unraid]: Get VM status - Rocky9-TESTNode (5 retries left).
changed: [Rocky9-TESTNode-vdisk2 -> unraid({{ lookup('env', 'UNRAID_IP_ADDRESS') }})]

TASK [Delete vdisk2.img for VM Rocky9-TESTNode] ******************************************************************************************************
changed: [Rocky9-TESTNode-vdisk2 -> unraid({{ lookup('env', 'UNRAID_IP_ADDRESS') }})]

TASK [Rename snapshot to proper disk name] ***********************************************************************************************************
changed: [Rocky9-TESTNode-vdisk2 -> unraid({{ lookup('env', 'UNRAID_IP_ADDRESS') }})]

PLAY [Restart Affected VMs] **************************************************************************************************************************

TASK [Start VM(s) back up] ***************************************************************************************************************************
included: /mnt/c/Dev/Git/unraid-vm-snapshots/tasks/start-vm.yml for unraid => (item={'vm_name': 'Rocky9-TESTNode', 'disks_to_restore': [{'vm_disk_to_restore': 'vdisk2.img', 'vm_disk_directory': '/mnt/disk1/domains'}]})

TASK [Start VM - Rocky9-TESTNode] ********************************************************************************************************************
changed: [unraid]

TASK [Get VM status - Rocky9-TESTNode] ***************************************************************************************************************
changed: [unraid]

TASK [Ensure VM 'running' status] ********************************************************************************************************************
ok: [unraid] => {
    "changed": false,
    "msg": "Rocky9-TESTNode has successfully started. Restore from snapshot complete."
}

PLAY RECAP *******************************************************************************************************************************************
Rocky9-TESTNode-vdisk2     : ok=16   changed=5    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0   
unraid                     : ok=9    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Restore from local .tar using defined snapshot name (omitting Python version warnings):

PLAY [Restore Snapshot Preparation] ******************************************************************************************************************

TASK [Retrieve List of All Existing VMs on UnRAID Hypervisor] ****************************************************************************************
changed: [unraid]

TASK [Generate VM and Disk Lists for Validated VMs in User Inputted Data] ****************************************************************************
ok: [unraid] => (item={'vm_name': 'Rocky9-TESTNode', 'disks_to_restore': [{'vm_disk_to_restore': 'vdisk2.img', 'vm_disk_directory': '/mnt/disk1/domains', 'snapshot_to_restore_from': 'test-snapshot'}]})

TASK [Build Data Structure for Snapshot Restoration] *************************************************************************************************
ok: [unraid]

TASK [Verify Snapshot Data is Available for Restoration] *********************************************************************************************
ok: [unraid] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [Dynamically Create Host Group for Disks to be Restored] ****************************************************************************************
changed: [unraid] => (item=[{'vm_name': 'Rocky9-TESTNode', 'disks_to_restore': [{'vm_disk_to_restore': 'vdisk2.img', 'vm_disk_directory': '/mnt/disk1/domains', 'snapshot_to_restore_from': 'test-snapshot'}]}, {'vm_disk_to_restore': 'vdisk2.img', 'vm_disk_directory': '/mnt/disk1/domains', 'snapshot_to_restore_from': 'test-snapshot'}])

PLAY [Disk Restore From TAR file] ********************************************************************************************************************

TASK [Find files in the VM folder containing the target VM disk name] ********************************************************************************
ok: [Rocky9-TESTNode-vdisk2 -> unraid({{ lookup('env', 'UNRAID_IP_ADDRESS') }})]

TASK [Filter files matching patterns for .tar files] *************************************************************************************************
ok: [Rocky9-TESTNode-vdisk2]

TASK [Ensure that files were found] ******************************************************************************************************************
ok: [Rocky9-TESTNode-vdisk2] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [Create a file list from the target VM folder with only file names] *****************************************************************************
ok: [Rocky9-TESTNode-vdisk2]

TASK [Stitch together full snapshot name. Replace dashes and remove special characters] **************************************************************
ok: [Rocky9-TESTNode-vdisk2]

TASK [Find and set correct snapshot if file found in snapshot folder] ********************************************************************************
skipping: [Rocky9-TESTNode-vdisk2]

TASK [Sort found files by modification time (newest first) - LATEST Block] ***************************************************************************
ok: [Rocky9-TESTNode-vdisk2]

TASK [Find and set correct snapshot for newest found .img file - LATEST Block] ***********************************************************************
ok: [Rocky9-TESTNode-vdisk2]

TASK [Ensure that the desired snapshot file was found] ***********************************************************************************************
ok: [Rocky9-TESTNode-vdisk2] => {
    "changed": false,
    "msg": "Snapshot found! Will begin restore process NOW."
}

TASK [Unzip .tar file] *******************************************************************************************************************************
changed: [Rocky9-TESTNode-vdisk2 -> unraid({{ lookup('env', 'UNRAID_IP_ADDRESS') }})]

TASK [Get attributes of unzipped .img file] **********************************************************************************************************
ok: [Rocky9-TESTNode-vdisk2 -> unraid({{ lookup('env', 'UNRAID_IP_ADDRESS') }})]

TASK [Get attributes of original disk .img file] *****************************************************************************************************
ok: [Rocky9-TESTNode-vdisk2 -> unraid({{ lookup('env', 'UNRAID_IP_ADDRESS') }})]

TASK [Ensure original and unzipped .img file sizes are the same] *************************************************************************************
ok: [Rocky9-TESTNode-vdisk2] => {
    "changed": false,
    "msg": "File size comparison passed."
}

TASK [Shutdown VM(s)] ********************************************************************************************************************************
included: /mnt/c/Dev/Git/unraid-vm-snapshots/tasks/shutdown-vm.yml for Rocky9-TESTNode-vdisk2 => (item=Rocky9-TESTNode)

TASK [Shutdown VM - Rocky9-TESTNode] *****************************************************************************************************************
changed: [Rocky9-TESTNode-vdisk2 -> unraid({{ lookup('env', 'UNRAID_IP_ADDRESS') }})]

TASK [Get VM status - Rocky9-TESTNode] ***************************************************************************************************************
FAILED - RETRYING: [Rocky9-TESTNode-vdisk2 -> unraid]: Get VM status - Rocky9-TESTNode (5 retries left).
FAILED - RETRYING: [Rocky9-TESTNode-vdisk2 -> unraid]: Get VM status - Rocky9-TESTNode (4 retries left).
changed: [Rocky9-TESTNode-vdisk2 -> unraid({{ lookup('env', 'UNRAID_IP_ADDRESS') }})]

TASK [Delete vdisk2.img for VM Rocky9-TESTNode] ******************************************************************************************************
changed: [Rocky9-TESTNode-vdisk2 -> unraid({{ lookup('env', 'UNRAID_IP_ADDRESS') }})]

TASK [Rename unzipped snapshot to proper disk name] **************************************************************************************************
changed: [Rocky9-TESTNode-vdisk2 -> unraid({{ lookup('env', 'UNRAID_IP_ADDRESS') }})]

PLAY [Restart Affected VMs] **************************************************************************************************************************

TASK [Start VM(s) back up] ***************************************************************************************************************************
included: /mnt/c/Dev/Git/unraid-vm-snapshots/tasks/start-vm.yml for unraid => (item={'vm_name': 'Rocky9-TESTNode', 'disks_to_restore': [{'vm_disk_to_restore': 'vdisk2.img', 'vm_disk_directory': '/mnt/disk1/domains', 'snapshot_to_restore_from': 'test-snapshot'}]})

TASK [Start VM - Rocky9-TESTNode] ********************************************************************************************************************
changed: [unraid]

TASK [Get VM status - Rocky9-TESTNode] ***************************************************************************************************************
changed: [unraid]

TASK [Ensure VM 'running' status] ********************************************************************************************************************
ok: [unraid] => {
    "changed": false,
    "msg": "Rocky9-TESTNode has successfully started. Restore from snapshot complete."
}

PLAY RECAP *******************************************************************************************************************************************
Rocky9-TESTNode-vdisk2     : ok=17   changed=5    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0   
unraid                     : ok=9    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Closing Thoughts

[Meme: an elderly woman sitting in a hospital bed saying “my fingers hurt”]

Aside from the fingers-hurting situation, this was another enjoyable mini-project. With both snapshot creation and restoration now fully functional, it’s going to be incredibly useful. It will save a ton of time on larger projects I have planned, eliminating the need to manually roll back configurations.

What’s next?

I have one more piece planned for this series: cleaning up old snapshots on your storage, whether local .tar files or remote repository .img files (DiskStation).

Some thoughts and drafts I have for future posts include Kubernetes, Containerlab, network automation testing, Nautobot, and a few more. We’ll see!

You can find the code that goes along with this post here (GitHub).

Thoughts, questions, and comments are appreciated. Please follow me here on Hashnode or connect with me on LinkedIn.

Thank you for reading, fellow techies!