@bitsandbooks
Last active September 6, 2025 13:29
Ansible-ready Debian VMs with Terraform, Proxmox, and Cloud-init

The Goal

To create a Debian VM that is ready for Ansible using Terraform/OpenTofu, Proxmox VE, and Cloud-init.

Requirements

You will need to:

  • have a running Proxmox server whose console you can reach, via either web GUI or SSH
  • have Snippets enabled on at least one storage location, with your customized ci-user.yml and ci-net.yml files in it
  • set up a pool (here, mypool) on the Proxmox server
  • set up a token for your Proxmox user (by default, root@pam) and know its secret
  • choose a numerical identifier for the VM (here, 10000)
  • know the unique ID of the default Proxmox network interface (by default, vmbr0)
  • download a Debian cloud image of the genericcloud variety in .qcow2 or .raw format and save it somewhere on the Proxmox server
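The Proxmox-side prerequisites above can be sketched as shell commands run on the node's console. The storage content list and image URL below are assumptions (the URL was current at the time of writing); adjust both to your environment:

```shell
# Run on the Proxmox host as root. Names and paths are examples.
# Create the resource pool the VM will belong to:
pvesh create /pools --poolid mypool

# Allow the "local" storage to hold snippets (list every content type the
# storage already serves plus "snippets"; this list is only an example):
pvesm set local --content images,iso,vztmpl,backup,snippets

# Download the Debian 13 genericcloud image somewhere on the server:
IMG_URL="https://cloud.debian.org/images/cloud/trixie/latest/debian-13-genericcloud-amd64.qcow2"
wget -P /root "$IMG_URL"
```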

Steps

Using the Proxmox node's console, create a "bare minimum" template VM from which Terraform can clone further VMs that are ready for cloud-init. In particular, create no disks; instead, import only the Debian cloud image as scsi0.

qm create 10000 --name debian-template --template 1 --memory 4096 --cores 2 --cpu x86-64-v2-AES \
   --machine q35 --vga vmware --bios ovmf --ostype l26 --boot "order=scsi0;net0" \
   --net0 "virtio,bridge=vmbr0,firewall=1" --scsihw virtio-scsi-pci \
   --scsi0 "local-zfs:0,discard=on,import-from=/path/to/debian-13-genericcloud-amd64.qcow2"

This VM has 4 GB of memory, 2 x86_64/amd64 cores, UEFI with Secure Boot, VirtIO network and SCSI devices, and a VMware-compatible display adapter (the "default" adapter produced a garbled console on Debian 13, though it didn't on Debian 12). Note that the UEFI and TPM disks have not been added yet, and the imported image has not been resized to a full-size disk (at the time of writing, the Debian 13 cloud image occupies about 3 GB once imported and decompressed).
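The two customized cloud-init files also need to land in the snippets directory of the storage you enabled them on; for the default local storage that is /var/lib/vz/snippets. A sketch, assuming you edited the files on your workstation and the node is reachable as pve.domain.tld (both names are examples):

```shell
# Copy the customized cloud-init files to the Proxmox node's snippets
# directory (hostname and paths are examples; adjust to your setup):
PVE_HOST="root@pve.domain.tld"
SNIPPETS_DIR="/var/lib/vz/snippets"
scp ci-user.yml ci-net.yml "${PVE_HOST}:${SNIPPETS_DIR}/"
```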

Use Terraform/OpenTofu, Telmate's Proxmox provider, and main.tf to clone and customize the VM before launching it.

Create a terraform.tfvars file to override the defaults in the variables with values that match your own. When you run tofu apply, it creates a full clone of the template VM (which therefore inherits everything from its parent) and then:

  • adds agent = 1, telling Proxmox that the guest agent is installed and running
  • adds a serial port (most servers have one for console access)
  • adds disks for UEFI, TPM, and cloud-init that are stored in local-zfs
  • resizes disk scsi0 (the cloud image we imported) to 32 GB
  • adds custom cloud-init user data and network config, using the YAML files in Snippets
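A minimal terraform.tfvars might look like the following. All values here are examples; every variable has a default in main.tf, so override only what differs on your setup:

```hcl
# terraform.tfvars -- example values only
proxmox_api_url          = "https://pve.example.com:8006/api2/json"
proxmox_api_token_id     = "root@pam!mytoken"
proxmox_api_token_secret = "00000000-0000-0000-0000-000000000000"
proxmox_server_node      = "pve"
proxmox_pool             = "mypool"
proxmox_storage_location = "local-zfs"
```

Then run tofu init once in the directory, followed by tofu apply.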

You should get an IPv4 address (such as 192.168.12.34) as a Terraform output once the VM is up and running. SSH into the VM using this address and the private half of the key pair whose public key you put where it says <snip> in the ci-user.yml file:

ssh -i ~/.ssh/debian_vm_sshkey_ed25519 -o PreferredAuthentications=publickey ansibleuser@192.168.12.34
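If you don't already have a key pair for this, one way to generate it (the filename and comment are examples) and print the public half to paste where ci-user.yml says <snip>:

```shell
# Generate an ed25519 key pair with no passphrase (example filename):
ssh-keygen -t ed25519 -N "" -C "ansibleuser@vmdebian" -f ./debian_vm_sshkey_ed25519
# This is the public half that goes into ci-user.yml:
cat ./debian_vm_sshkey_ed25519.pub
```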

If you connect and get an ansibleuser@vmdebian prompt: congratulations! You now have an Ansible-ready Debian VM that's ready for your inventory and playbooks.
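To confirm the VM really is Ansible-ready, an ad-hoc ping should return pong even before you write an inventory file (the address and key path below are examples):

```shell
# Ad-hoc Ansible ping against the new VM. The trailing comma makes the
# address a literal one-host inventory rather than an inventory file:
VM_ADDR="192.168.12.34"
ansible all -i "${VM_ADDR}," -u ansibleuser \
  --private-key ~/.ssh/debian_vm_sshkey_ed25519 -m ping
```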

Notes

If you have improvements or bug fixes, please feel free to share them.

For the cloud-init files: local:snippets/ci-user.yml resolved to /var/lib/vz/snippets/ci-user.yml on my Proxmox host.
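You can check how a snippet volume ID resolves on your own host with pvesm:

```shell
# Print the filesystem path behind a storage volume ID (run on the node):
VOLID="local:snippets/ci-user.yml"
pvesm path "$VOLID"
```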

As configured, ci-user.yml sets up access via SSH key pairs, but locks the user's password and disables SSH password authentication, so if you can't use a password to get in, that's why.

When the VM starts for the first time, it should ingest the cloud-init files and, after a short while, the VM's console should present you with a vmdebian login: prompt. The first boot appeared to hang for me, and the QEMU guest agent took a couple of minutes to start, so be patient; once IP addresses show up in the VM's Summary tab, you'll know that cloud-init did its thing.
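From the node's console you can also poke the guest agent directly; if the ping succeeds, cloud-init installed and started the agent. The VMID below is an example, as the clone gets whatever ID Proxmox assigned it:

```shell
# Check the guest agent on the cloned VM (VMID is an example):
VMID=101
qm agent "$VMID" ping && echo "agent is up"
# List the interfaces/addresses the agent reports:
qm guest cmd "$VMID" network-get-interfaces
```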

License

SPDX-License-Identifier: GPL-3.0-only

Copyright

Copyright © 2025 by R. Dumas. All rights not granted in the GPL version 3 reserved.

#cloud-config
# ci-net.yml
# SPDX-License-Identifier: GPL-3.0-only
network:
  version: 2
  ethernets:
    enp6s18:
      dhcp4: yes
      dhcp6: yes

#cloud-config
# ci-user.yml
# vim: syntax=yaml
# Prepare machine for Ansible
# SPDX-License-Identifier: GPL-3.0-only
preserve_hostname: false
fqdn: vmdebian.zone.domain.tld
hostname: vmdebian
ssh_pwauth: false
users:
  - name: ansibleuser
    gecos: User for Ansible
    groups: users,adm,wheel
    lock_passwd: true
    shell: /usr/bin/bash
    ssh_authorized_keys:
      # Public SSH keys
      - "<snip>"
    sudo: "ALL=(ALL) NOPASSWD:ALL"
package_update: true
packages:
  # unique to distro family (Debian/Ubuntu vs. Fedora/RHEL)
  - git
  - openssh-server
  - python3-pip # necessary for ansible to work?
  - qemu-guest-agent # for VMs on Proxmox
  - vim
runcmd:
  # Make sure qemu-guest-agent is running
  - [ systemctl, daemon-reload ]
  - [ systemctl, enable, --now, qemu-guest-agent.service ]

# SPDX-License-Identifier: GPL-3.0-only

terraform {
  required_version = ">= 1.0"
  required_providers {
    proxmox = {
      source  = "Telmate/proxmox"
      version = "3.0.2-rc04"
    }
  }
}

variable "proxmox_api_url" {
  type        = string
  description = "Proxmox API endpoint URL"
  default     = "https://pve.domain.tld:8006/api2/json"
  sensitive   = true
}

variable "proxmox_api_token_id" {
  type        = string
  description = "Proxmox API token ID"
  default     = "root@pam!nameoftoken"
  sensitive   = true
}

variable "proxmox_api_token_secret" {
  type        = string
  description = "Proxmox API token secret"
  default     = "12345678-90ab-cdef-1234-567890abcdef"
  sensitive   = true
}

provider "proxmox" {
  pm_api_url          = var.proxmox_api_url
  pm_tls_insecure     = true # By default Proxmox Virtual Environment uses self-signed certificates.
  pm_api_token_id     = var.proxmox_api_token_id
  pm_api_token_secret = var.proxmox_api_token_secret
}

variable "namespace_prefix" {
  description = <<EOT
General namespace prefix for all items. Defaults to "my", so a virtual
machine that uses this might be called something like "my-vm01". Change,
if desired, to something unique and unlikely to collide with other names
and system objects.
EOT
  type    = string
  default = "my"
}

variable "proxmox_server_region" {
  type        = string
  description = "Region in which the Proxmox server is located"
  default     = "chi"
}

variable "proxmox_server_node" {
  type        = string
  description = "Proxmox server node on which to create the VM"
  default     = "pve"
  sensitive   = true
}

variable "proxmox_pool" {
  type        = string
  description = "Proxmox pool in which to create the VM"
  default     = "mypool"
  sensitive   = true
}

variable "proxmox_vm_cicustom" {
  type        = string
  description = <<EOT
Custom cloud-init string for ingestion, in the form of a comma-separated
string pointing to any number of the four cloud-init data files (user,
network, meta, vendor) in a Snippets storage location. Example:
"user=local:snippets/ci-user.yml,network=local:snippets/ci-net.yml"
EOT
  default = "user=local:snippets/ci-user.yml,network=local:snippets/ci-net.yml"
}

variable "proxmox_default_network_bridge" {
  type        = string
  description = "Default network bridge device"
  default     = "vmbr0"
}

variable "proxmox_storage_location" {
  type        = string
  description = "Proxmox storage location in which to keep VM disks"
  default     = "local-zfs"
}

resource "proxmox_vm_qemu" "ansible_ready_vm" {
  name        = "${var.namespace_prefix}-${var.proxmox_server_region}-vm-debian"
  target_node = var.proxmox_server_node
  pool        = var.proxmox_pool
  clone_id    = 10000
  full_clone  = true
  bios        = "ovmf"
  boot        = "order=scsi0;scsi1;net0"
  agent       = 1
  os_type     = "cloud-init"
  cicustom    = var.proxmox_vm_cicustom
  vm_state    = "running"
  scsihw      = "virtio-scsi-pci"
  memory      = 4096

  cpu {
    cores = 2
  }

  network {
    id       = 0
    model    = "virtio"
    bridge   = var.proxmox_default_network_bridge
    firewall = true
  }

  efidisk {
    efitype           = "4m"
    pre_enrolled_keys = true
    storage           = var.proxmox_storage_location
  }

  tpm_state {
    storage = var.proxmox_storage_location
    version = "v2.0"
  }

  disks {
    scsi {
      scsi0 {
        disk {
          size    = "32G"
          storage = var.proxmox_storage_location
        }
      }
      scsi1 {
        cloudinit {
          storage = var.proxmox_storage_location
        }
      }
    }
  }

  serial {
    id   = 0
    type = "socket"
  }
}
output "vm_ipv4_address" {
  description = "IPv4 address of the new VM, as reported by the guest agent"
  value       = proxmox_vm_qemu.ansible_ready_vm.default_ipv4_address
}