OpenShift on Proxmox

Deploying OpenShift on Proxmox with Terraform Automation

This gist is a companion to the blog post published at: https://carlosedp.medium.com/deploying-openshift-on-proxmox-with-terraform-automation-86f888c8d483

Cluster Specifications

| Component  | Masters | Workers |
|------------|---------|---------|
| CPU Cores  | 10      | 8       |
| RAM        | 18 GB   | 16 GB   |
| Main Disk  | 100 GB  | 100 GB  |
| Extra Disk | -       | 50 GB (LVM storage for application data) |

Network Layout

| Hostname     | IP Address   | MAC Address       | Role   |
|--------------|--------------|-------------------|--------|
| ocp-master-1 | 192.168.1.32 | BC:24:11:44:22:32 | Master |
| ocp-master-2 | 192.168.1.33 | BC:24:11:44:22:33 | Master |
| ocp-master-3 | 192.168.1.34 | BC:24:11:44:22:34 | Master |
| ocp-worker-1 | 192.168.1.35 | BC:24:11:44:22:35 | Worker |
| ocp-worker-2 | 192.168.1.36 | BC:24:11:44:22:36 | Worker |
  • API VIP: 192.168.1.31
  • Ingress VIP: 192.168.1.30

DNS Configuration

Configure your DNS server with the following records pointing to your cluster.

In this example the cluster is published under the domain ocp.internal.example.com; adjust the records to match your own domain.

; API and API-INT records (point to API VIP)
api.ocp.internal.example.com.     A    192.168.1.31
api-int.ocp.internal.example.com. A    192.168.1.31

; Wildcard for applications (point to Ingress VIP)
*.apps.ocp.internal.example.com.  A    192.168.1.30

; Individual node records
ocp-master-1.ocp.internal.example.com. A 192.168.1.32
ocp-master-2.ocp.internal.example.com. A 192.168.1.33
ocp-master-3.ocp.internal.example.com. A 192.168.1.34
ocp-worker-1.ocp.internal.example.com. A 192.168.1.35
ocp-worker-2.ocp.internal.example.com. A 192.168.1.36
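
Before starting the install, it is worth confirming that these records resolve. Below is a quick sanity check with dig (any DNS client works; the test.apps name is an arbitrary example used only to exercise the wildcard):

dig +short api.ocp.internal.example.com          # expect 192.168.1.31
dig +short api-int.ocp.internal.example.com      # expect 192.168.1.31
dig +short test.apps.ocp.internal.example.com    # expect 192.168.1.30 (wildcard)
dig +short ocp-master-1.ocp.internal.example.com # expect 192.168.1.32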

Creating API Users for Terraform

Connect to your Proxmox host via SSH or console and create the required users and roles:


# Create a role with necessary privileges for Terraform
pveum role add terraform-role -privs "VM.Allocate VM.Clone VM.Config.CDROM VM.Config.CPU VM.Config.Cloudinit VM.Config.Disk VM.Config.HWType VM.Config.Memory VM.Config.Network VM.Config.Options VM.Audit VM.PowerMgmt Datastore.AllocateSpace Datastore.Audit Sys.Audit Pool.Allocate Sys.Console Sys.Modify VM.Migrate SDN.Use VM.GuestAgent.Audit VM.GuestAgent.Unrestricted Pool.Audit"

# Create the Terraform user and assign the role
pveum user add terraform@pve
pveum aclmod / -user terraform@pve -role terraform-role

# Create an API token for Terraform (save the output!)
pveum user token add terraform@pve terraform-token --privsep=0
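
The token secret is printed only once, so save it immediately. A convenient pattern is to export it as a Terraform variable (Terraform maps TF_VAR_proxmox_api_token_secret to the variable of the same name defined below) and verify it against the API; the hostname and secret here are placeholders:

# Export the secret so Terraform picks it up automatically
export TF_VAR_proxmox_api_token_secret='<paste-the-secret-here>'

# Sanity-check the token against the Proxmox API
curl -k -H "Authorization: PVEAPIToken=terraform@pve!terraform-token=$TF_VAR_proxmox_api_token_secret" \
  "https://proxmox.example.com:8006/api2/json/version"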

Deployment

Step 1: Initialize and Apply Terraform

# Set your install directory (used in later steps; keep it in sync with Terraform's install_dir variable)
export INSTALL_DIR=ocp-proxmox

# Initialize Terraform
terraform init

# Review the plan
terraform plan -out plan

# Apply the configuration
terraform apply plan

Terraform will:

  1. Check that all required tools are installed
  2. Generate the agent ISO from your configuration
  3. Upload the ISO to Proxmox
  4. Create and start all VMs
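
One variable has no default: ssh_pubkey (see the Terraform configuration below). Terraform will prompt for it interactively unless you pass it explicitly; for example (adjust the key path to your own):

# Pass the SSH public key on the command line (or export TF_VAR_ssh_pubkey)
terraform plan -var "ssh_pubkey=$(cat ~/.ssh/id_ed25519.pub)" -out plan
terraform apply plan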

Step 2: Monitor Installation Progress

In a separate terminal, monitor the installation while the installer completes cluster setup and finalization:

export KUBECONFIG=$INSTALL_DIR/auth/kubeconfig
./openshift-install agent wait-for install-complete --dir=$INSTALL_DIR
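
The same binary can also track the earlier bootstrap phase, and once the API is reachable the usual oc commands show progress (these use tools already required by this setup):

# Optionally watch the bootstrap phase first
./openshift-install agent wait-for bootstrap-complete --dir=$INSTALL_DIR

# Once the API responds, watch nodes join and cluster operators converge
oc get nodes -w
oc get clusteroperators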

Make Masters Schedulable (Optional)

If you want to run workloads on master nodes (useful for smaller clusters):

oc patch schedulers.config.openshift.io cluster --type merge \
  --patch '{"spec":{"mastersSchedulable": true}}'
agent-config.yaml

apiVersion: v1alpha1
kind: AgentConfig
metadata:
  name: ocp
rendezvousIP: 192.168.1.32
hosts:
  - hostname: ocp-master-1
    role: master
    rootDeviceHints:
      deviceName: /dev/sda
    interfaces:
      - name: enp6s18
        macAddress: BC:24:11:44:22:32
    networkConfig:
      interfaces:
        - name: enp6s18
          type: ethernet
          state: up
          mac-address: BC:24:11:44:22:32
          ipv4:
            enabled: true
            address:
              - ip: 192.168.1.32
                prefix-length: 24
            dhcp: false
      dns-resolver:
        config:
          server:
            - 192.168.1.1
      routes:
        config:
          - destination: 0.0.0.0/0
            next-hop-address: 192.168.1.1
            next-hop-interface: enp6s18
            table-id: 254
  - hostname: ocp-master-2
    role: master
    rootDeviceHints:
      deviceName: /dev/sda
    interfaces:
      - name: enp6s18
        macAddress: BC:24:11:44:22:33
    networkConfig:
      interfaces:
        - name: enp6s18
          type: ethernet
          state: up
          mac-address: BC:24:11:44:22:33
          ipv4:
            enabled: true
            address:
              - ip: 192.168.1.33
                prefix-length: 24
            dhcp: false
      dns-resolver:
        config:
          server:
            - 192.168.1.1
      routes:
        config:
          - destination: 0.0.0.0/0
            next-hop-address: 192.168.1.1
            next-hop-interface: enp6s18
            table-id: 254
  - hostname: ocp-master-3
    role: master
    rootDeviceHints:
      deviceName: /dev/sda
    interfaces:
      - name: enp6s18
        macAddress: BC:24:11:44:22:34
    networkConfig:
      interfaces:
        - name: enp6s18
          type: ethernet
          state: up
          mac-address: BC:24:11:44:22:34
          ipv4:
            enabled: true
            address:
              - ip: 192.168.1.34
                prefix-length: 24
            dhcp: false
      dns-resolver:
        config:
          server:
            - 192.168.1.1
      routes:
        config:
          - destination: 0.0.0.0/0
            next-hop-address: 192.168.1.1
            next-hop-interface: enp6s18
            table-id: 254
  - hostname: ocp-worker-1
    role: worker
    rootDeviceHints:
      deviceName: /dev/sda
    interfaces:
      - name: enp6s18
        macAddress: BC:24:11:44:22:35
    networkConfig:
      interfaces:
        - name: enp6s18
          type: ethernet
          state: up
          mac-address: BC:24:11:44:22:35
          ipv4:
            enabled: true
            address:
              - ip: 192.168.1.35
                prefix-length: 24
            dhcp: false
      dns-resolver:
        config:
          server:
            - 192.168.1.1
      routes:
        config:
          - destination: 0.0.0.0/0
            next-hop-address: 192.168.1.1
            next-hop-interface: enp6s18
            table-id: 254
  - hostname: ocp-worker-2
    role: worker
    rootDeviceHints:
      deviceName: /dev/sda
    interfaces:
      - name: enp6s18
        macAddress: BC:24:11:44:22:36
    networkConfig:
      interfaces:
        - name: enp6s18
          type: ethernet
          state: up
          mac-address: BC:24:11:44:22:36
          ipv4:
            enabled: true
            address:
              - ip: 192.168.1.36
                prefix-length: 24
            dhcp: false
      dns-resolver:
        config:
          server:
            - 192.168.1.1
      routes:
        config:
          - destination: 0.0.0.0/0
            next-hop-address: 192.168.1.1
            next-hop-interface: enp6s18
            table-id: 254
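
The MAC addresses above must match the ones assigned to the VMs in the Terraform node definitions further down, otherwise the static IP mapping never applies. A quick consistency check (the Terraform filename main.tf is an assumption; use whatever your file is actually called):

# Both lists should be identical
grep -oE 'BC:24:11:44:22:[0-9A-F]{2}' agent-config.yaml | sort -u
grep -oE 'BC:24:11:44:22:[0-9A-F]{2}' main.tf | sort -u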
install-config.yaml

apiVersion: v1
baseDomain: internal.example.com
compute:
  - architecture: amd64
    hyperthreading: Enabled
    name: worker
    replicas: 2
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: ocp
networking:
  clusterNetwork:
    - cidr: 10.128.0.0/14
      hostPrefix: 23
  machineNetwork:
    - cidr: 192.168.1.0/24
  networkType: OVNKubernetes
  serviceNetwork:
    - 172.30.0.0/16
platform:
  baremetal:
    apiVIPs:
      - "192.168.1.31"
    ingressVIPs:
      - "192.168.1.30"
pullSecret: '<your-pull-secret-here>'
sshKey: "<your-ssh-public-key>"
LVM Storage Operator Manifests

apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
  name: openshift-lvm-storage
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-storage-operatorgroup
  namespace: openshift-lvm-storage
spec:
  targetNamespaces:
    - openshift-lvm-storage
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: lvms
  namespace: openshift-lvm-storage
spec:
  installPlanApproval: Automatic
  name: lvms-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
  namespace: openshift-lvm-storage
spec:
  storage:
    deviceClasses:
      - name: vg1
        default: true
        deviceSelector:
          forceWipeDevicesAndDestroyAllData: true
          paths:
            - /dev/disk/by-path/pci-0000:01:02.0-scsi-0:0:0:1
        fstype: xfs
        nodeSelector:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - ocp-worker-1
                    - ocp-worker-2
        thinPoolConfig:
          chunkSizeCalculationPolicy: Static
          metadataSizeCalculationPolicy: Host
          name: thin-pool-1
          overprovisionRatio: 10
          sizePercent: 90
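
Once these manifests are applied, the operator installs and the LVMCluster should reconcile to Ready; LVMS then creates a StorageClass named after the device class. A quick check (lvm-storage.yaml is a hypothetical filename for the manifests above):

# Apply the manifests and watch the LVMCluster reconcile
oc apply -f lvm-storage.yaml
oc get lvmcluster -n openshift-lvm-storage -w

# LVMS creates a StorageClass per device class (here: lvms-vg1)
oc get storageclass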
Terraform Configuration

terraform {
  required_providers {
    proxmox = {
      source  = "telmate/proxmox"
      version = "3.0.2-rc06"
    }
  }
  required_version = ">= 1.0.0"
}

# -------------------------------------------------------------------
# Variables
# -------------------------------------------------------------------
variable "install_dir" {
  description = "Directory where OCP installation files will be created"
  type        = string
  default     = "ocp-proxmox"
}

variable "proxmox_host" {
  description = "Proxmox host URL for API calls"
  type        = string
  default     = "proxmox.example.com"
}

variable "proxmox_api_token_id" {
  description = "API token ID for ISO upload (format: user@realm!token-name)"
  type        = string
  default     = "terraform@pve!terraform-token"
}

variable "proxmox_api_token_secret" {
  description = "API token secret for ISO upload"
  type        = string
  sensitive   = true
  default     = "your-token-secret-here"
}

variable "iso_storage" {
  description = "Proxmox storage for ISO files"
  type        = string
  default     = "local"
}

variable "target_node" {
  description = "Proxmox cluster node where VMs will be created"
  type        = string
  default     = "proxmox"
}

variable "masters_cpu" {
  description = "CPU allocation for master nodes"
  type        = number
  default     = 10
}

variable "workers_cpu" {
  description = "CPU allocation for worker nodes"
  type        = number
  default     = 8
}

variable "masters_ram" {
  description = "RAM allocation for master nodes (MB)"
  type        = number
  default     = 18432
}

variable "workers_ram" {
  description = "RAM allocation for worker nodes (MB)"
  type        = number
  default     = 16384
}

variable "num_storages" {
  description = "Number of storages to alternate between when creating VMs"
  type        = number
  default     = 1
}

variable "vm_storage1" {
  description = "Primary storage for VM disks"
  type        = string
  default     = "local-lvm"
}

variable "vm_storage2" {
  description = "Secondary storage for VM disks (for alternating)"
  type        = string
  default     = "local-lvm"
}

variable "vm_maindisk_size" {
  description = "Main disk size for VMs"
  type        = string
  default     = "100G"
}

variable "vm_extradisk_size" {
  description = "Extra disk size for worker VMs (LVM storage)"
  type        = string
  default     = "50G"
}

variable "bridge" {
  description = "Network bridge for VM interfaces"
  type        = string
  default     = "vmbr0"
}

variable "ssh_pubkey" {
  description = "SSH public key to inject"
  type        = string
}
# Node definitions
locals {
  nodes = [
    { name = "ocp-master-1", mac = "BC:24:11:44:22:32", extra_disk = false },
    { name = "ocp-master-2", mac = "BC:24:11:44:22:33", extra_disk = false },
    { name = "ocp-master-3", mac = "BC:24:11:44:22:34", extra_disk = false },
    { name = "ocp-worker-1", mac = "BC:24:11:44:22:35", extra_disk = true },
    { name = "ocp-worker-2", mac = "BC:24:11:44:22:36", extra_disk = true },
  ]
}
# -------------------------------------------------------------------
# Check required tools are installed
# -------------------------------------------------------------------
resource "null_resource" "check_prerequisites" {
  provisioner "local-exec" {
    command = <<-EOT
      set -e
      MISSING_TOOLS=""
      if ! command -v ./openshift-install > /dev/null 2>&1 && ! command -v openshift-install > /dev/null 2>&1; then
        MISSING_TOOLS="$MISSING_TOOLS openshift-install"
      fi
      if ! command -v ./oc > /dev/null 2>&1 && ! command -v oc > /dev/null 2>&1; then
        MISSING_TOOLS="$MISSING_TOOLS oc"
      fi
      if ! command -v nmstatectl > /dev/null 2>&1; then
        MISSING_TOOLS="$MISSING_TOOLS nmstatectl"
      fi
      if [ -n "$MISSING_TOOLS" ]; then
        echo "ERROR: The following required tools are missing:$MISSING_TOOLS"
        echo ""
        echo "Please install them before running Terraform:"
        echo " - openshift-install: Download from https://console.redhat.com/openshift/install"
        echo " - oc: Download from https://console.redhat.com/openshift/install"
        echo " - nmstatectl: Install via 'dnf install nmstate' or 'apt install nmstate'"
        exit 1
      fi
      echo "All required tools are available."
    EOT
  }
}
# -------------------------------------------------------------------
# Generate OCP Agent ISO
# -------------------------------------------------------------------
resource "null_resource" "generate_agent_iso" {
  depends_on = [null_resource.check_prerequisites]

  # Regenerate the ISO whenever either config file changes
  triggers = {
    install_config_hash = filemd5("${path.module}/install-config.yaml")
    agent_config_hash   = filemd5("${path.module}/agent-config.yaml")
  }

  provisioner "local-exec" {
    command = <<-EOT
      set -e
      mkdir -p ${var.install_dir}
      cp install-config.yaml agent-config.yaml ${var.install_dir}/
      ./openshift-install agent create image --dir=${var.install_dir}
    EOT
  }
}
# -------------------------------------------------------------------
# Upload ISO to Proxmox
# -------------------------------------------------------------------
resource "null_resource" "upload_iso_to_proxmox" {
  depends_on = [null_resource.generate_agent_iso]

  triggers = {
    iso_generated = null_resource.generate_agent_iso.id
  }

  provisioner "local-exec" {
    command = <<-EOT
      set -e
      # Delete the existing ISO if present (ignore errors if it doesn't exist)
      curl -k -X DELETE \
        "https://${var.proxmox_host}:8006/api2/json/nodes/${var.target_node}/storage/${var.iso_storage}/content/${var.iso_storage}:iso/agent.x86_64.iso" \
        -H "Authorization: PVEAPIToken=${var.proxmox_api_token_id}=${var.proxmox_api_token_secret}" \
        || true
      # Upload the new ISO
      curl -k -X POST \
        "https://${var.proxmox_host}:8006/api2/json/nodes/${var.target_node}/storage/${var.iso_storage}/upload" \
        -H "Authorization: PVEAPIToken=${var.proxmox_api_token_id}=${var.proxmox_api_token_secret}" \
        -F "content=iso" \
        -F "filename=@${var.install_dir}/agent.x86_64.iso" \
        -F "node=${var.target_node}" \
        -F "storage=${var.iso_storage}"
    EOT
  }
}
# -------------------------------------------------------------------
# Proxmox Provider Configuration
# -------------------------------------------------------------------
provider "proxmox" {
  pm_api_url          = "https://${var.proxmox_host}:8006/api2/json"
  pm_api_token_id     = var.proxmox_api_token_id
  pm_api_token_secret = var.proxmox_api_token_secret
  pm_tls_insecure     = true
  pm_parallel         = 10
}
# -------------------------------------------------------------------
# Create VMs
# -------------------------------------------------------------------
resource "proxmox_vm_qemu" "nodes" {
  for_each   = { for idx, val in local.nodes : idx => val }
  depends_on = [null_resource.upload_iso_to_proxmox]

  name        = each.value.name
  target_node = var.target_node

  agent         = 1
  agent_timeout = 1
  skip_ipv6     = true

  cpu {
    cores   = can(regex("master", each.value.name)) ? var.masters_cpu : var.workers_cpu
    sockets = 1
    type    = "host"
  }

  memory  = can(regex("master", each.value.name)) ? var.masters_ram : var.workers_ram
  balloon = 1
  scsihw  = "virtio-scsi-single"
  tags    = "ocp"

  # Primary disk (alternates between storages when num_storages > 1)
  disk {
    slot       = "scsi0"
    size       = var.vm_maindisk_size
    type       = "disk"
    storage    = var.num_storages > 1 ? (tonumber(each.key) % 2 == 1 ? var.vm_storage1 : var.vm_storage2) : var.vm_storage1
    emulatessd = true
    discard    = true
    iothread   = true
  }

  # Boot ISO (references the ISO uploaded to var.iso_storage above)
  disk {
    slot = "ide2"
    type = "cdrom"
    iso  = "${var.iso_storage}:iso/agent.x86_64.iso"
  }

  # Extra disk for workers (LVM storage)
  dynamic "disk" {
    for_each = each.value.extra_disk ? [1] : []
    content {
      slot       = "scsi1"
      size       = var.vm_extradisk_size
      type       = "disk"
      storage    = var.num_storages > 1 ? (tonumber(each.key) % 2 == 1 ? var.vm_storage1 : var.vm_storage2) : var.vm_storage1
      emulatessd = true
      discard    = true
      iothread   = true
    }
  }

  network {
    id      = 0
    model   = "virtio"
    bridge  = var.bridge
    macaddr = each.value.mac
  }

  sshkeys = var.ssh_pubkey

  lifecycle {
    create_before_destroy = true
  }
}
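
When the cluster is no longer needed, Terraform can remove the VMs and helper resources it created; the locally generated install directory is not tracked in state and must be cleaned up by hand:

# Destroy the VMs and null resources created above
terraform destroy

# Remove the generated ISO, configs, and auth files
rm -rf $INSTALL_DIR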