Kubernetes on Proxmox with Ansible and Terraform (Part 1)

Introduction

In this first part we will create a cloud-init template, which we will then use with Terraform to create the K8s VMs.

Creating the Template

To deploy our Kubernetes nodes with Terraform, we need a template. You could also build it with Ansible; see this blog article. What I don't like about that approach, though, is the following:

The playbook should be run as root on the PVE host itself.

Seriously? As root, directly on the Proxmox host, which means installing Ansible on Proxmox first? No, thanks.

So I decided to create the template by hand instead: I don't want the Ansible overhead on my Proxmox host, and I certainly don't want to run a playbook as root.

Log in to one of your Proxmox hosts with your admin user (not root; you never work as root) and run the following commands to create the Ubuntu template for our K8s nodes:

wget https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img -P /tmp/
mv /tmp/focal-server-cloudimg-amd64.img /tmp/ubuntu-2004-server-amd64.qcow2
qm create 1001 --name ubuntu-2004-cloudinit-template --memory 4096 --net0 virtio,bridge=vmbr0
qm importdisk 1001 /tmp/ubuntu-2004-server-amd64.qcow2 local-zfs
qm set 1001 --scsihw virtio-scsi-pci --scsi0 local-zfs:vm-1001-disk-0
qm set 1001 --ide2 local-zfs:cloudinit
qm set 1001 --boot c --bootdisk scsi0
qm set 1001 --serial0 socket --vga serial0
qm template 1001
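
To verify the template looks right before cloning from it, the config can be inspected (run on the same Proxmox host):

```shell
# The output should list scsi0 on your storage, ide2 as the cloud-init
# drive, bootdisk: scsi0 and template: 1
qm config 1001
```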

Terraform IaC

Now we prepare the Terraform infrastructure as code. First, create a new file named main.tf with the following content:

resource "proxmox_vm_qemu" "control_plane" {
  count             = 1
  name              = "control-plane-${count.index}.k8s.cluster"
  target_node       = var.pm_node

  clone             = "ubuntu-2004-cloudinit-template"

  os_type           = "cloud-init"
  cores             = 4
  sockets           = "1"
  cpu               = "host"
  memory            = 2048
  scsihw            = "virtio-scsi-pci"
  bootdisk          = "scsi0"

  disk {
    size            = "20G"
    type            = "scsi"
    storage         = "local-lvm"
    iothread        = 1
  }

  network {
    model           = "virtio"
    bridge          = "vmbr0"
  }

  # cloud-init settings
  # adjust the ip and gateway addresses as needed
  ipconfig0         = "ip=192.168.0.11${count.index}/24,gw=192.168.0.1"
  sshkeys           = file(pathexpand(var.ssh_key_file)) # pathexpand resolves the ~ in the default path
}

resource "proxmox_vm_qemu" "worker_nodes" {
  count             = 3
  name              = "worker-${count.index}.k8s.cluster"
  target_node       = var.pm_node

  clone             = "ubuntu-2004-cloudinit-template"

  os_type           = "cloud-init"
  cores             = 4
  sockets           = "1"
  cpu               = "host"
  memory            = 4098
  scsihw            = "virtio-scsi-pci"
  bootdisk          = "scsi0"

  disk {
    size            = "20G"
    type            = "scsi"
    storage         = "local-lvm"
    iothread        = 1
  }

  network {
    model           = "virtio"
    bridge          = "vmbr0"
  }

  # cloud-init settings
  # adjust the ip and gateway addresses as needed
  ipconfig0         = "ip=192.168.0.12${count.index}/24,gw=192.168.0.1"
  sshkeys           = file(pathexpand(var.ssh_key_file)) # pathexpand resolves the ~ in the default path
}

  • clone must be the unique name of the template VM we created in the previous step.
  • ipconfig0 must match the subnet the VMs are supposed to run in. In this case we assign the VMs static IPs in a real, external network, so the hosts get by without NAT routing and look "perfectly normal" from the other hosts' point of view.
  • storage may need adjusting, depending on which storage backend you use. With ZFS, for example, you have to specify local-zfs.
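
As a side note: if you expect to scale the number of workers later, the hard-coded count can be pulled into a variable. This is only a sketch; the variable name worker_count is my own choice and not part of the files above:

```hcl
# hypothetical addition to variables.tf
variable "worker_count" {
  default = 3
}

# the worker resource in main.tf then becomes:
resource "proxmox_vm_qemu" "worker_nodes" {
  count = var.worker_count
  # ... remaining arguments unchanged
}
```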

Next we need a file with the variables, variables.tf, with the following content:

variable "pm_api_url" {
  default = "https://proxmox.lab.local/api2/json"
}

variable "pm_node" {
  default = "pve"
}

# variable "pm_user" {
#   default = ""
# }

# variable "pm_password" {
#   default = ""
# }

variable "ssh_key_file" {
  default = "~/.ssh/id_rsa.pub"
}

  • pm_api_url must be your Proxmox URL; if you have neither DNS nor a reverse proxy, use the IP address and port 8006.
  • pm_node is the node of your Proxmox cluster on which you want to apply this Terraform plan.
  • ssh_key_file is the SSH key you will later use to connect to the K8s VMs.
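
Any of these defaults can be overridden at run time without editing variables.tf, either with -var flags or with a terraform.tfvars file; the values below are examples:

```shell
# override individual variables on the command line
terraform plan -var 'pm_node=pve2' -out plan

# or keep the overrides in terraform.tfvars (loaded automatically):
# pm_node      = "pve2"
# ssh_key_file = "~/.ssh/homelab.pub"
```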

The last file we need is provider.tf with this content:

terraform {
  required_providers {
    proxmox = {
      source = "Telmate/proxmox"
      version = "2.9.10"
    }
  }
}

provider "proxmox" {
  pm_parallel       = 1
  pm_tls_insecure   = false
  pm_api_url        = var.pm_api_url
#   pm_password       = var.pm_password
#   pm_user           = var.pm_user
}
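
Instead of a user/password pair, the Telmate provider can also authenticate with a Proxmox API token. A sketch with placeholder values (the token has to be created with pveum first); the provider can also read the token from the PM_API_TOKEN_ID and PM_API_TOKEN_SECRET environment variables, which fits the no-secrets-in-files rule better than hard-coding it:

```hcl
# alternative to pm_user/pm_password: API token authentication
# (placeholder values shown)
provider "proxmox" {
  pm_parallel         = 1
  pm_tls_insecure     = false
  pm_api_url          = var.pm_api_url
  pm_api_token_id     = "terraform-prov@pve!terraform"
  pm_api_token_secret = "00000000-0000-0000-0000-000000000000"
}
```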

Preparing Proxmox for Terraform

Before we can drive Proxmox from Terraform, a few adjustments are needed. These are the steps (run them directly on the Proxmox host via the CLI):

  • Create a new role for the future Terraform user
  • Create the user terraform-prov@pve
  • Assign the TerraformProv role to the terraform-prov user

pveum role add TerraformProv -privs "VM.Allocate VM.Clone VM.Config.CDROM VM.Config.CPU VM.Config.Cloudinit VM.Config.Disk VM.Config.HWType VM.Config.Memory VM.Config.Network VM.Config.Options VM.Monitor VM.Audit VM.PowerMgmt Datastore.AllocateSpace Datastore.Audit"
pveum user add terraform-prov@pve
pveum aclmod / -user terraform-prov@pve -role TerraformProv
pveum passwd terraform-prov@pve
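
If you prefer token authentication over a password, an API token for this user can be created here as well (the token name terraform is my own choice):

```shell
# -privsep 0 lets the token inherit the user's permissions
# instead of requiring its own ACL entries
pveum user token add terraform-prov@pve terraform -privsep 0
```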

Running Terraform

In case you are wondering why I commented out pm_user and pm_password in the previous step: never write secrets to a file! We solve this by defining the following environment variables (run this locally on the machine you want to use Terraform from):

#~/github/homelab/terraform$ export PM_USER="terraform-prov@pve"
#~/github/homelab/terraform$ export PM_PASS="password"

First we run terraform init to install all required components and initialize the new Terraform repository (run this locally on the machine you want to use Terraform from):

#~/github/homelab/terraform$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding telmate/proxmox versions matching "2.9.10"...
- Installing telmate/proxmox v2.9.10...
- Installed telmate/proxmox v2.9.10 (self-signed, key ID xxxx)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

As the next step, we create a Terraform plan with terraform plan and write it to a file (run this locally on the machine you want to use Terraform from):

#~/github/homelab/terraform$ terraform plan -out plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
  + create

Terraform will perform the following actions:

  # proxmox_vm_qemu.control_plane[0] will be created
  + resource "proxmox_vm_qemu" "control_plane" {
      + additional_wait           = 0
      + agent                     = 0
      + automatic_reboot          = true
      + balloon                   = 0
      + bios                      = "seabios"
      + boot                      = "c"
      + bootdisk                  = "scsi0"
      + clone                     = "ubuntu-2004-cloudinit-template"
      + clone_wait                = 0
      + cores                     = 4
      + cpu                       = "host"
      + default_ipv4_address      = (known after apply)
      + define_connection_info    = true
      + force_create              = false
      + full_clone                = true
      + guest_agent_ready_timeout = 100
      + hotplug                   = "network,disk,usb"
      + id                        = (known after apply)
      + ipconfig0                 = "ip=192.168.0.0/24,gw=192.168.0.254"
      + kvm                       = true
      + memory                    = 2048
      + name                      = "control-plane-0.k8s.cluster"
      + nameserver                = (known after apply)
      + numa                      = false
      + onboot                    = false
      + oncreate                  = true
      + os_type                   = "cloud-init"
      + preprovision              = true
      + reboot_required           = (known after apply)
      + scsihw                    = "virtio-scsi-pci"
      + searchdomain              = (known after apply)
      + sockets                   = 1
      + ssh_host                  = (known after apply)
      + ssh_port                  = (known after apply)
      + sshkeys                   = <<-EOT
            ssh-rsa xxxx
        EOT
      + tablet                    = true
      + target_node               = "pve"
      + unused_disk               = (known after apply)
      + vcpus                     = 0
      + vlan                      = -1
      + vmid                      = (known after apply)

      + disk {
          + backup       = 0
          + cache        = "none"
          + file         = (known after apply)
          + format       = (known after apply)
          + iothread     = 1
          + mbps         = 0
          + mbps_rd      = 0
          + mbps_rd_max  = 0
          + mbps_wr      = 0
          + mbps_wr_max  = 0
          + media        = (known after apply)
          + replicate    = 0
          + size         = "20G"
          + slot         = (known after apply)
          + ssd          = 0
          + storage      = "local-zfs"
          + storage_type = (known after apply)
          + type         = "scsi"
          + volume       = (known after apply)
        }

      + network {
          + bridge    = "vmbr0"
          + firewall  = false
          + link_down = false
          + macaddr   = (known after apply)
          + model     = "virtio"
          + queues    = (known after apply)
          + rate      = (known after apply)
          + tag       = 30
        }
    }

  # proxmox_vm_qemu.worker_nodes[0] will be created
  + resource "proxmox_vm_qemu" "worker_nodes" {
      + additional_wait           = 0
      + agent                     = 0
      + automatic_reboot          = true
      + balloon                   = 0
      + bios                      = "seabios"
      + boot                      = "c"
      + bootdisk                  = "scsi0"
      + clone                     = "ubuntu-2004-cloudinit-template"
      + clone_wait                = 0
      + cores                     = 4
      + cpu                       = "host"
      + default_ipv4_address      = (known after apply)
      + define_connection_info    = true
      + force_create              = false
      + full_clone                = true
      + guest_agent_ready_timeout = 100
      + hotplug                   = "network,disk,usb"
      + id                        = (known after apply)
      + ipconfig0                 = "ip=192.168.0.0/24,gw=192.168.0.254"
      + kvm                       = true
      + memory                    = 4098
      + name                      = "worker-0.k8s.cluster"
      + nameserver                = (known after apply)
      + numa                      = false
      + onboot                    = false
      + oncreate                  = true
      + os_type                   = "cloud-init"
      + preprovision              = true
      + reboot_required           = (known after apply)
      + scsihw                    = "virtio-scsi-pci"
      + searchdomain              = (known after apply)
      + sockets                   = 1
      + ssh_host                  = (known after apply)
      + ssh_port                  = (known after apply)
      + sshkeys                   = <<-EOT
            ssh-rsa xxx
        EOT
      + tablet                    = true
      + target_node               = "pve"
      + unused_disk               = (known after apply)
      + vcpus                     = 0
      + vlan                      = -1
      + vmid                      = (known after apply)

      + disk {
          + backup       = 0
          + cache        = "none"
          + file         = (known after apply)
          + format       = (known after apply)
          + iothread     = 1
          + mbps         = 0
          + mbps_rd      = 0
          + mbps_rd_max  = 0
          + mbps_wr      = 0
          + mbps_wr_max  = 0
          + media        = (known after apply)
          + replicate    = 0
          + size         = "20G"
          + slot         = (known after apply)
          + ssd          = 0
          + storage      = "local-zfs"
          + storage_type = (known after apply)
          + type         = "scsi"
          + volume       = (known after apply)
        }

      + network {
          + bridge    = "vmbr0"
          + firewall  = false
          + link_down = false
          + macaddr   = (known after apply)
          + model     = "virtio"
          + queues    = (known after apply)
          + rate      = (known after apply)
          + tag       = 30
        }
    }

  # proxmox_vm_qemu.worker_nodes[1] will be created
  + resource "proxmox_vm_qemu" "worker_nodes" {
      + additional_wait           = 0
      + agent                     = 0
      + automatic_reboot          = true
      + balloon                   = 0
      + bios                      = "seabios"
      + boot                      = "c"
      + bootdisk                  = "scsi0"
      + clone                     = "ubuntu-2004-cloudinit-template"
      + clone_wait                = 0
      + cores                     = 4
      + cpu                       = "host"
      + default_ipv4_address      = (known after apply)
      + define_connection_info    = true
      + force_create              = false
      + full_clone                = true
      + guest_agent_ready_timeout = 100
      + hotplug                   = "network,disk,usb"
      + id                        = (known after apply)
      + ipconfig0                 = "ip=192.168.0.1/24,gw=192.168.0.254"
      + kvm                       = true
      + memory                    = 4098
      + name                      = "worker-1.k8s.cluster"
      + nameserver                = (known after apply)
      + numa                      = false
      + onboot                    = false
      + oncreate                  = true
      + os_type                   = "cloud-init"
      + preprovision              = true
      + reboot_required           = (known after apply)
      + scsihw                    = "virtio-scsi-pci"
      + searchdomain              = (known after apply)
      + sockets                   = 1
      + ssh_host                  = (known after apply)
      + ssh_port                  = (known after apply)
      + sshkeys                   = <<-EOT
            ssh-rsa xxx
        EOT
      + tablet                    = true
      + target_node               = "pve"
      + unused_disk               = (known after apply)
      + vcpus                     = 0
      + vlan                      = -1
      + vmid                      = (known after apply)

      + disk {
          + backup       = 0
          + cache        = "none"
          + file         = (known after apply)
          + format       = (known after apply)
          + iothread     = 1
          + mbps         = 0
          + mbps_rd      = 0
          + mbps_rd_max  = 0
          + mbps_wr      = 0
          + mbps_wr_max  = 0
          + media        = (known after apply)
          + replicate    = 0
          + size         = "20G"
          + slot         = (known after apply)
          + ssd          = 0
          + storage      = "local-zfs"
          + storage_type = (known after apply)
          + type         = "scsi"
          + volume       = (known after apply)
        }

      + network {
          + bridge    = "vmbr0"
          + firewall  = false
          + link_down = false
          + macaddr   = (known after apply)
          + model     = "virtio"
          + queues    = (known after apply)
          + rate      = (known after apply)
          + tag       = 30
        }
    }

  # proxmox_vm_qemu.worker_nodes[2] will be created
  + resource "proxmox_vm_qemu" "worker_nodes" {
      + additional_wait           = 0
      + agent                     = 0
      + automatic_reboot          = true
      + balloon                   = 0
      + bios                      = "seabios"
      + boot                      = "c"
      + bootdisk                  = "scsi0"
      + clone                     = "ubuntu-2004-cloudinit-template"
      + clone_wait                = 0
      + cores                     = 4
      + cpu                       = "host"
      + default_ipv4_address      = (known after apply)
      + define_connection_info    = true
      + force_create              = false
      + full_clone                = true
      + guest_agent_ready_timeout = 100
      + hotplug                   = "network,disk,usb"
      + id                        = (known after apply)
      + ipconfig0                 = "ip=192.168.0.2/24,gw=192.168.0.254"
      + kvm                       = true
      + memory                    = 4098
      + name                      = "worker-2.k8s.cluster"
      + nameserver                = (known after apply)
      + numa                      = false
      + onboot                    = false
      + oncreate                  = true
      + os_type                   = "cloud-init"
      + preprovision              = true
      + reboot_required           = (known after apply)
      + scsihw                    = "virtio-scsi-pci"
      + searchdomain              = (known after apply)
      + sockets                   = 1
      + ssh_host                  = (known after apply)
      + ssh_port                  = (known after apply)
      + sshkeys                   = <<-EOT
            ssh-rsa xxx
        EOT
      + tablet                    = true
      + target_node               = "pve"
      + unused_disk               = (known after apply)
      + vcpus                     = 0
      + vlan                      = -1
      + vmid                      = (known after apply)

      + disk {
          + backup       = 0
          + cache        = "none"
          + file         = (known after apply)
          + format       = (known after apply)
          + iothread     = 1
          + mbps         = 0
          + mbps_rd      = 0
          + mbps_rd_max  = 0
          + mbps_wr      = 0
          + mbps_wr_max  = 0
          + media        = (known after apply)
          + replicate    = 0
          + size         = "20G"
          + slot         = (known after apply)
          + ssd          = 0
          + storage      = "local-zfs"
          + storage_type = (known after apply)
          + type         = "scsi"
          + volume       = (known after apply)
        }

      + network {
          + bridge    = "vmbr0"
          + firewall  = false
          + link_down = false
          + macaddr   = (known after apply)
          + model     = "virtio"
          + queues    = (known after apply)
          + rate      = (known after apply)
          + tag       = 30
        }
    }

Plan: 4 to add, 0 to change, 0 to destroy.

──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Saved the plan to: plan

To perform exactly these actions, run the following command to apply:
    terraform apply "plan"

Now we apply the plan, which creates the resources exactly as planned (run this locally on the machine you want to use Terraform from):

#~/github/homelab/terraform$ terraform apply "plan"
proxmox_vm_qemu.worker_nodes[2]: Creating...
proxmox_vm_qemu.worker_nodes[1]: Creating...
proxmox_vm_qemu.worker_nodes[0]: Creating...
proxmox_vm_qemu.control_plane[0]: Creating...
proxmox_vm_qemu.worker_nodes[0]: Still creating... [10s elapsed]
proxmox_vm_qemu.worker_nodes[1]: Still creating... [10s elapsed]
proxmox_vm_qemu.worker_nodes[2]: Still creating... [10s elapsed]
proxmox_vm_qemu.control_plane[0]: Still creating... [10s elapsed]
proxmox_vm_qemu.worker_nodes[2]: Still creating... [20s elapsed]
proxmox_vm_qemu.control_plane[0]: Still creating... [20s elapsed]
proxmox_vm_qemu.worker_nodes[0]: Still creating... [20s elapsed]
proxmox_vm_qemu.worker_nodes[1]: Still creating... [20s elapsed]
proxmox_vm_qemu.worker_nodes[1]: Still creating... [30s elapsed]
proxmox_vm_qemu.worker_nodes[0]: Still creating... [30s elapsed]
proxmox_vm_qemu.control_plane[0]: Still creating... [30s elapsed]
proxmox_vm_qemu.worker_nodes[2]: Still creating... [30s elapsed]
proxmox_vm_qemu.worker_nodes[2]: Still creating... [40s elapsed]
proxmox_vm_qemu.worker_nodes[0]: Still creating... [40s elapsed]
proxmox_vm_qemu.control_plane[0]: Still creating... [40s elapsed]
proxmox_vm_qemu.worker_nodes[1]: Still creating... [40s elapsed]
proxmox_vm_qemu.control_plane[0]: Still creating... [50s elapsed]
proxmox_vm_qemu.worker_nodes[0]: Still creating... [50s elapsed]
proxmox_vm_qemu.worker_nodes[2]: Still creating... [50s elapsed]
proxmox_vm_qemu.worker_nodes[1]: Still creating... [50s elapsed]
proxmox_vm_qemu.worker_nodes[1]: Creation complete after 59s [id=mrprmx/qemu/105]
proxmox_vm_qemu.control_plane[0]: Still creating... [1m0s elapsed]
proxmox_vm_qemu.worker_nodes[2]: Still creating... [1m0s elapsed]
proxmox_vm_qemu.worker_nodes[0]: Still creating... [1m0s elapsed]
proxmox_vm_qemu.control_plane[0]: Still creating... [1m10s elapsed]
proxmox_vm_qemu.worker_nodes[0]: Still creating... [1m10s elapsed]
proxmox_vm_qemu.worker_nodes[2]: Still creating... [1m10s elapsed]
proxmox_vm_qemu.worker_nodes[0]: Still creating... [1m20s elapsed]
proxmox_vm_qemu.worker_nodes[2]: Still creating... [1m20s elapsed]
proxmox_vm_qemu.control_plane[0]: Still creating... [1m20s elapsed]
proxmox_vm_qemu.worker_nodes[0]: Still creating... [1m30s elapsed]
proxmox_vm_qemu.control_plane[0]: Still creating... [1m30s elapsed]
proxmox_vm_qemu.worker_nodes[2]: Still creating... [1m30s elapsed]
proxmox_vm_qemu.control_plane[0]: Still creating... [1m40s elapsed]
proxmox_vm_qemu.worker_nodes[2]: Still creating... [1m40s elapsed]
proxmox_vm_qemu.worker_nodes[0]: Still creating... [1m40s elapsed]
proxmox_vm_qemu.worker_nodes[0]: Still creating... [1m50s elapsed]
proxmox_vm_qemu.worker_nodes[2]: Still creating... [1m50s elapsed]
proxmox_vm_qemu.control_plane[0]: Still creating... [1m50s elapsed]
proxmox_vm_qemu.worker_nodes[0]: Creation complete after 1m57s [id=mrprmx/qemu/106]
proxmox_vm_qemu.worker_nodes[2]: Still creating... [2m0s elapsed]
proxmox_vm_qemu.control_plane[0]: Still creating... [2m0s elapsed]
proxmox_vm_qemu.worker_nodes[2]: Still creating... [2m10s elapsed]
proxmox_vm_qemu.control_plane[0]: Still creating... [2m10s elapsed]
proxmox_vm_qemu.control_plane[0]: Still creating... [2m20s elapsed]
proxmox_vm_qemu.worker_nodes[2]: Still creating... [2m20s elapsed]
proxmox_vm_qemu.worker_nodes[2]: Still creating... [2m30s elapsed]
proxmox_vm_qemu.control_plane[0]: Still creating... [2m30s elapsed]
proxmox_vm_qemu.worker_nodes[2]: Still creating... [2m40s elapsed]
proxmox_vm_qemu.control_plane[0]: Still creating... [2m40s elapsed]
proxmox_vm_qemu.worker_nodes[2]: Still creating... [2m50s elapsed]
proxmox_vm_qemu.control_plane[0]: Still creating... [2m50s elapsed]
proxmox_vm_qemu.worker_nodes[2]: Creation complete after 2m56s [id=mrprmx/qemu/107]
proxmox_vm_qemu.control_plane[0]: Still creating... [3m0s elapsed]
proxmox_vm_qemu.control_plane[0]: Still creating... [3m10s elapsed]
proxmox_vm_qemu.control_plane[0]: Still creating... [3m20s elapsed]
proxmox_vm_qemu.control_plane[0]: Still creating... [3m30s elapsed]
proxmox_vm_qemu.control_plane[0]: Still creating... [3m40s elapsed]
proxmox_vm_qemu.control_plane[0]: Still creating... [3m50s elapsed]
proxmox_vm_qemu.control_plane[0]: Creation complete after 3m54s [id=mrprmx/qemu/108]

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
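
Once the apply completes, the nodes should answer on the static IPs from ipconfig0, using the SSH key from ssh_key_file; the default user ubuntu ships with the Ubuntu cloud image. A quick reachability check (adjust the IPs if you changed the addressing scheme):

```shell
# control-plane-0 plus workers 0..2, per the ipconfig0 values in main.tf
for ip in 192.168.0.110 192.168.0.120 192.168.0.121 192.168.0.122; do
  ssh -o StrictHostKeyChecking=accept-new ubuntu@"$ip" hostname
done
```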