From the Proxmox API to Terraform

Cloud Claude Dioudonnat

April 2019 - August 2020 => Limos / Isima
since then: be ys Cloud

Virtualisation

qemu, virtualbox ...

Cluster

Cloud

Easily make computing power available!

Cloud Manager aka Proxmox

qemu, lxc, ceph, sdn, user, cluster ...

CLI

qm, pveum, pvesh ...

Web UI

Single Page Application

Users

Groups

Privileges

Sys.*, Group.*, Pool.*, Realm.*, Permissions.*, User.*, VM.*, Datastore.*

Roles

Administrator, NoAccess, PVEAdmin, PVEAuditor, PVEDatastoreAdmin, PVEDatastoreUser, PVEPoolAdmin, PVESysAdmin, PVETemplateUser, PVEUserAdmin, PVEVMAdmin, PVEVMUser ...

Resource pool

Federation

ldap, openid ...

API Tokens

Quota :(

API REST

GET http://server:8080/org/limos/resources
[
  { "id": 7, "name": "alpha" },
  { "id": 42, "name": "beta" }
]
GET http://server:8080/org/limos/resource/7
{
  "id": 7, "name": "alpha"
}
      

:8006/api2/json/

access/
cluster/
nodes/
pools/
storage/
version
pve.proxmox.com/pve-docs/api-viewer

/api2/json/cluster

acme, backup, ceph, config, firewall, ha, jobs, metrics, replication, sdn, status ...

/api2/json/nodes/{node}

apt, capabilities, ceph, disk, firewall, hardware, lxc, network, qemu, replication, sdn, services, storage, tasks, journal, status, stopall ...

/api2/json/pools/{poolid}

/api2/json/storage/{storage}

Do everything?

PVE API Proxy Daemon (pveproxy)

perl, port 8006

Load Balancer

Auth

          curl -H "Authorization: PVEAPIToken=USER@REALM!TOKENID=UUID" https://pve:8006/api2/json/
        
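The same token auth from Python, using only the standard library. The header format is exactly the one in the curl call above; the URL, user, and token values are placeholders:

```python
import urllib.request

def pve_token_header(user, realm, token_id, secret):
    # Proxmox API tokens use one static header, no session or CSRF dance:
    #   Authorization: PVEAPIToken=USER@REALM!TOKENID=UUID
    return {"Authorization": f"PVEAPIToken={user}@{realm}!{token_id}={secret}"}

def pve_get(base_url, path, auth_header):
    # Plain GET against the API; Proxmox wraps every payload in {"data": ...}
    req = urllib.request.Request(base_url + path, headers=auth_header)
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```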

Problem #1

          GET /api2/json/cluster/nextid
          
        
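Problem #1, presumably: /cluster/nextid only *suggests* a currently free VMID, so two clients racing through GET-then-create can both pick the same id. A defensive sketch (the helper callables and the VmidTaken error are hypothetical, not Proxmox API names):

```python
class VmidTaken(Exception):
    """Raised when the chosen VMID already exists by the time we create."""

def create_vm_with_retry(get_next_id, create_vm, attempts=5):
    # Re-ask for a fresh suggestion after every collision instead of
    # trusting the first answer from /cluster/nextid.
    for _ in range(attempts):
        vmid = get_next_id()
        try:
            create_vm(vmid)
            return vmid
        except VmidTaken:
            continue
    raise RuntimeError("could not allocate a VMID")
```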

Problem #2

Proxmox Cluster File System (pmxcfs)

Problem #3

          GET /api2/json/nodes/{node}
          
        
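Problem #3, presumably: guest endpoints are scoped under /nodes/{node}, so the caller must already know which node hosts a given VMID. One answer is GET /cluster/resources, whose qemu/lxc entries carry both "vmid" and "node"; a lookup sketch over that response:

```python
def node_of_vm(cluster_resources, vmid):
    # cluster_resources: the "data" list from GET /cluster/resources.
    # One cluster-wide call locates the guest before any /nodes/{node}/... call.
    for r in cluster_resources:
        if r.get("type") in ("qemu", "lxc") and r.get("vmid") == vmid:
            return r["node"]
    raise KeyError(f"vmid {vmid} not found in cluster")
```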

Problem #4

          GET /api2/json/nodes/{node}/qemu/{vmid}
          
        
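Problem #4, presumably: the VM config endpoint returns many options packed into comma-separated strings (a disk comes back as something like "local-lvm:vm-100-disk-0,size=32G"), which every client must unpack itself. A minimal parser sketch:

```python
def parse_pve_option(value):
    """Split a packed Proxmox config value into its leading positional
    part and a dict of key=value options."""
    head, opts = None, {}
    for part in value.split(","):
        if "=" in part:
            k, v = part.split("=", 1)
            opts[k] = v
        else:
            head = part
    return head, opts
```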

Clients

perl, python, powershell, ruby, nodejs, c#, php, java

Insurmountable?

Automation!

#!/usr/bin/env bash

echo, jq, curl
Easily make computing power and services available!

Terraform

HashiCorp, MPL-2.0, golang
v0.1.0 28/07/2015
v1.2.2 01/06/2022

HCL

          resource "local_file" "my_file" {
  content  = upper("Coucou !")
  filename = "/tmp/yolo"
}
        

Infrastructure as Code (IaC)

git

CLI

terraform init
          
Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/local...
- Installing hashicorp/local v2.2.3...
- Installed hashicorp/local v2.2.3 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
          
        
terraform plan
          
Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
  # local_file.my_file will be created
  + resource "local_file" "my_file" {
      + content              = "COUCOU !"
      + directory_permission = "0777"
      + file_permission      = "0777"
      + filename             = "/tmp/yolo"
      + id                   = (known after apply)
    }
Plan: 1 to add, 0 to change, 0 to destroy.
          
        
terraform apply
          Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.

Enter a value: yes

local_file.my_file: Creating...
local_file.my_file: Creation complete after 0s [id=61bf7e5eb7db3168310718b0cadd56d712369f4f]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
          
        

State

          { "version": 4, "terraform_version": "1.1.9", "serial": 1, "lineage": "236c3b10-9af0-9e87-45ea-dbe89c79c9fd",
  "outputs": {},
  "resources": [ {
      "mode": "managed", "type": "local_file", "name": "my_file", 
      "provider": "provider[\"registry.terraform.io/hashicorp/local\"]",
      "instances": [{
          "schema_version": 0,
          "attributes": {
            "content": "COUCOU !", "content_base64": null, 
            "directory_permission": "0777", "file_permission": "0777",
            "filename": "/tmp/yolo", "id": "61bf7e5eb7db3168310718b0cadd56d712369f4f",
            "sensitive_content": null, "source": null
          },
          "sensitive_attributes": [], "private": "bnVsbA=="
}]}]}
        
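The state shown above is plain JSON, so it can also be inspected without terraform show; a quick sketch against the v4 layout used here:

```python
import json

def resource_addresses(state_text):
    # A v4 state file keeps managed objects under the top-level
    # "resources" list, each entry carrying "type" and "name".
    state = json.loads(state_text)
    return [f'{r["type"]}.{r["name"]}' for r in state.get("resources", [])]
```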
terraform show
          # local_file.my_file:
resource "local_file" "my_file" {
    content              = "COUCOU !"
    directory_permission = "0777"
    file_permission      = "0777"
    filename             = "/tmp/yolo"
    id                   = "61bf7e5eb7db3168310718b0cadd56d712369f4f"
}
          
        
          resource "local_file" "my_file" {
  content  = upper("Coucou !")
  filename = "/tmp/yolo"
  file_permission = 0755
}
          
          local_file.my_file: Refreshing state... [id=61bf7e5eb7db3168310718b0cadd56d712369f4f]
Terraform used the selected providers to generate the following execution plan. 
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
Terraform will perform the following actions:
  # local_file.my_file must be replaced
-/+ resource "local_file" "my_file" {
      ~ file_permission      = "0777" -> "755" # forces replacement
      ~ id                   = "61bf7e5eb7db3168310718b0cadd56d712369f4f" -> (known after apply)
        # (3 unchanged attributes hidden)
    }
Plan: 1 to add, 0 to change, 1 to destroy.
        
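Why the replacement above shows "755": file_permission expects a string, and the unquoted 0755 is read as a plain number whose string form drops the leading zero. That reading is inferred from the plan output itself, not from provider documentation. The same effect in Python terms:

```python
# What the unquoted literal effectively becomes once stringified:
as_number = 755
assert str(as_number) == "755"   # what the provider compares against "0777"
assert str(as_number) != "0755"  # not the octal-looking mode we meant
# Hence "0777" -> "755" and the forced destroy-then-create.
```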
terraform destroy

Backends

local, consul (lock), etcd, swift, http ...

Providers

openstack, ovh, active directory, f5 ...

registry.terraform.io

Telmate/terraform-provider-proxmox

v2.9.10 27/04/2022
          provider "proxmox" {
  pm_proxy_server = "http://proxyurl:proxyport"
  pm_user = ""
  pm_password = ""
}
        
          provider "proxmox" {
  pm_proxy_server = "http://proxyurl:proxyport"
  pm_api_token_id = ""
  pm_api_token_secret = ""
}
        
          resource "proxmox_pool" "my_pools" {
}
        
          resource "proxmox_lxc" "my_container" {
    name        = "matrix"
    target_node = "pve3"
    # More
  }
        
          resource "proxmox_lxc_disk" "my_container_disk" {
}
        
          resource "proxmox_vm_qemu" "my_vm" {
    name        = "mail"
    target_node = "pve3"
    # More
  }
        
name, target_node, vmid, desc, define_connection_info, bios, onboot, oncreate, tablet, boot, bootdisk, agent, iso, pxe, clone, full_clone, hastate, hagroup, qemu_os, memory, sockets, cores, vcpus, cpu, numa, pool, tags, force_create, os_type, force_recreate_on_change_of, os_network_config, ssh_forward_ip, ssh_user, ssh_private_key, ci_wait, ciuser, cipassword, cicustom, cloudinit_cdrom_storage, searchdomain, nameserver, sshkeys, ipconfig0, automatic_reboot ...
          "name": {
  Type:        schema.TypeString,
  Optional:    true,
  Default:     "",
  Description: "The VM name",
},
        
          func resourceVmQemuCreate(d *schema.ResourceData, meta interface{}) error {
        
          func resourceVmQemuUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
        
          func resourceVmQemuRead(d *schema.ResourceData, meta interface{}) error {
        
          func resourceVmQemuDelete(d *schema.ResourceData, meta interface{}) error {
        

Telmate/proxmox-api-go

Go client for the API
          func resourceVmQemu() *schema.Resource {
  thisResource = &schema.Resource{
    Create:        resourceVmQemuCreate,
    Read:          resourceVmQemuRead,
    UpdateContext: resourceVmQemuUpdate,
    Delete:        resourceVmQemuDelete,
  }
  return thisResource
}
        
          logger.Debug().Str("vmid", d.Id()).Msgf("Invoking VM create with resource data:  '%+v'", string(jsonString))

pconf := meta.(*providerConfiguration)
lock := pmParallelBegin(pconf)
        
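The pmParallelBegin call above suggests the provider throttles concurrent API calls behind a lock, since Proxmox handles parallel mutations poorly. The idea as a Python sketch (the class and its limit parameter are illustrative, not the provider's actual API):

```python
import threading

class ThrottledClient:
    """Allow at most `limit` in-flight API calls; limit=1 fully
    serializes them, in the spirit of a pm_parallel-style setting."""

    def __init__(self, call, limit=1):
        self._call = call
        self._sem = threading.BoundedSemaphore(limit)

    def request(self, *args, **kwargs):
        # Block until a slot is free, then forward the call.
        with self._sem:
            return self._call(*args, **kwargs)
```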

Ready for production?

Thank you!