@bcdurden
Last active July 9, 2024 17:45
Automation-Friendly Harvester Cloud Provider Integration

Generating Deterministic Names with Harvester Cloud Credentials in Rancher

One of the issues DevOps-focused folks commonly run into when automating Rancher MCM deployments is that Rancher lazily generates guid-based names for many of its resources.

When automating Rancher on Harvester, this is compounded a bit because of Harvester's nature: it is a Kubernetes cluster, so its API is the kube-apiserver. Rancher offers tight integration options with Harvester that aren't possible on other infrastructures. But the emergent issue is that the cloud credential created in Rancher, which references a specific Harvester cluster, has a generated name. Any automation that depends on this name has to be updated in kind whenever that credential changes (or is initially created).

From an automation standpoint that's a show-stopper in my opinion. Thankfully, this functionality is exposed through the API. Below is the secret sauce explaining how to do it. I will also explain how to import a Harvester cluster initially, which suffers from similar problems. It's hard to automate if you don't know the secret sauce.

Steps for Sauce

Below I'll cover these steps, but this assumes you have already installed the Rancher UI, configured it with an admin password, and can now reach the dashboard/console. It also assumes the kubeconfig for the RKE2 cluster running Rancher is the default in your kube context!

  1. Grabbing the Harvester kubeconfig file if it is not already on your machine
  2. Importing Harvester for usage/management by Rancher. This injects RBAC controls and other LCM functionality into Harvester that it does not otherwise have. This is a must for production workloads as Harvester does not have integrated RBAC on its own.
  3. Creating a cloud-credential for doing downstream cluster management operations on the newly imported Harvester cluster.

Using the Harvester API to get an admin-level Harvester Kubeconfig

The Harvester API is very similar to the Rancher API, though some of the paths are different. It can be used to generate a service-account-based kubeconfig based on the credentials you provide. If you've already created this file at some other point in time, you can skip this step. Note that this file is different from the one that resides on the Harvester node itself. This one has an expiry and is service-account based.

Like the Rancher API, the Harvester API uses the same API token mechanism at a different endpoint, so I'm going to define a few environment variables here. I need my Harvester VIP and the admin password:

export HARVESTER_VIP=86.75.30.9
export PASSWORD="mypassword"

Next I generate the API token with these values:

export TOKEN=$(curl -sk -X POST https://$HARVESTER_VIP/v3-public/localProviders/local?action=login -H 'content-type: application/json' -d '{"username":"admin","password":"'$PASSWORD'"}' | jq -r '.token')

With this token, I can make Harvester API calls, like requesting creation of a new kubeconfig for my use:

curl -sk https://$HARVESTER_VIP/v1/management.cattle.io.clusters/local?action=generateKubeconfig -H "Authorization: Bearer ${TOKEN}" -X POST -H 'content-type: application/json' | jq -r .config > harvester.yaml
chmod 600 harvester.yaml

Now I have a kubeconfig defined in harvester.yaml and can make kubectl commands using it later!
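
If you want a quick sanity check before moving on (assuming kubectl is installed locally), list the Harvester nodes with the new file:

kubectl --kubeconfig harvester.yaml get nodes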

Importing Harvester via Automation-friendly Paths

Harvester exposes its API via a RESTful interface and uses an API token pattern for authentication. Rancher does the same thing and calls into Harvester via proxy connections. With this, we can do all operations and even generate our own kubeconfig if we need to.

First, let's set some environment variables. We need the Rancher MCM URL and the admin password:

export RANCHER_URL="rancher.mydomain.net"
export RANCHER_PASSWORD="myadminpassword"

With that info settled, we need to get the API token:

export BEARER_TOKEN=$(curl -sk -X POST https://${RANCHER_URL}/v3-public/localProviders/local?action=login -H 'content-type: application/json' -d '{"username":"admin","password":"'${RANCHER_PASSWORD}'"}' | jq -r '.token')

Creating the Import Cluster for Harvester

Now that we have the API token, we need to create a provisioning.cattle.io.cluster object in Rancher that sits as a placeholder for importing Harvester. I'm attaching a yaml file with those base layer details, but here is what it looks like in yaml format. As an exercise, you can convert this to a one-liner JSON payload and feed it inline (a sketch of that follows the curl command below). For simplicity I'm going to reference it as a file, import.yaml:

---
type: provisioning.cattle.io.cluster
metadata:
  namespace: fleet-default
  name: myharvesterclustername
  labels:
    provider.cattle.io: harvester
cachedHarvesterClusterVersion: ''
spec:
  agentEnvVars: []

Let's create our cluster import point using the above:

curl -sk https://${RANCHER_URL}/v1/provisioning.cattle.io.clusters -H "Authorization: Bearer ${BEARER_TOKEN}" -X POST -H 'content-type: application/yaml' -d @import.yaml 
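
If you'd rather not manage a file at all, a rough equivalent posts the same payload inline as JSON (a sketch of the exercise mentioned above):

curl -sk https://${RANCHER_URL}/v1/provisioning.cattle.io.clusters -H "Authorization: Bearer ${BEARER_TOKEN}" -X POST -H 'content-type: application/json' -d '{"type":"provisioning.cattle.io.cluster","metadata":{"namespace":"fleet-default","name":"myharvesterclustername","labels":{"provider.cattle.io":"harvester"}},"cachedHarvesterClusterVersion":"","spec":{"agentEnvVars":[]}}'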

Please note that there is a brief window in time where Rancher is creating back-end resources around this object. So if you include these calls in automation, I recommend sleeping for a few seconds here to let Rancher respond and update (5 seconds has never failed for me, but 0 seconds sometimes does), otherwise you risk a race condition in the following steps!
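
If you prefer polling over a fixed sleep, a sketch that waits for the v3 cluster object to appear (it assumes the cluster name from import.yaml and the jq dependency already used above):

until curl -sk "https://${RANCHER_URL}/v3/clusters?name=myharvesterclustername" -H "Authorization: Bearer ${BEARER_TOKEN}" | jq -e '.data | length > 0' > /dev/null; do sleep 2; done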

Grabbing the Harvester Import Registration Installable

Now that the cluster exists, Rancher creates unique registration tokens and service accounts that need to be placed into Harvester for Rancher's agent to work its magic. Typically you see this step in the UI, where you copy the registration URL and paste it into Harvester's cluster-registration-url field.

We need several pieces in order to do this. First we need the token link, a url that tells us where to grab the config we need.

export TOKEN_LINK=$(curl -sk https://${RANCHER_URL}/v3/clusters?name=myharvesterclustername -H "Authorization: Bearer ${BEARER_TOKEN}" | jq -r .data[0].links.clusterRegistrationTokens)

Now we use that URL to grab the yaml id, which defines the filename of the file Rancher generates and hosts as part of this process. This is also a stateful process, so there can be a bit of a race condition here while Rancher spins up the appropriate containers for this action. I use an until loop that waits until the yaml id returns a valid result. Note that the name of the yaml file itself is the combination of a token id and the cluster id of the Harvester cluster.

export YAML_ID=$(curl -sk $TOKEN_LINK -H "Authorization: Bearer ${BEARER_TOKEN}" | jq '[.data[0].token, .data[0].clusterId ] | join("_")' -r)
until [[ $YAML_ID != "_" ]]; do sleep 3; export YAML_ID=$(curl -sk $TOKEN_LINK -H "Authorization: Bearer ${BEARER_TOKEN}" | jq '[.data[0].token, .data[0].clusterId ] | join("_")' -r); done

After this succeeds, we now have the filename we need stored in YAML_ID. All we have to do is fetch this file and feed it into Harvester! In my example here, I'm using the kubeconfig I generated earlier:

curl -k https://${RANCHER_URL}/v3/import/${YAML_ID}.yaml | kubectl --kubeconfig harvester.yaml apply -f -

After the yaml is applied to Harvester, the contents are installed: new service accounts, bindings, and the cattle agent with embedded tokens. The agent then reaches out to Rancher MCM and reports in. After some hand-shaking, Harvester is fully imported into Rancher and should show as 'Active' in the Virtualization Management window.
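
If you want automation to confirm the import rather than checking the UI, a couple of hedged checks (the exact state string may vary by Rancher version):

kubectl --kubeconfig harvester.yaml -n cattle-system get pods
curl -sk "https://${RANCHER_URL}/v3/clusters?name=myharvesterclustername" -H "Authorization: Bearer ${BEARER_TOKEN}" | jq -r '.data[0].state'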

Creating Cloud Credentials for Harvester

This step is a bit similar to the previous few steps. We need to use Rancher MCM to generate a kubeconfig inside Harvester (now that it is managing Harvester and has full control) and then pass the resulting kubeconfig to a new secret.

First we need the imported cluster ID of the Harvester cluster. This is a generated name, but we can find it via a simple search. I only need to know the name I gave the Harvester cluster in the previous steps. In my example, I used myharvesterclustername.

export CLUSTER_ID=$(curl -sk https://${RANCHER_URL}/v3/clusters?name=myharvesterclustername -H "Authorization: Bearer ${BEARER_TOKEN}" | jq -r .data[0].id)

Now I need to generate the kubeconfig for this cluster via the Rancher MCM API (which is different from the earlier Harvester API call):

export CREDENTIAL_KUBE=$(curl -sk https://${RANCHER_URL}/v3/clusters/${CLUSTER_ID}?action=generateKubeconfig -X POST -H "Authorization: Bearer ${BEARER_TOKEN}" | jq -r .config)

Now I need to use another template to create this secret, as it will have multiple entries. You can likely do this from the command line, but it would be a very long command that might impact maintainability. I'm going to use envsubst to render this and avoid a dependency on yq, which is unfortunately still non-standard on many OSs.

harvester_credential_template.yaml:

---
apiVersion: v1
data:
  harvestercredentialConfig-clusterId: $CLUSTER_ID_B64
  harvestercredentialConfig-clusterType: aW1wb3J0ZWQ=
  harvestercredentialConfig-kubeconfigContent: $CREDENTIAL_KUBE_B64
kind: Secret
metadata:
  annotations:
    field.cattle.io/name: $CRED_NAME
    provisioning.cattle.io/driver: harvester
  labels:
    cattle.io/creator: norman
  name: $CRED_NAME
  namespace: cattle-global-data

Using the above template, I'll define my CRED_NAME env var, base64-encode my CLUSTER_ID and CREDENTIAL_KUBE values, and then feed it all through envsubst:

export CRED_NAME=mycluster
export CLUSTER_ID_B64=$(echo -n $CLUSTER_ID | base64 -w0)
export CREDENTIAL_KUBE_B64=$(echo "${CREDENTIAL_KUBE}" | base64 -w0)
cat harvester_credential_template.yaml | envsubst | kubectl apply -f -

After this is run, there should be a cloud credential created in your Cluster Management console within Rancher MCM that can be used to deploy to your Harvester cluster. This is the credential secret used within the cluster template helmcharts.

Following this method allows you to set whatever name you wish on the credential instead of a generated one that requires manual changes in your GitOps flow every time it is altered.
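
To confirm the credential landed where Rancher expects it, a quick check against the Rancher management cluster:

kubectl get secret ${CRED_NAME} -n cattle-global-data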


Harvester Cloud Provider Integration

The Harvester cloud provider (and CSI driver) is very easy to install with the Harvester provisioner within Rancher MCM. Within the UI, it will query the Harvester cluster you are targeting and create a kubeconfig tied to a service account, solely for that cluster's use. When automating cluster creation via Helm charts, you have to create this kubeconfig manually.

Below is a working example, with explanations in between the steps of what is happening.

The Secret

Before creating the Helm release for your cluster, you need to create a secret with specific names and annotations that contains the kubeconfig the cloud provider will use for that specific cluster. You cannot use the same secret for all clusters; that is blocked for security reasons.

First create your bearer token within the Rancher MCM API. Here I am using the admin account since that always exists, but this will also work if you are using an RBAC-based account.

export RANCHER_URL="rancher.myurl.com"
export RANCHER_PASSWORD="adminsuperpassword"
export BEARER_TOKEN=$(curl -sk -X POST https://${RANCHER_URL}/v3-public/localProviders/local?action=login -H 'content-type: application/json' -d '{"username":"admin","password":"'${RANCHER_PASSWORD}'"}' | jq -r '.token')

Next, you need to acquire the Harvester cluster name as Rancher MCM sees it. This cluster name is not the same as the one you gave your Harvester cluster; it is the name Rancher gave it when it was imported. This command requires that you have a valid Rancher/RKE2 cluster kubeconfig at hand, which would have been created when you built your Rancher management cluster.

export HARVESTER_CLUSTER_NAME=$(kubectl get clusters.management.cattle.io -o yaml | yq e '.items[] | select(.metadata.labels."provider.cattle.io" == "harvester")'.metadata.name)
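
If yq is not on the box, a rough equivalent uses kubectl's JSON output and jq (which this workflow already depends on):

export HARVESTER_CLUSTER_NAME=$(kubectl get clusters.management.cattle.io -o json | jq -r '.items[] | select(.metadata.labels."provider.cattle.io" == "harvester") | .metadata.name')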

Using the above environment variables, we can now make the curl request to create a new service-account-backed kubeconfig. This command will create a local file called csi-kubeconfig with a service account name of your choosing (edit the serviceAccountName in the JSON message body below).

curl -sk -X POST https://${RANCHER_URL}/k8s/clusters/${HARVESTER_CLUSTER_NAME}/v1/harvester/kubeconfig \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer ${BEARER_TOKEN}" \
-d '{"clusterRoleName": "harvesterhci.io:cloudprovider", "namespace": "default", "serviceAccountName": "mydesiredserviceaccountname"}' | xargs | sed 's/\\n/\n/g' > csi-kubeconfig

Now we can create a secret containing this value. Pay special attention to the contents. I use kubectl here because it is easy, but this works equally well with a template yaml that leverages yq, ytt, or even envsubst. Substitute the names of your clusters as desired. This secret exists in the Rancher management cluster.

Note that the second command annotates the secret, and the contents of this annotation are very important. The value of the v2prov-secret-authorized-for-cluster annotation must be the exact name you will use for your downstream cluster. If these names do not match, Harvester will not be able to establish ownership of this secret for the downstream cluster's cloud provider instance. In this example, my cluster's name is myclustername. The namespace used here, fleet-default, is also mandatory.

kubectl create secret generic myclustername-cloudprovider -n fleet-default --from-file=credential=${PWD}/csi-kubeconfig --dry-run=client -o yaml | kubectl apply -f -
kubectl annotate secret myclustername-cloudprovider -n fleet-default --overwrite v2prov-secret-authorized-for-cluster='myclustername'
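
To double-check that the annotation took (the value must exactly match the downstream cluster name):

kubectl get secret myclustername-cloudprovider -n fleet-default -o jsonpath='{.metadata.annotations.v2prov-secret-authorized-for-cluster}'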

The Helm Chart

The hard part is finished. You only need to feed this secret name to the Helm cluster template for your downstream cluster in Harvester and ensure the CSI options are enabled. My helmchart values are older and look like this; yours may differ a bit, but it should be self-explanatory.

The way line 3 is used may differ, but the idea is that this field references the secret. Note the namespace-prefixed notation for the secret location.

Line 2 references the cloud credential secret for the Harvester cluster itself. This may take the form of a guid like credential-wxyz. I create my credentials via the API instead (as shown above) and do not have this limitation.

cloudprovider: harvester
cloudCredentialSecretName: cattle-global-data:mycluster
cloudProviderConfigSecretName: secret://fleet-default:myclustername-cloudprovider
cluster:
  annotations: {}
  labels:
    environment: services
    cluster_name: shared
    location: deathstar
  name: myclustername
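
As a rough sketch of the final step, assuming the values above are saved as values.yaml and you have a local copy of your cluster template chart (the chart path and release namespace here are hypothetical):

helm upgrade --install myclustername ./cluster-template-chart -n fleet-default -f values.yaml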

That's It

That's really all there is to it. The UI does this automatically behind the scenes, but when automating you just need these few extra steps.
