One of the issues commonly run into by DevOps-focused folks automating Rancher MCM deployments is that Rancher lazily uses GUID-based name generation for many of its resources.
When automating Rancher on Harvester, this is compounded a bit by Harvester's nature: it is itself a Kubernetes cluster, so its API is served by the kube-apiserver. Rancher offers tight integration options with Harvester that aren't possible on other infrastructures. But the emergent issue is that the cloud credential created in Rancher to reference a specific Harvester cluster gets a generated name. Any automation that depends on this name has to be updated in kind whenever that credential changes (or is initially created).
From an automation standpoint, that's a show-stopper in my opinion. Thankfully, Harvester exposes this functionality through its own API. Below is the secret sauce explaining how to do this. I will also explain how to import a Harvester cluster in the first place, which suffers from similar problems. It's hard to automate if you don't know the secret sauce.
Below I'll cover the following steps. This assumes you have already installed the Rancher UI, configured it with an admin password, and can now reach the dashboard/console. It also assumes the kubeconfig for the RKE2 cluster that is running Rancher is the default in your kube context! (A quick way to sanity-check that is shown after the list below.)
- Grabbing the Harvester kubeconfig file if it is not already on your machine
- Importing Harvester for usage/management by Rancher. This injects RBAC controls and other LCM functionality into Harvester that it does not otherwise have. This is a must for production workloads as Harvester does not have integrated RBAC on its own.
- Creating a cloud-credential for doing downstream cluster management operations on the newly imported Harvester cluster.
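Before starting, here is a quick sanity check of those assumptions. This is a minimal sketch that assumes kubectl and jq are installed and that Rancher runs in the standard cattle-system namespace:

# Confirm the default context points at the RKE2 cluster running Rancher
kubectl config current-context
# Confirm that cluster is reachable and Rancher's pods are up
kubectl get pods -n cattle-system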
The Harvester API is very similar to the Rancher API, though some of the paths are different. It can be used to generate a service-account-based kubeconfig from the credentials you provide. If you've already created this file at some other point in time, you can skip this step. Note that this file is different from the one that resides on the Harvester node itself: this one is service-account based and has an expiry.
Like the Rancher API, the Harvester API uses the same API token mechanism, just at a different endpoint. So I'm going to define a few environment variables here. I need my Harvester VIP and the admin password:
export HARVESTER_VIP=86.75.30.9
export PASSWORD="mypassword"

Next I generate the API token with these values:
export TOKEN=$(curl -sk -X POST "https://$HARVESTER_VIP/v3-public/localProviders/local?action=login" -H 'content-type: application/json' -d '{"username":"admin","password":"'$PASSWORD'"}' | jq -r '.token')

With this token, I can make Harvester API calls, like requesting creation of a new kubeconfig for my use:
curl -sk "https://$HARVESTER_VIP/v1/management.cattle.io.clusters/local?action=generateKubeconfig" -H "Authorization: Bearer ${TOKEN}" -X POST -H 'content-type: application/json' | jq -r .config > harvester.yaml
chmod 600 harvester.yaml

Now I have a kubeconfig defined in harvester.yaml and can run kubectl commands with it later!
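As a quick check that the file works (assuming the Harvester VIP is reachable from your machine), list the Harvester nodes with it:

kubectl --kubeconfig harvester.yaml get nodes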
Harvester exposes its API via a RESTful interface and uses an API token pattern for authentication. Rancher does the same thing and calls into Harvester via proxy connections. With this, we can do all operations and even generate our own kubeconfig if we need to.
First, let's set some environment variables. We need the Rancher MCM URL and the admin password:
export RANCHER_URL="rancher.mydomain.net"
export RANCHER_PASSWORD="myadminpassword"

With that info settled, we need to get the API token:
export BEARER_TOKEN=$(curl -sk -X POST "https://${RANCHER_URL}/v3-public/localProviders/local?action=login" -H 'content-type: application/json' -d '{"username":"admin","password":"'${RANCHER_PASSWORD}'"}' | jq -r '.token')

Now that we have the API token, we need to create a provisioning.cattle.io.cluster type cluster in Rancher that sits as a placeholder for importing Harvester. Here is what that looks like in yaml format. As an exercise, you can convert this to one-liner json and feed it inline, but for simplicity I'm going to reference it as a file.
import.yaml:

---
type: provisioning.cattle.io.cluster
metadata:
  namespace: fleet-default
  name: myharvesterclustername
  labels:
    provider.cattle.io: harvester
    cachedHarvesterClusterVersion: ''
spec:
  agentEnvVars: []

Let's create our cluster import point using the above:
curl -sk https://${RANCHER_URL}/v1/provisioning.cattle.io.clusters -H "Authorization: Bearer ${BEARER_TOKEN}" -X POST -H 'content-type: application/yaml' -d @import.yaml

Please note that there is a brief window where Rancher is creating back-end resources around this object. If you include these steps in automation, I recommend sleeping for a few seconds here to let Rancher respond and update (5 seconds has never failed for me, but 0 seconds sometimes does); otherwise you risk a race condition in the following steps!
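If you'd rather not rely on a fixed sleep, here is a minimal sketch of a poll loop (assuming the cluster name from import.yaml above) that waits until the cluster object is visible in the v3 API:

# Poll until Rancher's v3 API returns a cluster record for the new object
until curl -sk "https://${RANCHER_URL}/v3/clusters?name=myharvesterclustername" -H "Authorization: Bearer ${BEARER_TOKEN}" | jq -e '.data[0].id' > /dev/null; do
  sleep 2
done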
Now that the cluster exists, Rancher creates unique registration tokens and service accounts that need to be placed into Harvester for Rancher's agent to work its magic. Typically you see this step in the UI, where you copy the registration URL and paste it into Harvester's cluster-registration-url setting.
We need several pieces in order to do this. First we need the token link, a url that tells us where to grab the config we need.
export TOKEN_LINK=$(curl -sk "https://${RANCHER_URL}/v3/clusters?name=myharvesterclustername" -H "Authorization: Bearer ${BEARER_TOKEN}" | jq -r .data[0].links.clusterRegistrationTokens)

Now we use that url to grab the yaml id, which defines the filename of the file being generated/hosted by Rancher as part of this process. This is also a stateful process, so there can be a bit of a race condition here while Rancher spins up the appropriate containers for this action. I use an until loop that waits until the yaml id returns a valid result. Note that the name of the yaml file itself is the combination of a token id and the clusterId of the Harvester cluster.
export YAML_ID=$(curl -sk $TOKEN_LINK -H "Authorization: Bearer ${BEARER_TOKEN}" | jq '[.data[0].token, .data[0].clusterId ] | join("_")' -r)
until [[ $YAML_ID != "_" ]]; do
  sleep 3
  export YAML_ID=$(curl -sk $TOKEN_LINK -H "Authorization: Bearer ${BEARER_TOKEN}" | jq '[.data[0].token, .data[0].clusterId ] | join("_")' -r)
done

After this succeeds, we have the filename we need stored in YAML_ID. All we have to do is fetch this file and feed it into Harvester! In my example here, I'm using the kubeconfig I generated earlier:
curl -k https://${RANCHER_URL}/v3/import/${YAML_ID}.yaml | kubectl --kubeconfig harvester.yaml apply -f -

This pulls the yaml file from Rancher and installs its contents into Harvester: new service accounts, bindings, and the cattle agent with embedded tokens. The agent then reaches out to Rancher MCM and reports in. After some hand-shaking, Harvester is fully imported and should show as 'Active' in the Virtualization Management window.
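For automation, you can wait on that state transition instead of watching the UI. A minimal sketch that polls the v3 API until the cluster reports active:

# Wait until the imported cluster's state settles to 'active'
until [[ $(curl -sk "https://${RANCHER_URL}/v3/clusters?name=myharvesterclustername" -H "Authorization: Bearer ${BEARER_TOKEN}" | jq -r .data[0].state) == "active" ]]; do
  sleep 5
done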
This step is similar to the previous few. We need to use Rancher MCM to generate a kubeconfig inside Harvester (now that it is managing Harvester and has full control) and then place the resulting kubeconfig into a new secret.
First we need the imported cluster ID of the Harvester cluster. This is a generated name, but we can find it via a simple search; I only need to know the name I gave the Harvester cluster in the previous steps. In my example, I used myharvesterclustername.
export CLUSTER_ID=$(curl -sk "https://${RANCHER_URL}/v3/clusters?name=myharvesterclustername" -H "Authorization: Bearer ${BEARER_TOKEN}" | jq -r .data[0].id)

Now I need to generate the kubeconfig for this cluster via the Rancher MCM API (note this is a different endpoint than the Harvester one used earlier):
export CREDENTIAL_KUBE=$(curl -sk "https://${RANCHER_URL}/v3/clusters/${CLUSTER_ID}?action=generateKubeconfig" -X POST -H "Authorization: Bearer ${BEARER_TOKEN}" | jq -r .config)

Now I need to use another template to create the secret, since it has multiple entries. You could likely do this from the command line, but it would be a very long command that might impact maintainability. I'm going to use envsubst to render it, which avoids a dependency on yq (still unfortunately non-standard on many OSs).
harvester_credential_template.yaml:

---
apiVersion: v1
data:
  harvestercredentialConfig-clusterId: $CLUSTER_ID_B64
  harvestercredentialConfig-clusterType: aW1wb3J0ZWQ=
  harvestercredentialConfig-kubeconfigContent: $CREDENTIAL_KUBE_B64
kind: Secret
metadata:
  annotations:
    field.cattle.io/name: $CRED_NAME
    provisioning.cattle.io/driver: harvester
  labels:
    cattle.io/creator: norman
  name: $CRED_NAME
  namespace: cattle-global-data

Using the above template, I'll define my CRED_NAME env var, convert my CLUSTER_ID and CREDENTIAL_KUBE vars to base64, and then feed it all through envsubst:
export CRED_NAME=mycluster
export CLUSTER_ID_B64=$(echo -n $CLUSTER_ID | base64 -w0) # -n avoids encoding a trailing newline
export CREDENTIAL_KUBE_B64=$(echo "${CREDENTIAL_KUBE}" | base64 -w0)
cat harvester_credential_template.yaml | envsubst | kubectl apply -f -

After this is run, there should be a cloud credential inside your Cluster Management console within Rancher MCM that can be used to deploy to your Harvester cluster. This is the credential secret used within the cluster template helm charts.
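You can verify the secret landed where Rancher expects it (this uses your default kube context, which points at the cluster running Rancher):

kubectl get secret ${CRED_NAME} -n cattle-global-data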
Following this method allows you to set whatever name you wish on the credential instead of a generated one that requires manual changes in your GitOps flow every time it is altered.
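As an illustration of the payoff, a downstream cluster spec can now reference the credential by the stable name you chose. This fragment is hypothetical; only the cloudCredentialSecretName field and its namespace:name format come from Rancher's provisioning.cattle.io/v1 spec:

# Hypothetical fragment of a downstream cluster definition in a GitOps repo
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: my-downstream-cluster
  namespace: fleet-default
spec:
  # Stable, human-chosen credential reference instead of a generated cc-xxxxx name
  cloudCredentialSecretName: cattle-global-data:mycluster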