Cluster state:
1 master: master01.fqdn
2 workers: worker01.fqdn, worker02.fqdn
A couple of nginx pods are running.
Goal:
Add a new load balancer in order to add 2 masters.
The FQDN of the load balancer is lb.fqdn.
Step 1 - Add the FQDN and IP of the LB to the apiserver certSANs

In the file cluster/kubeadm-init.conf and in the ConfigMap kube-system/kubeadm-config:
---
apiServer:
  certSANs:
  - master01.fqdn

Add the FQDN and the IP of the LB to the list:
---
apiServer:
  certSANs:
  - master01.fqdn
  - lb.fqdn
  - IP.lb
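The file can be edited directly; for the live ConfigMap, kubectl opens it in an editor (a minimal sketch, assuming admin access with the cluster kubeconfig):

kubectl -n kube-system edit configmap kubeadm-config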
Step 2 - Regenerate certificates on the master with kubeadm so the apiserver can be contacted via the FQDN of the LB

On master01.fqdn:
Check the DNS names currently configured in the certificate:
openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver.crt | grep DNS

Back up the current certificates:
mkdir /root/backup
mv /etc/kubernetes/pki/apiserver.{crt,key} /root/backup
Copy kubeadm-init.conf to the node, e.g. /root/kubeadm-init.conf.
Renew certificates:
kubeadm init phase certs apiserver --config /root/kubeadm-init.conf -v5

Validate that the new FQDN of the LB is now in the certificate:
openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver.crt | grep DNS

Restart the apiserver:
CONTAINER=$(crictl ps | grep kube-apiserver | awk '{ print $1 }')
crictl stop $CONTAINER && crictl rm $CONTAINER
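The kubelet watches the static pod manifests and recreates the apiserver container on its own; to confirm a new one came up:

crictl ps | grep kube-apiserver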
Step 3 - Validate the apiserver answers through the LB

A 403 for system:anonymous is expected here; it proves the request reached the apiserver through the LB:

$ curl https://lb.fqdn:6443
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {
},
"code": 403
}
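To double-check that the certificate served through the LB carries the new SAN, openssl can inspect it over the wire (same tooling as above):

echo | openssl s_client -connect lb.fqdn:6443 2>/dev/null | openssl x509 -noout -text | grep DNS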
Step 4 - Update controlPlaneEndpoint in the ClusterConfiguration with the FQDN of the LB (probably in the skuba config as well)

In the file cluster/kubeadm-init.conf and in the ConfigMap kube-system/kubeadm-config, change:
apiServer:
  certSANs:
  - master01.fqdn
  - lb.fqdn
  - IP.lb
  extraArgs:
    oidc-issuer-url: https://master01.fqdn:32000
controlPlaneEndpoint: master01.fqdn:6443

to:
apiServer:
  certSANs:
  - master01.fqdn
  - lb.fqdn
  - IP.lb
  extraArgs:
    oidc-issuer-url: https://lb.fqdn:32000
controlPlaneEndpoint: lb.fqdn:6443
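To inspect the live ClusterConfiguration before and after the edit (read-only; kubeadm stores it under the ClusterConfiguration key):

kubectl -n kube-system get configmap kubeadm-config -o jsonpath='{.data.ClusterConfiguration}'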
In the following ConfigMaps, replace all occurrences of master01.fqdn with lb.fqdn:
- kube-system/cluster-info
- kube-system/oidc-dex-config
- kube-system/oidc-gangway-config
- kube-system/kube-proxy
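A sketch of one way to do the replacement in bulk, assuming all four ConfigMaps live in kube-system as listed and that master01.fqdn only appears in values that are safe to rewrite:

for cm in cluster-info oidc-dex-config oidc-gangway-config kube-proxy; do
  kubectl -n kube-system get configmap "$cm" -o yaml \
    | sed 's/master01\.fqdn/lb.fqdn/g' \
    | kubectl replace -f -
done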
Step 5 - Update workers
Drain the node:
kubectl drain worker01 --ignore-daemonsets

Connect to the node worker01.
Replace the master FQDN with the LB FQDN in /etc/kubernetes/kubelet.conf:
apiVersion: v1
clusters:
- cluster:
    server: https://lb.fqdn:6443
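If you prefer not to edit the file by hand, a sed one-liner does the same (a sketch; it keeps a .bak copy of the original):

sed -i.bak 's/master01\.fqdn/lb.fqdn/g' /etc/kubernetes/kubelet.conf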
Restart kubelet:
systemctl restart kubelet

Uncordon the node:
kubectl uncordon worker01

Repeat for the second worker, worker02.fqdn.

Remove occurrences of master01.fqdn in cluster/kubeadm-init.conf and in the ConfigMap kube-system/kubeadm-config.
Replace all occurrences in the cluster file directory, e.g.:
find . -type f -exec sed -i 's/master01\.fqdn/lb.fqdn/g' {} +

Join the new masters, running the command once per new master (e.g. master02.fqdn):
skuba node join --role master --user sles --sudo --target master02.fqdn
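Check that the new masters register and go Ready (any kubeconfig already pointing at the LB will do):

kubectl get nodes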
Recreate the kube-proxy pods so they use the LB instead of contacting master01 directly; restarting the DaemonSet recreates the pods with the ConfigMap we edited previously:
kubectl -n kube-system rollout restart ds/kube-proxy

Recreate the gangway and dex pods so they use the LB instead of contacting master01 directly; restarting the Deployments recreates the pods with the ConfigMaps we edited previously:
kubectl -n kube-system rollout restart deploy/oidc-gangway
kubectl -n kube-system rollout restart deploy/oidc-dex

Update the kubeconfig, ConfigMaps, scripts, CI, etc. to use the new LB instead of contacting master01.fqdn directly.
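For an existing kubeconfig, kubectl can rewrite the server entry in place; a sketch, assuming the cluster entry is named default-cluster (check with kubectl config get-clusters):

kubectl config set-cluster default-cluster --server=https://lb.fqdn:6443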
Repeat step 2; this will remove master01.fqdn from the apiserver certificate.
openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver.crt | grep DNS
--> the FQDN of master01 should no longer be in the list.
Remove occurrences of master01.fqdn in /etc/kubernetes:
cd /etc/kubernetes && find . -type f -exec sed -i 's/master01\.fqdn/lb.fqdn/g' {} +
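A quick check that nothing under /etc/kubernetes still references the old master (grep exits non-zero when there is no match):

grep -r 'master01\.fqdn' /etc/kubernetes || echo clean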