[DRAFT] Production k8s setup for dummies (like me)

For years now, I've been searching for a tutorial on how to deploy a production-ready k8s cluster. I've found some, but pretty much all of them have issues. Let's look at what I've found so far:

Production-ready cluster ...

  1. on one node. That's great, but then I could just use docker compose instead and save myself all the hassle with k8s.
  2. with the firewall disabled. I guess that's fine if you have your own DC/lab with a separate firewall, though I'd still like a firewall running on every machine in the network anyway.
  3. that just doesn't work. Maybe I'm stupid, but the most common issue is that pods on 2 different nodes can't communicate with each other.

Let's summarize what I want:

  • During the setup, there can't be any unencrypted communication between nodes.
  • The nodes must have a firewall enabled (preferably firewalld, since that's what's running everywhere by default)
  • The cluster must have at least 3 nodes
  • It has to work

My setup

Ok, with all of that out of the way, let's see what I have.

Because I'm cheap, don't want to be vendor-locked, and want to be able to reproduce everything locally, I'm using a cheap VPS provider. If I didn't have these constraints, I'd just use managed k8s and be done with it.

So what will I actually use? Thanks to the above constraints, I'm able to develop locally and then, when I'm done, verify it on the VPS provider.

Local environment

My machine: Intel i7-8700 with 64GB RAM running Fedora 37

I'll use QEMU/KVM to run 3 VMs: 2 vCPU with 2 GB RAM each, running Rocky Linux 9

VPS provider

I'm using Contabo as my VPS provider with 3 of their CLOUD VPS S plans. The specs: 4 vCPU with 8 GB RAM each, running Rocky Linux 9

There's no private network between the VPSes, though Contabo offers one as a paid add-on, for €3 per VPS. When I'm paying €6 for each VPS, that would push me out of the "It's cheap!" range. I mean, it would still be pretty cheap compared to AWS, but ... it would also lock me in with them, which is not what I want.

Setup

Ah, finally we get to the point where I'll have to do stuff. To make it really clear, instead of saying "go to X and do Y" I'll write out all the commands. I'm not sure yet what I'll do when editing a file. Would a diff be clear enough? We'll see. What if the file's too long? Would cutting the irrelevant parts be OK? We'll also see. OK, let's get into it!

Initial setup

The one part I won't tell you how to do is how to create the VMs.

  1. The VPS provider will create them for you
  2. I want this to work whether we're on a VPS, a VM or a bare-metal machine. So if you really don't know how to create a VM, just get a trio of Raspberry Pis and use those. I'd hope it'll work just fine on those as well. Just make sure you're installing aarch64 or noarch packages instead of x86_64 ones, since they have ARM CPUs.

But if you really want a hint on how to create the VMs, I'm using cockpit with the virtual machines extension.

# dnf install -y cockpit cockpit-machines
# systemctl enable --now cockpit.socket

Then open your web browser at https://localhost:9090 and you'll figure it out from there.

Anyway, let's start finally!

Oh noes!

Get the IP addresses of the machines somehow. They'll be written somewhere in the dashboard of the VPS provider, in cockpit, or in some other GUI for KVM/QEMU. Or you can just run sudo virsh net-dhcp-leases default. The default is the network name; unless you changed it or created a new network, it should be just that. If default shows no VMs and they are in fact running, you might want to run sudo virsh net-list to find out what network name you have. But again, that's outside the scope of this.

Anyway, do you have the IPs? Good! Mine are:

  • 192.168.100.116
  • 192.168.100.124
  • 192.168.100.48

Depending on how you created the VMs, you might or might not have created a separate user as well.

I goofed up and didn't create a separate user. That means I can't connect via SSH, because by default, root login with only a password is not allowed (at least on the VMs; it will probably be allowed on the VPSes). What I can do now is either create the new user or add a public key. We want to do things the proper way, so I'll create a new user. Now there's a slight problem: how do I create a new user when I can't access the machine via SSH? Well, I could use cockpit, but using a command line in the browser is a bit weird, so I'll use Virtual Machine Manager instead; it connects to the VM and gives me a graphical console.
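
If you don't have Virtual Machine Manager on your workstation yet, on Fedora the package should be virt-manager:

$ sudo dnf install -y virt-manager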

I'll do this for all the nodes:

Rocky Linux 9.1 (Blue Onyx)
Kernel 5.14.0-162.23.1.el9_1.x86_64 on an x86_64

Activate the web console with: systemctl enable --now cockpit.socket

k8s-1 login: root
Password:
Last login: Wed Apr 19 16:38:08 on tty1
[root@k8s-1 ~]# useradd -G wheel -m -U dejfcold
[root@k8s-1 ~]# passwd dejfcold
Changing password for user dejfcold.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

Okay, let's explain the useradd command.

First, useradd and adduser are the same thing; adduser is actually just a symlink to /usr/sbin/useradd. Now let's explain the options.

  • -G wheel adds the user to the wheel group. According to the sudoers file, members of the wheel group can run any command. That's important, because we want to be able to act as root
  • -m creates the home directory, in this case /home/dejfcold
  • -U creates a group with the same name as the user.
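
If you want to double-check the result, id should show the wheel membership, and grepping the sudoers file should show the rule the first bullet refers to (the exact line can differ between distros):

[root@k8s-1 ~]# id dejfcold
[root@k8s-1 ~]# grep '^%wheel' /etc/sudoers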

Okay, let's stop for a moment. I just went into each VM and created the same user everywhere. I also said we are building a production cluster. Isn't that weird? What if I want to add more users in the future? What if I add more servers? What if both?

It's true that this is not ideal. In an ideal world, we would have another cluster acting as an IdM. Well, we don't live in an ideal world, and an IdM isn't in our budget, so we'll just do without it for now. We can add it later, once we're able to deploy our apps and attract investors who'll pay for the IdM.

Setting up SSH and stuff

Now that we have a user that can be used to access the machines via SSH, let's set it up properly.

First things first - you'll need an SSH key. I already have one, but in case you need to create it, here's the command to run on your computer (not on the VMs/VPSes):

$ ssh-keygen -t ed25519 -C "K8S prod demo"
Generating public/private ed25519 key pair.
Enter file in which to save the key (/home/dejfcold/.ssh/id_ed25519): /home/dejfcold/.ssh/k8s_vm
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/dejfcold/.ssh/k8s_vm
Your public key has been saved in /home/dejfcold/.ssh/k8s_vm.pub
The key fingerprint is:
SHA256:GvSdUc++6u+COfh+PzTsav7mWNlo273AlAzTFgQkoks K8S prod demo
The key's randomart image is:
+--[ED25519 256]--+
|       . ..o+o   |
|      . . ...o.  |
|     E.   .o oo  |
|    .... . o=..  |
|     .. S o  =.  |
|       o    o += |
|      .  . o =*..|
|        . + +==o.|
|         oo**XBo+|
+----[SHA256]-----+

Now that we have the SSH key and the machines we can connect to, let's copy the public key to the machines!

For each VM, issue the following command:

$ ssh-copy-id -i ~/.ssh/k8s_vm dejfcold@192.168.100.116
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/dejfcold/.ssh/k8s_vm.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
dejfcold@192.168.100.116's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'dejfcold@192.168.100.116'"
and check to make sure that only the key(s) you wanted were added.

Just to be on the safe side, add an entry to your SSH client config (on your computer; note the per-user client config lives in ~/.ssh/config, not ~/.ssh/ssh_config):

$ cat << EOF >> ~/.ssh/config
Host 192.168.100.116 192.168.100.124 192.168.100.48
	User dejfcold
	PreferredAuthentications publickey
	IdentityFile ~/.ssh/k8s_vm
EOF

This may save you trouble one day, once you have too many SSH keys. Instead of trying every key individually and then failing because the server wouldn't accept any of them, ssh will use the correct one straight away. From now on, you can also omit the username in the ssh commands, so instead of ssh dejfcold@192.168.100.116 you can do just ssh 192.168.100.116 and it will work.
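
If you want to verify the config is picked up without actually connecting, ssh -G prints the options ssh would use for a given host; you should see your user, the k8s_vm identity file and publickey in there:

$ ssh -G 192.168.100.116 | grep -iE 'user |identityfile|preferredauthentications'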

Now that's done, let's do several things on all the VMs again. We will:

  1. Disable SSH password and root login
  2. Reload the SSH daemon to apply the configuration
  3. Check that we can still log in (to be honest, you should do this in another session, so you can still correct the config if it's somehow wrong)
  4. Check that we can't log in using a password
$ ssh 192.168.100.116
Activate the web console with: systemctl enable --now cockpit.socket

Last login: Wed Apr 19 18:22:59 2023 from 192.168.100.1
[dejfcold@k8s-1 ~]$ sudo bash -c 'cat << EOF > /etc/ssh/sshd_config.d/70-custom.conf
PasswordAuthentication no
PermitRootLogin no
EOF'

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for dejfcold:
[dejfcold@k8s-1 ~]$ sudo systemctl reload sshd
[dejfcold@k8s-1 ~]$ exit
$ ssh 192.168.100.116
Activate the web console with: systemctl enable --now cockpit.socket

Last login: Wed Apr 19 18:23:42 2023 from 192.168.100.1
[dejfcold@k8s-1 ~]$ exit
$ ssh -o PubkeyAuthentication=no -o PreferredAuthentications=password 192.168.100.116
dejfcold@192.168.100.116: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).

That's it! Well, at least for SSH, anyway.

Private networking

We established that there's no private network. So let's create one using, (I don't know if) you guessed it, Wireguard!

Do we need it? I don't know. Will it complicate things? Sure! Is there overhead compared to just having a private network? Yes! But it will cause traffic between the nodes to be encrypted, which is one of our conditions. And it might also be useful for other stuff. For example, you'll learn how to configure Wireguard in a mesh configuration! ... I think.

Ok, from now on I'll skip the SSH commands and just keep saying "do this for every VM" or something, and you'll replace the IP addresses I'm using with your own. OK? OK.

What we'll do here is

  • updating our system
  • enabling firewalld
  • installing the wireguard tools and
  • opening wireguard's port in the firewall

for every machine. For that we'll do:

[dejfcold@k8s-1 ~]$ sudo dnf update -y
[sudo] password for dejfcold: 
Last metadata expiration check: 3:35:36 ago on Wed 19 Apr 2023 04:16:27 PM CEST.
Dependencies resolved.
Nothing to do.
Complete!
[dejfcold@k8s-1 ~]$ sudo systemctl enable firewalld --now
Created symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service → /usr/lib/systemd/system/firewalld.service.
Created symlink /etc/systemd/system/multi-user.target.wants/firewalld.service → /usr/lib/systemd/system/firewalld.service.
[dejfcold@k8s-1 ~]$ sudo dnf install -y wireguard-tools
Last metadata expiration check: 3:24:12 ago on Wed 19 Apr 2023 04:26:02 PM CEST.
Dependencies resolved.
==============================================================================================================================================================================================================================================
 Package                                                      Architecture                                       Version                                                          Repository                                             Size
==============================================================================================================================================================================================================================================
Installing:
 wireguard-tools                                              x86_64                                             1.0.20210914-2.el9                                               appstream                                             115 k
Installing dependencies:
 systemd-resolved                                             x86_64                                             250-12.el9_1.3                                                   baseos                                                336 k

Transaction Summary
==============================================================================================================================================================================================================================================
Install  2 Packages

Total download size: 452 k
Installed size: 1.0 M
Downloading Packages:
(1/2): wireguard-tools-1.0.20210914-2.el9.x86_64.rpm                                                                                                                                                          506 kB/s | 115 kB     00:00    
(2/2): systemd-resolved-250-12.el9_1.3.x86_64.rpm                                                                                                                                                             1.4 MB/s | 336 kB     00:00    
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                                                          42 kB/s | 452 kB     00:10     
Rocky Linux 9 - BaseOS                                                                                                                                                                                        1.7 MB/s | 1.7 kB     00:00    
Importing GPG key 0x350D275D:
 Userid     : "Rocky Enterprise Software Foundation - Release key 2022 <releng@rockylinux.org>"
 Fingerprint: 21CB 256A E16F C54C 6E65 2949 702D 426D 350D 275D
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-Rocky-9
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                                                                      1/1 
  Running scriptlet: systemd-resolved-250-12.el9_1.3.x86_64                                                                                                                                                                               1/2 
  Installing       : systemd-resolved-250-12.el9_1.3.x86_64                                                                                                                                                                               1/2 
  Running scriptlet: systemd-resolved-250-12.el9_1.3.x86_64                                                                                                                                                                               1/2 
  Installing       : wireguard-tools-1.0.20210914-2.el9.x86_64                                                                                                                                                                            2/2 
  Running scriptlet: wireguard-tools-1.0.20210914-2.el9.x86_64                                                                                                                                                                            2/2 
  Verifying        : systemd-resolved-250-12.el9_1.3.x86_64                                                                                                                                                                               1/2 
  Verifying        : wireguard-tools-1.0.20210914-2.el9.x86_64                                                                                                                                                                            2/2 

Installed:
  systemd-resolved-250-12.el9_1.3.x86_64                                                                               wireguard-tools-1.0.20210914-2.el9.x86_64                                                                              

Complete!
[dejfcold@k8s-1 ~]$ sudo firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp1s0
  sources: 
  services: cockpit dhcpv6-client ssh
  ports: 
  protocols: 
  forward: yes
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:
[dejfcold@k8s-1 ~]$ sudo firewall-cmd --permanent --zone=public --add-service=wireguard
success
[dejfcold@k8s-1 ~]$ sudo firewall-cmd --reload
success
[dejfcold@k8s-1 ~]$ sudo firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp1s0
  sources: 
  services: cockpit dhcpv6-client ssh wireguard
  ports: 
  protocols: 
  forward: yes
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:

So much text and we've barely done anything. Since we've already done quite a bit of work, it might be a good idea to create snapshots of our VMs before we accidentally mess up their networking.
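
I won't cover snapshots in depth, but with libvirt it can be as simple as the following (a sketch, assuming qcow2 disks and that the VMs are named k8s-1 through k8s-3 in libvirt; on a VPS provider you'd use their snapshot feature instead):

$ for vm in k8s-1 k8s-2 k8s-3; do sudo virsh snapshot-create-as $vm pre-wireguard; done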

Here's what we'll do next:

  • create a wireguard config file
  • create internal firewall zone
  • assign the wireguard interface to the internal zone
  • allow all traffic in the internal zone
  • enable IP forwarding
  • and finally start the wireguard service.
  • It might also be a good idea to actually test that it works.

To actually create the config file, we'll need a few things first.

We need to:

  • decide how we'll assign IP addresses. They usually come from a private range like 10.0.0.xxx, so we'll just stick to that.
  • generate a private and public key pair for each VM
  • generate a preshared key for each edge of our network.

Let's start with something that isn't as tricky: generating the key pairs. Again, do the following on every VM:

[dejfcold@k8s-1 ~]$ sudo su
[sudo] password for dejfcold: 
[root@k8s-1 dejfcold]# cd ~
[root@k8s-1 ~]# bash
[root@k8s-1 ~]# umask 077
[root@k8s-1 ~]# wg genkey > ~/wg-priv-key
[root@k8s-1 ~]# exit
exit
[root@k8s-1 ~]# wg pubkey < wg-priv-key > wg-pub-key

Obviously, most of the commands make sense in the context of what I said previously about what we're going to do. Still, some explanation is needed. Why do I start a new bash, what's the umask for, and why do I exit so soon again? Well, we need the new bash for the umask, we need the umask for file permissions, and we exit so soon because we simply don't need the umask anymore. OK, so what about the permissions? Well, we are generating a private key. If we ran the command without the umask:

  1. wireguard will complain
[root@k8s-1 ~]# wg genkey > ~/wg-priv-key
Warning: writing to world accessible file.
Consider setting the umask to 077 and trying again.

and

  2. the private key would be readable by everyone (-rw-r--r--.) instead of only by root (-rw-------.)
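
If that already happened to you, you shouldn't need to regenerate the key on a fresh single-admin box like ours; tightening the permissions after the fact should do:

[root@k8s-1 ~]# chmod 600 ~/wg-priv-key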

Ok, onto the more complicated stuff. Sure, it's not that complicated, but it may get confusing at times.

Let's define the Wireguard's IPs. I'll create a table we can refer to later.

-          k8s-1            k8s-2            k8s-3
public IP  192.168.100.116  192.168.100.124  192.168.100.48
WG IP      10.0.0.116       10.0.0.124       10.0.0.48

How did I come up with the WG IPs? Well, I took the standard format of 10.0.0.xxx and replaced the xxx with the last octet of the public IPs. There's no magic involved. It can be whatever you want!

Let's do the preshared keys. This one is a bit tricky. Well, not really, but I run the risk of someone yelling at me that this way isn't really secure. But then again, this whole piece carries the same risk.

Why is it tricky? Well, we already know how to generate a secret key securely, but this one is a symmetric key, so we also need to copy it securely. We can do that in a number of ways. Using SSH, for example. But we already disabled SSH for root. We could use our regular user instead, but then that user would know the key. That would be fine by me, but some security experts might be against it. I'd probably be willing to open the preshared key in nano in one SSH window, copy it to the clipboard and paste it into another SSH window, but I can already hear people screaming.

So, let's over-engineer it! I'll also teach you about GPG encryption.

First, let's define how we'll move the keys. You may have noticed I named my VMs k8s-1, k8s-2 and k8s-3. Let's use that. We'll generate a preshared key on every machine and then copy it to the machine with the next higher number in its name, with #3 wrapping around to #1. So it'll be:

  • #1 -> #2
  • #2 -> #3
  • #3 -> #1

Let's generate the keys first:

[root@k8s-1 ~]# bash
[root@k8s-1 ~]# umask 077
[root@k8s-1 ~]# wg genpsk > wg-preshared-1-2
[root@k8s-1 ~]# exit
exit

And now let's move them. We'll need to:

  1. install pinentry first and
  2. generate yet another key, this time for GPG. It'll also ask you for a passphrase, but I have no way of showing that here.
  3. Next, we'll export the public key, which we'll use to encrypt the symmetric key.
  4. Then we'll import it on the machine from which we want to move the key.
  5. We'll encrypt the key and
  6. move it.
  7. Finally, we'll decrypt the key.

A word of caution! Pay close attention to the hostnames! We're encrypting with the key generated on the machine to which we want to move stuff!

[root@k8s-1 ~]# dnf install -y pinentry
Last metadata expiration check: 3:26:46 ago on Wed 19 Apr 2023 08:40:59 PM CEST.
Dependencies resolved.
==============================================================================================================================================================================================================================================
 Package                                                  Architecture                                          Version                                                        Repository                                                Size
==============================================================================================================================================================================================================================================
Installing:
 pinentry                                                 x86_64                                                1.1.1-8.el9                                                    appstream                                                 66 k
Installing dependencies:
 libsecret                                                x86_64                                                0.20.4-4.el9                                                   appstream                                                157 k

Transaction Summary
==============================================================================================================================================================================================================================================
Install  2 Packages

Total download size: 223 k
Installed size: 589 k
Downloading Packages:
(1/2): pinentry-1.1.1-8.el9.x86_64.rpm                                                                                                                                                                        335 kB/s |  66 kB     00:00    
(2/2): libsecret-0.20.4-4.el9.x86_64.rpm                                                                                                                                                                      707 kB/s | 157 kB     00:00    
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                                                         368 kB/s | 223 kB     00:00     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                                                                      1/1 
  Installing       : libsecret-0.20.4-4.el9.x86_64                                                                                                                                                                                        1/2 
  Installing       : pinentry-1.1.1-8.el9.x86_64                                                                                                                                                                                          2/2 
  Running scriptlet: pinentry-1.1.1-8.el9.x86_64                                                                                                                                                                                          2/2 
  Verifying        : libsecret-0.20.4-4.el9.x86_64                                                                                                                                                                                        1/2 
  Verifying        : pinentry-1.1.1-8.el9.x86_64                                                                                                                                                                                          2/2 

Installed:
  libsecret-0.20.4-4.el9.x86_64                                                                                          pinentry-1.1.1-8.el9.x86_64                                                                                         

Complete!
[root@k8s-1 ~]# gpg --full-generate-key
gpg (GnuPG) 2.3.3; Copyright (C) 2021 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
  (14) Existing key from card
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (3072) 4096
Requested keysize is 4096 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 1
Key expires at Fri 21 Apr 2023 12:08:12 AM CEST
Is this correct? (y/N) y

GnuPG needs to construct a user ID to identify your key.

Real name: 
Email address: 
Comment: k8s-1
You selected this USER-ID:
    " (k8s-1)"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: key 8572E8401ACBDC99 marked as ultimately trusted
gpg: directory '/root/.gnupg/openpgp-revocs.d' created
gpg: revocation certificate stored as '/root/.gnupg/openpgp-revocs.d/C79110D17E5A7BA38F22D4B58572E8401ACBDC99.rev'
public and secret key created and signed.

pub   rsa4096 2023-04-19 [SC] [expires: 2023-04-20]
      C79110D17E5A7BA38F22D4B58572E8401ACBDC99
uid                       (k8s-1)
sub   rsa4096 2023-04-19 [E] [expires: 2023-04-20]

[root@k8s-1 ~]# gpg --list-keys
gpg: checking the trustdb
gpg: marginals needed: 3  completes needed: 1  trust model: pgp
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: next trustdb check due at 2023-04-20
/root/.gnupg/pubring.kbx
------------------------
pub   rsa4096 2023-04-19 [SC] [expires: 2023-04-20]
      C79110D17E5A7BA38F22D4B58572E8401ACBDC99
uid           [ultimate]  (k8s-1)
sub   rsa4096 2023-04-19 [E] [expires: 2023-04-20]

[root@k8s-1 ~]# gpg --armor --export C79110D17E5A7BA38F22D4B58572E8401ACBDC99
-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBGRAZlwBEAC+PSeC2TlLMcNrWxVTVywzs7ki49imYdHZqGEDto652Kz1klcK
9QERlQ9kq7f/SUUj6kfaedM8FNkC49ePH2SeQ9rS8Z53DrUltAHf6+z3rWVTjfw5
7bNij3UoO99ndqNmLl1w448uVYya47u1+sFgrqmZ1PJ8BNaPr5+p29zMcBqER5Jj
7EZBHrUhmYRbt5XOJ2mH7qzGFQXvyj89laxUhrCXDVeI+XZ6hwpAPqrlxS08nlx6
QVrgy1kNtZHzp0BwjEmUddzSr3k5n1y5ONPjA9UAM+a+Vg2+HpqMZAI/jNhJP53a
W4K8WgYTZexCwOhweyelNszBPY6BeL3Hbt0TucKmy71V/oLlvkJ7Ug3VhbMd+L3p
3qjK5noPRRf1EJp7CIL0j1jX3j7VruLX5JsBE9+0By2B/Zn9yzL3p8EEYeGufwzK
+3TKd7duj5Ey7wFfYuxL+G8EysdOI5xEJRnjOTP8jR/+EiFsmXEvyQjfdkPUVQa6
7y0fUHGYrhaVhMkQc5b3qXO6H5JK4TEV2aNxm9qVfYMIYZddKQ20AfzkXYeQjJiZ
ZYHanjlBBoecLPHsmAdYmnGkfNRrF4RH5RbrUIR9gm2R5ETjhkTezbRqvRuXQUHo
h8DDfAcOR/TYp4qZMAGzVoYUNJ1EwJMv06EJcsAjLqBusxvPMKTlULSKpwARAQAB
tAggKGs4cy0xKYkCWAQTAQgAQhYhBMeRENF+WnujjyLUtYVy6EAay9yZBQJkQGZc
AhsDBQkAAVGABQsJCAcCAyICAQYVCgkICwIEFgIDAQIeBwIXgAAKCRCFcuhAGsvc
mQ7nEAC8hr+7YtghsxIaz4WTXDR8DCqTYBRuzm3NZv7anwadtoQjRx8ABU8mZjP+
I8fZtPEUijRI9ISFlX7407Ywc1ZE9bvvLDtctFNFEZBGWY2FWi1NYU+ab/o4IYGY
RRWcR/KUIW6gMA1yL2N0GH7ZXSQ1ltkKtOwlSLueX2TUvwWXUYzZj/Qsn/9SO5Ay
0g3UpSVsTyrVmpsF6nolIm2zkOaZhIpgQUFobA+bLhnAqdMp2jt3jSuBdxAv5Is9
T+vqh//ziykhEzrFXIui1EpxFBKJ+DotKFlKkusSsttKvIDZqFZLJmFc6R6U62LC
6f6Cz6R6/k95MMHb0ZbiJb/8GYyHI4iUgqM7+LsqkZlIJtgkl9TiDLUT/yuClwN5
rs3qGzfTVxcS/yiqcd/Ewz4CR2imA8DV668D/qsR3j6epe5ebFmRm4B8mblq1Faq
4zNafRmIMx6yzHw84Hdi58k5R24M0Fp0ra++2HMt1WpylLxQF36NrH/22d2jsDP2
WFa6B3MOxY0N0llruEKkUWSls69YbxSQUPSWR+4yW9z2Xth/CA/JZCERKuipRzQD
9xT9U1+WywlXd26YGBU9e81pLkQUCwMUCOdf2H8xzRtpDvBqLHBWLoI75+d5HWyo
5Ev0bi3xhA07hxCHN7UdBawU0IoN7LW0qnUh0p1DCrIltv6AY7kCDQRkQGZcARAA
v+U7CLoNs+xkWiQgMOvqMwsLfiCL/LJbxhGpDKra91AHe7UyA5M7g03wMy4mMNLp
oTxKt9aojds414SQwWadJ/CO9ZuApUMbm090EDJ1/JziIky82RyAFF0L9ub0GS2O
zmP4OD3H/+AkBMThCvMFcQTORGlVnyYQW7s1FXaoZXWs98ZXChvx4cNo9ffvEvob
iEmZiva5YtCbfGR5GsHieVo/is0M9gOk7hMghq0wQC1vBwQlSwVpsi7aU1im9n6r
WohzPejw4C6vL6ZKJSIK9Eqm9sKwia1N25jQ+lwnleV6gU6sJKK/51fgZAwA542G
Np5g4WO7txymwIss3ifuvTNbuauBUDa2Y7xeVrLJpCKmlSZ/2tXs6UQEWaf90YzH
swb1R6N/rdjY5X/F7so+n7YI1uGSmMY54tYfY0lYQPlw2qEh+oHoPZrOSEiIP8/z
Asz0NYgnWgmIUO8TWrv8qMXakNkftYwdUQaabfWzWAc1uqsb+Dza8ycQw1/qFUD8
kD0En77TjTSBDdBexQ5BDd3kns9nODZcAuBuHZWpK1I1PJ/Lm+0t+BdkDgxB+aW3
E4U7RSJriB2ltnOmFRJSoyqYG6w9Y2ZGN+8Yip+1cQqVA94PcqM8k5Yuc4obpdhl
9eo2lg/JTeFg/dztkMOxEETDAsVLphLonzDqMOPBgT8AEQEAAYkCPAQYAQgAJhYh
BMeRENF+WnujjyLUtYVy6EAay9yZBQJkQGZcAhsMBQkAAVGAAAoJEIVy6EAay9yZ
G1sQAIx6zWKFT0r73HZmu3y6/AHwhVI4IWMxtjrFmXrwV33PqIFPBesQvalNEcLN
8Z04krpRmA+fme7UX+vJTdzksctrQK3Ne8Awxn03OZ1C6P6Hw1v0j8tngTgYQTM4
YIfyx5MMKdRSvRoob3Avnr50TAQ72mk/HjTkxeVyxpZrXGb9dW18EmfDgyDSW2aM
30D/y991aJG+20IfoKxeCzL/5w9Spl0Z4qLy96186GJ3eJBMwn9QZskAIF/eRpUh
c2Ln9wu5Gp8SB0kqvlfVai01wOaxfN9+uxo82iPq9CVPc/b424WADcq/WmP36mWd
O6A9h893pbUHz0D7WVp/i88LnEncW+QPY9nT0cHkw6/IpZ6fRbIia++xhv18k9Iw
Buk7AuLvmw6t2MGfD+OF/Eajbs92yDjyhVwWj+OqwhFCqURfbJ80AOiyhtgxH1yO
d0AmD1LgDy4gaT64hua2Wg+9N+wNkSg5aKgt9VjXOevXv40x6UFn9uAKj+knVe+6
xKe8CiHvPDJd5s6ONOj+LUAGq3tMqkDun84vTIvre8Jt86rnzeC7raN0wgTvwDvA
Qq8leljtixLvZA5Efe5suQOHK0jZh0FBACznSZZaDmABxHko1WQBV+WB2bnERy3h
L45bFjLjaA7CUcD3Au+HdOWyhTB+MSrFpQQ2xJXwQTrcQtOl
=Fr5i
-----END PGP PUBLIC KEY BLOCK-----

Now we have exported the GPG public key. We'll import it on the machine with the smaller number in its name; #1 wraps around to #3.

[root@k8s-3 ~]# cat << EOF > k8s-1.gpg.pub
-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBGRAZlwBEAC+PSeC2TlLMcNrWxVTVywzs7ki49imYdHZqGEDto652Kz1klcK
9QERlQ9kq7f/SUUj6kfaedM8FNkC49ePH2SeQ9rS8Z53DrUltAHf6+z3rWVTjfw5
7bNij3UoO99ndqNmLl1w448uVYya47u1+sFgrqmZ1PJ8BNaPr5+p29zMcBqER5Jj
7EZBHrUhmYRbt5XOJ2mH7qzGFQXvyj89laxUhrCXDVeI+XZ6hwpAPqrlxS08nlx6
QVrgy1kNtZHzp0BwjEmUddzSr3k5n1y5ONPjA9UAM+a+Vg2+HpqMZAI/jNhJP53a
W4K8WgYTZexCwOhweyelNszBPY6BeL3Hbt0TucKmy71V/oLlvkJ7Ug3VhbMd+L3p
3qjK5noPRRf1EJp7CIL0j1jX3j7VruLX5JsBE9+0By2B/Zn9yzL3p8EEYeGufwzK
+3TKd7duj5Ey7wFfYuxL+G8EysdOI5xEJRnjOTP8jR/+EiFsmXEvyQjfdkPUVQa6
7y0fUHGYrhaVhMkQc5b3qXO6H5JK4TEV2aNxm9qVfYMIYZddKQ20AfzkXYeQjJiZ
ZYHanjlBBoecLPHsmAdYmnGkfNRrF4RH5RbrUIR9gm2R5ETjhkTezbRqvRuXQUHo
h8DDfAcOR/TYp4qZMAGzVoYUNJ1EwJMv06EJcsAjLqBusxvPMKTlULSKpwARAQAB
tAggKGs4cy0xKYkCWAQTAQgAQhYhBMeRENF+WnujjyLUtYVy6EAay9yZBQJkQGZc
AhsDBQkAAVGABQsJCAcCAyICAQYVCgkICwIEFgIDAQIeBwIXgAAKCRCFcuhAGsvc
mQ7nEAC8hr+7YtghsxIaz4WTXDR8DCqTYBRuzm3NZv7anwadtoQjRx8ABU8mZjP+
I8fZtPEUijRI9ISFlX7407Ywc1ZE9bvvLDtctFNFEZBGWY2FWi1NYU+ab/o4IYGY
RRWcR/KUIW6gMA1yL2N0GH7ZXSQ1ltkKtOwlSLueX2TUvwWXUYzZj/Qsn/9SO5Ay
0g3UpSVsTyrVmpsF6nolIm2zkOaZhIpgQUFobA+bLhnAqdMp2jt3jSuBdxAv5Is9
T+vqh//ziykhEzrFXIui1EpxFBKJ+DotKFlKkusSsttKvIDZqFZLJmFc6R6U62LC
6f6Cz6R6/k95MMHb0ZbiJb/8GYyHI4iUgqM7+LsqkZlIJtgkl9TiDLUT/yuClwN5
rs3qGzfTVxcS/yiqcd/Ewz4CR2imA8DV668D/qsR3j6epe5ebFmRm4B8mblq1Faq
4zNafRmIMx6yzHw84Hdi58k5R24M0Fp0ra++2HMt1WpylLxQF36NrH/22d2jsDP2
WFa6B3MOxY0N0llruEKkUWSls69YbxSQUPSWR+4yW9z2Xth/CA/JZCERKuipRzQD
9xT9U1+WywlXd26YGBU9e81pLkQUCwMUCOdf2H8xzRtpDvBqLHBWLoI75+d5HWyo
5Ev0bi3xhA07hxCHN7UdBawU0IoN7LW0qnUh0p1DCrIltv6AY7kCDQRkQGZcARAA
v+U7CLoNs+xkWiQgMOvqMwsLfiCL/LJbxhGpDKra91AHe7UyA5M7g03wMy4mMNLp
oTxKt9aojds414SQwWadJ/CO9ZuApUMbm090EDJ1/JziIky82RyAFF0L9ub0GS2O
zmP4OD3H/+AkBMThCvMFcQTORGlVnyYQW7s1FXaoZXWs98ZXChvx4cNo9ffvEvob
iEmZiva5YtCbfGR5GsHieVo/is0M9gOk7hMghq0wQC1vBwQlSwVpsi7aU1im9n6r
WohzPejw4C6vL6ZKJSIK9Eqm9sKwia1N25jQ+lwnleV6gU6sJKK/51fgZAwA542G
Np5g4WO7txymwIss3ifuvTNbuauBUDa2Y7xeVrLJpCKmlSZ/2tXs6UQEWaf90YzH
swb1R6N/rdjY5X/F7so+n7YI1uGSmMY54tYfY0lYQPlw2qEh+oHoPZrOSEiIP8/z
Asz0NYgnWgmIUO8TWrv8qMXakNkftYwdUQaabfWzWAc1uqsb+Dza8ycQw1/qFUD8
kD0En77TjTSBDdBexQ5BDd3kns9nODZcAuBuHZWpK1I1PJ/Lm+0t+BdkDgxB+aW3
E4U7RSJriB2ltnOmFRJSoyqYG6w9Y2ZGN+8Yip+1cQqVA94PcqM8k5Yuc4obpdhl
9eo2lg/JTeFg/dztkMOxEETDAsVLphLonzDqMOPBgT8AEQEAAYkCPAQYAQgAJhYh
BMeRENF+WnujjyLUtYVy6EAay9yZBQJkQGZcAhsMBQkAAVGAAAoJEIVy6EAay9yZ
G1sQAIx6zWKFT0r73HZmu3y6/AHwhVI4IWMxtjrFmXrwV33PqIFPBesQvalNEcLN
8Z04krpRmA+fme7UX+vJTdzksctrQK3Ne8Awxn03OZ1C6P6Hw1v0j8tngTgYQTM4
YIfyx5MMKdRSvRoob3Avnr50TAQ72mk/HjTkxeVyxpZrXGb9dW18EmfDgyDSW2aM
30D/y991aJG+20IfoKxeCzL/5w9Spl0Z4qLy96186GJ3eJBMwn9QZskAIF/eRpUh
c2Ln9wu5Gp8SB0kqvlfVai01wOaxfN9+uxo82iPq9CVPc/b424WADcq/WmP36mWd
O6A9h893pbUHz0D7WVp/i88LnEncW+QPY9nT0cHkw6/IpZ6fRbIia++xhv18k9Iw
Buk7AuLvmw6t2MGfD+OF/Eajbs92yDjyhVwWj+OqwhFCqURfbJ80AOiyhtgxH1yO
d0AmD1LgDy4gaT64hua2Wg+9N+wNkSg5aKgt9VjXOevXv40x6UFn9uAKj+knVe+6
xKe8CiHvPDJd5s6ONOj+LUAGq3tMqkDun84vTIvre8Jt86rnzeC7raN0wgTvwDvA
Qq8leljtixLvZA5Efe5suQOHK0jZh0FBACznSZZaDmABxHko1WQBV+WB2bnERy3h
L45bFjLjaA7CUcD3Au+HdOWyhTB+MSrFpQQ2xJXwQTrcQtOl
=Fr5i
-----END PGP PUBLIC KEY BLOCK-----
EOF
[root@k8s-3 ~]# gpg --import k8s-1.gpg.pub 
gpg: key 8572E8401ACBDC99: public key " (k8s-1)" imported
gpg: Total number processed: 1
gpg:               imported: 1
[root@k8s-3 ~]# gpg --encrypt --sign --armor -r 8572E8401ACBDC99 wg-preshared-3-1
gpg: 9D82F14A98464C6B: There is no assurance this key belongs to the named user

sub  rsa4096/9D82F14A98464C6B 2023-04-19  (k8s-1)
 Primary key fingerprint: C791 10D1 7E5A 7BA3 8F22  D4B5 8572 E840 1ACB DC99
      Subkey fingerprint: 9606 D666 8D09 582F 97EB  3ACF 9D82 F14A 9846 4C6B

It is NOT certain that the key belongs to the person named
in the user ID.  If you *really* know what you are doing,
you may answer the next question with yes.

Use this key anyway? (y/N) y
[root@k8s-3 ~]# cat wg-preshared-3-1.asc 
-----BEGIN PGP MESSAGE-----

hQIMA52C8UqYRkxrARAAoOqHpKxy4NHnYEI9Hj1NgZkXC6j37vInMlR8/LBhLTpx
TuAFBwNhnEOGAPipfAW+K4/kyi67k/ZtNmDY0Q7GUbP9PGfQqbPRTSAzh2jgHZwK
8pUqquhTYu/RUvIt+bbbcukw3s1gdDRoXrfhH6iYO1Ir6V2cZw5PVs4v3hzJiSCn
6nQkvXEY9wObycuHIWP4GbzxbiOUA7EXjzqJPlzeU0OXgoKzb9AfsHWWptnSzKLA
GHZlzXoz/LgDqz1rJmA0rfTYVvyAiwrANAru0KBulyE+g/BOGOjGj4TcgrG6Lxxd
ubdeK9xwiafoZhkps+CbazmGw1T9HujVK8GaHme5C1m9+8L3op2W9EkmmZ9wqYik
REgiVtr4UCB5RwCH/DNXWnlQyOO6Qnc1pVK4Ed94jihS/2NyHJJxlQQxzVaWJzud
KE6r8PFxgpPHsaXmHOaxyU/eGsuaxzflr901khMhCScHLye7JzAo34DePesBsH1J
0NKVvzNVt9mS1phBDin9pVLDt6SNSZ+rsEqwZtq24bLSTrBD8bRsWlnRQXagqKLF
sQr7lNRvuZKBHzKAA9rdlx6WyA4hZoucgkrfmMfh/0CsjsnBg8Do8prNF/cAi62J
GgTEfimOMPCtS+ANNMxEVCfgvGfSJkJzc10naj1iktviEBSftj8iWWeb51mdIuDU
6QEJAhAz+fwfZiO9rOyB1MDuBLVcVvE7DLqO7ze2lRDlxoFaOp+apDmNcVikr8jo
9pTJ+hzYG17fCf5gVTz+BRoqdYBVjyXU6/PXWGvg0hJjeg0f9dgfOPmJFbUGwEql
eHS+SPNOaZiQcb9JWU0QwC8Sxo4f0LtFSZ9HWXQOr3Akh6mFDQWvz0/seipwYBtE
SmNwG41oMVA1PFH+XqwdSePYUSDdQpzORfoYuB6q182Qj6rL6bwjGGMjOYz4w+bo
E6E8yHLS7tTrsuw4wE2sy36PzN24UQa7uQcw3f4BPQuv4G6L6Jp6XtTBxATNy9r5
P06+YsP3GJ5hD2Zto3peYG7GzUl3lE08cLt0PaioNTQ37YglvPVXfEQERkNUxxXT
K7K6wuPVpXQPRYqeSZ76KwXP2E5vLeFAO4tLDBUxT1NVVmUKU38wO69cluYAA3GU
dI2/Q6lTms8Z2a/Om3QByQtJCSh4ooBeVwqd02sHThSfQOaJ9RjQL66pdO7aYeoZ
NHR1H5Vz+JKcwLYIk4e4HSzqIiNVTBryKm67SYMuYH8tbvJKGSxzXTYoN1tCOON4
N8Bsv7RPPqvkdRwGOUdjhqKk349TNAx27OPvApFgJmbbrzypSLDasUvekuJ9tClx
tTuLP4c4UzVx35GPdFbsM1+vz2by51NgcbiaJ65jcE6IwAqGTdMezOiJRX4dnEC9
jpUSRLGCt52KQpLmNycjbUfO4467FUmdPoexGGTE4oVi8HkPSJIYB8ciEnvlONUK
/t8k7IedwUXo/7FEh7VVtYZKlQSE038EeJCr4vjsYfYNIeNdfeZ/m1NJJgfAo+Xv
2IgoPdpOOmkBud1eYCjJXQphxiKCqQDF5pa0L5ErZBrgrFTSdMNGDncaJwiqqDEP
9b3JNY2jbnai5/pCHwbUkJ8u7pQlLQWbdJ/x1qtAiG1DYzq/9QLDQohmbNL/
=+fsS
-----END PGP MESSAGE-----

Now that the preshared key is encrypted, we can copy-paste it from one SSH session to another.

[root@k8s-1 ~]# cat << EOF > wg-preshared-3-1.asc
-----BEGIN PGP MESSAGE-----

hQIMA52C8UqYRkxrARAAoOqHpKxy4NHnYEI9Hj1NgZkXC6j37vInMlR8/LBhLTpx
TuAFBwNhnEOGAPipfAW+K4/kyi67k/ZtNmDY0Q7GUbP9PGfQqbPRTSAzh2jgHZwK
8pUqquhTYu/RUvIt+bbbcukw3s1gdDRoXrfhH6iYO1Ir6V2cZw5PVs4v3hzJiSCn
6nQkvXEY9wObycuHIWP4GbzxbiOUA7EXjzqJPlzeU0OXgoKzb9AfsHWWptnSzKLA
GHZlzXoz/LgDqz1rJmA0rfTYVvyAiwrANAru0KBulyE+g/BOGOjGj4TcgrG6Lxxd
ubdeK9xwiafoZhkps+CbazmGw1T9HujVK8GaHme5C1m9+8L3op2W9EkmmZ9wqYik
REgiVtr4UCB5RwCH/DNXWnlQyOO6Qnc1pVK4Ed94jihS/2NyHJJxlQQxzVaWJzud
KE6r8PFxgpPHsaXmHOaxyU/eGsuaxzflr901khMhCScHLye7JzAo34DePesBsH1J
0NKVvzNVt9mS1phBDin9pVLDt6SNSZ+rsEqwZtq24bLSTrBD8bRsWlnRQXagqKLF
sQr7lNRvuZKBHzKAA9rdlx6WyA4hZoucgkrfmMfh/0CsjsnBg8Do8prNF/cAi62J
GgTEfimOMPCtS+ANNMxEVCfgvGfSJkJzc10naj1iktviEBSftj8iWWeb51mdIuDU
6QEJAhAz+fwfZiO9rOyB1MDuBLVcVvE7DLqO7ze2lRDlxoFaOp+apDmNcVikr8jo
9pTJ+hzYG17fCf5gVTz+BRoqdYBVjyXU6/PXWGvg0hJjeg0f9dgfOPmJFbUGwEql
eHS+SPNOaZiQcb9JWU0QwC8Sxo4f0LtFSZ9HWXQOr3Akh6mFDQWvz0/seipwYBtE
SmNwG41oMVA1PFH+XqwdSePYUSDdQpzORfoYuB6q182Qj6rL6bwjGGMjOYz4w+bo
E6E8yHLS7tTrsuw4wE2sy36PzN24UQa7uQcw3f4BPQuv4G6L6Jp6XtTBxATNy9r5
P06+YsP3GJ5hD2Zto3peYG7GzUl3lE08cLt0PaioNTQ37YglvPVXfEQERkNUxxXT
K7K6wuPVpXQPRYqeSZ76KwXP2E5vLeFAO4tLDBUxT1NVVmUKU38wO69cluYAA3GU
dI2/Q6lTms8Z2a/Om3QByQtJCSh4ooBeVwqd02sHThSfQOaJ9RjQL66pdO7aYeoZ
NHR1H5Vz+JKcwLYIk4e4HSzqIiNVTBryKm67SYMuYH8tbvJKGSxzXTYoN1tCOON4
N8Bsv7RPPqvkdRwGOUdjhqKk349TNAx27OPvApFgJmbbrzypSLDasUvekuJ9tClx
tTuLP4c4UzVx35GPdFbsM1+vz2by51NgcbiaJ65jcE6IwAqGTdMezOiJRX4dnEC9
jpUSRLGCt52KQpLmNycjbUfO4467FUmdPoexGGTE4oVi8HkPSJIYB8ciEnvlONUK
/t8k7IedwUXo/7FEh7VVtYZKlQSE038EeJCr4vjsYfYNIeNdfeZ/m1NJJgfAo+Xv
2IgoPdpOOmkBud1eYCjJXQphxiKCqQDF5pa0L5ErZBrgrFTSdMNGDncaJwiqqDEP
9b3JNY2jbnai5/pCHwbUkJ8u7pQlLQWbdJ/x1qtAiG1DYzq/9QLDQohmbNL/
=+fsS
-----END PGP MESSAGE-----
EOF
[root@k8s-1 ~]# bash
[root@k8s-1 ~]# umask 077
[root@k8s-1 ~]# gpg --skip-verify --decrypt wg-preshared-3-1.asc > wg-preshared-3-1
[root@k8s-1 ~]# exit

And voilà, it's done! If you've done it correctly, you should have all the secret keys on the correct VMs, and nobody outside could have seen them!

Side note for the nitpickers: yes, I'm aware I signed the export and then skipped verifying the signature. That's because I didn't import the sender's GPG public key on the receiving side, and doing so would make this even longer. Maybe I'll fix it in the future. (There's a sketch of the missing piece below.)
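
If you do want the verification, the missing piece is just the key exchange in the other direction; a sketch (the key id is hypothetical, and the exported block moves with the same copy-paste dance as before):

[root@k8s-3 ~]# gpg --armor --export <k8s-3-key-id>
[root@k8s-1 ~]# cat << EOF > k8s-3.gpg.pub
... (paste the exported block here) ...
EOF
[root@k8s-1 ~]# gpg --import k8s-3.gpg.pub
[root@k8s-1 ~]# gpg --decrypt wg-preshared-3-1.asc > wg-preshared-3-1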

Just for completeness, I'll show you which wireguard files should be in root's home directory on each host.

[root@k8s-1 ~]# ls -la wg*
-rw-------. 1 root root   45 Apr 19 23:57 wg-preshared-1-2
-rw-r--r--. 1 root root 1747 Apr 20 00:45 wg-preshared-1-2.asc
-rw-------. 1 root root   45 Apr 20 00:57 wg-preshared-3-1
-rw-r--r--. 1 root root 1747 Apr 20 00:43 wg-preshared-3-1.asc
-rw-------. 1 root root   45 Apr 19 23:02 wg-priv-key
-rw-r--r--. 1 root root   45 Apr 19 23:04 wg-pub-key

[root@k8s-2 ~]# ls -la wg*
-rw-------. 1 root root   45 Apr 20 01:18 wg-preshared-1-2
-rw-r--r--. 1 root root 1747 Apr 20 00:49 wg-preshared-1-2.asc
-rw-------. 1 root root   45 Apr 19 23:58 wg-preshared-2-3
-rw-r--r--. 1 root root 1747 Apr 20 00:45 wg-preshared-2-3.asc
-rw-------. 1 root root   45 Apr 19 23:07 wg-priv-key
-rw-r--r--. 1 root root   45 Apr 20 01:16 wg-pub-key

[root@k8s-3 ~]# ls -la wg*
-rw-------. 1 root root   45 Apr 20 01:18 wg-preshared-2-3
-rw-r--r--. 1 root root 1747 Apr 20 00:50 wg-preshared-2-3.asc
-rw-------. 1 root root   45 Apr 19 23:58 wg-preshared-3-1
-rw-r--r--. 1 root root 1747 Apr 20 00:37 wg-preshared-3-1.asc
-rw-------. 1 root root   45 Apr 19 23:07 wg-priv-key
-rw-r--r--. 1 root root   45 Apr 20 01:19 wg-pub-key

So, now that we have all the keys, are you ready to finally create the wireguard configuration file?

In the end, it will look like this. I'll just omit the private keys :)

[Interface]
Address = 10.0.0.116/24
PrivateKey = <omitted>
ListenPort = 51820

[Peer]
PublicKey = NVZhm6BUpHKb3x17BkwJlLFQ0vlz3uiTRF0pAIyyGHY=
PresharedKey = <omitted>
Endpoint = 192.168.100.124:51820
AllowedIPs = 10.0.0.124/32
PersistentKeepalive = 25

[Peer]
PublicKey = MzJrquaFbIqncUZOFXWPblNLEYk/kTK1a0jnG/lF5RE=
PresharedKey = <omitted>
Endpoint = 192.168.100.48:51820
AllowedIPs = 10.0.0.48/32
PersistentKeepalive = 25
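
For clarity, here's a sketch of how k8s-2's config should look under the same scheme. k8s-1's public key below is the one you'll see in the wg output further down; I'm not showing k8s-2's real file, so treat the layout, not the exact values, as the point:

[Interface]
Address = 10.0.0.124/24
PrivateKey = <k8s-2's wg-priv-key>
ListenPort = 51820

[Peer]
# k8s-1
PublicKey = ArXgf5eUQ0RqqijbewSZ54aRNNaF38MNLJKGqMxtTjE=
PresharedKey = <contents of wg-preshared-1-2>
Endpoint = 192.168.100.116:51820
AllowedIPs = 10.0.0.116/32
PersistentKeepalive = 25

[Peer]
# k8s-3
PublicKey = MzJrquaFbIqncUZOFXWPblNLEYk/kTK1a0jnG/lF5RE=
PresharedKey = <contents of wg-preshared-2-3>
Endpoint = 192.168.100.48:51820
AllowedIPs = 10.0.0.48/32
PersistentKeepalive = 25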

Now the script:

[root@k8s-1 ~]# bash
[root@k8s-1 ~]# cd /etc/wireguard/
[root@k8s-1 wireguard]# umask 077
[root@k8s-1 wireguard]# cat << EOF > wg0.conf
[Interface]
Address = 10.0.0.116/24
PrivateKey = $(cat ~/wg-priv-key)
ListenPort = 51820

[Peer]
PublicKey = NVZhm6BUpHKb3x17BkwJlLFQ0vlz3uiTRF0pAIyyGHY=
PresharedKey = $(cat ~/wg-preshared-1-2)
Endpoint = 192.168.100.124:51820
AllowedIPs = 10.0.0.124/32
PersistentKeepalive = 25

[Peer]
PublicKey = MzJrquaFbIqncUZOFXWPblNLEYk/kTK1a0jnG/lF5RE=
PresharedKey = $(cat ~/wg-preshared-3-1)
Endpoint = 192.168.100.48:51820
AllowedIPs = 10.0.0.48/32
PersistentKeepalive = 25
EOF
[root@k8s-1 wireguard]# exit
exit
[root@k8s-1 ~]# firewall-cmd --permanent --zone=internal --set-target=ACCEPT
success
[root@k8s-1 ~]# firewall-cmd --permanent --zone=internal --add-interface=wg0
success
[root@k8s-1 ~]# firewall-cmd --permanent --zone=internal --add-source=10.0.0.0/24
success
[root@k8s-1 ~]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.d/99-custom.conf
[root@k8s-1 ~]# sysctl -p /etc/sysctl.d/99-custom.conf
net.ipv4.ip_forward = 1
[root@k8s-1 ~]# systemctl enable --now wg-quick@wg0.service
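
One caveat: the zone changes above used --permanent only, so firewalld won't apply them to the running configuration until it's reloaded. If the internal zone doesn't show up in firewall-cmd --get-active-zones, run:

[root@k8s-1 ~]# firewall-cmd --reload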

That's it! That's the wireguard setup. What's left is to check that it works. You can ping the other hosts, check the wg command and look at the new interface.

[root@k8s-1 ~]# ping -c 5 10.0.0.48
PING 10.0.0.48 (10.0.0.48) 56(84) bytes of data.
64 bytes from 10.0.0.48: icmp_seq=1 ttl=64 time=0.613 ms
64 bytes from 10.0.0.48: icmp_seq=2 ttl=64 time=0.517 ms
64 bytes from 10.0.0.48: icmp_seq=3 ttl=64 time=0.636 ms
64 bytes from 10.0.0.48: icmp_seq=4 ttl=64 time=0.561 ms
64 bytes from 10.0.0.48: icmp_seq=5 ttl=64 time=0.466 ms

--- 10.0.0.48 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4093ms
rtt min/avg/max/mdev = 0.466/0.558/0.636/0.062 ms
[root@k8s-1 ~]# ping -c 5 10.0.0.124
PING 10.0.0.124 (10.0.0.124) 56(84) bytes of data.
64 bytes from 10.0.0.124: icmp_seq=1 ttl=64 time=0.580 ms
64 bytes from 10.0.0.124: icmp_seq=2 ttl=64 time=0.669 ms
64 bytes from 10.0.0.124: icmp_seq=3 ttl=64 time=0.423 ms
64 bytes from 10.0.0.124: icmp_seq=4 ttl=64 time=0.534 ms
64 bytes from 10.0.0.124: icmp_seq=5 ttl=64 time=0.681 ms

--- 10.0.0.124 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4107ms
rtt min/avg/max/mdev = 0.423/0.577/0.681/0.094 ms
[root@k8s-1 ~]# wg
interface: wg0
  public key: ArXgf5eUQ0RqqijbewSZ54aRNNaF38MNLJKGqMxtTjE=
  private key: (hidden)
  listening port: 51820

peer: MzJrquaFbIqncUZOFXWPblNLEYk/kTK1a0jnG/lF5RE=
  preshared key: (hidden)
  endpoint: 192.168.100.48:51820
  allowed ips: 10.0.0.48/32
  latest handshake: 1 minute, 36 seconds ago
  transfer: 1.64 KiB received, 2.42 KiB sent
  persistent keepalive: every 25 seconds

peer: NVZhm6BUpHKb3x17BkwJlLFQ0vlz3uiTRF0pAIyyGHY=
  preshared key: (hidden)
  endpoint: 192.168.100.124:51820
  allowed ips: 10.0.0.124/32
  latest handshake: 1 minute, 57 seconds ago
  transfer: 1.93 KiB received, 2.07 KiB sent
  persistent keepalive: every 25 seconds
[root@k8s-1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:00:c2:25 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.116/24 brd 192.168.100.255 scope global dynamic noprefixroute enp1s0
       valid_lft 2569sec preferred_lft 2569sec
    inet6 fe80::5054:ff:fe00:c225/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none 
    inet 10.0.0.116/24 scope global wg0
       valid_lft forever preferred_lft forever

Now that we know it works, it might be a good idea to create new snapshots, so we won't need to repeat all of this if we mess something up in the future. And if it doesn't work for you, maybe revert to the last snapshot and try again?
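
With libvirt, reverting is a one-liner (assuming the snapshot name from earlier):

$ sudo virsh snapshot-revert k8s-1 pre-wireguard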

Kubernetes finally? No, but its prerequisites

Okay, so that was quite a long read and we haven't even started yet. So can we now start with what we all came here to do? Yes! ... maybe? I don't know, I'll have to read the docs first.

Aha! We'll need to install a container runtime first. That makes sense, since we didn't install Docker or Podman or anything of the sort.

I will not tell you what a container runtime is; there are better sources for that. I will tell you, however, that we will not use Docker (or Docker Engine, as it's called now). We will use containerd. Which is distributed by Docker (the company). Weird. Anyway, let's do this on all the VMs!

[dejfcold@k8s-1 ~]$ sudo yum install -y yum-utils
Last metadata expiration check: 0:05:07 ago on Fri 21 Apr 2023 12:26:24 AM CEST.
Dependencies resolved.
==============================================================================================================================================================================================================================================
 Package                                                   Architecture                                           Version                                                        Repository                                              Size
==============================================================================================================================================================================================================================================
Installing:
 yum-utils                                                 noarch                                                 4.1.0-3.el9                                                    baseos                                                  36 k

Transaction Summary
==============================================================================================================================================================================================================================================
Install  1 Package

Total download size: 36 k
Installed size: 23 k
Downloading Packages:
yum-utils-4.1.0-3.el9.noarch.rpm                                                                                                                                                                              242 kB/s |  36 kB     00:00    
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                                                          89 kB/s |  36 kB     00:00     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                                                                      1/1 
  Installing       : yum-utils-4.1.0-3.el9.noarch                                                                                                                                                                                         1/1 
  Running scriptlet: yum-utils-4.1.0-3.el9.noarch                                                                                                                                                                                         1/1 
  Verifying        : yum-utils-4.1.0-3.el9.noarch                                                                                                                                                                                         1/1 

Installed:
  yum-utils-4.1.0-3.el9.noarch                                                                                                                                                                                                                

Complete!
[dejfcold@k8s-1 ~]$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
[dejfcold@k8s-1 ~]$ sudo yum install -y containerd.io
Docker CE Stable - x86_64                                                                                                                                                                                      74 kB/s |  22 kB     00:00    
Dependencies resolved.
==============================================================================================================================================================================================================================================
 Package                                                      Architecture                                      Version                                                     Repository                                                   Size
==============================================================================================================================================================================================================================================
Installing:
 containerd.io                                                x86_64                                            1.6.20-3.1.el9                                              docker-ce-stable                                             33 M
Installing dependencies:
 container-selinux                                            noarch                                            3:2.189.0-1.el9                                             appstream                                                    47 k

Transaction Summary
==============================================================================================================================================================================================================================================
Install  2 Packages

Total download size: 33 M
Installed size: 114 M
Downloading Packages:
(1/2): container-selinux-2.189.0-1.el9.noarch.rpm                                                                                                                                                             248 kB/s |  47 kB     00:00    
(2/2): containerd.io-1.6.20-3.1.el9.x86_64.rpm                                                                                                                                                                 45 MB/s |  33 MB     00:00    
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                                                          34 MB/s |  33 MB     00:00     
Docker CE Stable - x86_64                                                                                                                                                                                      15 kB/s | 1.6 kB     00:00    
Importing GPG key 0x621E9F35:
 Userid     : "Docker Release (CE rpm) <docker@docker.com>"
 Fingerprint: 060A 61C5 1B55 8A7F 742B 77AA C52F EB6B 621E 9F35
 From       : https://download.docker.com/linux/centos/gpg
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                                                                      1/1 
  Running scriptlet: container-selinux-3:2.189.0-1.el9.noarch                                                                                                                                                                             1/2 
  Installing       : container-selinux-3:2.189.0-1.el9.noarch                                                                                                                                                                             1/2 
  Running scriptlet: container-selinux-3:2.189.0-1.el9.noarch                                                                                                                                                                             1/2 
  Installing       : containerd.io-1.6.20-3.1.el9.x86_64                                                                                                                                                                                  2/2 
  Running scriptlet: containerd.io-1.6.20-3.1.el9.x86_64                                                                                                                                                                                  2/2 
  Running scriptlet: container-selinux-3:2.189.0-1.el9.noarch                                                                                                                                                                             2/2 
  Running scriptlet: containerd.io-1.6.20-3.1.el9.x86_64                                                                                                                                                                                  2/2 
  Verifying        : containerd.io-1.6.20-3.1.el9.x86_64                                                                                                                                                                                  1/2 
  Verifying        : container-selinux-3:2.189.0-1.el9.noarch                                                                                                                                                                             2/2 

Installed:
  container-selinux-3:2.189.0-1.el9.noarch                                                                                 containerd.io-1.6.20-3.1.el9.x86_64                                                                                

Complete!
[dejfcold@k8s-1 ~]$ containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
[dejfcold@k8s-1 ~]$ sudo systemctl daemon-reload
[dejfcold@k8s-1 ~]$ sudo systemctl enable --now containerd
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /usr/lib/systemd/system/containerd.service.

Ok, now that we have a container runtime, let's test that it works.

[dejfcold@k8s-1 ~]$ sudo ctr images pull docker.io/library/hello-world:latest
docker.io/library/hello-world:latest:                                             resolved       |++++++++++++++++++++++++++++++++++++++| 
index-sha256:4e83453afed1b4fa1a3500525091dbfca6ce1e66903fd4c01ff015dbcb1ba33e:    done           |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:f54a58bc1aac5ea1a25d796ae155dc228b3f0e11d046ae276b39c4bf2f13d8c4: done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:feb5d9fea6a5e9606aa995e879d862b825965ba48de054caab5ef356dc6b3412:   done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:2db29710123e3e53a794f2694094b9b4338aa9ee5c40b930cb8063a1be392c54:    done           |++++++++++++++++++++++++++++++++++++++| 
elapsed: 2.5 s                                                                    total:  4.4 Ki (1.8 KiB/s)                                       
unpacking linux/amd64 sha256:4e83453afed1b4fa1a3500525091dbfca6ce1e66903fd4c01ff015dbcb1ba33e...
done: 82.482764ms	
[dejfcold@k8s-1 ~]$ sudo ctr run docker.io/library/hello-world:latest hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Well, would you look at that, it works!

Ok, the container speaks as if it were Docker even though it isn't. What would you expect? It's a hello-world image built for Docker, by Docker.

Now here comes the thing nobody will tell you about. Ok, to be fair, it gets mentioned a lot of times, but nobody tells you how to actually do it. They just tell you that you need the CNI plugins and point you to a GH repo. That repo also won't tell you how to install them, it'll just tell you what it contains. Here's the big secret:

[root@k8s-1 ~]$ sudo dnf install -y containernetworking-plugins
Last metadata expiration check: 0:31:21 ago on Fri 21 Apr 2023 12:28:04 AM CEST.
Dependencies resolved.
==============================================================================================================================================================================================================================================
 Package                                                                Architecture                                      Version                                                  Repository                                            Size
==============================================================================================================================================================================================================================================
Installing:
 containernetworking-plugins                                            x86_64                                            1:1.1.1-3.el9                                            appstream                                            7.5 M

Transaction Summary
==============================================================================================================================================================================================================================================
Install  1 Package

Total download size: 7.5 M
Installed size: 50 M
Downloading Packages:
containernetworking-plugins-1.1.1-3.el9.x86_64.rpm                                                                                                                                                             23 MB/s | 7.5 MB     00:00    
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                                                          13 MB/s | 7.5 MB     00:00     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                                                                      1/1 
  Installing       : containernetworking-plugins-1:1.1.1-3.el9.x86_64                                                                                                                                                                     1/1 
  Running scriptlet: containernetworking-plugins-1:1.1.1-3.el9.x86_64                                                                                                                                                                     1/1 
  Verifying        : containernetworking-plugins-1:1.1.1-3.el9.x86_64                                                                                                                                                                     1/1 

Installed:
  containernetworking-plugins-1:1.1.1-3.el9.x86_64                                                                                                                                                                                            

Complete!

There is, however, an issue filed against Fedora (basically the upstream of all the RHELs): containerd expects the CNI plugins in a different directory than the one this package installs them into.

We can work around that. We can either install the plugins manually instead of using the package manager, symlink them to the expected place, or fix the path in the containerd config file. I'll choose the last option, even though it will break whenever the packaging bug is fixed. If you don't agree with me, you do you, that's fine. I just think we're not in the stone age anymore, where we'd install things manually.
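
For completeness, the symlink route would look something like this (a sketch, untested by me, using the paths we'll confirm in a moment):

# Point the path containerd expects at the packaged plugin directory
sudo mkdir -p /opt/cni
sudo ln -s /usr/libexec/cni /opt/cni/bin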

You'll need to manually update /etc/containerd/config.toml. The diff between the old and the new file should look like this:

@@ -67,11 +67,14 @@
     systemd_cgroup = false
     tolerate_missing_hugetlb_controller = true
     unset_seccomp_profile = ""
 
     [plugins."io.containerd.grpc.v1.cri".cni]
-      bin_dir = "/opt/cni/bin"
+      # Workaround for mismatched CNI location expectation
+      bin_dir = "/usr/libexec/cni/"
+      #bin_dir = "/opt/cni/bin"
+
       conf_dir = "/etc/cni/net.d"
       conf_template = ""
       ip_pref = ""
       max_conf_num = 1

You'll also need to restart the containerd service.

[dejfcold@k8s-1 ~]$ sudo systemctl restart containerd

So what exactly is the issue? Well, containerd expects the CNI plugins to be in /opt/cni/bin while containernetworking-plugins installs them into /usr/libexec/cni/.
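
You can see for yourself where the package actually puts them:

rpm -ql containernetworking-plugins | head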

Let's continue messing around with the containerd config file. According to kubernetes.io, if we're on a system with systemd, which we are, we should use the systemd cgroup driver. So let's do that. Here's the diff:

@@ -117,23 +117,23 @@
           [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
             BinaryName = ""
             CriuImagePath = ""
             CriuPath = ""
             CriuWorkPath = ""
             IoGid = 0
             IoUid = 0
             NoNewKeyring = false
             NoPivotRoot = false
             Root = ""
             ShimCgroup = ""
-            SystemdCgroup = false
+            SystemdCgroup = true
 
       [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
         base_runtime_spec = ""
         cni_conf_dir = ""
         cni_max_conf_num = 0
         container_annotations = []
         pod_annotations = []
         privileged_without_host_devices = false
         runtime_engine = ""
         runtime_path = ""
         runtime_root = ""

And again, restart the containerd service.

[dejfcold@k8s-1 ~]$ sudo systemctl restart containerd
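
By the way, if you'd rather not hand-edit config.toml, a sed one-liner makes the same change (a sketch; it assumes SystemdCgroup appears exactly once, which is true for the default config we generated):

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml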

Now that we have all this, we'll need to disable swap. The docs say we have to; as far as I can tell, it's because the kubelet's memory accounting assumes there is no swap, and by default it refuses to even start while swap is on.

Just run this on all the machines:

[root@k8s-1 ~]# swapoff -a

and also edit the /etc/fstab files like so, so the swap isn't turned on again on restart:

@@ -11,4 +11,6 @@
 #
 /dev/mapper/rl_k8s--1-root /                       xfs     defaults        0 0
 UUID=803aef67-d743-42b9-abc2-9b9ad1814492 /boot                   xfs     defaults        0 0
-/dev/mapper/rl_k8s--1-swap none                    swap    defaults        0 0
+
+# Disable swap for k8s
+# /dev/mapper/rl_k8s--1-swap none                    swap    defaults        0 0
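
The same edit as a one-liner, if you prefer (hedged: it blindly comments out every line mentioning a swap mount, so eyeball the file afterwards):

sudo sed -i '/\sswap\s/s/^/# /' /etc/fstab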

Check that it worked by running the free command and seeing that the Swap line has all zeros:

[root@k8s-1 ~]# free
               total        used        free      shared  buff/cache   available
Mem:         1813324      532636      825588       37936      669728     1280688
Swap:              0           0           0

Now that swap is off, we should verify that the MAC addresses and product_uuids are all different.

Let's check the MAC addresses first.

[root@k8s-1 ~]# cat /sys/class/net/*/address
52:54:00:00:c2:25
00:00:00:00:00:00

[root@k8s-2 ~]# cat /sys/class/net/*/address
52:54:00:a0:24:f0
00:00:00:00:00:00

[root@k8s-3 ~]# cat /sys/class/net/*/address
52:54:00:b6:71:fc
00:00:00:00:00:00

They're all different. Good! (I have no idea what you should do if they are not different). Don't worry about the 00:00:00:00:00:00. That's just the MAC address of the loopback interface.

Let's check the product_uuids now:

[root@k8s-1 ~]# cat /sys/class/dmi/id/product_uuid
8a1efa2b-8ba4-4936-9f3c-406d302d6c47

[root@k8s-2 ~]# cat /sys/class/dmi/id/product_uuid
6c21cea7-1e78-4aae-b9cd-134980cc697f

[root@k8s-3 ~]# cat /sys/class/dmi/id/product_uuid
eab6c861-20b0-4b1c-b1eb-78c48135ee29

They're different as well. Great! (I also have no idea what you should do if they're the same).

We're also supposed to check that the required ports are available. Specifically these: 6443, 2379-2380, 10250, 10259, 10257 (for the control plane) and 10250, 30000-32767 (for worker nodes). This is a bit tricky. Sure, we can check that no process is using the ports yet, but we can't tell whether the firewall will block them. So ... we'll just skip that part and deal with it later. The docs suggest using netcat, but since the nc shipped nowadays comes from nmap and that version doesn't support port ranges, checking them all would take too long. If you're adventurous, you can try some other tool. Like netstat.

This is how netstat looks for me on all machines, so I guess we're safe here:

[root@k8s-1 ~]# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1104/sshd: /usr/sbi 
tcp        0      0 127.0.0.1:46739         0.0.0.0:*               LISTEN      36396/containerd    
tcp6       0      0 :::22                   :::*                    LISTEN      1104/sshd: /usr/sbi 
udp        0      0 127.0.0.1:323           0.0.0.0:*                           739/chronyd         
udp        0      0 0.0.0.0:51820           0.0.0.0:*                           -                   
udp6       0      0 ::1:323                 :::*                                739/chronyd         
udp6       0      0 :::51820
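
If you do want to poke individual ports without installing anything, plain bash can do it. A rough sketch (the /dev/tcp trick is a bash-ism; a failed connection here just means nothing is listening yet, and it says nothing about the firewall):

for p in 6443 2379 2380 10250 10257 10259; do
  (echo > /dev/tcp/127.0.0.1/$p) 2>/dev/null \
    && echo "port $p: something is listening" \
    || echo "port $p: free"
done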

Finally, let's check that the containerd Unix socket exists:

[root@k8s-1 ~]# ls -la /var/run/containerd/containerd.sock
srw-rw----. 1 root root 0 Apr 21 01:59 /var/run/containerd/containerd.sock

[root@k8s-2 ~]# ls -la /var/run/containerd/containerd.sock
srw-rw----. 1 root root 0 Apr 21 01:59 /var/run/containerd/containerd.sock

[root@k8s-3 ~]# ls -la /var/run/containerd/containerd.sock
srw-rw----. 1 root root 0 Apr 21 01:59 /var/run/containerd/containerd.sock
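
An ls only proves the socket file exists. If you want proof the daemon actually answers on it, ctr will fail loudly when it can't connect (a small extra sanity check):

sudo ctr version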

Kubernetes - check system stuff

Ok, so now that we've checked everything, we'll finally install kubeadm. Let's just use the script from the official docs:

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

sudo systemctl enable --now kubelet

Before running it though, let's go over what it does.

  • cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo ... creates a new YUM repository definition.
  • sudo setenforce 0 and sudo sed -i 's/^SELINUX=enforcing... switch SELinux to permissive mode now and on every future boot, which effectively disables enforcement. I don't really like this, but there's too much setting up to do. We can re-enable it later; let's just make sure it works first, then we can play with it.
  • sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes installs the k8s tools we need. The exclude= line in the repo file (bypassed here with --disableexcludes) keeps them from being auto-updated later, because k8s upgrades need special care.
  • sudo systemctl enable --now kubelet starts the kubelet, the node agent, and makes it start on boot.

OK, so let's run it on all the machines!

[root@k8s-1 ~]# cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
[root@k8s-1 ~]# sudo setenforce 0
[root@k8s-1 ~]# sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
[root@k8s-1 ~]# sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Kubernetes                                                                                                                                                                                                    181 kB/s | 175 kB     00:00    
Dependencies resolved.
==============================================================================================================================================================================================================================================
 Package                                                            Architecture                                       Version                                                   Repository                                              Size
==============================================================================================================================================================================================================================================
Installing:
 kubeadm                                                            x86_64                                             1.27.3-0                                                  kubernetes                                              11 M
 kubectl                                                            x86_64                                             1.27.3-0                                                  kubernetes                                              11 M
 kubelet                                                            x86_64                                             1.27.3-0                                                  kubernetes                                              20 M
Upgrading:
 libnetfilter_conntrack                                             x86_64                                             1.0.9-1.el9                                               baseos                                                  58 k
Installing dependencies:
 conntrack-tools                                                    x86_64                                             1.4.7-2.el9                                               appstream                                              221 k
 cri-tools                                                          x86_64                                             1.26.0-0                                                  kubernetes                                             8.6 M
 kubernetes-cni                                                     x86_64                                             1.2.0-0                                                   kubernetes                                              17 M
 libnetfilter_cthelper                                              x86_64                                             1.0.0-22.el9                                              appstream                                               23 k
 libnetfilter_cttimeout                                             x86_64                                             1.0.0-19.el9                                              appstream                                               23 k
 libnetfilter_queue                                                 x86_64                                             1.0.5-1.el9                                               appstream                                               28 k
 socat                                                              x86_64                                             1.7.4.1-5.el9                                             appstream                                              300 k

Transaction Summary
==============================================================================================================================================================================================================================================
Install  10 Packages
Upgrade   1 Package

Total download size: 67 M
Downloading Packages:
(1/11): 693f3c83140151a953a420772ddb9e4b7510df8ae49a79cbd7af48e82e7ad48e-kubectl-1.27.3-0.x86_64.rpm                                                                                                           19 MB/s |  11 MB     00:00    
(2/11): 3f5ba2b53701ac9102ea7c7ab2ca6616a8cd5966591a77577585fde1c434ef74-cri-tools-1.26.0-0.x86_64.rpm                                                                                                         13 MB/s | 8.6 MB     00:00    
(3/11): 413f2a94a2f6981b36bf46ee01ade9638508fcace668d6a57b64e5cfc1731ce2-kubeadm-1.27.3-0.x86_64.rpm                                                                                                           13 MB/s |  11 MB     00:00    
(4/11): conntrack-tools-1.4.7-2.el9.x86_64.rpm                                                                                                                                                                1.1 MB/s | 221 kB     00:00    
(5/11): libnetfilter_cttimeout-1.0.0-19.el9.x86_64.rpm                                                                                                                                                        1.0 MB/s |  23 kB     00:00    
(6/11): socat-1.7.4.1-5.el9.x86_64.rpm                                                                                                                                                                        5.8 MB/s | 300 kB     00:00    
(7/11): libnetfilter_cthelper-1.0.0-22.el9.x86_64.rpm                                                                                                                                                         684 kB/s |  23 kB     00:00    
(8/11): libnetfilter_queue-1.0.5-1.el9.x86_64.rpm                                                                                                                                                             1.5 MB/s |  28 kB     00:00    
(9/11): libnetfilter_conntrack-1.0.9-1.el9.x86_64.rpm                                                                                                                                                         2.5 MB/s |  58 kB     00:00    
(10/11): 0f2a2afd740d476ad77c508847bad1f559afc2425816c1f2ce4432a62dfe0b9d-kubernetes-cni-1.2.0-0.x86_64.rpm                                                                                                    25 MB/s |  17 MB     00:00    
(11/11): 484ddb88e9f2aaff13842f2aa730170f768e66fd4d8a30efb139d7868d224fcf-kubelet-1.27.3-0.x86_64.rpm                                                                                                          25 MB/s |  20 MB     00:00    
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                                                          31 MB/s |  67 MB     00:02     
Kubernetes                                                                                                                                                                                                    6.0 kB/s | 975  B     00:00    
Importing GPG key 0x3E1BA8D5:
 Userid     : "Google Cloud Packages RPM Signing Key <[email protected]>"
 Fingerprint: 3749 E1BA 95A8 6CE0 5454 6ED2 F09C 394C 3E1B A8D5
 From       : https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                                                                      1/1 
  Upgrading        : libnetfilter_conntrack-1.0.9-1.el9.x86_64                                                                                                                                                                           1/12 
  Installing       : libnetfilter_queue-1.0.5-1.el9.x86_64                                                                                                                                                                               2/12 
  Installing       : libnetfilter_cthelper-1.0.0-22.el9.x86_64                                                                                                                                                                           3/12 
  Installing       : socat-1.7.4.1-5.el9.x86_64                                                                                                                                                                                          4/12 
  Installing       : libnetfilter_cttimeout-1.0.0-19.el9.x86_64                                                                                                                                                                          5/12 
  Installing       : conntrack-tools-1.4.7-2.el9.x86_64                                                                                                                                                                                  6/12 
  Running scriptlet: conntrack-tools-1.4.7-2.el9.x86_64                                                                                                                                                                                  6/12 
  Installing       : kubernetes-cni-1.2.0-0.x86_64                                                                                                                                                                                       7/12 
  Installing       : kubelet-1.27.3-0.x86_64                                                                                                                                                                                             8/12 
  Installing       : kubectl-1.27.3-0.x86_64                                                                                                                                                                                             9/12 
  Installing       : cri-tools-1.26.0-0.x86_64                                                                                                                                                                                          10/12 
  Installing       : kubeadm-1.27.3-0.x86_64                                                                                                                                                                                            11/12 
  Cleanup          : libnetfilter_conntrack-1.0.8-5.el9_1.x86_64                                                                                                                                                                        12/12 
  Running scriptlet: libnetfilter_conntrack-1.0.8-5.el9_1.x86_64                                                                                                                                                                        12/12 
  Verifying        : cri-tools-1.26.0-0.x86_64                                                                                                                                                                                           1/12 
  Verifying        : kubeadm-1.27.3-0.x86_64                                                                                                                                                                                             2/12 
  Verifying        : kubectl-1.27.3-0.x86_64                                                                                                                                                                                             3/12 
  Verifying        : kubelet-1.27.3-0.x86_64                                                                                                                                                                                             4/12 
  Verifying        : kubernetes-cni-1.2.0-0.x86_64                                                                                                                                                                                       5/12 
  Verifying        : conntrack-tools-1.4.7-2.el9.x86_64                                                                                                                                                                                  6/12 
  Verifying        : libnetfilter_cttimeout-1.0.0-19.el9.x86_64                                                                                                                                                                          7/12 
  Verifying        : socat-1.7.4.1-5.el9.x86_64                                                                                                                                                                                          8/12 
  Verifying        : libnetfilter_cthelper-1.0.0-22.el9.x86_64                                                                                                                                                                           9/12 
  Verifying        : libnetfilter_queue-1.0.5-1.el9.x86_64                                                                                                                                                                              10/12 
  Verifying        : libnetfilter_conntrack-1.0.9-1.el9.x86_64                                                                                                                                                                          11/12 
  Verifying        : libnetfilter_conntrack-1.0.8-5.el9_1.x86_64                                                                                                                                                                        12/12 

Upgraded:
  libnetfilter_conntrack-1.0.9-1.el9.x86_64                                                                                                                                                                                                   
Installed:
  conntrack-tools-1.4.7-2.el9.x86_64          cri-tools-1.26.0-0.x86_64              kubeadm-1.27.3-0.x86_64     kubectl-1.27.3-0.x86_64  kubelet-1.27.3-0.x86_64  kubernetes-cni-1.2.0-0.x86_64  libnetfilter_cthelper-1.0.0-22.el9.x86_64 
  libnetfilter_cttimeout-1.0.0-19.el9.x86_64  libnetfilter_queue-1.0.5-1.el9.x86_64  socat-1.7.4.1-5.el9.x86_64 

Complete!
[root@k8s-1 ~]# sudo systemctl enable --now kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
[root@k8s-1 ~]# 

Now that that's done, we'll check that the kubelet service correctly fails:

[root@k8s-1 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
    Drop-In: /usr/lib/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: activating (auto-restart) (Result: exit-code) since Fri 2023-07-07 11:06:19 CEST; 2s ago
       Docs: https://kubernetes.io/docs/
    Process: 340461 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
   Main PID: 340461 (code=exited, status=1/FAILURE)
        CPU: 42ms

Jul 07 11:06:19 k8s-1 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 07 11:06:19 k8s-1 systemd[1]: kubelet.service: Failed with result 'exit-code'.

What do you expect? It's not set up yet, so it can't really work!

We'll also need to load the br_netfilter kernel module:

Create a new file /etc/modules-load.d/k8s.conf:

br_netfilter

also run the following so we don't need to reboot:

[root@k8s-1 ~]# modprobe br_netfilter
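
To double-check that the module really loaded:

lsmod | grep br_netfilter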

We also need to allow iptables to see bridged traffic. For that we'll create another file, /etc/sysctl.d/k8s.conf:

net.bridge.bridge-nf-call-ip6tables = 1 
net.bridge.bridge-nf-call-iptables = 1

and then apply it using:

[root@k8s-1 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
* Applying /usr/lib/sysctl.d/50-coredump.conf ...
* Applying /usr/lib/sysctl.d/50-default.conf ...
* Applying /usr/lib/sysctl.d/50-libkcapi-optmem_max.conf ...
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
* Applying /usr/lib/sysctl.d/50-redhat.conf ...
* Applying /etc/sysctl.d/99-custom.conf ...
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
* Applying /etc/sysctl.conf ...
kernel.yama.ptrace_scope = 0
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h
kernel.core_pipe_limit = 16
fs.suid_dumpable = 2
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.enp1s0.rp_filter = 2
net.ipv4.conf.lo.rp_filter = 2
net.ipv4.conf.wg0.rp_filter = 2
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.enp1s0.accept_source_route = 0
net.ipv4.conf.lo.accept_source_route = 0
net.ipv4.conf.wg0.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.enp1s0.promote_secondaries = 1
net.ipv4.conf.lo.promote_secondaries = 1
net.ipv4.conf.wg0.promote_secondaries = 1
net.ipv4.ping_group_range = 0 2147483647
net.core.default_qdisc = fq_codel
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
fs.protected_regular = 1
fs.protected_fifos = 1
net.core.optmem_max = 81920
kernel.pid_max = 4194304
kernel.kptr_restrict = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.enp1s0.rp_filter = 1
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.conf.wg0.rp_filter = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
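
Rather than eyeballing that wall of output, you can spot-check just the two keys we care about:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables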

OK, now we have to decide if we want a highly available k8s cluster or not. But first, let's create VM snapshots in case we mess up.

Nginx load balancer - what?!

I've decided that we'll use an HA "Stacked etcd topology". Though because we're cheap, we'll hack around a bit. Normally we'd need a load balancer for this, and that load balancer should live on a different machine (or a set of machines, so it's still HA) or come from some kind of cloud provider. But because we're "poor" and don't want to use a 3rd party, we will:

  1. on all machines, create an nginx proxy that routes to any of the machines we have.
  2. add an entry to the /etc/hosts file so we can use a DNS name instead of an IP address, which lets us easily switch to a proper load balancer once we have riches.

Let's update the /etc/hosts files first:

@@ -1,2 +1,3 @@
 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
 ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
+127.0.0.1   lb.k8s.dejfcold
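
To confirm the name now resolves the way we expect (getent goes through the same NSS lookup path everything else uses):

getent hosts lb.k8s.dejfcold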

Then install nginx and its stream module on all the machines:

[root@k8s-1 ~]# dnf install nginx nginx-mod-stream -y
Last metadata expiration check: 0:54:47 ago on Fri 07 Jul 2023 11:07:53 AM CEST.
Dependencies resolved.
==============================================================================================================================================================================================================================================
 Package                                                        Architecture                                        Version                                                      Repository                                              Size
==============================================================================================================================================================================================================================================
Installing:
 nginx                                                          x86_64                                              1:1.20.1-14.el9                                              appstream                                               38 k
 nginx-mod-stream                                               x86_64                                              1:1.20.1-14.el9                                              appstream                                               79 k
Installing dependencies:
 nginx-core                                                     x86_64                                              1:1.20.1-14.el9                                              appstream                                              567 k
 nginx-filesystem                                               noarch                                              1:1.20.1-14.el9                                              appstream                                               10 k
 rocky-logos-httpd                                              noarch                                              90.14-1.el9                                                  appstream                                               24 k

Transaction Summary
==============================================================================================================================================================================================================================================
Install  5 Packages

Total download size: 718 k
Installed size: 1.9 M
Downloading Packages:
(1/5): nginx-filesystem-1.20.1-14.el9.noarch.rpm                                                                                                                                                               50 kB/s |  10 kB     00:00    
(2/5): rocky-logos-httpd-90.14-1.el9.noarch.rpm                                                                                                                                                               112 kB/s |  24 kB     00:00    
(3/5): nginx-mod-stream-1.20.1-14.el9.x86_64.rpm                                                                                                                                                              332 kB/s |  79 kB     00:00    
(4/5): nginx-1.20.1-14.el9.x86_64.rpm                                                                                                                                                                         956 kB/s |  38 kB     00:00    
(5/5): nginx-core-1.20.1-14.el9.x86_64.rpm                                                                                                                                                                    7.2 MB/s | 567 kB     00:00    
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                                                         1.2 MB/s | 718 kB     00:00     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                                                                      1/1 
  Running scriptlet: nginx-filesystem-1:1.20.1-14.el9.noarch                                                                                                                                                                              1/5 
  Installing       : nginx-filesystem-1:1.20.1-14.el9.noarch                                                                                                                                                                              1/5 
  Installing       : nginx-core-1:1.20.1-14.el9.x86_64                                                                                                                                                                                    2/5 
  Installing       : rocky-logos-httpd-90.14-1.el9.noarch                                                                                                                                                                                 3/5 
  Installing       : nginx-1:1.20.1-14.el9.x86_64                                                                                                                                                                                         4/5 
  Running scriptlet: nginx-1:1.20.1-14.el9.x86_64                                                                                                                                                                                         4/5 
  Installing       : nginx-mod-stream-1:1.20.1-14.el9.x86_64                                                                                                                                                                              5/5 
  Running scriptlet: nginx-mod-stream-1:1.20.1-14.el9.x86_64                                                                                                                                                                              5/5 
  Verifying        : rocky-logos-httpd-90.14-1.el9.noarch                                                                                                                                                                                 1/5 
  Verifying        : nginx-mod-stream-1:1.20.1-14.el9.x86_64                                                                                                                                                                              2/5 
  Verifying        : nginx-filesystem-1:1.20.1-14.el9.noarch                                                                                                                                                                              3/5 
  Verifying        : nginx-1:1.20.1-14.el9.x86_64                                                                                                                                                                                         4/5 
  Verifying        : nginx-core-1:1.20.1-14.el9.x86_64                                                                                                                                                                                    5/5 

Installed:
  nginx-1:1.20.1-14.el9.x86_64            nginx-core-1:1.20.1-14.el9.x86_64            nginx-filesystem-1:1.20.1-14.el9.noarch            nginx-mod-stream-1:1.20.1-14.el9.x86_64            rocky-logos-httpd-90.14-1.el9.noarch           

Complete!

Let's create a config dir for the stream module:

[root@k8s-1 ~]# mkdir /etc/nginx/stream.conf.d

Also create the file /etc/nginx/stream.conf.d/k8s_cp_lb.conf. It listens on 6440 rather than 6443 because the kube-apiserver itself will take 6443 on these very same machines:

# lb.k8s.dejfcold

upstream k8s_cp {
  least_conn;
  server 10.0.0.116:6443; # k8s-1
  server 10.0.0.124:6443; # k8s-2
  server 10.0.0.48:6443;  # k8s-3
}

server {
  listen 6440;
  listen [::]:6440;

  proxy_pass k8s_cp;
}

And also edit the /etc/nginx/nginx.conf file:

@@ -14,6 +14,15 @@
     worker_connections 1024;
 }
 
+stream {
+    log_format  main  '$remote_addr [$time_local] $protocol '
+                      '$status $bytes_sent $bytes_received '
+                      '$session_time';
+
+    access_log  /var/log/nginx/stream_access.log  main;
+    include /etc/nginx/stream.conf.d/*.conf;
+}
+
 http {
     log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                       '$status $body_bytes_sent "$http_referer" '

Now test the config using:

[root@k8s-1 nginx]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

and start the nginx service:

[root@k8s-1 nginx]# systemctl enable nginx --now
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.

Obviously, do all of this on all the machines!
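
Right now there is nothing behind the proxy, so this will fail, but once the control plane is up later in this guide you can sanity-check the whole load-balancer path with curl (hedged: /version should be served even to anonymous clients on a default kubeadm cluster, as far as I know):

curl -k https://lb.k8s.dejfcold:6440/version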

Firewall config

Remember when I said we can't tell whether the firewall will block them ... we'll just skip that and deal with it later? Yeah, later is now. Through trial and error I figured out what we need to do: open the right ports in our internal zone.

For the master nodes we'll open these ports:

[root@k8s-1 ~]# firewall-cmd --permanent --zone=internal --add-port=6440/tcp
firewall-cmd --permanent --zone=internal --add-service=kube-control-plane
firewall-cmd --permanent --zone=internal --add-port=10259/tcp
firewall-cmd --permanent --zone=internal --add-port=10257/tcp
firewall-cmd --permanent --zone=internal --add-port=8285/udp
firewall-cmd --permanent --zone=internal --add-port=8472/udp
firewall-cmd --zone=internal --add-masquerade --permanent
success
success
success
success
success
success
success

and for the worker nodes, we'll open these ports:

[root@k8s-1 ~]# firewall-cmd --permanent --zone=internal --add-service=kubelet-worker
firewall-cmd --permanent --zone=internal --add-port=8285/udp
firewall-cmd --permanent --zone=internal --add-port=8472/udp
firewall-cmd --zone=internal --add-masquerade --permanent
success
Warning: ALREADY_ENABLED: 8285:udp
success
Warning: ALREADY_ENABLED: 8472:udp
success
Warning: ALREADY_ENABLED: masquerade
success

Since our machines act as both control-plane and worker nodes, we apply both sets of rules. Then finalize it with:

[root@k8s-1 ~]# systemctl reload firewalld
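
To review what actually ended up in the zone after the reload:

firewall-cmd --zone=internal --list-all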

K8S - create the cluster finally!

OK, let's create the k8s cluster. Hold on a minute though: read this chapter first, then execute. Why? Apparently this is a time-sensitive operation! You have 2 hours to complete this chapter, because some secrets (the uploaded certificates) will disappear after that. So I'd recommend creating a VM snapshot again before continuing, in case your washing machine breaks and you have to mop the floor for the next 3 hours or whatever. There is a way to recover even after the 2 hours, but ... it's just easier to revert to the snapshot than to try to fix something.
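
For the record, here's that escape hatch; as the kubeadm output below also mentions, it re-uploads the certs and prints a fresh certificate key:

kubeadm init phase upload-certs --upload-certs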

Before we begin, we need to know which CNI we'll be using. I was thinking of using just Flannel at first, but since the point of this is to have a somewhat production-ready cluster, I've decided on Canal, which is just Flannel and Calico together: Flannel for the networking stuff and Calico for the network policies.

On one machine and one machine only (I'll be using k8s-1), run the following.

Let's fetch the CNI installation manifest first:

[root@k8s-1 ~]# curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/canal.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  238k  100  238k    0     0   594k      0 --:--:-- --:--:-- --:--:--  593k

And if you remember, we were fixing a path issue when installing containerd. Since this manifest installs more CNI plugins, we'll need the same fix in this file too. This took me a while to figure out, so pay attention:

@@ -4897,11 +4897,11 @@
           configMap:
             name: canal-config
         # Used to install CNI.
         - name: cni-bin-dir
           hostPath:
-            path: /opt/cni/bin
+            path: /usr/libexec/cni/
         - name: cni-net-dir
           hostPath:
             path: /etc/cni/net.d
         # Used to access CNI logs.
         - name: cni-log-dir
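
If you'd rather not scroll through a ~5000-line manifest by hand, the same edit as a one-liner (check with grep first that the path only appears where we expect):

grep -n 'path: /opt/cni/bin' canal.yaml
sed -i 's|path: /opt/cni/bin|path: /usr/libexec/cni/|' canal.yaml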

Finally, init the k8s cluster:

[root@k8s-1 ~]# kubeadm init --control-plane-endpoint "lb.k8s.dejfcold:6440" --upload-certs --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.0.0.116
[init] Using Kubernetes version: v1.27.3
[preflight] Running pre-flight checks
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0707 22:10:45.493524  347653 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.k8s.dejfcold] and IPs [10.96.0.1 10.0.0.116]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-1 localhost] and IPs [10.0.0.116 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-1 localhost] and IPs [10.0.0.116 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
W0707 22:10:55.680751  347653 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
W0707 22:10:55.791115  347653 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
W0707 22:10:55.882413  347653 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
W0707 22:10:56.000127  347653 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.520336 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
7097da260fc89e000b0f92937de495c8f3fcf7620102f5844d019bead907370e
[mark-control-plane] Marking the node k8s-1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: ru8oqu.5s0cs5iofc15q9l8
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
W0707 22:11:10.983519  347653 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join lb.k8s.dejfcold:6440 --token ru8oqu.5s0cs5iofc15q9l8 \
	--discovery-token-ca-cert-hash sha256:792d131bc6ff79be81f85f6cc0facbd054819baf82aa1fff2c7a00c7e3fc8d5a \
	--control-plane --certificate-key 7097da260fc89e000b0f92937de495c8f3fcf7620102f5844d019bead907370e

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join lb.k8s.dejfcold:6440 --token ru8oqu.5s0cs5iofc15q9l8 \
	--discovery-token-ca-cert-hash sha256:792d131bc6ff79be81f85f6cc0facbd054819baf82aa1fff2c7a00c7e3fc8d5a

Let's do what it says to let the non-root user use k8s. But I want to continue as root for now, so I'll export the KUBECONFIG as well.

[root@k8s-1 ~]# su dejfcold
[dejfcold@k8s-1 root]$ cd ~
[dejfcold@k8s-1 ~]$ mkdir -p $HOME/.kube
[dejfcold@k8s-1 ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[sudo] password for dejfcold: 
[dejfcold@k8s-1 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
[dejfcold@k8s-1 ~]$ exit
exit
[root@k8s-1 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
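
A quick smoke test at this point (expect the node to show as NotReady until the CNI is applied in the next step):

kubectl get nodes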

Now we'll apply the CNI plugin manifest and hope for the best:

[root@k8s-1 ~]# kubectl apply -f canal.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/canal created
serviceaccount/calico-cni-plugin created
configmap/canal-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/canal-flannel created
clusterrolebinding.rbac.authorization.k8s.io/canal-calico created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
daemonset.apps/canal created
deployment.apps/calico-kube-controllers created

Let's watch how the containers are coming up:

[root@k8s-1 ~]# kubectl get pod -n kube-system -w
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-85578c44bf-m8tlc   1/1     Running   0          34s
canal-499r6                                2/2     Running   0          34s
coredns-5d78c9869d-r75wx                   1/1     Running   0          56s
coredns-5d78c9869d-v2lq4                   1/1     Running   0          56s
etcd-k8s-1                                 1/1     Running   0          64s
kube-apiserver-k8s-1                       1/1     Running   0          64s
kube-controller-manager-k8s-1              1/1     Running   0          64s
kube-proxy-vtsnd                           1/1     Running   0          57s
kube-scheduler-k8s-1                       1/1     Running   0          64s
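
While we're at it, a quick sanity check doesn't hurt: before a CNI plugin is running, the node reports NotReady, so this is a decent way to tell the network layer actually came up:

# The node should have flipped from NotReady to Ready once canal started.
kubectl get nodes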

Now that that's running, we'll run the following on the k8s-2 and k8s-3 machines:

[root@k8s-2 dejfcold]# kubeadm join lb.k8s.dejfcold:6440 --token ru8oqu.5s0cs5iofc15q9l8 \
        --discovery-token-ca-cert-hash sha256:792d131bc6ff79be81f85f6cc0facbd054819baf82aa1fff2c7a00c7e3fc8d5a \
        --control-plane --certificate-key 7097da260fc89e000b0f92937de495c8f3fcf7620102f5844d019bead907370e \
        --apiserver-advertise-address 10.0.0.124
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0707 22:15:09.504786  343670 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[download-certs] Saving the certificates to the folder: "/etc/kubernetes/pki"
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-2 localhost] and IPs [10.0.0.124 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-2 localhost] and IPs [10.0.0.124 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.k8s.dejfcold] and IPs [10.96.0.1 10.0.0.124]
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
W0707 22:15:19.817793  343670 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
W0707 22:15:19.888274  343670 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
W0707 22:15:19.960634  343670 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node k8s-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-2 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@k8s-3 ~]# kubeadm join lb.k8s.dejfcold:6440 --token ru8oqu.5s0cs5iofc15q9l8 \
        --discovery-token-ca-cert-hash sha256:792d131bc6ff79be81f85f6cc0facbd054819baf82aa1fff2c7a00c7e3fc8d5a \
        --control-plane --certificate-key 7097da260fc89e000b0f92937de495c8f3fcf7620102f5844d019bead907370e \
        --apiserver-advertise-address 10.0.0.48
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0707 22:16:41.644159  345594 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[download-certs] Saving the certificates to the folder: "/etc/kubernetes/pki"
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-3 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.k8s.dejfcold] and IPs [10.96.0.1 10.0.0.48]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-3 localhost] and IPs [10.0.0.48 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-3 localhost] and IPs [10.0.0.48 127.0.0.1 ::1]
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
W0707 22:16:51.199275  345594 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
W0707 22:16:51.356901  345594 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
W0707 22:16:51.534014  345594 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node k8s-3 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-3 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

You may notice that the command is the same as one of the commands from the output of kubeadm init on the k8s-1 machine. And you'd be right! Except for one difference: we've added an --apiserver-advertise-address 10.0.0.124 switch for k8s-2 and --apiserver-advertise-address 10.0.0.48 for k8s-3. If you're wondering what those IP addresses are, rest assured they are our wireguard addresses for each server respectively. Without the switch, kubeadm would advertise the address of the default network interface, which isn't the encrypted network we want the nodes talking over.
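
One caveat before you copy-paste: as the kubeadm init output warned, the uploaded certs get deleted after two hours, and the bootstrap token itself expires after 24 hours by default. If you come back later and the join fails, you should be able to mint fresh credentials on k8s-1 like this:

# Re-upload the control-plane certs and print a new certificate key.
kubeadm init phase upload-certs --upload-certs
# Print a fresh worker join command with a new token and discovery hash.
kubeadm token create --print-join-command

For a control-plane join, take the printed command and append the --control-plane, --certificate-key and --apiserver-advertise-address switches as above.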

Now, when you query for all the nodes in the cluster, you should get something like this on all machines:

[root@k8s-1 ~]# kubectl get nodes
NAME    STATUS   ROLES           AGE     VERSION
k8s-1   Ready    control-plane   3m50s   v1.27.3
k8s-2   Ready    control-plane   109s    v1.27.3
k8s-3   Ready    control-plane   24s     v1.27.3
[root@k8s-1 ~]# kubectl get pod -n kube-system -w
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-85578c44bf-m8tlc   1/1     Running   0          4m46s
canal-499r6                                2/2     Running   0          4m46s
canal-6b6gt                                2/2     Running   0          111s
canal-zz7zj                                2/2     Running   0          3m16s
coredns-5d78c9869d-r75wx                   1/1     Running   0          5m8s
coredns-5d78c9869d-v2lq4                   1/1     Running   0          5m8s
etcd-k8s-1                                 1/1     Running   0          5m16s
etcd-k8s-2                                 1/1     Running   0          3m15s
etcd-k8s-3                                 1/1     Running   0          111s
kube-apiserver-k8s-1                       1/1     Running   0          5m16s
kube-apiserver-k8s-2                       1/1     Running   0          3m15s
kube-apiserver-k8s-3                       1/1     Running   0          109s
kube-controller-manager-k8s-1              1/1     Running   0          5m16s
kube-controller-manager-k8s-2              1/1     Running   0          3m15s
kube-controller-manager-k8s-3              1/1     Running   0          109s
kube-proxy-28cf2                           1/1     Running   0          111s
kube-proxy-frkcw                           1/1     Running   0          3m16s
kube-proxy-vtsnd                           1/1     Running   0          5m9s
kube-scheduler-k8s-1                       1/1     Running   0          5m16s
kube-scheduler-k8s-2                       1/1     Running   0          3m15s
kube-scheduler-k8s-3                       1/1     Running   0          109s

Tada! The cluster is running... except we can't really run any workloads on it yet, since all the nodes are control planes and will refuse to schedule regular pods. You may have noticed the [mark-control-plane] Marking the node k8s-1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule] line in the output. No matter! We'll just remove that taint. A word of warning - we shouldn't really do that in production, since it lets ordinary workloads compete with the control plane for resources. But there's a difference between poor production and regular production. Remember, we're a poor production. We can fix that once we generate some shiny metal.

For that we'll run this:

[root@k8s-1 ~]# kubectl taint nodes k8s-1 node-role.kubernetes.io/control-plane-
node/k8s-1 untainted
[root@k8s-1 ~]# kubectl taint nodes k8s-2 node-role.kubernetes.io/control-plane-
node/k8s-2 untainted
[root@k8s-1 ~]# kubectl taint nodes k8s-3 node-role.kubernetes.io/control-plane-
node/k8s-3 untainted

It's OK to run this from just one machine, since they're all joined into one cluster now!
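
If you don't feel like typing the node names, the same thing could have been done in one shot (and the second command shows how to put the taint back later, once we have dedicated workers):

# Remove the control-plane taint from every node at once.
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
# Later, to restore the taint on a node (here k8s-1):
kubectl taint nodes k8s-1 node-role.kubernetes.io/control-plane:NoSchedule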

And finally, one additional fix for each machine in /var/lib/kubelet/kubeadm-flags.env (the diff below is for k8s-1; use each node's own wireguard IP):

@@ -1 +1 @@
-KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.9"
+KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.9 --node-ip=10.0.0.116"
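
If you'd rather not open an editor on three machines, a one-liner like this should do the same edit (the IP shown is k8s-1's wireguard address; substitute each node's own):

# Append --node-ip right before the closing quote of KUBELET_KUBEADM_ARGS.
sed -i 's/"$/ --node-ip=10.0.0.116"/' /var/lib/kubelet/kubeadm-flags.env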

Then restart the kubelets:

[root@k8s-1 ~]# systemctl restart kubelet

Again, the --node-ip is the IP of that machine's wireguard interface: 10.0.0.116 for k8s-1, 10.0.0.124 for k8s-2 and 10.0.0.48 for k8s-3. Afterwards it should look like so:

[root@k8s-1 ~]# kubectl get nodes -o wide
NAME    STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                      KERNEL-VERSION                 CONTAINER-RUNTIME
k8s-1   Ready    control-plane   75m   v1.27.3   10.0.0.116    <none>        Rocky Linux 9.1 (Blue Onyx)   5.14.0-162.23.1.el9_1.x86_64   containerd://1.6.20
k8s-2   Ready    control-plane   71m   v1.27.3   10.0.0.124    <none>        Rocky Linux 9.1 (Blue Onyx)   5.14.0-162.23.1.el9_1.x86_64   containerd://1.6.20
k8s-3   Ready    control-plane   69m   v1.27.3   10.0.0.48     <none>        Rocky Linux 9.1 (Blue Onyx)   5.14.0-162.23.1.el9_1.x86_64   containerd://1.6.20

First deployment

Now that we have everything running, let's try our first deployment! (Maybe wait a bit between each individual command :) )

[root@k8s-1 ~]# kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
deployment.apps/nginx-deployment created
[root@k8s-1 ~]# kubectl rollout status deployment/nginx-deployment
deployment "nginx-deployment" successfully rolled out
[root@k8s-1 ~]# kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           23s
[root@k8s-1 ~]# kubectl get pods --show-labels
NAME                               READY   STATUS    RESTARTS   AGE   LABELS
nginx-deployment-cbdccf466-n5cx6   1/1     Running   0          37s   app=nginx,pod-template-hash=cbdccf466
nginx-deployment-cbdccf466-rxtw9   1/1     Running   0          37s   app=nginx,pod-template-hash=cbdccf466
nginx-deployment-cbdccf466-srbqn   1/1     Running   0          37s   app=nginx,pod-template-hash=cbdccf466

You can also see that the pods landed on different nodes. Your output may differ from mine, but probably won't:

[root@k8s-1 ~]# kubectl get pod -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
nginx-deployment-cbdccf466-n5cx6   1/1     Running   0          10m   10.244.0.5   k8s-1   <none>           <none>
nginx-deployment-cbdccf466-rxtw9   1/1     Running   0          10m   10.244.1.2   k8s-2   <none>           <none>
nginx-deployment-cbdccf466-srbqn   1/1     Running   0          10m   10.244.2.2   k8s-3   <none>           <none>
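
And since pods on two different nodes not being able to talk to each other was my biggest gripe with other tutorials, let's prove that part explicitly. The pod IP below is the one my k8s-2 pod got (see the output above); substitute one from your own output:

# Throwaway pod (arbitrary name) fetching nginx on another node by pod IP.
kubectl run nettest --image=busybox --rm -it --restart=Never -- wget -qO- http://10.244.1.2

If the CNI is doing its job, you'll get the "Welcome to nginx!" page back from a pod running on a different node.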