Prevent normal workloads from being scheduled on the control-plane node:
```
microk8s kubectl taint nodes {{ your-control-plane-node }} node-role.kubernetes.io/control-plane=:NoSchedule
```

What this does:
- Blocks new pods (without a matching toleration) from being scheduled there
- Does NOT affect system pods
- Does NOT remove existing pods

You can verify the taint with `microk8s kubectl describe node {{ your-control-plane-node }} | grep Taints`.
If application pods are already running on the control-plane node and you want to move them:
```
microk8s kubectl drain {{ your-control-plane-node }} --ignore-daemonsets --delete-emptydir-data
```

Options explained:
- `--ignore-daemonsets`: keeps DaemonSet pods (like ingress) in place
- `--delete-emptydir-data`: removes ephemeral emptyDir storage data
- Evicted pods will be rescheduled onto worker nodes (if replicas exist)
`drain` automatically cordons the node (marks it as unschedulable). This prevents ANY new pods from being scheduled there, even those with tolerations.
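The precedence above can be sketched in Python. This is a simplified illustration of the filtering order, not the real kube-scheduler code: the `unschedulable` flag set by cordoning filters a node out before taints and tolerations are even compared.

```python
# Simplified sketch of how a node is filtered during scheduling.
def pod_tolerates(taints, tolerations):
    """A pod fits taint-wise if every NoSchedule taint is tolerated."""
    def tolerated(taint):
        return any(
            t.get("key") == taint["key"]
            and t.get("operator") == "Exists"
            and t.get("effect") in (None, taint["effect"])
            for t in tolerations
        )
    return all(tolerated(t) for t in taints if t["effect"] == "NoSchedule")

def schedulable(node, pod):
    # Cordoning sets spec.unschedulable, which rules the node out
    # before tolerations are considered at all.
    if node.get("unschedulable"):
        return False
    return pod_tolerates(node.get("taints", []), pod.get("tolerations", []))

control_plane = {
    "unschedulable": True,  # cordoned by `kubectl drain`
    "taints": [{"key": "node-role.kubernetes.io/control-plane",
                "effect": "NoSchedule"}],
}
pod = {"tolerations": [{"key": "node-role.kubernetes.io/control-plane",
                        "operator": "Exists", "effect": "NoSchedule"}]}

print(schedulable(control_plane, pod))  # False: the cordon wins
control_plane["unschedulable"] = False  # after `kubectl uncordon`
print(schedulable(control_plane, pod))  # True: the toleration matches
```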
After draining, if you still want specific pods to run there, you MUST uncordon the node:
```
microk8s kubectl uncordon {{ your-control-plane-node }}
```

Without this step, you may see errors like:

```
0/10 nodes are available: 1 node(s) were unschedulable
```

This error usually means the node is still cordoned (a cordoned node shows `SchedulingDisabled` in the STATUS column of `kubectl get nodes`).
If certain workloads must run on the control-plane node (e.g., hostPath storage or special system workloads), add both:
- nodeAffinity (forces pod onto that node)
- toleration (allows pod to bypass the taint)
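A minimal nodeAffinity stanza might look like the following. Note this assumes the node carries the `node-role.kubernetes.io/control-plane` label; if your MicroK8s node lacks it, label the node first or match on `kubernetes.io/hostname` instead:

```yaml
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          # Assumes this label is present on the control-plane node
          - key: node-role.kubernetes.io/control-plane
            operator: Exists
```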
Example:

```yaml
spec:
  # Allow pod to be scheduled even if node has NoSchedule taint
  tolerations:
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"
```

If you want the control-plane node to behave like a normal worker again:
```
microk8s kubectl taint nodes {{ your-control-plane-node }} node-role.kubernetes.io/control-plane:NoSchedule-
```