workshop github guideline: https://github.com/aws-samples/aws-workshop-for-kubernetes
if you want to follow the guideline, make sure you:
- set up Cloud9, following the instructions under the heading: Create AWS Cloud9 Environment
- set up a Kubernetes multi-master cluster, following the instructions under the heading: Create a Kubernetes Cluster with kops and Kubernetes Cluster Context. It is suggested to create a multi-master cluster, as the examples will be easier to follow.
Pod
- a Pod is a group of containers (not necessarily Docker; they can be rkt containers)
- all containers in one pod always run on the same instance
- all containers share the same storage and network
- every pod has its own unique internal IP address
- the pod IP address must not be relied on, as it keeps changing during redeployment/recovery; a Service can be relied on for this purpose as a service discovery method
- all containers in a pod share the same resources (memory/CPU)
- there are two types of resource settings, for both memory and CPU:
    - Limit (upper limit, optional)
        - if exceeded, the pod will be killed and automatically restarted [Burstable]
        - if not defined, the pod can use as much resource as the instance has [BestEffort]
        - if another pod is added to the same instance, the amount this pod can use shrinks automatically
    - Request (how much we need, at least) [Guaranteed]
        - this amount of resource is allocated to this pod only, and is guaranteed to always be available
        - the pod will only be deployed to an instance that has this amount of resource available
- pods support failover by default: k8s will restart a pod when it fails. If it fails again, it will wait x seconds before retrying; if it fails yet again, it will wait longer before the next retry (back-off)
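As a sketch of the Burstable case above, here is a pod whose request is lower than its limit (the name and values are illustrative, not from the workshop):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod-burstable   # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx:latest
    resources:
      requests:          # guaranteed minimum
        memory: "100Mi"
        cpu: 0.5
      limits:            # pod is killed/restarted if memory exceeds this
        memory: "200Mi"
        cpu: 1
```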
Deployment
- a yaml file is suggested to store all parameters for pod deployment
- example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod-guaranteed2
  labels:
    name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
    resources:
      limits:
        memory: "200Mi"
        cpu: 1
      requests:
        memory: "200Mi"
        cpu: 1
    ports:
    - containerPort: 80
```
- k8s supports canary deployment, rolling deployment and also blue/green deployment
- how redeployment works:
    - k8s will create a new ReplicaSet whose replica count is identical to the current ReplicaSet, running the updated code
    - for every new pod that starts and passes its health check, one old pod is removed
    - if a new pod fails its health check, all new pods are rolled back and the terminated old pods are started again
    - so be reminded that there is a period during which both the new and old versions are running together
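The rolling-update behaviour described above can be tuned in a Deployment spec; a minimal sketch (names, image and values are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-deployment        # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: echo-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1              # at most 1 extra new pod at a time
      maxUnavailable: 0        # never drop below the desired replica count
  template:
    metadata:
      labels:
        app: echo-pod
    spec:
      containers:
      - name: echo
        image: echo:latest     # updating this image triggers a rolling update
        readinessProbe:        # the "health check" new pods must pass
          httpGet:
            path: /healthz     # illustrative endpoint
            port: 8080
```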
Service
- a pod has its own IP address, but it changes in case of redeployment/recovery, so we should rely on the service's IP instead
- there are 3 types of IP address involved with a service:
    - Load balancer / ELB IP (for the public)
    - Cluster IP (serves internally)
    - Pod IP
- should make use of the Label feature to identify pods in a single cluster. Example:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: echo-service
spec:
  selector:
    app: echo-pod        # Label
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
  type: LoadBalancer     # <--- ELB type
```
Namespace
- as it is fairly common to run pods developed by different teams / for different projects on the same cluster, Namespaces are needed to avoid pod name collisions
- Namespaces can also be used with NetworkPolicy, which can control whether Namespace A accepts traffic from Namespace B but not from Namespace C
- one common NetworkPolicy implementation is Calico
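A sketch of the Namespace-level traffic control mentioned above: a NetworkPolicy in namespace-a that only accepts traffic from namespace-b (the namespace names and labels are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-namespace-b   # illustrative name
  namespace: namespace-a
spec:
  podSelector: {}                # applies to all pods in namespace-a
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: namespace-b      # assumes namespace-b carries this label
  # traffic from namespace-c (or anywhere else) is not allowed in
```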
Daemon Set / Monitoring
- a Daemon Set ensures that a copy of a pod runs on a selected set of nodes
- as new nodes are added to the cluster, pods are started on them; as nodes are removed, the pods are removed through garbage collection
- one common use case is to create a daemon set pod for monitoring node health (as we don't need multiple pods per node to monitor its health)
- Prometheus is the most famous monitoring tool for k8s - see this tutorial for more info
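The node-monitoring use case above might look like this as a DaemonSet (the names are illustrative; the image is a common Prometheus node-metrics exporter):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-monitor             # illustrative name
spec:
  selector:
    matchLabels:
      app: node-monitor
  template:
    metadata:
      labels:
        app: node-monitor
    spec:
      nodeSelector:              # optional: restrict to a selected set of nodes
        monitoring: "true"       # assumes nodes carry this label
      containers:
      - name: node-exporter
        image: prom/node-exporter:latest   # exposes node metrics for Prometheus
```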
Logging
- there are multiple ways to achieve logging; the most common patterns are the following:
    - Sidecar
        - a helper container on EVERY pod to collect logs from the other containers in the same pod
    - ELK
        - E stands for Elasticsearch
        - L stands for Logstash
        - K stands for Kibana (UI for logs)
        - for more info, see here
    - EFK
        - E and K same as in ELK
        - F stands for Fluentd
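A sketch of the sidecar pattern above: one pod where a Fluentd helper container reads logs that the app container writes to a shared volume (the names, app image and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar     # illustrative name
spec:
  volumes:
  - name: app-logs
    emptyDir: {}                 # shared between the two containers
  containers:
  - name: app
    image: my-app:latest         # illustrative app image
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app    # the app writes its logs here
  - name: log-collector          # the sidecar
    image: fluent/fluentd:latest
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app    # reads what the app writes
```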
ResourceQuota
- a yaml-defined object that caps resource usage, for both Limit and Request, across all pods in a namespace
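A minimal ResourceQuota sketch capping the total requests and limits of all pods in one namespace (the names and values are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota          # illustrative name
  namespace: team-a         # quota applies to all pods in this namespace
spec:
  hard:
    requests.cpu: "4"       # sum of all pod CPU requests must stay under 4 cores
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```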