Docker/Kubernetes workshop
kubectl
In this module, we introduce `kubectl` to access a remote Kubernetes cluster.
You will learn about:
- connecting `kubectl` to a remote cluster hosted on GCP or Azure
- the main `kubectl` commands and output options
- the core components that make a Kubernetes cluster work
Depending on where the Kubernetes cluster for the workshop has been set up, follow the instructions in one of the sections below:
Google Kubernetes Engine (GKE)
First, install or update the Google Cloud SDK:
```bash
# Windows
choco upgrade gcloudsdk
# MacOS
brew install --cask google-cloud-sdk
```
Then authenticate with Google Cloud and fetch credentials for the workshop cluster:
```bash
gcloud auth login
gcloud auth application-default login
gcloud container clusters get-credentials k8s-cluster --zone australia-southeast1-a --project rotcaus
```
Azure Kubernetes Service (AKS)
First, install or update the Azure CLI:
```bash
# Windows
choco upgrade azure-cli
# MacOS
brew install azure-cli
```
Then log in and fetch credentials for the workshop cluster:
```bash
az login
az aks get-credentials --name rotcaus-aks --resource-group rotcaus
```
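Whichever provider you used, the `get-credentials` command merges an entry into your local kubeconfig. You can confirm that `kubectl` now points at the workshop cluster:
```bash
# Show the context kubectl is currently using
kubectl config current-context

# List all contexts known to your kubeconfig
kubectl config get-contexts
```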
No code is provided for this module; we will only use terminal commands to access the remote cluster!
When running Kubernetes locally, it typically runs inside a Linux VM. This VM is not directly accessible.
When running Kubernetes on a cloud provider such as AWS, Azure or GCP, the Kubernetes API is hosted by the provider.
You can see the address for the Kubernetes API server using:
```bash
kubectl cluster-info
```
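`kubectl` reads the API server address from your local kubeconfig file. If you are curious, you can inspect the entry for the current context (the `--minify` flag restricts the output to the active context):
```bash
# Show only the kubeconfig entry for the current context
kubectl config view --minify
```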
In order to access services available within the cluster, you can proxy a port to your local host machine with this command:
```bash
kubectl proxy --port=8080
```
You can then access the Kubernetes API at http://localhost:8080/
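While the proxy is running, you can query the Kubernetes REST API directly from another terminal; `/version` and the pod-list path below are standard Kubernetes API endpoints:
```bash
# Ask the API server for its version through the local proxy
curl http://localhost:8080/version

# List the pods in the kube-system namespace via the REST API
curl http://localhost:8080/api/v1/namespaces/kube-system/pods
```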
Press Ctrl+C to stop the proxy.
In this exercise, we will look at the core components of the cluster. This will give us an opportunity to explore the main commands and options of `kubectl`.
First, list all the nodes of the cluster:
```bash
kubectl get nodes
```
The output should be similar to this:
```console
NAME                                              STATUS   ROLES    AGE     VERSION
gke-k8s-cluster-default-node-pool-6681a730-d257   Ready    <none>   3d21h   v1.15.12-gke.6001
gke-k8s-cluster-default-node-pool-e61f3b84-f9h6   Ready    <none>   3d21h   v1.15.12-gke.6001
gke-k8s-cluster-default-node-pool-ffbb1244-ghkc   Ready    <none>   3d21h   v1.15.12-gke.6001
```
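For a human-readable summary of a single node (its capacity, conditions and the pods running on it), use `describe` with one of the node names from your own output:
```bash
# Describe a node in detail; substitute a name from `kubectl get nodes`
kubectl describe node gke-k8s-cluster-default-node-pool-6681a730-d257
```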
Any Kubernetes object can be inspected via the `kubectl` command line:
```bash
kubectl get nodes --output yaml
```
`--output yaml` specifies that `kubectl` should print the full object definition, rather than the summary list seen above.
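The `--output` flag accepts several other formats that are handy for scripting; the field paths below are picked just for illustration:
```bash
# Tabular output with columns of your choosing
kubectl get nodes --output custom-columns=NAME:.metadata.name,VERSION:.status.nodeInfo.kubeletVersion

# Extract a single field with a JSONPath expression
kubectl get nodes --output jsonpath='{.items[*].status.nodeInfo.kubeletVersion}'
```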
There may be a lot of output, but look near the top of the definition: there is a `spec:` section.
Wait, what is the `spec`?
The spec describes how we want this object to be. It is the definition you supply when creating Kubernetes objects, and it is what is typically stored in version control.
If the spec is updated on an object, Kubernetes will reconcile the current state with the spec and take a series of actions (through the controller) to converge to this declared state.
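`kubectl` can print the documented schema of any object's spec straight from the API server, which is a handy way to explore what a spec can contain:
```bash
# Show the documented fields of the Node spec
kubectl explain node.spec

# Drill down into a specific field
kubectl explain node.spec.taints
```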
If you look at the `images` entry in this output (under the node's `status:`), you will also find all the Docker images that are known to the node. These have been downloaded, stored on the node, and can now be used to create or recreate containers and pods on the node (note that in a multi-node environment, different nodes may have different images cached).
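To pull just that list out of the YAML, a JSONPath query works well; this sketch takes the first node and prints the first name of each cached image:
```bash
# List the cached image names on the first node
kubectl get nodes --output jsonpath='{range .items[0].status.images[*]}{.names[0]}{"\n"}{end}'
```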
Now that the cluster is running, which of the core Kubernetes components are running?
To find them, list the pods in all namespaces:
```bash
kubectl get pods --all-namespaces
```
The output should be similar to this:
```console
NAMESPACE     NAME                                                         READY   STATUS    RESTARTS   AGE
kube-system   event-exporter-v0.3.0-74bf544f8b-6fsnt                       2/2     Running   0          3d21h
kube-system   fluentd-gcp-scaler-dd489f778-vzthj                           1/1     Running   0          3d21h
kube-system   fluentd-gcp-v3.1.1-6sdr8                                     2/2     Running   0          3d21h
kube-system   fluentd-gcp-v3.1.1-ltmb5                                     2/2     Running   0          3d21h
kube-system   fluentd-gcp-v3.1.1-rwhlv                                     2/2     Running   0          3d21h
kube-system   heapster-gke-6984f5967b-j7t42                                3/3     Running   0          3d21h
kube-system   kube-dns-5dbbd9cc58-2bc7t                                    4/4     Running   0          3d21h
kube-system   kube-dns-5dbbd9cc58-xv6gn                                    4/4     Running   0          3d21h
kube-system   kube-dns-autoscaler-6b7f784798-xj8hz                         1/1     Running   0          3d21h
kube-system   kube-proxy-gke-k8s-cluster-default-node-pool-6681a730-d257   1/1     Running   0          3d21h
kube-system   kube-proxy-gke-k8s-cluster-default-node-pool-e61f3b84-f9h6   1/1     Running   0          3d21h
kube-system   kube-proxy-gke-k8s-cluster-default-node-pool-ffbb1244-ghkc   1/1     Running   0          3d21h
kube-system   l7-default-backend-84c9fcfbb-l4qll                           1/1     Running   0          3d21h
kube-system   metrics-server-v0.3.3-6d96fcc55-frsgt                        2/2     Running   0          3d21h
kube-system   prometheus-to-sd-7j246                                       2/2     Running   0          3d21h
kube-system   prometheus-to-sd-b7ktx                                       2/2     Running   0          3d21h
kube-system   prometheus-to-sd-xmzjm                                       2/2     Running   0          3d21h
kube-system   stackdriver-metadata-agent-cluster-level-c678bc98d-j5sbz     2/2     Running   0          3d21h
```
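Adding `--output wide` to a pod listing also shows which node each pod was scheduled on, which makes the per-node components easy to spot:
```bash
# Show kube-system pods together with the node they run on
kubectl get pods --namespace kube-system --output wide
```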
Some components that are typically found in clusters are:
- `coredns`/`kube-dns`: the default DNS used for service discovery in Kubernetes clusters.
- `kube-apiserver`: the Kubernetes API server, responsible for the communication between all the components in the cluster.
- `etcd`: the Kubernetes “database”, storing all object definitions.
- `kube-controller-manager`: handles node failures, replicates components, maintains the correct number of pods, etc.
- `kube-scheduler`: decides which pod to assign to which node, based on affinity rules / selectors / hardware requirements.

Exactly which of these components are visible depends on how the cluster is configured (for example, a managed cluster in GCP won't show `etcd` pods).
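On clusters that expose them, you can also query the health of these control-plane components directly (this subcommand is deprecated in recent Kubernetes releases, but works on the cluster versions used in this workshop):
```bash
# Check the health of the scheduler, controller manager and etcd
kubectl get componentstatuses
```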
Finally, there is one component that is typically only present on worker nodes:
- `kube-proxy`: load-balances traffic between applications within a node.

These pods are what Kubernetes needs in order to operate, together with additional cluster-wide pods that provide extra functionality such as log forwarding and metrics.
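You can confirm that `kube-proxy` runs once per node by listing the pods scheduled on a single node; substitute a node name from your earlier output:
```bash
# List every pod running on a given node
kubectl get pods --all-namespaces --field-selector spec.nodeName=gke-k8s-cluster-default-node-pool-6681a730-d257
```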
In the next exercise we will create our own custom Pods to run an application.
Figure: Kubernetes components overview, showing the API server, scheduler, controller manager, etcd datastore, kubelet, kube-proxy and container runtime.