Master Node

Before initializing the cluster, kubeadm pulls the control-plane images from Google’s image registry. If you cannot reach it directly, please refer to “Preparing Kubernetes Cluster Environment”.
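You can also pull these images ahead of time, which doubles as a quick connectivity check; a minimal example using the same v1.25.2 release targeted below:

# List the images kubeadm needs for this release
kubeadm config images list --kubernetes-version v1.25.2

# Pull them in advance so 'kubeadm init' does not stall on downloads
sudo kubeadm config images pull --kubernetes-version v1.25.2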

sudo kubeadm init \
    --apiserver-advertise-address 10.0.8.81 \
    --apiserver-bind-port 6443 \
    --control-plane-endpoint cluster-endpoint \
    --kubernetes-version v1.25.2 \
    --service-cidr 10.96.0.0/16 \
    --pod-network-cidr 192.168.0.0/16
[init] Using Kubernetes version: v1.25.2
[preflight] Running pre-flight checks
	[WARNING HTTPProxy]: Connection to "https://10.0.8.81" uses proxy "http://10.0.8.18:8234". If that is not intended, adjust your proxy settings
	[WARNING HTTPProxyCIDR]: connection to "192.168.0.0/16" uses proxy "http://10.0.8.18:8234". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
	[WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [cluster-endpoint kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local s01] and IPs [10.96.0.1 10.0.8.81]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost s01] and IPs [10.0.8.81 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost s01] and IPs [10.0.8.81 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 27.503677 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node s01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node s01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: aog7zw.pigdvq7fzg1e4y5w
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join cluster-endpoint:6443 --token aog7zw.pigdvq7fzg1e4y5w \
	--discovery-token-ca-cert-hash sha256:09149deed5c5697105c73c64168dd5d2e2e92fc565e94c04a61792f8012e514c \
	--control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join cluster-endpoint:6443 --token aog7zw.pigdvq7fzg1e4y5w \
	--discovery-token-ca-cert-hash sha256:09149deed5c5697105c73c64168dd5d2e2e92fc565e94c04a61792f8012e514c

Remember to save the output after successfully initializing the cluster, as the token will be needed later when joining nodes. By default, the join token is valid for 24 hours; after it expires, you can regenerate it with kubeadm token create --print-join-command.
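If the output is lost, both values can be regenerated on the Master node; the hash command below follows the approach shown in the kubeadm documentation:

# Create a new token and print a complete worker join command
kubeadm token create --print-join-command

# Recompute the --discovery-token-ca-cert-hash from the cluster CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
    openssl rsa -pubin -outform der 2>/dev/null | \
    openssl dgst -sha256 -hex | sed 's/^.* //'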

Parameter explanation:

  • --apiserver-advertise-address 10.0.8.81: The IP address the API server will advertise it is listening on. If not set, the default network interface is used.
  • --apiserver-bind-port 6443: The port for the API server to bind to (default: 6443).
  • --control-plane-endpoint cluster-endpoint: A stable IP address or DNS name for the control plane.
  • --kubernetes-version v1.25.2: The specific Kubernetes version for the control plane.
  • --service-cidr 10.96.0.0/16: An alternative range of IP addresses for Service VIPs (kubeadm's default is 10.96.0.0/12).
  • --pod-network-cidr 192.168.0.0/16: The range of IP addresses for the Pod network. If set, the control plane automatically allocates a CIDR to every node; 192.168.0.0/16 is used here because it matches Calico's default IP pool.

The service-cidr and pod-network-cidr ranges must not overlap with each other, and neither may overlap with the network that apiserver-advertise-address belongs to; otherwise you may run into hard-to-diagnose problems.

As the two proxy warnings in the pre-flight output above show, I had set up a proxy in the virtual machine's SSH session (installing kubeadm, kubectl, and kubelet requires access to Google’s package sources) with the following command:

export https_proxy=http://10.0.8.18:8234;export http_proxy=http://10.0.8.18:8234;export all_proxy=socks5://10.0.8.18:8235;export no_proxy=cluster-master,cluster-endpoint,10.96.0.1,localhost,127.0.0.1,::1

These settings are what caused the warnings during the pre-flight checks; they do not affect cluster initialization and can be ignored. Even so, it is recommended to unset the temporary environment variables before initializing:

unset https_proxy http_proxy all_proxy no_proxy

Copy Kubernetes Configuration File

After the cluster initializes successfully, execute the commands shown in the output under “To start using your cluster”:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

At this point, if you check the Pods started by Kubernetes, the CoreDNS Pods will be stuck in the Pending state; a CNI plugin must be installed before they can start normally.
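You can confirm this from the Master node; the coredns Pods remain Pending until a CNI plugin is running:

kubectl get pods -n kube-system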

Install Calico CNI

The CNI implementation I’m using here is Calico, currently at version v3.24.1 (see the official documentation for more information). First, install the Tigera operator:

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/tigera-operator.yaml

If you initialized the cluster with pod-network-cidr set to Calico’s default 192.168.0.0/16 segment, you can apply the default custom resources directly:

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/custom-resources.yaml

Otherwise, you need to download the custom-resources.yaml configuration file and manually replace the CIDR settings inside:

curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/custom-resources.yaml
cat custom-resources.yaml
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 192.168.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

---

# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
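Instead of editing the file by hand, the replacement can also be scripted; this is just a sketch, assuming a hypothetical pod-network-cidr of 10.244.0.0/16:

# Swap Calico's default pool for the pod-network-cidr used at init time (hypothetical value)
sed -i 's|192.168.0.0/16|10.244.0.0/16|' custom-resources.yaml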

Once the cidr value in the ipPools section matches the pod-network-cidr you used when initializing the cluster, execute:

kubectl create -f custom-resources.yaml

If you previously configured a proxy for containerd, remember to remove those proxy settings after running the command above; otherwise the Calico plugin will be unable to fetch cluster information from the Kubernetes Service API, and its Pods will fail to start.
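A minimal sketch of removing the proxy, assuming it was configured through a systemd drop-in file for containerd (the drop-in path below is only an illustration of that setup):

# Remove the hypothetical proxy drop-in and restart containerd
sudo rm /etc/systemd/system/containerd.service.d/http-proxy.conf
sudo systemctl daemon-reload
sudo systemctl restart containerd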

Check Cluster Pod Running Status

After installing Calico, use the following command to check the running status of the cluster’s Pods (updates every two seconds):

watch kubectl get pods -A

Every 2.0s: kubectl get pods -A                                     s01: Mon Oct  3 03:31:39 2022

NAMESPACE          NAME                                       READY   STATUS    RESTARTS      AGE
calico-apiserver   calico-apiserver-5c5d497dbc-cxb5q          1/1     Running   1 (10m ago)   14h
calico-apiserver   calico-apiserver-5c5d497dbc-gbh6m          1/1     Running   1 (10m ago)   14h
calico-system      calico-kube-controllers-85666c5b94-h7gnj   1/1     Running   1 (10m ago)   14h
calico-system      calico-node-6qrfk                          1/1     Running   1 (10m ago)   14h
calico-system      calico-typha-b84cfb796-ctzx2               1/1     Running   2 (10m ago)   14h
calico-system      calico-typha-b84cfb796-w9t7k               1/1     Running   1 (10m ago)   14h
calico-system      csi-node-driver-c94pg                      2/2     Running   2 (10m ago)   14h
kube-system        coredns-565d847f94-fm848                   1/1     Running   1 (10m ago)   15h
kube-system        coredns-565d847f94-tbhr2                   1/1     Running   1 (10m ago)   15h
kube-system        etcd-s01                                   1/1     Running   1 (10m ago)   15h
kube-system        kube-apiserver-s01                         1/1     Running   1 (10m ago)   15h
kube-system        kube-controller-manager-s01                1/1     Running   1 (10m ago)   15h
kube-system        kube-proxy-kmvzb                           1/1     Running   1 (10m ago)   15h
kube-system        kube-scheduler-s01                         1/1     Running   1 (10m ago)   15h
tigera-operator    tigera-operator-6675dc47f4-7w8gm           1/1     Running   1 (10m ago)   14h

When all Pods have a STATUS of Running, it means that the Master node of the Kubernetes cluster has been initialized and started successfully.

Join Worker Nodes

The worker nodes are prepared in essentially the same way as the Master node; please refer to “Preparing Kubernetes Cluster Environment”.

After installing containerd and the kubeadm CLI, run the join command that was printed when the Master node was initialized:

kubeadm join cluster-endpoint:6443 --token aog7zw.pigdvq7fzg1e4y5w \
    --discovery-token-ca-cert-hash sha256:09149deed5c5697105c73c64168dd5d2e2e92fc565e94c04a61792f8012e514c

At this point, the node pulls the required images and starts its Pods. You can check their status from the Master node; once all Pods are in the Running state, the node has joined successfully:

watch kubectl get pods -A

Every 2.0s: kubectl get pods -A                                     s01: Mon Oct  3 03:31:39 2022

NAMESPACE          NAME                                       READY   STATUS    RESTARTS      AGE
calico-apiserver   calico-apiserver-5c5d497dbc-cxb5q          1/1     Running   1 (26m ago)   15h
calico-apiserver   calico-apiserver-5c5d497dbc-gbh6m          1/1     Running   1 (26m ago)   15h
calico-system      calico-kube-controllers-85666c5b94-h7gnj   1/1     Running   1 (26m ago)   15h
calico-system      calico-node-6qrfk                          1/1     Running   1 (26m ago)   14h
calico-system      calico-node-cnwdl                          1/1     Running   1 (26m ago)   15h
calico-system      calico-node-w8p2h                          1/1     Running   1 (26m ago)   14h
calico-system      calico-typha-b84cfb796-ctzx2               1/1     Running   2 (26m ago)   14h
calico-system      calico-typha-b84cfb796-w9t7k               1/1     Running   1 (26m ago)   15h
calico-system      csi-node-driver-c94pg                      2/2     Running   2 (26m ago)   15h
calico-system      csi-node-driver-jmmg2                      2/2     Running   2 (26m ago)   14h
calico-system      csi-node-driver-qmvpx                      2/2     Running   2 (26m ago)   14h
kube-system        coredns-565d847f94-fm848                   1/1     Running   1 (26m ago)   15h
kube-system        coredns-565d847f94-tbhr2                   1/1     Running   1 (26m ago)   15h
kube-system        etcd-s01                                   1/1     Running   1 (26m ago)   15h
kube-system        kube-apiserver-s01                         1/1     Running   1 (26m ago)   15h
kube-system        kube-controller-manager-s01                1/1     Running   1 (26m ago)   15h
kube-system        kube-proxy-kmvzb                           1/1     Running   1 (26m ago)   15h
kube-system        kube-proxy-w6swd                           1/1     Running   1 (26m ago)   14h
kube-system        kube-proxy-x7z96                           1/1     Running   1 (26m ago)   14h
kube-system        kube-scheduler-s01                         1/1     Running   1 (26m ago)   15h
tigera-operator    tigera-operator-6675dc47f4-7w8gm           1/1     Running   1 (26m ago)   15h

As you can see above, I have added two more nodes, and this shows all the running Pods. Each joined node runs its own calico-node, csi-node-driver, and kube-proxy Pod.
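To confirm which node each of these per-node Pods is scheduled on, you can add -o wide to the Pod listing:

kubectl get pods -n calico-system -o wide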

kubectl get nodes -o wide
NAME   STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
s01    Ready    control-plane   15h   v1.25.2   10.0.8.81     <none>        Ubuntu 22.04.1 LTS   5.15.0-48-generic   containerd://1.6.8
s02    Ready    <none>          14h   v1.25.2   10.0.8.82     <none>        Ubuntu 22.04.1 LTS   5.15.0-48-generic   containerd://1.6.8
s03    Ready    <none>          14h   v1.25.2   10.0.8.83     <none>        Ubuntu 22.04.1 LTS   5.15.0-48-generic   containerd://1.6.8

At this point, the entire Kubernetes cluster installation is complete. Next up is installing the Dashboard…

I hope this is helpful, Happy hacking…