This section walks through initializing a Kubernetes cluster with kubeadm. Initialization is what ties the Master and Node servers together into a single Kubernetes cluster.

The steps here assume a cluster of one Kubernetes Master and two Kubernetes Nodes.

Work on the Master server

Edit the hosts file ( /etc/hosts ) so that the Master and the Node servers can resolve each other's names.

[root@kube-master ~]# vi /etc/hosts
(add the following entries)
# kubernetes
192.168.25.100          kube-master
192.168.25.101          kube-work1      
192.168.25.102          kube-work2      
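
To make sure the new entries actually resolve, you can query them with getent, which reads /etc/hosts through the standard name-service switch:

[root@kube-master ~]# getent hosts kube-work1
192.168.25.101  kube-work1
[root@kube-master ~]# 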

Run the command that initializes the Kubernetes cluster on the Master. Here the --pod-network-cidr option is set to 10.244.0.0/16 so that Flannel can be used for the in-cluster (Pod) network; Flannel's default manifest expects exactly this CIDR, so it is effectively fixed when using Flannel.

[root@kube-master ~]# kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kube-master localhost] and IPs [192.168.25.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kube-master localhost] and IPs [192.168.25.100 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kube-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.25.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.001824 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-master" as an annotation
[mark-control-plane] Marking the node kube-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kube-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 6957gq.686kkrgapewsvu3s
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.25.100:6443 --token 6957gq.686kkrgapewsvu3s --discovery-token-ca-cert-hash sha256:737497defe8e557b03da02e539bc991be3ab0ce7eb4f972a37388325a94d30b3

[root@kube-master ~]# 

Make a note of the following command from the kubeadm init output; it is needed later to join each Node to the Kubernetes cluster. The token and CA certificate hash are specific to your cluster.

kubeadm join 192.168.25.100:6443 --token 6957gq.686kkrgapewsvu3s --discovery-token-ca-cert-hash sha256:737497defe8e557b03da02e539bc991be3ab0ce7eb4f972a37388325a94d30b3
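
The bootstrap token expires after 24 hours by default. If you lose this command or the token lapses before the Nodes have joined, you can print a fresh join command on the Master with kubeadm's built-in helper:

[root@kube-master ~]# kubeadm token create --print-join-command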

Set the KUBECONFIG environment variable so that kubectl can be run as the root user.

[root@kube-master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
[root@kube-master ~]# 

Add the environment variable to .bash_profile so that it persists across logins.

[root@kube-master ~]# cat <<'EOF' > /root/.bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

# add for kubernetes
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
[root@kube-master ~]# 
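
To confirm the profile takes effect, reload it and check the variable:

[root@kube-master ~]# . /root/.bash_profile
[root@kube-master ~]# echo $KUBECONFIG
/etc/kubernetes/admin.conf
[root@kube-master ~]# 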

Check the node status on the Master server. At this point only the Master server, the kube-master node, is listed.

[root@kube-master ~]# kubectl get nodes
NAME          STATUS     ROLES    AGE   VERSION
kube-master   NotReady   master   42s   v1.13.1
[root@kube-master ~]# 
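
If you want to see why the node is NotReady, kubectl describe shows the node's Conditions; with no CNI plugin deployed yet, the Ready condition reports that the container network is not ready:

[root@kube-master ~]# kubectl describe node kube-master
(check the Ready line in the Conditions section of the output)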

Check the Pods in all namespaces on the Master server. Three Pods are listed, and the coredns Pods are stuck in the Pending state because no Pod network has been deployed yet.

[root@kube-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-jrvm4   0/1     Pending   0          28s
kube-system   coredns-86c58d9df4-s9pkg   0/1     Pending   0          28s
kube-system   kube-proxy-bsw84           1/1     Running   0          27s
[root@kube-master ~]# 

Allow TCP 6443, the Kubernetes API server port required by the cluster, through the firewall.

[root@kube-master ~]# firewall-cmd --add-port=6443/tcp --zone=public --permanent
success
[root@kube-master ~]# 

Confirm that TCP 6443 is allowed in the permanent firewall configuration.

[root@kube-master ~]# firewall-cmd --list-port --zone=public --permanent
6443/tcp
[root@kube-master ~]# 

Reload the firewall to apply the setting.

[root@kube-master ~]# firewall-cmd --reload
success
[root@kube-master ~]# 
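
The preflight check during kubeadm init also warned about TCP 10250, the kubelet port. If the Node servers run firewalld as well, a similar sketch (run on each Node) opens the ports they will need, assuming Flannel's default VXLAN backend (UDP 8472):

[root@kube-work1 ~]# firewall-cmd --add-port=10250/tcp --zone=public --permanent
success
[root@kube-work1 ~]# firewall-cmd --add-port=8472/udp --zone=public --permanent
success
[root@kube-work1 ~]# firewall-cmd --reload
success
[root@kube-work1 ~]# 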

Work on the Node servers

On the first Node server ( kube-work1 ), join the Master's cluster.

[root@kube-work1 ~]# kubeadm join 192.168.25.100:6443 --token 6957gq.686kkrgapewsvu3s --discovery-token-ca-cert-hash sha256:737497defe8e557b03da02e539bc991be3ab0ce7eb4f972a37388325a94d30b3
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[discovery] Trying to connect to API Server "192.168.25.100:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.25.100:6443"
[discovery] Requesting info from "https://192.168.25.100:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.25.100:6443"
[discovery] Successfully established connection with API Server "192.168.25.100:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-work1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

[root@kube-work1 ~]# 

Join the second Node server ( kube-work2 ) to the cluster in the same way.

[root@kube-work2 ~]# kubeadm join 192.168.25.100:6443 --token 6957gq.686kkrgapewsvu3s --discovery-token-ca-cert-hash sha256:737497defe8e557b03da02e539bc991be3ab0ce7eb4f972a37388325a94d30b3
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[discovery] Trying to connect to API Server "192.168.25.100:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.25.100:6443"
[discovery] Requesting info from "https://192.168.25.100:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.25.100:6443"
[discovery] Successfully established connection with API Server "192.168.25.100:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube-work2" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

[root@kube-work2 ~]# 

Verifying the Kubernetes cluster

Check the node status on the Master server. The kube-master node (the Master server) and the kube-work1 and kube-work2 nodes (the Node servers) are all listed. At this point the STATUS of every node is still NotReady, because no Pod network add-on has been deployed yet.

[root@kube-master ~]# kubectl get nodes
NAME          STATUS     ROLES    AGE    VERSION
kube-master   NotReady   master   8m2s   v1.13.1
kube-work1    NotReady   <none>   82s    v1.13.1
kube-work2    NotReady   <none>   87s    v1.13.1
[root@kube-master ~]# 

Check the Pods in all namespaces on the Master server. Nine Pods are listed, and the coredns Pods are in the ContainerCreating state; they will not become Ready until a Pod network is in place.

[root@kube-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                  READY   STATUS              RESTARTS   AGE
kube-system   coredns-86c58d9df4-fvsjf              0/1     ContainerCreating   0          7m23s
kube-system   coredns-86c58d9df4-nnbb5              0/1     ContainerCreating   0          12m
kube-system   etcd-kube-master                      1/1     Running             0          11m
kube-system   kube-apiserver-kube-master            1/1     Running             0          11m
kube-system   kube-controller-manager-kube-master   1/1     Running             0          11m
kube-system   kube-proxy-jt4jq                      1/1     Running             0          12m
kube-system   kube-proxy-tll9v                      1/1     Running             0          6m2s
kube-system   kube-proxy-tzs4v                      1/1     Running             0          6m7s
kube-system   kube-scheduler-kube-master            1/1     Running             0          11m
[root@kube-master ~]#
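
The coredns Pods will stay in ContainerCreating and the nodes will stay NotReady until a Pod network add-on is deployed. As the final step, apply the Flannel manifest on the Master; this is a sketch using the manifest URL commonly referenced at the time, which may differ for your Flannel release, so check the Flannel documentation:

[root@kube-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Once the Flannel Pods are running, the nodes should transition to Ready and coredns should start, which you can confirm with kubectl get nodes and kubectl get pods --all-namespaces.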