Deploying a Kubernetes Cluster with kubeadm


Note: This article is reposted from https://my.oschina.net/logmm/blog/2246278 for information-sharing and learning purposes only. If there is any infringement, please contact me and I will remove it promptly.

I. Environment Requirements

RHEL 7.5 is used here.

master, etcd: 192.168.10.101, hostname: master

node1: 192.168.10.103, hostname: node1

node2: 192.168.10.104, hostname: node2

All machines must be able to reach each other by hostname; edit /etc/hosts on every machine:

192.168.10.101   master

192.168.10.103  node1 

192.168.10.104  node2

Time must be synchronized on all machines.

Disable the firewall and SELinux on all machines.

The master must be able to log in to every machine over SSH without a password. A sketch of these prerequisite steps is shown below.
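
A minimal sketch of these prerequisites (assuming firewalld and chrony are in use; adjust to your environment):

# on every machine: disable the firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# on every machine: keep the clocks in sync
systemctl start chronyd && systemctl enable chronyd
# on the master only: set up passwordless SSH to the nodes
ssh-keygen -t rsa
ssh-copy-id root@node1
ssh-copy-id root@node2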

[Important note]

Both cluster initialization and node joins pull images from Google's registry (k8s.gcr.io), which we cannot reach, so the required images cannot be downloaded directly. I have uploaded the required images to a personal Alibaba Cloud registry instead.

II. Installation Steps

1. etcd cluster: master node only;

2. flannel: all nodes in the cluster;

3. Configure the Kubernetes master (master node only);

kubernetes-master

Services started: kube-apiserver, kube-scheduler, kube-controller-manager

4. Configure each Kubernetes node;

kubernetes-node

First configure and start the docker service;

Kubernetes services started: kube-proxy, kubelet

The kubeadm workflow:

1. master and nodes: install kubelet, kubeadm, and docker

2. master: kubeadm init

3. nodes: kubeadm join

https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.10.md

III. Cluster Installation

1. Master node installation and configuration

(1) Configure the yum repositories

Version 1.12.0 is used here. Download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md#downloads-for-v1120

We install with yum. First configure the Docker repository by downloading the repo file from the Alibaba Cloud mirror:

[root@master ~]# curl -o /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Create the Kubernetes repository file:

[root@master ~]# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1

Copy these two repo files into /etc/yum.repos.d/ on the other nodes:

[root@master ~]# for i in 103 104; do scp /etc/yum.repos.d/{docker-ce.repo,kubernetes.repo} root@192.168.10.$i:/etc/yum.repos.d/; done

Import the GPG key used to verify the packages (on all machines, via ansible here):

[root@master ~]# ansible all -m shell -a "curl -O https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg && rpm --import rpm-package-key.gpg"

(2) Install docker, kubelet, kubeadm, and kubectl

[root@master ~]# yum install docker-ce kubelet kubeadm kubectl -y

(3) Adjust the firewall and bridge settings

[root@master ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables 
[root@master ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@master ~]# ansible all -m shell -a "iptables -P  FORWARD ACCEPT"

Note: these changes are temporary and will be lost after a reboot.

To make them permanent, edit /usr/lib/sysctl.d/00-system.conf (or use a sysctl drop-in, as sketched below).
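
A sketch of making the setting permanent with a sysctl drop-in (the file name /etc/sysctl.d/k8s.conf is just an example; editing the file above achieves the same result):

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system    # reload all sysctl configuration files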

(4) Modify the docker service file and start docker

[root@master ~]# vim /usr/lib/systemd/system/docker.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
#Environment="HTTPS_PROXY=http://www.ik8s.io:10080"
Environment="NO_PROXY=127.0.0.1/8,127.0.0.1/16"

Add the following line to the [Service] section (or use a systemd drop-in, as sketched below):
Environment="NO_PROXY=127.0.0.1/8,127.0.0.1/16"
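
Alternatively, rather than editing the packaged unit file (which a docker-ce upgrade may overwrite), the same variable can be set in a systemd drop-in; a sketch:

mkdir -p /etc/systemd/system/docker.service.d
cat <<EOF > /etc/systemd/system/docker.service.d/env.conf
[Service]
Environment="NO_PROXY=127.0.0.1/8,127.0.0.1/16"
EOF

The systemctl daemon-reload below picks up either change.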

Start docker:

[root@master ~]# systemctl daemon-reload 
[root@master ~]# systemctl  start docker
[root@master ~]# systemctl enable docker

(5) Enable kubelet at boot

[root@master ~]# systemctl  enable  kubelet

(6) Initialize the cluster

Edit the kubelet configuration file so that the swap preflight check can be ignored (or disable swap outright, as sketched after this snippet):

[root@master ~]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
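
If you prefer to disable swap outright rather than ignore the preflight check, a minimal sketch (review the fstab edit before running it):

swapoff -a                                   # turn swap off for the running system
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab    # comment out swap entries so it stays off after reboot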

Run the initialization:

[root@master ~]# kubeadm init --kubernetes-version=v1.12.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] using Kubernetes version: v1.12.0
[preflight] running pre-flight checks
	[WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.12.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.12.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.12.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.12.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.2.24: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.2.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[root@master ~]# 

The images cannot be pulled because Google's registry is unreachable. Download them locally through another channel, then rerun the initialization.

I have uploaded the required images to Alibaba Cloud; running the following script pulls and retags them:

[root@master ~]# vim pull-images.sh
#!/bin/bash
# images required by kubeadm for v1.12.0 (taken from the failed preflight output above)
images=(kube-apiserver:v1.12.0 kube-controller-manager:v1.12.0 kube-scheduler:v1.12.0 kube-proxy:v1.12.0 pause:3.1 etcd:3.2.24 coredns:1.2.2)

for ima in "${images[@]}"
do
   # pull from the Alibaba Cloud mirror, retag as k8s.gcr.io so kubeadm can find it, then drop the mirror tag
   docker pull registry.cn-shenzhen.aliyuncs.com/lurenjia/$ima
   docker tag  registry.cn-shenzhen.aliyuncs.com/lurenjia/$ima  k8s.gcr.io/$ima
   docker rmi -f registry.cn-shenzhen.aliyuncs.com/lurenjia/$ima
done
[root@master ~]# sh pull-images.sh
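
To double-check which images (and tags) your kubeadm version expects before pulling, kubeadm can print the list:

[root@master ~]# kubeadm config images list --kubernetes-version=v1.12.0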

The images used are:

[root@master ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-controller-manager   v1.12.0             07e068033cf2        2 weeks ago         164MB
k8s.gcr.io/kube-apiserver            v1.12.0             ab60b017e34f        2 weeks ago         194MB
k8s.gcr.io/kube-scheduler            v1.12.0             5a1527e735da        2 weeks ago         58.3MB
k8s.gcr.io/kube-proxy                v1.12.0             9c3a9d3f09a0        2 weeks ago         96.6MB
k8s.gcr.io/etcd                      3.2.24              3cab8e1b9802        3 weeks ago         220MB
k8s.gcr.io/coredns                   1.2.2               367cdc8433a4        6 weeks ago         39.2MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        9 months ago        742kB
[root@master ~]# 

Rerun the initialization:

[root@master ~]# kubeadm init --kubernetes-version=v1.12.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] using Kubernetes version: v1.12.0
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.10.101 127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 71.135592 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[bootstraptoken] using token: qaqahg.5xbt355fl26wu8tg
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.10.101:6443 --token qaqahg.5xbt355fl26wu8tg --discovery-token-ca-cert-hash sha256:654f52a18fa04234c05eb38a001d92b9831982d06272e5a22b7d898bc6280e47

[root@master ~]# 

OK, initialization succeeded.

The hints at the end of the output are important: they tell you how to set up kubectl access for a regular user, remind you to deploy a pod network, and give the kubeadm join command (token plus CA certificate hash) that the nodes will run later.


On the master node, follow the hints:

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite ‘/root/.kube/config’? y
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@master ~]# 

Check the component status:

[root@master ~]# kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
[root@master ~]#

Everything is healthy.

Check the cluster nodes:

[root@master ~]# kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
master    NotReady   master    110m      v1.12.1
[root@master ~]#

Only the master is listed, and it is NotReady because no pod network (flannel) has been deployed yet. You can confirm the reason as shown below.
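
To confirm the cause, describe the node and look at the Ready condition; its message points at the network plugin not being ready:

[root@master ~]# kubectl describe node master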

(7) Install flannel

Project page: https://github.com/coreos/flannel

Run the following command:

[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
[root@master ~]#

After applying the manifest you may have to wait quite a while for the flannel image to download. You can watch the rollout in the meantime, as shown below.
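
You can watch the kube-system pods (including the flannel DaemonSet pod) come up while the image downloads:

[root@master ~]# kubectl get pods -n kube-system -w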

[root@master ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-controller-manager   v1.12.0             07e068033cf2        2 weeks ago         164MB
k8s.gcr.io/kube-apiserver            v1.12.0             ab60b017e34f        2 weeks ago         194MB
k8s.gcr.io/kube-scheduler            v1.12.0             5a1527e735da        2 weeks ago         58.3MB
k8s.gcr.io/kube-proxy                v1.12.0             9c3a9d3f09a0        2 weeks ago         96.6MB
k8s.gcr.io/etcd                      3.2.24              3cab8e1b9802        3 weeks ago         220MB
k8s.gcr.io/coredns                   1.2.2               367cdc8433a4        6 weeks ago         39.2MB
quay.io/coreos/flannel               v0.10.0-amd64       f0fad859c909        8 months ago        44.6MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        9 months ago        742kB
[root@master ~]# 

OK, the flannel image has been downloaded. Check the nodes again:

[root@master ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    155m      v1.12.1
[root@master ~]# 

OK, the master is now in the Ready state.

If the flannel image cannot be pulled from quay.io, pull it from the Alibaba Cloud mirror instead:

docker pull registry.cn-shenzhen.aliyuncs.com/lurenjia/flannel:v0.10.0-amd64

After the pull succeeds, retag the image:

docker  tag  registry.cn-shenzhen.aliyuncs.com/lurenjia/flannel:v0.10.0-amd64    quay.io/coreos/flannel:v0.10.0-amd64

Check the namespaces:

[root@master ~]# kubectl get ns
NAME          STATUS    AGE
default       Active    158m
kube-public   Active    158m
kube-system   Active    158m
[root@master ~]# 

Check the pods in the kube-system namespace:

[root@master ~]# kubectl get pods -n kube-system
NAME                             READY     STATUS    RESTARTS   AGE
coredns-576cbf47c7-hfvcq         1/1       Running   0          158m
coredns-576cbf47c7-xcpgd         1/1       Running   0          158m
etcd-master                      1/1       Running   6          132m
kube-apiserver-master            1/1       Running   9          132m
kube-controller-manager-master   1/1       Running   33         132m
kube-flannel-ds-amd64-vqc9h      1/1       Running   3          41m
kube-proxy-z9xrw                 1/1       Running   4          158m
kube-scheduler-master            1/1       Running   33         132m
[root@master ~]# 

2. Node installation and configuration

1. Install docker-ce, kubelet, and kubeadm

[root@node1 ~]# yum install docker-ce kubelet kubeadm -y
[root@node2 ~]# yum install docker-ce kubelet kubeadm -y

2. Copy the kubelet sysconfig file from the master to the nodes

[root@master ~]# scp /etc/sysconfig/kubelet 192.168.10.103:/etc/sysconfig/
kubelet                                                                                                       100%   42    45.4KB/s   00:00    
[root@master ~]# scp /etc/sysconfig/kubelet 192.168.10.104:/etc/sysconfig/
kubelet                                                                                                       100%   42     4.0KB/s   00:00    
[root@master ~]#

3. Join the nodes to the cluster

Start docker and kubelet:

[root@node1 ~]# systemctl  start docker kubelet
[root@node1 ~]# systemctl  enable docker kubelet
[root@node2 ~]# systemctl  start docker kubelet
[root@node2 ~]# systemctl  enable docker kubelet

Run kubeadm join on node1:

[root@node1 ~]# kubeadm join 192.168.10.101:6443 --token qaqahg.5xbt355fl26wu8tg --discovery-token-ca-cert-hash sha256:654f52a18fa04234c05eb38a001d92b9831982d06272e5a22b7d898bc6280e47 --ignore-preflight-errors=Swap
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

	[WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[root@node1 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@node1 ~]# 

The join fails the bridge-nf-call-iptables preflight check; set the parameter to 1 as the error message suggests (done above) and run the join again.

[root@node1 ~]# kubeadm join 192.168.10.101:6443 --token qaqahg.5xbt355fl26wu8tg --discovery-token-ca-cert-hash sha256:654f52a18fa04234c05eb38a001d92b9831982d06272e5a22b7d898bc6280e47 --ignore-preflight-errors=Swap
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

	[WARNING Swap]: running with swap on is not supported. Please disable swap
[discovery] Trying to connect to API Server "192.168.10.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.10.101:6443"
[discovery] Requesting info from "https://192.168.10.101:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.10.101:6443"
[discovery] Successfully established connection with API Server "192.168.10.101:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

[root@node1 ~]# 

OK, node1 has joined successfully. Now do the same on node2:

[root@node2 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@node2 ~]# kubeadm join 192.168.10.101:6443 --token qaqahg.5xbt355fl26wu8tg --discovery-token-ca-cert-hash sha256:654f52a18fa04234c05eb38a001d92b9831982d06272e5a22b7d898bc6280e47 --ignore-preflight-errors=Swap
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

	[WARNING Swap]: running with swap on is not supported. Please disable swap
[discovery] Trying to connect to API Server "192.168.10.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.10.101:6443"
[discovery] Requesting info from "https://192.168.10.101:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.10.101:6443"
[discovery] Successfully established connection with API Server "192.168.10.101:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node2" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

[root@node2 ~]# 

OK, node2 has joined successfully. (If you need to join more nodes after the bootstrap token expires, see the note below.)
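
If you add more nodes later and the original bootstrap token has expired (kubeadm tokens are valid for 24 hours by default), print a fresh join command on the master:

[root@master ~]# kubeadm token create --print-join-command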

4. Manually pull the kube-proxy and pause images on the nodes

Run the following on each node:

for ima in kube-proxy:v1.12.0 pause:3.1; do
  docker pull registry.cn-shenzhen.aliyuncs.com/lurenjia/$ima
  docker tag  registry.cn-shenzhen.aliyuncs.com/lurenjia/$ima  k8s.gcr.io/$ima
  docker rmi -f registry.cn-shenzhen.aliyuncs.com/lurenjia/$ima
done

5. Check the node status from the master:

[root@master ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    3h10m     v1.12.1
node1     Ready     <none>    18m       v1.12.1
node2     Ready     <none>    17m       v1.12.1
[root@master ~]#

OK, all nodes are in the Ready state. If a node is still not healthy, restart docker and kubelet on that node; a quick sketch follows.
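
For example (node1 shown; apply to whichever node is affected):

[root@node1 ~]# systemctl restart docker kubelet

Then confirm on the master:

[root@master ~]# kubectl get nodes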

Check the pods in kube-system again:

[root@master ~]# kubectl get pods -n kube-system -o wide
NAME                             READY     STATUS    RESTARTS   AGE       IP               NODE      NOMINATED NODE
coredns-576cbf47c7-hfvcq         1/1       Running   0          3h11m     10.244.0.3       master    <none>
coredns-576cbf47c7-xcpgd         1/1       Running   0          3h11m     10.244.0.2       master    <none>
etcd-master                      1/1       Running   6          165m      192.168.10.101   master    <none>
kube-apiserver-master            1/1       Running   9          165m      192.168.10.101   master    <none>
kube-controller-manager-master   1/1       Running   33         165m      192.168.10.101   master    <none>
kube-flannel-ds-amd64-bd4d8      1/1       Running   0          21m       192.168.10.103   node1     <none>
kube-flannel-ds-amd64-srhb9      1/1       Running   0          20m       192.168.10.104   node2     <none>
kube-flannel-ds-amd64-vqc9h      1/1       Running   3          74m       192.168.10.101   master    <none>
kube-proxy-8bfvt                 1/1       Running   1          21m       192.168.10.103   node1     <none>
kube-proxy-gz55d                 1/1       Running   1          20m       192.168.10.104   node2     <none>
kube-proxy-z9xrw                 1/1       Running   4          3h11m     192.168.10.101   master    <none>
kube-scheduler-master            1/1       Running   33         165m      192.168.10.101   master    <none>
[root@master ~]# 

At this point the cluster has been built successfully. These are the images used along the way:

Master node:

[root@master ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-controller-manager   v1.12.0             07e068033cf2        2 weeks ago         164MB
k8s.gcr.io/kube-apiserver            v1.12.0             ab60b017e34f        2 weeks ago         194MB
k8s.gcr.io/kube-scheduler            v1.12.0             5a1527e735da        2 weeks ago         58.3MB
k8s.gcr.io/kube-proxy                v1.12.0             9c3a9d3f09a0        2 weeks ago         96.6MB
k8s.gcr.io/etcd                      3.2.24              3cab8e1b9802        3 weeks ago         220MB
k8s.gcr.io/coredns                   1.2.2               367cdc8433a4        6 weeks ago         39.2MB
quay.io/coreos/flannel               v0.10.0-amd64       f0fad859c909        8 months ago        44.6MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        9 months ago        742kB
[root@master ~]# 

Node machines:

[root@node1 ~]# docker images
REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy    v1.12.0             9c3a9d3f09a0        2 weeks ago         96.6MB
quay.io/coreos/flannel   v0.10.0-amd64       f0fad859c909        8 months ago        44.6MB
k8s.gcr.io/pause         3.1                 da86e6ba6ca1        9 months ago        742kB
[root@node1 ~]# 



[root@node2 ~]# docker images
REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy    v1.12.0             9c3a9d3f09a0        2 weeks ago         96.6MB
quay.io/coreos/flannel   v0.10.0-amd64       f0fad859c909        8 months ago        44.6MB
k8s.gcr.io/pause         3.1                 da86e6ba6ca1        9 months ago        742kB
[root@node2 ~]#

 
