0. Overview
 
Set up a single-node Kubernetes instance with kubeadm, for learning purposes only. The environment and software are summarized below:
 
 
  
   
| ~ | Version | Notes |
| --- | --- | --- |
| OS | Ubuntu 18.04 | 192.168.132.152 my.servermaster.local / 192.168.132.154 my.worker01.local |
| Docker | 18.06.1~ce~3-0~ubuntu | highest version supported by the latest k8s (1.12.3); must be pinned |
| Kubernetes | 1.12.3 | target software version |
 
The system and software above were roughly the latest available in 2018; note that Docker must be installed at a version that k8s supports.
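The two machines in the table need to resolve each other's hostnames. A minimal sketch, using the IPs and names from the table (adjust them to your network): the entries are staged in a local file for review before being appended to /etc/hosts with sudo.

```shell
# Stage /etc/hosts entries for the two machines from the table above;
# review the file, then append it with: sudo tee -a /etc/hosts < hosts.snippet
cat <<'EOF' > hosts.snippet
192.168.132.152 my.servermaster.local
192.168.132.154 my.worker01.local
EOF
cat hosts.snippet
```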
 
1. Installation Steps
 
 
 - Disable swap, which kubelet requires
swapoff -a
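swapoff -a only lasts until the next reboot; to keep swap disabled permanently, the swap entry in /etc/fstab must be commented out. A sketch of that edit, demonstrated on a sample file so nothing on the host is touched (run the same sed against /etc/fstab with sudo after taking a backup):

```shell
# Demo: comment out swap entries in an fstab-style file.
# The same sed works on /etc/fstab itself (with sudo and a backup first).
cat <<'EOF' > fstab.sample
UUID=1234-abcd / ext4 defaults 0 1
/swapfile none swap sw 0 0
EOF
sed -i '/\bswap\b/ s/^/#/' fstab.sample
cat fstab.sample
```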
 
 
 - Install a container runtime; Docker is the default, so installing Docker is enough
apt-get install docker-ce=18.06.1~ce~3-0~ubuntu
 
 
 - Install kubeadm. The commands below match the official docs, with the package source switched to the Aliyun mirror
apt-get update && apt-get install -y apt-transport-https
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
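The target release is 1.12.3, but the install command above pulls whatever is newest in the repo. To match the images prepared below, the package versions can be pinned. A sketch that only builds the command; the "1.12.3-00" package version is an assumption based on the repo's usual "<version>-00" convention, so verify it with apt-cache madison kubeadm before running:

```shell
# Build a version-pinned install command for the target release.
# NOTE: "1.12.3-00" is an assumed package version; confirm with: apt-cache madison kubeadm
K8S_PKG_VERSION=1.12.3-00
install_cmd="apt-get install -y kubelet=${K8S_PKG_VERSION} kubeadm=${K8S_PKG_VERSION} kubectl=${K8S_PKG_VERSION}"
echo "$install_cmd"   # run under sudo once the version is confirmed
```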
 
2. Creating the Cluster with kubeadm
 
2.1 Prepare the Images
 
k8s.gcr.io is not reachable from mainland China, so the required images have to be downloaded in advance. Here they are pulled from the Aliyun mirror registry and then re-tagged as k8s.gcr.io.
 
# a. List the images that need to be downloaded
kubeadm config images list --kubernetes-version=v1.12.3
k8s.gcr.io/kube-apiserver:v1.12.3
k8s.gcr.io/kube-controller-manager:v1.12.3
k8s.gcr.io/kube-scheduler:v1.12.3
k8s.gcr.io/kube-proxy:v1.12.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2
# b. Create a script that pulls each image, re-tags it, and removes the old tag
vim ./load_images.sh
#!/bin/bash
### configure the image map
declare -A images=()
images["k8s.gcr.io/kube-apiserver:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.3"
images["k8s.gcr.io/kube-controller-manager:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.12.3"
images["k8s.gcr.io/kube-scheduler:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.12.3"
images["k8s.gcr.io/kube-proxy:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.12.3"
images["k8s.gcr.io/pause:3.1"]="registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1"
images["k8s.gcr.io/etcd:3.2.24"]="registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24"
images["k8s.gcr.io/coredns:1.2.2"]="registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.2"
### re-tag foreach
for key in "${!images[@]}"
do
	docker pull "${images[$key]}"
	docker tag "${images[$key]}" "$key"
	docker rmi "${images[$key]}"
done
### check
docker images
# c. Run the script to prepare the images
sudo chmod +x load_images.sh
./load_images.sh
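After the script finishes, it is worth confirming that every expected repo:tag is actually present. A sketch of that check; it is demonstrated here against a captured sample list so it runs anywhere, while on the real host you would feed it the output of docker images --format '{{.Repository}}:{{.Tag}}' instead:

```shell
# Check that every image kubeadm needs appears in the local image list.
# "sample" stands in for: docker images --format '{{.Repository}}:{{.Tag}}'
sample="k8s.gcr.io/kube-apiserver:v1.12.3
k8s.gcr.io/kube-proxy:v1.12.3
k8s.gcr.io/pause:3.1"
missing=0
for img in k8s.gcr.io/kube-apiserver:v1.12.3 k8s.gcr.io/kube-proxy:v1.12.3 k8s.gcr.io/pause:3.1; do
    echo "$sample" | grep -qx "$img" || { echo "missing: $img"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all images present"
```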
 
2.2 Initialize the Cluster (master)
 
Initialization requires at least two parameters:
 
 
 - kubernetes-version: keeps kubeadm from reaching the network to look up the version
 - pod-network-cidr: required by the flannel network plugin (10.244.0.0/16 is flannel's default)
### run the initialization command
sudo kubeadm init --kubernetes-version=v1.12.3 --pod-network-cidr=10.244.0.0/16
### the tail of the output looks like this
... ...
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
  kubeadm join 192.168.132.152:6443 --token ymny55.4jlbbkxiggmn9ezh --discovery-token-ca-cert-hash sha256:70265fafdb22d524c15616543d0b76527c686329221340b3b8da3652abed46b9
 
2.3 Configure kubectl for a Non-root Account (from the success message)
 
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
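As an alternative for the root account, or for a quick one-off session, kubectl can be pointed at the admin kubeconfig through an environment variable instead of copying the file:

```shell
# Point kubectl at the admin kubeconfig without copying it (root / one-off use)
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "$KUBECONFIG"
```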
 
Check the nodes as the non-root account:
 
kubectl get nodes
NAME           STATUS     ROLES    AGE   VERSION
servermaster   NotReady   master   28m   v1.12.3
 
There is one master node, but its status is NotReady. A decision is needed at this point:
 
To run everything on this single machine, remove the master taint so workloads can be scheduled on it:
 
kubectl taint nodes --all node-role.kubernetes.io/master-
 
To continue building a multi-node cluster instead, skip the taint command and proceed with the next steps; the master's NotReady status can be ignored for now (it becomes Ready once the network plugin is applied in 2.4).
 
2.4 Apply the Network Plugin
 
Review the kube-flannel.yml content and copy it to a local file, in case the terminal cannot fetch it remotely:
 
kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
 
2.5 Add a worker Node
 
To add a worker node, repeat [1. Installation Steps] on another server; the worker only needs that basic installation, not steps 2.1 and onward. Once installation finishes, log in to the new worker node and run the join command printed at the end of the init step (the token expires after 24 hours; a fresh command can be generated on the master with kubeadm token create --print-join-command):
 
kubeadm join 192.168.132.152:6443 --token ymny55.4jlbbkxiggmn9ezh --discovery-token-ca-cert-hash sha256:70265fafdb22d524c15616543d0b76527c686329221340b3b8da3652abed46b9
... ...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
 
2.6 Check the Cluster (1 master, 1 worker)
 
kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
servermaster   Ready    master   94m   v1.12.3
worker01       Ready    <none>   54m   v1.12.3
 
2.7 Create the Dashboard
 
Copy the kubernetes-dashboard.yaml content to a local file, then create the resources:
 
kubectl create -f kubernetes-dashboard.yaml 
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
 
Open the worker node's IP and port in a browser over https, e.g. https://my.worker01.local:30000/#!/login, to verify that the dashboard installed successfully.
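Logging in to the dashboard requires a token from a sufficiently privileged service account. A commonly used manifest sketch for this dashboard generation (the admin-user name is arbitrary; binding to cluster-admin is fine for a lab but far too broad for production):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
```

Apply it with kubectl apply -f, then read the token out of the account's secret, for example: kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')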
 
3. Problems Encountered
 
 
4. References
 
Installing kubeadm:
 
 
Creating the cluster: