Quick environment setup (for testing only)

Quick setup with Kubekit (Docker)

Download the latest release: https://github.com/Orientsoft/kubekit

Installation guide: https://github.com/Orientsoft/kubekit/wiki/Kubekit-%E5%AE%89%E8%A3%85%E4%B8%8E%E4%BD%BF%E7%94%A8%E6%89%8B%E5%86%8C

Quick setup with Rancher (Docker)

https://www.kubernetes.org.cn/2955.html

Minimal Kubernetes v1.10 install (Docker)

http://www.cnblogs.com/elvi/p/8976305.html

Reference: http://blog.51cto.com/lizhenliang/1983392


Full cluster setup

CentOS

The three master hosts each have 2 cores and 4 GB of RAM (1 core / 4 GB also works), running CentOS 7
Linux 3.10.0-693.21.1.el7.x86_64 SMP Wed Mar 7 19:03:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

|Host|Hostname / Role|IP|
|---|---|---|
|Host 1|k8sMaster01|192.168.10.110|
|Host 1|VIP|192.168.10.5|
|Host 2|k8sMaster02|192.168.10.107|
|Host 3|k8sMaster03|192.168.10.108|
|Host 4|k8sNode01|192.168.10.112|


Configure the base environment

# Configure the base environment (run on all masters)

systemctl stop firewalld
systemctl disable firewalld

# Disable swap (important). Kubernetes 1.8+ requires system swap to be off; otherwise the services fail to start
swapoff -a && sysctl -w vm.swappiness=0
sed -i 's/.*swap.*/#&/' /etc/fstab
# Double-check that the swap mount really is commented out in /etc/fstab.


setenforce  0 
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux 
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config 
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux 
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config 

# Kernel parameters
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
EOF
sysctl -p /etc/sysctl.d/k8s.conf
echo "sysctl -p /etc/sysctl.d/k8s.conf" >>/etc/profile

echo "#myset
* soft nofile 65536
* hard nofile 65536
* soft nproc 65536
* hard nproc 65536
* soft  memlock  unlimited
* hard memlock  unlimited
">> /etc/security/limits.conf


# Run on k8sMaster01 (192.168.10.110)
hostnamectl --static set-hostname k8sMaster01
# Run on k8sMaster02 (192.168.10.107)
hostnamectl --static set-hostname k8sMaster02
# Run on k8sMaster03 (192.168.10.108)
hostnamectl --static set-hostname k8sMaster03
# Run on k8sNode01 (192.168.10.112)
hostnamectl --static set-hostname k8sNode01

sed -i '$a\192.168.10.110 k8sMaster01' /etc/hosts
sed -i '$a\192.168.10.107 k8sMaster02' /etc/hosts
sed -i '$a\192.168.10.108 k8sMaster03' /etc/hosts

yum install wget -y
mkdir -p /etc/yum.repos.d/bak
mv /etc/yum.repos.d/CentOS* /etc/yum.repos.d/bak
wget  -P /etc/yum.repos.d/ http://mirrors.aliyun.com/repo/Centos-7.repo 
wget  -P /etc/yum.repos.d/ http://mirrors.aliyun.com/repo/epel-7.repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum clean all
yum install chrony -y
systemctl enable chronyd.service    
systemctl start chronyd.service
systemctl status chronyd.service

Configure keepalived and the etcd cluster

yum install -y keepalived

# Run the following separately on each master; the settings differ per host.

#############################################################################################
# Run on k8sMaster01 (192.168.10.110)

# Adjust the hostname and the IPs below to match your environment
sed -i  '$a\export NODE_NAME=k8sMaster01' /etc/profile
sed -i  '$a\export NODE_IP=192.168.10.110                   # IP of the machine being configured' /etc/profile
sed -i  '$a\export NODE_IPS="192.168.10.110 192.168.10.107 192.168.10.108"        # IPs of all etcd cluster members' /etc/profile
sed -i  '$a\export ETCD_NODES=k8sMaster01=https://192.168.10.110:2380,k8sMaster02=https://192.168.10.107:2380,k8sMaster03=https://192.168.10.108:2380    # etcd peer URLs' /etc/profile
source /etc/profile

# On each master, change the IPs accordingly, then run:
cat >/etc/keepalived/keepalived.conf  <<EOF
global_defs {
   router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.10.5:6443"
    # this is the VIP
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    # change to your network interface name
    virtual_router_id 61
    # the primary has the highest priority; decrease it on each subsequent master
    priority 120
    advert_int 1
    # change to this host's own IP
    mcast_src_ip 192.168.10.110
    nopreempt
    authentication {
        auth_type PASS
        auth_pass awzhXylxy.T
    }
    unicast_peer {
        # comment out this host's own IP
        #192.168.10.110
        192.168.10.107
        192.168.10.108
    }
    virtual_ipaddress {
        192.168.10.5/24
        # this is the VIP
    }
    track_script {
        CheckK8sMaster
    }

}
EOF

systemctl enable keepalived && systemctl restart keepalived && systemctl status keepalived

# Download cfssl and generate the certificates
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
#############################ca-config Begin################################################################################
cat >  ca-config.json <<EOF
{
"signing": {
"default": {
  "expiry": "8760h"
},
"profiles": {
  "kubernetes-Soulmate": {
    "usages": [
        "signing",
        "key encipherment",
        "server auth",
        "client auth"
    ],
    "expiry": "8760h"
  }
}
}
}
EOF

cat >  ca-csr.json <<EOF
{
"CN": "kubernetes-Soulmate",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
  "C": "CN",
  "ST": "shanghai",
  "L": "shanghai",
  "O": "k8s",
  "OU": "System"
}
]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

# Change the IPs below to your own
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.10.110",
    "192.168.10.107",
    "192.168.10.108"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes-Soulmate etcd-csr.json | cfssljson -bare etcd
  
mkdir -p /etc/etcd/ssl
cp etcd.pem etcd-key.pem  ca.pem /etc/etcd/ssl/
# Copy the certificates to the other two masters
scp -r /etc/etcd/ssl root@192.168.10.107:/etc/etcd/
scp -r /etc/etcd/ssl root@192.168.10.108:/etc/etcd/
systemctl status keepalived


#############################################################################################
# Run on k8sMaster02 (192.168.10.107)

# Adjust the hostname and the IPs below to match your environment
sed -i  '$a\export NODE_NAME=k8sMaster02' /etc/profile
sed -i  '$a\export NODE_IP=192.168.10.107                   # IP of the machine being configured' /etc/profile
sed -i  '$a\export NODE_IPS="192.168.10.110 192.168.10.107 192.168.10.108"        # IPs of all etcd cluster members' /etc/profile
sed -i  '$a\export ETCD_NODES=k8sMaster01=https://192.168.10.110:2380,k8sMaster02=https://192.168.10.107:2380,k8sMaster03=https://192.168.10.108:2380    # etcd peer URLs' /etc/profile
source /etc/profile

cat >/etc/keepalived/keepalived.conf  <<EOF
global_defs {
   router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.10.5:6443"
    # this is the VIP
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP 
    # change to your network interface name
    interface eth0
    virtual_router_id 61
    # the primary has the highest priority; decrease it on each subsequent master
    priority 110
    advert_int 1
    # change to this host's own IP
    mcast_src_ip 192.168.10.107
    nopreempt
    authentication {
        auth_type PASS
        auth_pass awzhXylxy.T
    }
    unicast_peer {
        # comment out this host's own IP
        192.168.10.110
        #192.168.10.107
        192.168.10.108
    }
    virtual_ipaddress {
        192.168.10.5/24
        # this is the VIP
    }
    track_script {
        CheckK8sMaster
    }

}
EOF

systemctl enable keepalived && systemctl restart keepalived
mkdir -p /etc/etcd
systemctl status keepalived


#############################################################################################
# Run on k8sMaster03 (192.168.10.108)

# Adjust the hostname and the IPs below to match your environment
sed -i  '$a\export NODE_NAME=k8sMaster03' /etc/profile
sed -i  '$a\export NODE_IP=192.168.10.108                   # IP of the machine being configured' /etc/profile
sed -i  '$a\export NODE_IPS="192.168.10.110 192.168.10.107 192.168.10.108"        # IPs of all etcd cluster members' /etc/profile
sed -i  '$a\export ETCD_NODES=k8sMaster01=https://192.168.10.110:2380,k8sMaster02=https://192.168.10.107:2380,k8sMaster03=https://192.168.10.108:2380    # etcd peer URLs' /etc/profile
source /etc/profile

cat >/etc/keepalived/keepalived.conf  <<EOF
global_defs {
   router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.10.5:6443"
    # this is the VIP
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    # change to your network interface name
    interface eth0
    virtual_router_id 61
    # the primary has the highest priority; decrease it on each subsequent master
    priority 100
    advert_int 1
    # change to this host's own IP
    mcast_src_ip 192.168.10.108
    nopreempt
    authentication {
        auth_type PASS
        auth_pass awzhXylxy.T
    }
    unicast_peer {
        # comment out this host's own IP
        192.168.10.110
        192.168.10.107
        #192.168.10.108
    }
    virtual_ipaddress {
        192.168.10.5/24
        # this is the VIP
    }
    track_script {
        CheckK8sMaster
    }

}
EOF

systemctl enable keepalived && systemctl restart keepalived
mkdir -p /etc/etcd
systemctl status keepalived





#############################################################################################
# Run the following on all three masters (directly, or save it as a script)

#!/bin/bash
#etcd install
wget http://github.com/coreos/etcd/releases/download/v3.1.13/etcd-v3.1.13-linux-amd64.tar.gz

if [ -f etcd-v3.1.13-linux-amd64.tar.gz ];then
	tar -xvf etcd-v3.1.13-linux-amd64.tar.gz
	mv etcd-v3.1.13-linux-amd64/etcd* /usr/local/bin
	mkdir -p /var/lib/etcd  

else 
	echo "etcd-v3.1.13-linux-amd64.tar.gz not found!!Please confirm Downlowd 'etcd' SUCCESS"
    exit 1
fi
echo "
  [Unit]
  Description=Etcd Server
  After=network.target
  After=network-online.target
  Wants=network-online.target
  Documentation=https://github.com/coreos
  
  [Service]
  Type=notify
  WorkingDirectory=/var/lib/etcd/
  ExecStart=/usr/local/bin/etcd \\
  --name=${NODE_NAME} \\
  --cert-file=/etc/etcd/ssl/etcd.pem \\
  --key-file=/etc/etcd/ssl/etcd-key.pem \\
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \\
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \\
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --initial-advertise-peer-urls=https://${NODE_IP}:2380 \\
  --listen-peer-urls=https://${NODE_IP}:2380 \\
  --listen-client-urls=https://${NODE_IP}:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://${NODE_IP}:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=${ETCD_NODES} \\
  --initial-cluster-state=new \\
  --data-dir=/var/lib/etcd
  Restart=on-failure
  RestartSec=5
  LimitNOFILE=65536
  
  [Install]
  WantedBy=multi-user.target
"> etcd.service 

mv etcd.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd


# Check that keepalived is working properly.
systemctl status keepalived

# Check that the etcd cluster is healthy
etcdctl --endpoints=https://${NODE_IP}:2379 --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem cluster-health
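A further sanity check, assuming the same TLS files and environment variables as above, is to list the cluster members:

etcdctl --endpoints=https://${NODE_IP}:2379 --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem member list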


Install Docker CE and kubeadm, then initialize the cluster

# At the time of writing, kubeadm only supports Docker CE up to version 17.03.x
yum install -y yum-utils device-mapper-persistent-data lvm2
# If Docker was installed previously, remove it first; otherwise startup fails with: Error starting daemon: error initializing graphdriver: driver not supported
yum remove docker docker.io docker-ce docker-selinux -y
rm -rf /var/lib/docker
#
yum install https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm -y
yum install https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm -y

############################# Deprecated ###########################################
# The following block is deprecated
# Configure the Aliyun repo (latest version)
echo '#Docker for centos 7
[docker-ce-stable]
name=Docker CE - Aliyun
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
'>/etc/yum.repos.d/docker-ce.repo

echo 'install docker'
yum install -y docker-ce


# Official method: install the latest Docker CE engine
curl -fsSL "https://get.docker.com/" | sh
############################# Deprecated ###########################################


# Configure the Aliyun Docker registry mirror (optional)
# SetOPTS=" --registry-mirror=https://your-code.mirror.aliyuncs.com"
SetOPTS="--registry-mirror=https://vi7cv85n.mirror.aliyuncs.com"
sed  -i "s#^ExecStart.*#& $SetOPTS #" /usr/lib/systemd/system/docker.service
grep 'ExecStart' /usr/lib/systemd/system/docker.service
systemctl daemon-reload
systemctl enable docker
systemctl restart docker
systemctl status docker

############################# Deprecated ###########################################
# On the three masters, disable the SELinux option in the Docker startup parameters
sed -i 's/--selinux-enabled/--selinux-enabled=false/g' /etc/sysconfig/docker

# On the three masters, add the following parameter to the kubelet config file
sed -i '9a\Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/osoulmate/pause-amd64:3.0"' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf


# Add the Docker registry mirror on the three masters (optional); apply for an Aliyun account to get your own mirror URL
# Log in at https://cr.console.aliyun.com/ and open the mirror accelerator page to see your personal mirror URL and the setup instructions for CentOS.
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://your-code.mirror.aliyuncs.com"]
}
EOF
############################# Deprecated ###########################################



###############################################################################################
# Install kubeadm, kubelet and kubectl on all three masters
yum install kubelet kubeadm kubectl kubernetes-cni -y

############################# Deprecated ###########################################
# yum installs the latest version, which may not match the Docker image versions; specific versions can be downloaded from the URL below instead.

export KUBE_URL="https://storage.googleapis.com/kubernetes-release/release/v1.10.0/bin/linux/amd64"
wget "${KUBE_URL}/kubelet" -O /usr/bin/kubelet
wget "${KUBE_URL}/kubeadm" -O /usr/bin/kubeadm
wget "${KUBE_URL}/kubectl" -O /usr/bin/kubectl
chmod +x /usr/bin/kubelet
chmod +x /usr/bin/kubeadm
chmod +x /usr/bin/kubectl

mkdir -p /opt/cni/bin && cd /opt/cni/bin
export CNI_URL="https://github.com/containernetworking/plugins/releases/download"
wget -qO- --show-progress "${CNI_URL}/v0.6.0/cni-plugins-amd64-v0.6.0.tgz" | tar -zx
############################# Deprecated ###########################################


# Pull the Kubernetes images
# Alternative: MyUrl=registry.cn-shenzhen.aliyuncs.com/leolan/k8s
MyUrl=registry.cn-shanghai.aliyuncs.com/alik8s
images=(kube-proxy-amd64:v1.10.0 kube-controller-manager-amd64:v1.10.0 kube-scheduler-amd64:v1.10.0 kube-apiserver-amd64:v1.10.0 etcd-amd64:3.1.12 kubernetes-dashboard-amd64:v1.8.3 heapster-grafana-amd64:v4.4.3 heapster-influxdb-amd64:v1.3.3 heapster-amd64:v1.4.2 k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s-dns-sidecar-amd64:1.14.8 k8s-dns-kube-dns-amd64:1.14.8 pause-amd64:3.1)
#
for imageName in ${images[@]} ; do
  docker pull $MyUrl/$imageName
  docker tag $MyUrl/$imageName k8s.gcr.io/$imageName
  docker rmi $MyUrl/$imageName
done
#
docker pull $MyUrl/flannel:v0.10.0-amd64
docker tag $MyUrl/flannel:v0.10.0-amd64  quay.io/coreos/flannel:v0.10.0-amd64
docker rmi $MyUrl/flannel:v0.10.0-amd64


# Configure kubelet
# sed -i '9a\Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=k8s.gcr.io/pause-amd64:3.1"' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
sed -i 's/driver=systemd/driver=cgroupfs/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart docker
systemctl enable kubelet
# Even if started now, the kubelet service will exit again immediately; it only stays up (and starts the control-plane containers) after the kubeadm init step below.


# Generate the config.yaml file
# Change the IPs to your own; hostnames must be lowercase, otherwise: altname is not a valid dns label or ip address
cat <<EOF > config.yaml 
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:
  - https://192.168.10.110:2379
  - https://192.168.10.107:2379
  - https://192.168.10.108:2379
  # change these to your own IPs
  caFile: /etc/etcd/ssl/ca.pem 
  certFile: /etc/etcd/ssl/etcd.pem 
  keyFile: /etc/etcd/ssl/etcd-key.pem
  dataDir: /var/lib/etcd
networking:
  podSubnet: 10.244.0.0/16
kubernetesVersion: 1.10.0
api:
  advertiseAddress: "192.168.10.5"
  # this is the VIP
  # The token below can be any value you choose; it is used later when joining nodes and logging in
token: "b33a99.a244ef88531e4354"
tokenTTL: "0s"
apiServerCertSANs:
# Hostnames here must be lowercase; also include the VIP
- k8smaster01
- k8smaster02
- k8smaster03
- 192.168.10.110
- 192.168.10.107
- 192.168.10.108
- 192.168.10.5
featureGates:
  CoreDNS: true
# imageRepository: "registry.cn-hangzhou.aliyuncs.com/osoulmate"
# imageRepository: "registry.cn-shanghai.aliyuncs.com/alik8s"
# Point this at an Aliyun repository, otherwise missing images are pulled from Google (optional; unnecessary if you already pulled the images above).
EOF
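If you prefer not to hand-craft the token above, kubeadm can generate a valid one for you (the printed value is random):

kubeadm token generate
# prints a token of the form [a-z0-9]{6}.[a-z0-9]{16}; paste it into the token field of config.yaml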


# Run the kubeadm initialization on k8sMaster01 first
kubeadm init --config config.yaml

Initialization prints the node join command (save it for later):
kubeadm join 192.168.10.5:6443 --token b33a99.a244ef88531e4354 --discovery-token-ca-cert-hash sha256:79adb1b7c6b9f9cf6aea5bd992bbff72988c184b57396e26f7041209a76b4b1f

# Troubleshooting
systemctl status kubelet
journalctl -xeu kubelet

# Post-initialization steps on k8sMaster01
# For non-root users
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# For root users
export KUBECONFIG=/etc/kubernetes/admin.conf
# Or put it directly into ~/.bash_profile:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile



# Copy the certificates and keys generated by kubeadm on k8sMaster01 into the corresponding directories on k8sMaster02 and k8sMaster03
scp -r /etc/kubernetes/pki root@192.168.10.107:/etc/kubernetes/
scp -r /etc/kubernetes/pki root@192.168.10.108:/etc/kubernetes/
scp /root/config.yaml root@192.168.10.107:/root/
scp /root/config.yaml root@192.168.10.108:/root/
# k8sMaster02 and k8sMaster03 still cannot start at this point; continue with the next step


# Install the pod network add-on on k8sMaster01 (flannel is used here)
mkdir -p /etc/cni/net.d
cat <<EOF > /etc/cni/net.d/10-flannel.conf
{
"name": "cbr0",
"type": "flannel",
"delegate": {
"isDefaultGateway": true
  }
}
EOF

mkdir -p /usr/share/oci-umount/oci-umount.d
mkdir /run/flannel/
cat <<EOF> /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.0/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

############################# Deprecated ###########################################
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
systemctl stop kubelet    # kubelet would otherwise ask Docker to pull images from the default (Google) registry, so stop it first
systemctl restart docker
docker pull registry.cn-hangzhou.aliyuncs.com/osoulmate/flannel:v0.10.0-amd64
systemctl start kubelet
############################# Deprecated ###########################################


# Run the following on k8sMaster02 and k8sMaster03
kubeadm init --config config.yaml
# Save the kubeadm join command printed here as well; kubelet is started automatically

# Post-initialization steps on k8sMaster02 and k8sMaster03
# For non-root users
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# For root users
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
# Or put it directly into ~/.bash_profile

# Deploy the flannel network
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Dashboard

mkdir kubernetes-dashboard
cd kubernetes-dashboard

# heapster-rbac.yaml
tee ./heapster-rbac.yaml <<-'EOF'
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
EOF

# heapster.yaml
tee ./heapster.yaml <<-'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: k8s.gcr.io/heapster-amd64:v1.4.2
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster
EOF

# kubernetes-dashboard-admin.rbac.yaml
tee ./kubernetes-dashboard-admin.rbac.yaml <<-'EOF'
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system
  
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system
EOF

# kubernetes-dashboard.yaml
tee ./kubernetes-dashboard.yaml <<-'EOF'
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      serviceAccountName: kubernetes-dashboard
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          #- --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # Use the VIP and port here; the port matches the nodePort value at the bottom of this file.
          #- --apiserver-host=http://192.168.10.5:30000
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 9090
      targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard

---
# ------------------- Dashboard External Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-external
  namespace: kube-system
spec:
  ports:
    - port: 9090
      targetPort: 9090
      nodePort: 30000
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
EOF

# Apply the manifests
kubectl -n kube-system create -f .
kubectl -n kube-system create sa dashboard
kubectl create clusterrolebinding dashboard --clusterrole cluster-admin --serviceaccount=kube-system:dashboard
kubectl -n kube-system get po,svc -l k8s-app=kubernetes-dashboard

# Get a login token
# Method 1
SECRET=$(kubectl -n kube-system get sa dashboard -o yaml | awk '/dashboard-token/ {print $3}')
kubectl -n kube-system describe secrets ${SECRET} | awk '/token:/{print $2}'

# Method 2
echo -e "\033[32mDashboard login token, saved to /root/k8s.token.dashboard.txt\033[0m"
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') |awk '/token:/{print$2}' >/root/k8s.token.dashboard.txt


Open https://192.168.10.5:30000 in a browser and log in with the token.

# Update the configuration
After modifying kubernetes-dashboard.yaml, apply the changes with:
kubectl apply -f kubernetes-dashboard.yaml -f kubernetes-dashboard-admin.rbac.yaml

# Delete the Dashboard
kubectl delete -f kubernetes-dashboard.yaml
kubectl delete deployment kubernetes-dashboard --namespace=kube-system
kubectl delete svc  kubernetes-dashboard --namespace=kube-system

References:
https://www.kubernetes.org.cn/3834.html
https://www.kubernetes.org.cn/3814.html

Follow-up steps

# High-availability test
Shut down k8sMaster01, then run the following on k8sMaster03:
while true; do sleep 1; kubectl get node;date; done
On k8sMaster02, check whether keepalived has switched to the MASTER state.
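To see which master currently holds the VIP, a quick check (assuming the eth0 interface and the VIP 192.168.10.5 from the keepalived configuration above):

ip addr show eth0 | grep 192.168.10.5
# the VIP should be listed on exactly one master at a time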


# Check the status of all cluster nodes (can be run from any master)
kubectl get nodes
kubectl get cs
kubectl get po --all-namespaces    # same as: kubectl get pods --all-namespaces
kubectl -n kube-system get po -o wide -l k8s-app=kube-proxy

# Troubleshooting
systemctl status kubelet
journalctl -xeu kubelet
tail -f /var/log/messages

# If initialization fails because of the config, reset and start over:
kubeadm reset
rm -f $HOME/.kube/config
# Edit config.yaml as needed, then run kubeadm init again

# Allow the master nodes to run pods
By default, master nodes do not take workloads. For an all-in-one Kubernetes environment, or to avoid wasting server resources, remove the master taint so the master nodes also act as worker nodes:
kubectl taint nodes --all node-role.kubernetes.io/master-
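To restore the default behaviour later, the master taint can be re-applied per node (the hostname below is an example from this setup):

kubectl taint nodes k8smaster01 node-role.kubernetes.io/master:NoSchedule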



Joining a node

# Configure the base environment (same steps as on the masters)

systemctl stop firewalld
systemctl disable firewalld

# Disable swap (important). Kubernetes 1.8+ requires system swap to be off; otherwise the services fail to start
swapoff -a && sysctl -w vm.swappiness=0
sed -i 's/.*swap.*/#&/' /etc/fstab
# Double-check that the swap mount really is commented out in /etc/fstab.


setenforce  0 
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux 
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config 
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux 
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config 

# Kernel parameters
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
EOF
sysctl -p /etc/sysctl.d/k8s.conf
echo "sysctl -p /etc/sysctl.d/k8s.conf" >>/etc/profile

echo "#myset
* soft nofile 65536
* hard nofile 65536
* soft nproc 65536
* hard nproc 65536
* soft  memlock  unlimited
* hard memlock  unlimited
">> /etc/security/limits.conf


# Run on k8sNode01 (192.168.10.112)
hostnamectl --static set-hostname k8sNode01

sed -i '$a\192.168.10.110 k8sMaster01' /etc/hosts
sed -i '$a\192.168.10.107 k8sMaster02' /etc/hosts
sed -i '$a\192.168.10.108 k8sMaster03' /etc/hosts

yum install wget -y
mkdir -p /etc/yum.repos.d/bak
mv /etc/yum.repos.d/CentOS* /etc/yum.repos.d/bak
wget  -P /etc/yum.repos.d/ http://mirrors.aliyun.com/repo/Centos-7.repo 
wget  -P /etc/yum.repos.d/ http://mirrors.aliyun.com/repo/epel-7.repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum clean all
yum install chrony -y
systemctl enable chronyd.service    
systemctl start chronyd.service
systemctl status chronyd.service

yum install -y yum-utils device-mapper-persistent-data lvm2
# If Docker was installed previously, remove it first; otherwise startup fails with: Error starting daemon: error initializing graphdriver: driver not supported
yum remove docker docker.io docker-ce docker-selinux -y
rm -rf /var/lib/docker
#
yum install https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm -y
yum install https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm -y

# Configure the Aliyun Docker registry mirror (optional)
# SetOPTS=" --registry-mirror=https://your-code.mirror.aliyuncs.com"
SetOPTS="--registry-mirror=https://vi7cv85n.mirror.aliyuncs.com"
sed  -i "s#^ExecStart.*#& $SetOPTS #" /usr/lib/systemd/system/docker.service
grep 'ExecStart' /usr/lib/systemd/system/docker.service
systemctl daemon-reload
systemctl enable docker
systemctl restart docker
systemctl status docker

yum install kubelet kubeadm kubectl kubernetes-cni -y
sed -i 's/driver=systemd/driver=cgroupfs/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart docker
systemctl enable kubelet

# Run the join command saved earlier.
kubeadm join 192.168.10.5:6443 --token b33a99.a244ef88531e4354 --discovery-token-ca-cert-hash sha256:79adb1b7c6b9f9cf6aea5bd992bbff72988c184b57396e26f7041209a76b4b1f

Manually deploying Kubernetes on Ubuntu 16.04: http://time-track.cn/deploy-kubernetes-step-by-step-on-trusty-section-1.html


One-click Kubernetes deployment

GitHub:https://github.com/kubeup/okdc?spm=a2c4e.11153940.blogcont74640.17.be292d93zwoNvY
curl -s https://raw.githubusercontent.com/kubeup/okdc/master/okdc-centos.sh|sh

One-click deployment of a highly available Kubernetes cluster: http://www.cnblogs.com/keithtt/p/6649995.html


Miscellaneous

Batch re-tagging and pushing images to a registry

To push to Docker Hub you only need to specify the username; other registries require the full repository path.
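A minimal illustration of the difference; the user and namespace names below are placeholders:

docker tag k8s.gcr.io/pause-amd64:3.1 yourhubuser/pause-amd64:3.1                                        # Docker Hub: user/repo is enough
docker tag k8s.gcr.io/pause-amd64:3.1 registry.cn-shenzhen.aliyuncs.com/yournamespace/pause-amd64:3.1    # other registries: full path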

# Not using NetEase here (it limits the number of images); pushing to Aliyun instead
# Create a repository in the region you want to use (South China 1 in this example)
# To set the password, search the console for "Container Registry"; there is an option to change the Registry login password.
docker login --username=842632422@qq.com registry.cn-shenzhen.aliyuncs.com

images=(
kube-proxy-amd64:v1.10.0
kube-apiserver-amd64:v1.10.0
kube-scheduler-amd64:v1.10.0
kube-controller-manager-amd64:v1.10.0
etcd-amd64:3.1.12
kubernetes-dashboard-amd64:v1.8.3
heapster-grafana-amd64:v4.4.3
heapster-influxdb-amd64:v1.3.3
heapster-amd64:v1.4.2
k8s-dns-sidecar-amd64:1.14.8
k8s-dns-kube-dns-amd64:1.14.8
k8s-dns-dnsmasq-nanny-amd64:1.14.8
pause-amd64:3.1
)

for imageName in ${images[@]} ; 
do
    docker tag k8s.gcr.io/$imageName registry.cn-shenzhen.aliyuncs.com/leolan/k8s/$imageName;
    docker push registry.cn-shenzhen.aliyuncs.com/leolan/k8s/$imageName;
    docker rmi registry.cn-shenzhen.aliyuncs.com/leolan/k8s/$imageName;
done

docker tag coredns/coredns:1.0.6 registry.cn-shenzhen.aliyuncs.com/leolan/k8s/coredns:1.0.6
docker tag quay.io/coreos/flannel:v0.10.0-amd64 registry.cn-shenzhen.aliyuncs.com/leolan/k8s/flannel:v0.10.0-amd64
docker push registry.cn-shenzhen.aliyuncs.com/leolan/k8s/coredns:1.0.6
docker push registry.cn-shenzhen.aliyuncs.com/leolan/k8s/flannel:v0.10.0-amd64
docker rmi registry.cn-shenzhen.aliyuncs.com/leolan/k8s/coredns:1.0.6
docker rmi registry.cn-shenzhen.aliyuncs.com/leolan/k8s/flannel:v0.10.0-amd64

Batch re-tagging images

#!/bin/bash
images=(
kube-proxy-amd64:v1.10.0
kube-apiserver-amd64:v1.10.0
kube-scheduler-amd64:v1.10.0
kube-controller-manager-amd64:v1.10.0
etcd-amd64:3.1.12
kubernetes-dashboard-amd64:v1.8.3
heapster-grafana-amd64:v4.4.3
heapster-influxdb-amd64:v1.3.3
heapster-amd64:v1.4.2
k8s-dns-sidecar-amd64:1.14.8
k8s-dns-kube-dns-amd64:1.14.8
k8s-dns-dnsmasq-nanny-amd64:1.14.8
pause-amd64:3.1
)
for imageName in ${images[@]} ; do
  docker pull registry.cn-shenzhen.aliyuncs.com/leolan/k8s/$imageName
  docker tag registry.cn-shenzhen.aliyuncs.com/leolan/k8s/$imageName k8s.gcr.io/$imageName
  docker rmi registry.cn-shenzhen.aliyuncs.com/leolan/k8s/$imageName
done

Container cluster monitoring and performance analysis

Heapster
Heapster is the container cluster monitoring and performance analysis tool maintained by the Kubernetes community. Heapster obtains the list of nodes from the Kubernetes apiserver, collects metrics from the kubelet on each node, stores everything in its InfluxDB backend, and Grafana then reads from InfluxDB to visualize the data.
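Once Heapster has been deployed (using any of the methods below), node and pod resource metrics can also be queried from the command line:

kubectl top node
kubectl top pod --all-namespaces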

# On k8sMaster01, create the Kubernetes monitoring stack with kubectl:
# Method 1
kubectl apply -f "https://kairen.github.io/files/manual-v1.10/addon/kube-monitor.yml.conf"
kubectl -n kube-system get po,svc

Once this is done, the Grafana dashboard can be reached in a browser at: https://192.168.10.5:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy/

# Method 2
kubectl create -f http://elven.vip/ks/k8s/oneinstall/yml/heapster/grafana.yaml
kubectl create -f http://elven.vip/ks/k8s/oneinstall/yml/heapster/heapster.yaml
kubectl create -f http://elven.vip/ks/k8s/oneinstall/yml/heapster/influxdb.yaml
kubectl create -f http://elven.vip/ks/k8s/oneinstall/yml/heapster-rbac.yaml

# Method 3
kubectl create -f kube-heapster/influxdb/
kubectl create -f kube-heapster/rbac/
kubectl get pods --all-namespaces
Open https://192.168.10.5:30000/#!/login to see the monitoring data.
Reference: https://www.kubernetes.org.cn/3808.html

Ingress Controller (exposes in-cluster services through load balancers such as Nginx or HAProxy)

Ingress

Ingress exposes in-cluster services through load balancers such as Nginx or HAProxy. An Ingress resource maps domain names to internal Kubernetes Services, which avoids having to open a NodePort for every service. A minimal rule example follows the controller setup below.

# On k8sMaster01, create the Ingress Controller with kubectl:
kubectl create ns ingress-nginx
kubectl apply -f "https://kairen.github.io/files/manual-v1.10/addon/ingress-controller.yml.conf"
kubectl -n ingress-nginx get po
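A minimal sketch of an Ingress rule, assuming a Service named my-service already exists on port 80 (the host name and service name are placeholders):

cat <<EOF > example-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  namespace: default
spec:
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80
EOF
kubectl apply -f example-ingress.yaml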

Traefik

Traefik's Ingress Controller can be used as an alternative.


Helm Tiller Server (release management)

Helm is the management tool for Kubernetes Charts, where a Chart is a pre-configured bundle of Kubernetes resources. The Tiller Server receives commands from the Helm client, talks to the Kubernetes cluster through the kube-apiserver, and, based on the contents of a Chart, creates and manages the corresponding API objects as a deployment unit known as a Release. A short usage sketch follows the installation steps below.

# First, install the Helm client on k8sMaster01:
wget -qO- https://kubernetes-helm.storage.googleapis.com/helm-v2.8.1-linux-amd64.tar.gz | tar -zx
mv linux-amd64/helm /usr/local/bin/

# Also install socat on every node:
yum install -y socat    # this guide targets CentOS; on Debian/Ubuntu use: apt-get install -y socat

# Then initialize Helm (this installs the Tiller Server):
kubectl -n kube-system create sa tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
# ...Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster. Happy Helming!
kubectl -n kube-system get po -l app=helm
helm version
# Client: &version.Version{SemVer:"v2.8.1", GitCommit:"6af75a8fd72e2aa18a2b278cfe5c7a1c5feca7f2", GitTreeState:"clean"}
# Server: &version.Version{SemVer:"v2.8.1", GitCommit:"6af75a8fd72e2aa18a2b278cfe5c7a1c5feca7f2", GitTreeState:"clean"}
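Once Tiller is running, a typical Helm v2 workflow looks roughly like this (the chart and release names are only examples):

helm repo update                                        # refresh the chart repositories
helm search nginx-ingress                               # look for a chart
helm install --name my-ingress stable/nginx-ingress    # install it as the release "my-ingress"
helm ls                                                 # list releases
helm delete --purge my-ingress                          # remove the release again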

References:
https://www.kubernetes.org.cn/3773.html
https://www.kubernetes.org.cn/3805.html
https://www.kubernetes.org.cn/3814.html
https://www.kubernetes.org.cn/3808.html
http://www.cnblogs.com/elvi/p/8976305.html
https://blog.linuxeye.cn/458.html


Ceph distributed storage

https://github.com/slpcat/docker-images/blob/master/ceph/kubernetes/README.md


Persistent storage with CephFS

https://mp.weixin.qq.com/s?__biz=MzIyNzUwMjM2MA==&mid=2247485611&idx=1&sn=e6d427e0b3ab6475595f9098c3920dad&chksm=e86178dcdf16f1cac5b85ba6f7f9a67f8b4ba77cfb2b5694a4734483dd0a2abac3e3598a402e&mpshare=1&scene=1&srcid=0601nj05pSzD2W2d0xJu0ej5#rd


Escalator

Escalator is an open-source Kubernetes autoscaling tool from Atlassian, designed for large batch and job-based workloads that cannot simply be evicted and moved when the cluster scales down; Escalator makes sure the pods on a node have finished before the node is terminated. It is also optimized for fast scale-up, so that pods do not sit in the pending state.

https://github.com/atlassian/escalator


Automated CI/CD

Tekton, Google's open-source Kubernetes-native CI/CD framework
Reference: https://www.leolan.top/index.php/posts/283.html
