Method 1: Minikube. Minikube is a tool that quickly runs a single-node Kubernetes locally, aimed at users who want to try Kubernetes or use it for day-to-day development. It cannot be used for production. Official docs: https://kubernetes.io/docs/setup/minikube/
Method 2: kubeadm. Kubeadm is also a tool; it provides kubeadm init and kubeadm join for quickly deploying a Kubernetes cluster. Official docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
Method 3: binary packages. Download the official release binaries and deploy each component by hand to assemble a Kubernetes cluster.
Summary: for production deployments, only kubeadm and binary packages are real options. Kubeadm lowers the barrier to entry, but it hides many details, which makes problems hard to troubleshoot. This guide deploys Kubernetes from binary packages, and that is also the approach I recommend: manual deployment is more work, but you learn how the components actually fit together, which pays off in later maintenance.
Lab environment. OS version: CentOS 7.6.1810 x86_64
Kubernetes version: v1.13
Kubernetes node version: v1.13
Docker version: docker-ce.x86_64 0:18.03.1.ce-1.el7.centos
Note: ideally, use the current stable release of both.
Disable the firewall and prevent it from starting on boot
systemctl stop firewalld.service
systemctl disable firewalld
Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
Note: this takes effect after a reboot; run setenforce 0 to turn SELinux off for the current session.
Disable swap
swapoff -a       # temporary
vi /etc/fstab    # permanent: comment out the swap line
Reboot the machine:
reboot
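Editing /etc/fstab by hand works, but the change can also be scripted. A minimal sketch (the helper name `disable_swap_entry` is mine; it assumes the default CentOS 7 fstab layout, where the swap line has a whitespace-delimited "swap" field):

```shell
#!/bin/sh
# disable_swap_entry FILE — comment out any active swap line in FILE.
# Sketch only: assumes the swap mount line contains " swap " as a field,
# as in a default CentOS 7 /etc/fstab.
disable_swap_entry() {
  sed -ri 's@^([^#].*[[:space:]]swap[[:space:]])@#\1@' "$1"
}

# Typical use on a real node (requires root):
#   cp /etc/fstab /etc/fstab.bak
#   disable_swap_entry /etc/fstab
#   swapoff -a
```

Backing up /etc/fstab first is cheap insurance; a malformed fstab can make the machine unbootable.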
Server roles and hostname planning
1. Set the hostname
cat /etc/hostname
master
2. Map the host IPs
cat /etc/hosts
192.168.150.101 master
192.168.150.104 node1
192.168.150.105 node2
Note: do the same on the node machines; just remember to change the hostname to node1/node2.
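The /etc/hosts edits above can also be made idempotent, so re-running the setup does not duplicate entries. A sketch (the helper name `add_host_entry` is mine):

```shell
#!/bin/sh
# add_host_entry FILE IP NAME — append "IP NAME" to FILE unless NAME
# already appears as a whole word, so repeated runs do not add duplicates.
add_host_entry() {
  grep -qw "$3" "$1" || printf '%s %s\n' "$2" "$3" >> "$1"
}

# On every machine (requires root; FILE is normally /etc/hosts):
#   add_host_entry /etc/hosts 192.168.150.101 master
#   add_host_entry /etc/hosts 192.168.150.104 node1
#   add_host_entry /etc/hosts 192.168.150.105 node2
```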
Create the etcd certificates
1. Download the cfssl tools
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
Note: cfssl is used to generate the self-signed certificates.
2. Grant execute permission
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
3. Move the cfssl tools into place
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
4. Create the etcd certificate files
Note: do not create these files on every node — generate them only on the master node. I learned this the hard way: after generating them separately on each node, connecting to the etcd cluster later failed with certificate errors.
1) Create the CA signing config
cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
2) Create the CA certificate signing request
cat ca-csr.json
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
3) Create the server certificate signing request
cat server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "192.168.150.101",
    "192.168.150.104",
    "192.168.150.105"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}

5. Generate the certificates
1) Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2) Generate the server certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
Note: the commands above assume the certificate files are in the current directory; otherwise, specify absolute paths.
Deploy etcd
Note: the following four steps must be performed on every etcd node.
1) Download the etcd binary package
wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
Note: if the link is unreachable, use https://pan.baidu.com/s/1HWwIQT-j5u7g-UDtODvWqw  extraction code: gg5m
2) Create the planned etcd directories
mkdir /opt/etcd/{bin,cfg,ssl} -p
3) Extract the etcd binary package
tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
4) Move the main etcd binaries into the planned directory
mv etcd-v3.3.10-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
2. Create the etcd configuration file
Note: create this file on every etcd node, but be sure to change the IP addresses to each node's own.
cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.150.101:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.150.101:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.150.101:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.150.101:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.150.101:2380,etcd02=https://192.168.150.104:2380,etcd03=https://192.168.150.105:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Notes on the variables:
ETCD_NAME — the member's unique name within the cluster (etcd01/etcd02/etcd03 here)
ETCD_DATA_DIR — data directory
ETCD_LISTEN_PEER_URLS — address this member listens on for peer traffic
ETCD_LISTEN_CLIENT_URLS — address this member listens on for client traffic
ETCD_INITIAL_ADVERTISE_PEER_URLS — peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS — client address advertised to the cluster
ETCD_INITIAL_CLUSTER — the addresses of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN — cluster token
ETCD_INITIAL_CLUSTER_STATE — "new" when bootstrapping a new cluster, "existing" when joining one that already exists
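Since only ETCD_NAME and the member's own IP differ between nodes, the per-node file can be generated rather than edited by hand. A sketch (the function name `render_etcd_cfg` is mine; the membership list is hard-coded to the three hosts used in this guide):

```shell
#!/bin/sh
# render_etcd_cfg NAME IP — print the /opt/etcd/cfg/etcd file for one member.
# Only the member name and its own IP vary; the cluster list stays fixed.
render_etcd_cfg() {
  cat <<EOF
#[Member]
ETCD_NAME="$1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://$2:2380"
ETCD_LISTEN_CLIENT_URLS="https://$2:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://$2:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://$2:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.150.101:2380,etcd02=https://192.168.150.104:2380,etcd03=https://192.168.150.105:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
}

# e.g. on node1:  render_etcd_cfg etcd02 192.168.150.104 > /opt/etcd/cfg/etcd
```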
3. Create the etcd startup file
Note: create this file on every etcd node.
cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

4. Copy the certificates into place
cp ca*pem server*pem /opt/etcd/ssl
scp ca*pem server*pem root@192.168.150.104:/opt/etcd/ssl
scp ca*pem server*pem root@192.168.150.105:/opt/etcd/ssl
Note: also copy the certificates to the same location on the other etcd nodes with scp, as in the last two commands above.
5. Start etcd and enable it on boot
systemctl start etcd && systemctl enable etcd
Note: the first node will appear to hang when started. This is normal: etcd is waiting to find the other cluster members. As soon as the second node is started, both will finish successfully.
6. Set the etcd environment variable
1) Append the etcd bin directory to PATH
echo "PATH=$PATH:/opt/etcd/bin/" >>/etc/profile
2) Reload the profile
source /etc/profile
Note: setting the PATH makes the etcd commands usable globally, so do this on every etcd node.
7. Check the etcd cluster health
etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.150.101:2379,https://192.168.150.104:2379,https://192.168.150.105:2379" cluster-health
Note: when running this command, make sure the certificate paths resolve — either cd into the certificate directory or pass absolute paths, otherwise etcdctl will report that it cannot find the certificates.
Note: if the output reports every member and the cluster as healthy, the etcd deployment succeeded. If there is a problem, check the logs first: /var/log/messages or journalctl -u etcd
Install Docker
Note: Docker only needs to be installed on the node machines.
1. Install the prerequisites
yum install -y yum-utils device-mapper-persistent-data lvm2
2. Add the Docker repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
3. Install Docker
yum -y install docker-ce
4. Add a domestic registry mirror
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://bc437cce.m.daocloud.io
5. Start Docker and enable it on boot
systemctl start docker && systemctl enable docker
Deploy the Flannel network
1. Write the predefined subnet into etcd
etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.150.101:2379,https://192.168.150.104:2379,https://192.168.150.105:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
Note: run this command on the master node. It writes the value into etcd, so it only needs to be executed once for the cluster.
2. Install flannel
Note: run all of the following commands on every node.
1) Download the flannel package
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
Note: if the link is unreachable, use https://pan.baidu.com/s/1HWwIQT-j5u7g-UDtODvWqw  extraction code: gg5m
2) Extract the package
tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
3) Create the planned flannel directories
mkdir -p /opt/flannel/{bin,cfg}
4) Move the flannel binaries into the planned directory
mv flanneld mk-docker-opts.sh /opt/flannel/bin/
3. Create the flannel configuration file
Note: run this on every node.
cat /opt/flannel/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.150.101:2379,https://192.168.150.104:2379,https://192.168.150.105:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
4. Create the flannel startup file
Note: run this on every node.
cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/flannel/cfg/flanneld
ExecStart=/opt/flannel/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/flannel/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

5. Configure Docker to start in the flannel-assigned subnet
Note: run this on every node.
cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

6. Start flannel and restart Docker
Note: run this on every node.
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker
7. Check flannel
Note: run this on every node.
1) Check the interfaces: docker0 should now sit inside the flannel subnet
ip a
2) Ping the flannel IPs between nodes
Note: if the nodes can ping each other's flannel addresses, the overlay network is connected and the flannel deployment succeeded!
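flanneld records the subnet it leased for the node in /run/flannel/subnet.env, which mk-docker-opts.sh turns into the $DOCKER_NETWORK_OPTIONS used above. When debugging, it helps to read the leased subnet back out; a sketch (the helper name `flannel_subnet` is mine):

```shell
#!/bin/sh
# flannel_subnet FILE — print the subnet flanneld leased for this node,
# as recorded in FILE (normally /run/flannel/subnet.env). Compare it with
# the docker0 address from `ip a` to confirm Docker restarted inside it.
flannel_subnet() {
  sed -n 's/^FLANNEL_SUBNET=//p' "$1"
}

# e.g.:  flannel_subnet /run/flannel/subnet.env
```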
Deploy the k8s server side: certificates
Note: all of the following commands are run on the master.
1. Create the CA certificate
1) Create the CA signing config
cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
2) Create the CA certificate signing request
cat ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
3) Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2. Create the apiserver certificate
1) Create the apiserver certificate signing request
cat server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.150.101",
    "192.168.150.104",
    "192.168.150.105",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
2) Generate the apiserver certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
3. Create the kube-proxy certificate
1) Create the kube-proxy certificate signing request
cat kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
2) Generate the kube-proxy certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
4. Check the generated certificates (e.g. with ls *.pem)

Deploy the k8s server side: apiserver component
1. Download the Kubernetes server binary package
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md
Note: if the link is unreachable, use https://pan.baidu.com/s/1HWwIQT-j5u7g-UDtODvWqw  extraction code: gg5m
2. Extract the package
tar zxvf kubernetes-server-linux-amd64.tar.gz
3. Create the planned directories
mkdir -p /opt/kubernetes/{bin,cfg,ssl}
4. Copy the certificates
cp server.pem server-key.pem ca.pem ca-key.pem /opt/kubernetes/ssl/
5. Copy the binaries into the planned directory
cp kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kubectl,kubelet,kube-proxy,kube-scheduler} /opt/kubernetes/bin/
6. Create the token file
cat /opt/kubernetes/cfg/token.csv
674c457d4dcf2eefe4920d7dbb6b0ddc,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
Note: the fields are token, user name, UID, and group. How tokens are created and used in more depth is not covered in detail here.
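For reference, the token itself is just an arbitrary 32-character hex string. A fresh token and a matching token.csv line in the format shown above can be produced like this (a sketch, not the only way to do it):

```shell
#!/bin/sh
# Generate a random 32-hex-character bootstrap token and print a token.csv
# line in the format used above: token,user,uid,"group".
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' \n')
printf '%s,kubelet-bootstrap,10001,"system:kubelet-bootstrap"\n' "$BOOTSTRAP_TOKEN"
```

If you generate a new token, remember to use the same value later when building bootstrap.kubeconfig.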
7. Create the apiserver configuration file
cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.150.101:2379,https://192.168.150.104:2379,https://192.168.150.105:2379 \
--bind-address=192.168.150.101 \
--secure-port=6443 \
--advertise-address=192.168.150.101 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
8. Create the apiserver startup file
cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

9. Start the apiserver
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
Deploy the k8s server side: scheduler component
1. Create the scheduler configuration file
cat /opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"
2. Create the scheduler startup file
cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

3. Start the scheduler
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
Deploy the k8s server side: controller-manager component
1. Create the controller-manager configuration file. A typical configuration for this setup (the cluster-signing flags let the controller-manager issue the kubelet certificates approved later):
cat /opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"
2. Create the controller-manager startup file
cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

3. Start the controller-manager
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
Deploy the k8s server side: verification
1. Set the environment variable
echo "PATH=$PATH:/opt/kubernetes/bin/" >> /etc/profile
source /etc/profile
2. Check the current cluster component status
kubectl get cs
Deploy k8s node1/node2
1. Download the node binary package
Note: if the link is unreachable, use https://pan.baidu.com/s/1HWwIQT-j5u7g-UDtODvWqw  extraction code: gg5m
2. Create the planned directories
mkdir -p /opt/kubernetes/{bin,cfg,ssl}
3. Extract the package
tar zxvf kubernetes-node-linux-amd64.tar.gz
4. Copy the binaries into the planned directory
cp kubernetes/node/bin/* /opt/kubernetes/bin/
5. Set the environment variable
echo "PATH=$PATH:/opt/kubernetes/bin/" >> /etc/profile
source /etc/profile
6. Bind the kubelet-bootstrap user to the system cluster role
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
Note: run this command on the master.
7. Create the kubeconfig files
Note: all of the following commands are executed on the master.
1) Set the variables
BOOTSTRAP_TOKEN=674c457d4dcf2eefe4920d7dbb6b0ddc
KUBE_APISERVER="https://192.168.150.101:6443"
2) Set the cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=./ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=bootstrap.kubeconfig
3) Set the client authentication parameters
kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=bootstrap.kubeconfig
4) Set the context parameters
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig
5) Select the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
6) Create the kube-proxy kubeconfig file
Step 1:
kubectl config set-cluster kubernetes --certificate-authority=./ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=kube-proxy.kubeconfig
Step 2:
kubectl config set-credentials kube-proxy --client-certificate=./kube-proxy.pem --client-key=./kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
Step 3:
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
Step 4:
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Step 5:
Note: copy these two files (bootstrap.kubeconfig and kube-proxy.kubeconfig) into /opt/kubernetes/cfg on the node machines.
Step 6:
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.150.104:/opt/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.150.105:/opt/kubernetes/cfg/
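The two scp commands can also be generated from a node list, which makes adding another node later a one-line change. A sketch (the NODES variable and the `emit_copy_cmds` helper are mine; it prints the commands so they can be reviewed, or piped to sh, instead of run blindly):

```shell
#!/bin/sh
# emit_copy_cmds — print one scp command per node IP in NODES for
# distributing the two kubeconfig files to /opt/kubernetes/cfg.
NODES="192.168.150.104 192.168.150.105"
emit_copy_cmds() {
  for ip in $NODES; do
    printf 'scp bootstrap.kubeconfig kube-proxy.kubeconfig root@%s:/opt/kubernetes/cfg/\n' "$ip"
  done
}

# Review first:   emit_copy_cmds
# Then execute:   emit_copy_cmds | sh
```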
Deploy kubelet on node1/node2
1. Create the kubelet configuration file
cat /opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.150.104 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
Note: when creating this on node2, remember to change the IP to node2's own address.
2. Create the kubelet.config parameter file
cat /opt/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.150.104
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
Note: when creating this on node2, remember to change the IP to node2's own address.
3. Create the kubelet startup file
cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

4. Start kubelet
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
5. Approve the nodes joining the cluster on the master
Note: all of the following is done on the master.
1) List the pending certificate signing requests
kubectl get csr
2) Approve the requests so the nodes can join
kubectl certificate approve node-csr-BXnxQ93UK5Hr1ggXBKgVwvLtYZSNkl-pqmYLhmqW-a8
kubectl certificate approve node-csr-wYihsL1QYL8hvvQ7km3B0uBI37cgWNhrc4EyYCem68U
Deploy kube-proxy on node1/node2
1. Create the kube-proxy configuration file
cat /opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.150.104 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
Note: when creating this on node2, remember to change the IP.
2. Create the kube-proxy startup file
cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

3. Start kube-proxy
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
4. Check that the nodes have joined the cluster
kubectl get node
Deployment complete: testing
1. Create an Nginx web deployment
kubectl run nginx --image=nginx --replicas=3
2. Expose the web deployment on a mapped port
kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
3. Check the pod status
kubectl get pods
kubectl get svc
4. Test in a browser
http://192.168.150.104:32121/
http://192.168.150.105:32121/
Note: the service is reachable through either node. The NodePort (32121 in this example) is assigned by Kubernetes; read the actual value from the kubectl get svc output.