Environment preparation:

Three virtual machines are used here, in a one-master, two-node cluster layout.
Hostname | IP | Role | Software deployed
---|---|---|---
k8s-master | 10.211.55.22 | master | apiserver, scheduler, controller-manager, etcd, flanneld
k8s-node-1 | 10.211.55.23 | node | kubelet, kube-proxy, etcd, flanneld
k8s-node-2 | 10.211.55.24 | node | kubelet, kube-proxy, etcd, flanneld
My VMs can already ping each other by hostname; if yours cannot, remember to add the entries to /etc/hosts on every machine.
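For reference, the /etc/hosts entries would look like this (IPs taken from the table above):

```
10.211.55.22 k8s-master
10.211.55.23 k8s-node-1
10.211.55.24 k8s-node-2
```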
```
[root@k8s-master ~]# ping k8s-node-1
PING k8s-node-1.localdomain (10.211.55.23) 56(84) bytes of data.
64 bytes from k8s-node-1.shared (10.211.55.23): icmp_seq=1 ttl=64 time=0.935 ms
64 bytes from k8s-node-1.shared (10.211.55.23): icmp_seq=2 ttl=64 time=0.282 ms
```
System version, and what must be disabled in this test environment (firewalld and SELinux):
```
[root@k8s-master ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@k8s-master ~]# uname -r
3.10.0-327.el7.x86_64
[root@k8s-master ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Sat 2020-03-14 17:52:39 CST; 3h 32min ago
 Main PID: 705 (code=exited, status=0/SUCCESS)

Mar 14 17:50:44 Daya-01 systemd[1]: Starting firewalld...
Mar 14 17:50:44 Daya-01 systemd[1]: Started firewalld ...
Mar 14 17:52:38 k8s-master systemd[1]: Stopping firewa...
Mar 14 17:52:39 k8s-master systemd[1]: Stopped firewal...
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-master ~]# getenforce
Disabled
```
Package download locations:

Package | Download URL
---|---
kubernetes-node-linux-amd64.tar.gz | https://dl.k8s.io/v1.15.1/kubernetes-node-linux-amd64.tar.gz
kubernetes-server-linux-amd64.tar.gz | https://dl.k8s.io/v1.15.1/kubernetes-server-linux-amd64.tar.gz
flannel-v0.11.0-linux-amd64.tar.gz | https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
etcd-v3.3.10-linux-amd64.tar.gz | https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
Disable swap (same on all three machines):
```
[root@k8s-master ~]# swapoff -a && sysctl -w vm.swappiness=0
vm.swappiness = 0
[root@k8s-master ~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
```
Set the kernel parameters Docker and Kubernetes need, then install and start Docker (same on all three machines):
```
cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
EOF
sysctl -p /etc/sysctl.d/kubernetes.conf

# Configure the yum repo
cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum clean all; yum repolist -y

# Install docker, version 18.06.2
yum install docker-ce-18.06.2.ce-3.el7 -y
systemctl start docker && systemctl enable docker
```
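As a sanity check after `sysctl -p`, the helper below (a sketch, not part of the original guide) compares each key in the conf file with the live value under /proc/sys:

```shell
# Sketch: print want/have pairs for every key=value line in a sysctl conf file.
check_sysctl() {   # usage: check_sysctl /etc/sysctl.d/kubernetes.conf
  while IFS='=' read -r key want; do
    [ -z "$key" ] && continue                        # skip blank lines
    path="/proc/sys/$(printf '%s' "$key" | tr . /)"  # net.ipv4.ip_forward -> net/ipv4/ip_forward
    have=$(cat "$path" 2>/dev/null || echo '?')
    printf '%s want=%s have=%s\n' "$key" "$want" "$have"
  done < "$1"
}
```

Run `check_sysctl /etc/sysctl.d/kubernetes.conf` and look for any line where want and have differ.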
Create the working directories (same on all three machines):
```
# Directories for downloaded packages and cert material
mkdir /data/{install,ssl_config} -p
mkdir /data/ssl_config/{etcd,kubernetes} -p

# Installation directories
mkdir /cloud/k8s/etcd/{bin,cfg,ssl} -p
mkdir /cloud/k8s/kubernetes/{bin,cfg,ssl} -p
```
Set up passwordless SSH from the master node:
```
[root@k8s-master .ssh]# ssh-copy-id k8s-master
The authenticity of host 'k8s-master (10.211.55.22)' can't be established.
ECDSA key fingerprint is ee:4e:aa:d1:10:bb:f5:ec:0f:19:73:63:90:42:b4:b4.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-master's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-master'"
and check to make sure that only the key(s) you wanted were added.

[root@k8s-master .ssh]# ssh-copy-id k8s-node-1
(same prompts and output as above)
[root@k8s-master .ssh]# ssh-copy-id k8s-node-2
(same prompts and output as above)
```

Verify:

```
[root@k8s-master ~]# for i in k8s-master k8s-node-1 k8s-node-2 ; do ssh $i hostname ; done
k8s-master
k8s-node-1
k8s-node-2
```
Add the binaries to PATH:
```
echo 'export PATH=$PATH:/cloud/k8s/etcd/bin/:/cloud/k8s/kubernetes/bin/' >> /etc/profile
```
Install cfssl to generate the etcd certificates (this step is done on the master node only):
```
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
```
Create the etcd CA config:
```
cd /data/ssl_config/etcd/
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
```
Create the etcd CA certificate signing request:
```
cat << EOF | tee ca-csr.json
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF
```
Create the etcd server CSR:
```
cat << EOF | tee server-csr.json
{
    "CN": "etcd",
    "hosts": [
    "k8s-master",
    "k8s-node-1",
    "k8s-node-2",
    "10.211.55.22",
    "10.211.55.23",
    "10.211.55.24"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF
```
Generate the etcd CA certificate, server certificate, and keys:
```
cd /data/ssl_config/etcd/
# Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# Generate the server certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
```
Create the Kubernetes CA config:
```
cd /data/ssl_config/kubernetes/
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
```
Create the Kubernetes CA CSR:
```
cat << EOF | tee ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
```
Create the kube-apiserver CSR (10.0.0.1 is the first address of the service-cluster-ip-range configured later):
```
cat << EOF | tee server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "10.211.55.22",
      "k8s-1",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
```
Create the kube-proxy CSR:
```
cat << EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
```
Generate the Kubernetes CA, apiserver, and kube-proxy certificates and keys:
```
# Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# Generate the api-server certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
# Generate the kube-proxy certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
```
Deploy the etcd cluster (same on all three machines):
```
cd /data/install/
tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /cloud/k8s/etcd/bin/
```
Edit the config file /cloud/k8s/etcd/cfg/etcd (same layout on all three nodes, but remember to change the member name and IP addresses):
```
### k8s-master node
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.211.55.22:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.211.55.22:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.211.55.22:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.211.55.22:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.211.55.22:2380,etcd02=https://10.211.55.23:2380,etcd03=https://10.211.55.24:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

### k8s-node-1
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.211.55.23:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.211.55.23:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.211.55.23:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.211.55.23:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.211.55.22:2380,etcd02=https://10.211.55.23:2380,etcd03=https://10.211.55.24:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

### k8s-node-2
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.211.55.24:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.211.55.24:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.211.55.24:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.211.55.24:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.211.55.22:2380,etcd02=https://10.211.55.23:2380,etcd03=https://10.211.55.24:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
```
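Since the three files differ only in the member name and node IP, they can be rendered from a single template. A sketch (`NODE_NAME` and `NODE_IP` are placeholders you set per machine; copy the rendered file to /cloud/k8s/etcd/cfg/etcd on each node):

```shell
# Render the per-node etcd config; only NODE_NAME and NODE_IP change per machine.
NODE_NAME=${NODE_NAME:-etcd01}       # etcd02 / etcd03 on the other nodes
NODE_IP=${NODE_IP:-10.211.55.22}     # 10.211.55.23 / 10.211.55.24
cat > etcd.conf <<EOF
#[Member]
ETCD_NAME="${NODE_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${NODE_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${NODE_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${NODE_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${NODE_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.211.55.22:2380,etcd02=https://10.211.55.23:2380,etcd03=https://10.211.55.24:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
```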
Create the etcd systemd unit (same on all three machines):
```
vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/cloud/k8s/etcd/cfg/etcd
ExecStart=/cloud/k8s/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/cloud/k8s/etcd/ssl/server.pem \
--key-file=/cloud/k8s/etcd/ssl/server-key.pem \
--peer-cert-file=/cloud/k8s/etcd/ssl/server.pem \
--peer-key-file=/cloud/k8s/etcd/ssl/server-key.pem \
--trusted-ca-file=/cloud/k8s/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/cloud/k8s/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
Put the certificates in place and copy everything to the node machines:
```
cd /data/ssl_config/etcd/
cp ca*pem server*pem /cloud/k8s/etcd/ssl
cd /cloud/k8s/
scp -r etcd k8s-node-1:/cloud/k8s/
scp -r etcd k8s-node-2:/cloud/k8s/
scp /usr/lib/systemd/system/etcd.service k8s-node-1:/usr/lib/systemd/system/etcd.service
scp /usr/lib/systemd/system/etcd.service k8s-node-2:/usr/lib/systemd/system/etcd.service
```
Start etcd on all three machines (roughly at the same time, so the cluster can form) and check cluster health:
```
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

[root@k8s-master bin]# ./etcdctl --ca-file=/cloud/k8s/etcd/ssl/ca.pem --cert-file=/cloud/k8s/etcd/ssl/server.pem --key-file=/cloud/k8s/etcd/ssl/server-key.pem cluster-health
member 67f5fb1fce7850cb is healthy: got healthy result from https://10.211.55.23:2379
member bd1ce44b83380692 is healthy: got healthy result from https://10.211.55.22:2379
member ddc9604a558260cb is healthy: got healthy result from https://10.211.55.24:2379
cluster is healthy
```
Deploy the flannel network:
Write the pod network configuration into the etcd cluster (master node):
```
[root@k8s-master bin]# cd /cloud/k8s/etcd/bin/
[root@k8s-master bin]# ./etcdctl --ca-file=/cloud/k8s/etcd/ssl/ca.pem --cert-file=/cloud/k8s/etcd/ssl/server.pem --key-file=/cloud/k8s/etcd/ssl/server-key.pem --endpoints="https://10.211.55.22:2379,https://10.211.55.23:2379,https://10.211.55.24:2379" set /coreos.com/network/config '{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}
```
Install and configure flannel:
```
cd /data/install/
tar xf flannel-v0.11.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /cloud/k8s/kubernetes/bin/

[root@k8s-master cfg]# cat flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://10.211.55.22:2379,https://10.211.55.23:2379,https://10.211.55.24:2379 -etcd-cafile=/cloud/k8s/etcd/ssl/ca.pem -etcd-certfile=/cloud/k8s/etcd/ssl/server.pem -etcd-keyfile=/cloud/k8s/etcd/ssl/server-key.pem"
```
Create the flanneld systemd unit:
```
vim /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/cloud/k8s/kubernetes/cfg/flanneld
ExecStart=/cloud/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/cloud/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
Configure Docker to start on the flannel-assigned subnet:
```
vim /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
```
Sync the required files to the other node machines:
```
cd /cloud/k8s/
scp -r kubernetes k8s-node-1:/cloud/k8s/
scp -r kubernetes k8s-node-2:/cloud/k8s/
scp /cloud/k8s/kubernetes/cfg/flanneld k8s-node-1:/cloud/k8s/kubernetes/cfg/flanneld
scp /cloud/k8s/kubernetes/cfg/flanneld k8s-node-2:/cloud/k8s/kubernetes/cfg/flanneld
scp /usr/lib/systemd/system/docker.service k8s-node-1:/usr/lib/systemd/system/docker.service
scp /usr/lib/systemd/system/docker.service k8s-node-2:/usr/lib/systemd/system/docker.service
scp /usr/lib/systemd/system/flanneld.service k8s-node-1:/usr/lib/systemd/system/flanneld.service
scp /usr/lib/systemd/system/flanneld.service k8s-node-2:/usr/lib/systemd/system/flanneld.service

# Start the services (on every machine)
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker
```
Verify that flannel is working:
```
[root@k8s-master bin]# ip a|grep flannel
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
    inet 172.18.15.0/32 scope global flannel.1
```
Deploy the master node
The master needs three key components: kube-apiserver, kube-scheduler, and kube-controller-manager. The scheduler and controller-manager can run as multiple replicas in a cluster; leader election picks one active working instance while the other instances block in standby.
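In this Kubernetes generation the active instance records itself in a leader-election annotation on an Endpoints object in kube-system. The helper below is a sketch (the annotation key and jsonpath are assumptions for k8s v1.15, and the usage line requires a working `kubectl` against this cluster):

```shell
# Pull holderIdentity out of the leader-election annotation JSON.
leader_holder() { sed -n 's/.*"holderIdentity":"\([^"]*\)".*/\1/p'; }

# Usage sketch:
#   kubectl -n kube-system get ep kube-scheduler \
#     -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}' \
#     | leader_holder
```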
Put the binaries and certificates in place:
```
cd /data/install/
tar xf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-scheduler kube-apiserver kube-controller-manager kubectl /cloud/k8s/kubernetes/bin/
cd /data/ssl_config/kubernetes/
cp *pem /cloud/k8s/kubernetes/ssl/
```
Deploy the kube-apiserver component:
```
# Create the TLS Bootstrapping token
# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
88029d7c2d35a6621699685cb25b466d

# This token is referenced later in the kubelet config;
# if it is wrong, the kubelet cannot authenticate to the apiserver
# vim /cloud/k8s/kubernetes/cfg/token.csv
88029d7c2d35a6621699685cb25b466d,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
```
Create the apiserver config file:
```
vim /cloud/k8s/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://10.211.55.22:2379,https://10.211.55.23:2379,https://10.211.55.24:2379 \
--bind-address=10.211.55.22 \
--secure-port=6443 \
--advertise-address=10.211.55.22 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/cloud/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/cloud/k8s/kubernetes/ssl/server.pem \
--tls-private-key-file=/cloud/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/cloud/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/cloud/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/cloud/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/cloud/k8s/etcd/ssl/server-key.pem"
```
Create the kube-apiserver systemd unit:
```
vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/cloud/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/cloud/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
Start the service and verify:
```
[root@k8s-master kubernetes]# systemctl daemon-reload
[root@k8s-master kubernetes]# systemctl start kube-apiserver.service
[root@k8s-master kubernetes]# ps -ef |grep apiserver
root      8225     1 23 19:48 ?        00:00:04 /cloud/k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://10.211.55.22:2379,https://10.211.55.23:2379,https://10.211.55.24:2379 --bind-address=10.211.55.22 --secure-port=6443 --advertise-address=10.211.55.22 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth --token-auth-file=/cloud/k8s/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/cloud/k8s/kubernetes/ssl/server.pem --tls-private-key-file=/cloud/k8s/kubernetes/ssl/server-key.pem --client-ca-file=/cloud/k8s/kubernetes/ssl/ca.pem --service-account-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem --etcd-cafile=/cloud/k8s/etcd/ssl/ca.pem --etcd-certfile=/cloud/k8s/etcd/ssl/server.pem --etcd-keyfile=/cloud/k8s/etcd/ssl/server-key.pem
root      8261  1518  0 19:49 pts/0    00:00:00 grep --color=auto apiserver
```
Deploy kube-scheduler
Edit the config file:
```
vim /cloud/k8s/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
```
Create the kube-scheduler systemd unit:
```
vim /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/cloud/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/cloud/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
Start the service and verify:
```
[root@k8s-master kubernetes]# systemctl daemon-reload
[root@k8s-master kubernetes]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@k8s-master kubernetes]# systemctl restart kube-scheduler.service
[root@k8s-master kubernetes]# systemctl status kube-scheduler.service
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2020-03-14 19:52:49 CST; 19s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 8635 (kube-scheduler)
   Memory: 45.0M
   CGroup: /system.slice/kube-scheduler.service
           └─8635 /cloud/k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect

Mar 14 19:52:51 k8s-master kube-scheduler[8635]: I0314 19:52:51.280532    8635 shared_informer.go:176] caches populated
Mar 14 19:52:51 k8s-master kube-scheduler[8635]: I0314 19:52:51.883621    8635 leaderelection.go:235] attempting to acquire leader lease  kube-system/kube-scheduler...
Mar 14 19:52:51 k8s-master kube-scheduler[8635]: I0314 19:52:51.895745    8635 leaderelection.go:245] successfully acquired lease kube-system/kube-scheduler
Mar 14 19:52:51 k8s-master kube-scheduler[8635]: I0314 19:52:51.997305    8635 shared_informer.go:176] caches populated
```
Deploy kube-controller-manager:
Edit the config file:
```
vim /cloud/k8s/kubernetes/cfg/kube-controller-manager

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/cloud/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem \
--root-ca-file=/cloud/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem"
```
Create the kube-controller-manager systemd unit:
```
vim /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/cloud/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/cloud/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
Start the service and verify:
```
[root@k8s-master kubernetes]# systemctl daemon-reload
[root@k8s-master kubernetes]# systemctl enable kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@k8s-master kubernetes]# systemctl restart kube-controller-manager
[root@k8s-master kubernetes]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2020-03-14 19:55:16 CST; 6s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 8982 (kube-controller)
   Memory: 121.9M
   CGroup: /system.slice/kube-controller-manager.service
           └─8982 /cloud/k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-...

Mar 14 19:55:19 k8s-master kube-controller-manager[8982]: I0314 19:55:19.739702    8982 garbagecollector.go:205] reset restmapper
Mar 14 19:55:19 k8s-master kube-controller-manager[8982]: I0314 19:55:19.739768    8982 graph_builder.go:220] synced monitors; added 0, kept 49, removed 0
Mar 14 19:55:19 k8s-master kube-controller-manager[8982]: I0314 19:55:19.739782    8982 graph_builder.go:252] started 0 new monitors, 49 currently running
Mar 14 19:55:19 k8s-master kube-controller-manager[8982]: I0314 19:55:19.739787    8982 garbagecollector.go:220] resynced monitors
Mar 14 19:55:19 k8s-master kube-controller-manager[8982]: I0314 19:55:19.739805    8982 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
Mar 14 19:55:19 k8s-master kube-controller-manager[8982]: I0314 19:55:19.840223    8982 controller_utils.go:1036] Caches are synced for garbage collector controller
Mar 14 19:55:19 k8s-master kube-controller-manager[8982]: I0314 19:55:19.840234    8982 garbagecollector.go:240] synced garbage collector
Hint: Some lines were ellipsized, use -l to show in full.
```
Check the master's component status:
```
kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
```
Now deploy the node machines:
Each node needs Docker, kubelet, and kube-proxy. The kubelet receives requests from the apiserver and manages pods (starting containers, running exec sessions, and so on); kube-proxy watches the apiserver for service and endpoint changes and programs routing rules to load-balance traffic to services.
Copy the required files to the node machines (the bootstrap and proxy kubeconfig files below are generated on the master in the following steps; the nodes also need the kubelet and kube-proxy binaries, from the node tarball downloaded earlier, in /cloud/k8s/kubernetes/bin/):
```
[root@k8s-master kubernetes]# scp -rp bootstrap.kubeconfig kube-proxy.kubeconfig k8s-node-1:/cloud/k8s/kubernetes/cfg/
bootstrap.kubeconfig                          100% 2166     2.1KB/s   00:00
kube-proxy.kubeconfig                         100% 6272     6.1KB/s   00:00
[root@k8s-master kubernetes]# scp -rp bootstrap.kubeconfig kube-proxy.kubeconfig k8s-node-2:/cloud/k8s/kubernetes/cfg/
bootstrap.kubeconfig                          100% 2166     2.1KB/s   00:00
kube-proxy.kubeconfig                         100% 6272     6.1KB/s   00:00
```
Create the kubelet bootstrap.kubeconfig file (BOOTSTRAP_TOKEN must match the token in token.csv created earlier):
```
[root@k8s-master kubernetes]# cat environment.sh
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=449bbeb0ea7e50f321087a123a509a19
KUBE_APISERVER="https://10.211.55.22:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
```
Run the script to generate the config file:
```
sh environment.sh
```
Create the kubelet.kubeconfig file:
```
vim envkubelet.kubeconfig.sh

# Create the kubelet kubeconfig
BOOTSTRAP_TOKEN=449bbeb0ea7e50f321087a123a509a19
KUBE_APISERVER="https://10.211.55.22:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kubelet.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=kubelet.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet \
  --kubeconfig=kubelet.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=kubelet.kubeconfig
```
Run the script:
```
sh envkubelet.kubeconfig.sh
```
Create the kube-proxy kubeconfig file:
```
vim env_proxy.sh

# Create the kube-proxy kubeconfig file
BOOTSTRAP_TOKEN=449bbeb0ea7e50f321087a123a509a19
KUBE_APISERVER="https://10.211.55.22:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
```
Run the script:
```
sh env_proxy.sh
```
Copy the generated files to all node machines:
```
scp -rp bootstrap.kubeconfig kube-proxy.kubeconfig k8s-node-1:/cloud/k8s/kubernetes/cfg/
scp -rp bootstrap.kubeconfig kube-proxy.kubeconfig k8s-node-2:/cloud/k8s/kubernetes/cfg/
```
Deploy the kubelet (on both node machines):
Create the kubelet parameter config template (set address to each node's own IP):
```
vim /cloud/k8s/kubernetes/cfg/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.211.55.23
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
```
Create the kubelet options file (set --hostname-override to each node's own hostname):
```
vim /cloud/k8s/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=k8s-node-1 \
--kubeconfig=/cloud/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/cloud/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/cloud/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/cloud/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
```
Create the kubelet systemd unit:
```
vim /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/cloud/k8s/kubernetes/cfg/kubelet
ExecStart=/cloud/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
```
On the master node, bind the kubelet-bootstrap user to the system:node-bootstrapper cluster role:
```
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
```
Start the kubelet on the node machines:
```
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
```
If you see an error like this after starting:
```
Mar 14 20:16:54 k8s-node-1 kubelet: I0314 20:16:54.509597    7907 bootstrap.go:148] No valid private key and/or certificate found, reusing existing private key or creating a new one
Mar 14 20:16:54 k8s-node-1 kubelet: I0314 20:16:54.521982    7907 bootstrap.go:293] Failed to connect to apiserver: the server has asked for the client to provide credentials
```
check whether the token in bootstrap.kubeconfig was copied incorrectly; a mismatch with token.csv makes authentication to the apiserver fail.
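A quick way to check is to compare the first field of token.csv with the token embedded in bootstrap.kubeconfig. The helper below is a sketch using the paths from this guide:

```shell
# Succeeds only when the token in token.csv matches the one in bootstrap.kubeconfig.
check_bootstrap_token() {   # usage: check_bootstrap_token <token.csv> <bootstrap.kubeconfig>
  csv_token=$(cut -d, -f1 "$1")
  cfg_token=$(sed -n 's/^ *token: *//p' "$2" | head -n1)
  [ -n "$csv_token" ] && [ "$csv_token" = "$cfg_token" ]
}

# e.g. check_bootstrap_token /cloud/k8s/kubernetes/cfg/token.csv \
#        /cloud/k8s/kubernetes/cfg/bootstrap.kubeconfig && echo token-ok
```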
Approve the kubelets' registration requests (CSRs) on the master:
```
[root@k8s-master bin]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-Hdu4yWbOWhQY6N1vYxp-7qWWOWMzs84IdJqVfDoRmxU   14s   kubelet-bootstrap   Pending
node-csr-TKCdht4JTnx57jyMp7qvhnW79L2ermZhRB01QrgzP9A   79s   kubelet-bootstrap   Pending
[root@k8s-master bin]# kubectl certificate approve node-csr-Hdu4yWbOWhQY6N1vYxp-7qWWOWMzs84IdJqVfDoRmxU node-csr-TKCdht4JTnx57jyMp7qvhnW79L2ermZhRB01QrgzP9A
certificatesigningrequest.certificates.k8s.io/node-csr-Hdu4yWbOWhQY6N1vYxp-7qWWOWMzs84IdJqVfDoRmxU approved
certificatesigningrequest.certificates.k8s.io/node-csr-TKCdht4JTnx57jyMp7qvhnW79L2ermZhRB01QrgzP9A approved
[root@k8s-master bin]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-Hdu4yWbOWhQY6N1vYxp-7qWWOWMzs84IdJqVfDoRmxU   114s    kubelet-bootstrap   Approved,Issued
node-csr-TKCdht4JTnx57jyMp7qvhnW79L2ermZhRB01QrgzP9A   2m59s   kubelet-bootstrap   Approved,Issued
```
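With more nodes, approving one CSR at a time gets tedious. A small sketch that filters the Pending names out of `kubectl get csr` output so they can be piped to `kubectl certificate approve`:

```shell
# Print the NAME column of every row whose CONDITION column is Pending
# (NR>1 skips the header line).
pending_csrs() { awk 'NR>1 && $NF=="Pending" {print $1}'; }

# Usage sketch:
#   kubectl get csr | pending_csrs | xargs -r kubectl certificate approve
```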
Cluster node status at this point:
```
[root@k8s-master bin]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-node-1   Ready    <none>   59s   v1.15.1
k8s-node-2   Ready    <none>   59s   v1.15.1
```
Deploy the kube-proxy component:
Edit the config file (set --hostname-override to each node's own hostname):
```
vim /cloud/k8s/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=k8s-2 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/cloud/k8s/kubernetes/cfg/kube-proxy.kubeconfig"
```
Create the kube-proxy systemd unit:
```
vim /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/cloud/k8s/kubernetes/cfg/kube-proxy
ExecStart=/cloud/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
Start and check the service:
```
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy

[root@k8s-node-1 ~]# systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2020-03-14 21:14:43 CST; 1h 48min ago
 Main PID: 13129 (kube-proxy)
   Memory: 8.8M
   CGroup: /system.slice/kube-proxy.service
           └─13129 /cloud/k8s/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=k8s-2 --cluster-cidr=10.0.0.0/24 --kubeconfig=/cloud/k8s/kubernetes/cfg/ku...

Mar 14 23:03:08 k8s-node-1 kube-proxy[13129]: I0314 23:03:08.025576   13129 config.go:132] Calling handler.OnEndpointsUpdate
Mar 14 23:03:08 k8s-node-1 kube-proxy[13129]: I0314 23:03:08.092317   13129 config.go:132] Calling handler.OnEndpointsUpdate
Mar 14 23:03:10 k8s-node-1 kube-proxy[13129]: I0314 23:03:10.034811   13129 config.go:132] Calling handler.OnEndpointsUpdate
Mar 14 23:03:16 k8s-node-1 kube-proxy[13129]: I0314 23:03:16.132973   13129 config.go:132] Calling handler.OnEndpointsUpdate
```