1. Background

Deploying a Kubernetes cluster by hand from the binaries has several advantages over using automated tools or packaged distributions:

- Customizability: a manual deployment lets you tailor every component of the cluster. You pick the exact versions, configuration options, and plugins to match your requirements, and can fine-tune each piece.
- Understanding and control: building the cluster by hand gives you a much deeper view of each component and of how Kubernetes works internally, which pays off in troubleshooting and performance tuning.
- Education and learning: a manual build is valuable for study; it walks you through networking, storage, scheduling, security, and the rest of the stack.
- Flexibility: you choose the hardware and network layout that fit your environment and constraints, and can shape the deployment around specific performance, availability, and security requirements.
- Version control: you decide when to upgrade the cluster and can test and validate before every upgrade.

Note that a manual deployment takes more time, resources, and expertise; it is best suited to experienced operators who already know Kubernetes well.

2. Prerequisites

2.2.3.2 Test-environment network plan

- Pod network: 10.0.0.0/16
- Service network: 10.255.0.0/16

kube-master01: 192.168.23.51  os: kylinos  cpu: 4  mem: 8G  disk: 50G
kube-node01:   192.168.23.52  os: kylinos  cpu: 4  mem: 8G  disk: 50G

3. Base configuration

Enable root remote login

```bash
sed -i 's/PermitRootLogin no/PermitRootLogin yes/g' /etc/ssh/sshd_config
sed -i 's/#UseDNS yes/UseDNS no/g' /etc/ssh/sshd_config
systemctl restart sshd
```

Set the hostname

Set the hostname according to the node:

```bash
hostnamectl set-hostname kube-master01
```

Install Ansible

Ansible is optional here. Each step below is shown both as a single-node command and as a batch (Ansible) equivalent.

```bash
yum -y install epel-release
yum -y install ansible
```

Configure the inventory:

```bash
vim /etc/ansible/hosts
```

```ini
[all]
kube-master01 ansible_host=192.168.23.51
kube-node01   ansible_host=192.168.23.52

[kube_node]
kube-node01
```

3.1.4 Set up SSH trust

```bash
ssh-keygen
for i in $(cat /etc/ansible/hosts | grep 192 | awk '{print $2}' | awk -F= '{print $2}'); do ssh-copy-id root@$i; done
```

Test Ansible:

```bash
ansible all -m ping
```

Configure the hosts file

```bash
cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.23.51 kube-master01
192.168.23.52 kube-node01
192.168.23.50 harbor01
EOF
```

Batch:

```bash
ansible all -m copy -a "src=/etc/hosts dest=/etc/hosts force=yes"
ansible all -m shell -a "cat /etc/hosts"
```

Disable the firewalld firewall

```bash
systemctl status firewalld | grep Active
systemctl stop firewalld; systemctl disable firewalld
```

Batch:

```bash
ansible all -m systemd -a "name=firewalld state=stopped enabled=no"
```

Disable SELinux

```bash
grep 'SELINUX' /etc/selinux/config | grep -v '#'
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
getenforce
reboot
ansible all -m lineinfile -a "path=/etc/selinux/config regexp='^SELINUX=' line='SELINUX=disabled'" -b
```

Disable the swap partition

```bash
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
sysctl -w vm.swappiness=0
free -h
ansible all -m shell -a "sed -i '/.*swap.*/s/^/#/' /etc/fstab" -b
ansible all -m shell -a "swapoff -a && sysctl -w vm.swappiness=0"
ansible all -m shell -a "free -h"
```

Tune kernel parameters

```bash
modprobe bridge
modprobe br_netfilter
modprobe ip_conntrack
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
kernel.pid_max = 99999
vm.max_map_count = 262144
EOF
sysctl -p /etc/sysctl.d/k8s.conf
```

Batch:

```bash
ansible all -m shell -a "modprobe bridge && modprobe br_netfilter && modprobe ip_conntrack"
ansible all -m file -a "path=/etc/sysctl.d/k8s.conf state=touch mode=0644"
ansible all -m blockinfile -a "path=/etc/sysctl.d/k8s.conf block='net.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\nkernel.pid_max = 99999\nvm.max_map_count = 262144'"
ansible all -m shell -a "sysctl -p /etc/sysctl.d/k8s.conf"
```
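Before moving on, it saves time to confirm these settings actually took effect everywhere. A quick read-only sanity check, assuming the Ansible inventory above (note that `getenforce` keeps reporting Enforcing until the reboot):

```bash
# Spot-check SELinux, swap, and the bridge sysctls on every host.
ansible all -m shell -a "getenforce; swapon --show; sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward"
# Expected: Disabled (after reboot), no swap devices listed, and both sysctls set to 1.
```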
Install iptables

Install iptables on all master and node hosts:

```bash
yum install iptables-services -y
service iptables stop
systemctl disable iptables
iptables -F
ansible all -m yum -a "name=iptables-services state=present"
ansible all -m systemd -a "name=iptables state=stopped enabled=no"
ansible all -m shell -a "iptables -F"
```

Enable IPVS

IPVS must be enabled on all master and node hosts:

```bash
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
    /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
    if [ \$? -eq 0 ]; then
        /sbin/modprobe \${kernel_module}
    fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep ip_vs
```

Batch:

```bash
ansible all -m copy -a "src=/etc/sysconfig/modules/ipvs.modules dest=/etc/sysconfig/modules/ipvs.modules"
ansible all -m shell -a "chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs"
```
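The `/etc/sysconfig/modules/` hook above is a RHEL-family convention. On any systemd-based distribution the same modules can instead be declared under `modules-load.d`, so they are loaded on every boot without a custom script. A minimal alternative sketch (the path is the standard systemd location; the module list mirrors the script above):

```bash
# Optional: persist the IPVS modules with systemd-modules-load instead of
# the /etc/sysconfig/modules hook. Both approaches achieve the same result.
cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
systemctl restart systemd-modules-load.service
lsmod | grep -E 'ip_vs|nf_conntrack'
```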
Configure limits parameters

```bash
echo '* soft nofile 65536' >> /etc/security/limits.conf
echo '* hard nofile 65536' >> /etc/security/limits.conf
echo '* soft nproc 65536'  >> /etc/security/limits.conf
echo '* hard nproc 65536'  >> /etc/security/limits.conf
echo '* soft memlock unlimited' >> /etc/security/limits.conf
echo '* hard memlock unlimited' >> /etc/security/limits.conf

ansible all -m lineinfile -a "path=/etc/security/limits.conf line='* soft nofile 65536\n* hard nofile 65536\n* soft nproc 65536\n* hard nproc 65536\n* soft memlock unlimited\n* hard memlock unlimited'" -b
ansible all -m shell -a "tail -n 7 /etc/security/limits.conf"
```

Configure time synchronization

Note: the time configuration uses an internal NTP server; an NTP address must be available.

```bash
yum install -y htop tree wget jq git net-tools ntpdate
timedatectl set-timezone Asia/Shanghai
date
echo 'Asia/Shanghai' > /etc/timezone
ntpdate -u ntpser01.bsg.com.cn
date
echo '0,10,20,30,40,50 * * * * /usr/sbin/ntpdate -u ntpser01.bsg.com.cn' >> /var/spool/cron/root
crontab -l
service crond restart
service crond status
cat /var/spool/cron/root
service crond status | grep -i active
```

Batch:

```bash
ansible all -m shell -a "yum install -y htop tree wget jq git net-tools ntpdate"
ansible all -m shell -a "timedatectl set-timezone Asia/Shanghai && echo 'Asia/Shanghai' > /etc/timezone"
ansible all -m shell -a "ntpdate -u ntpser01.bsg.com.cn && date"
ansible all -m shell -a "echo '0,10,20,30,40,50 * * * * /usr/sbin/ntpdate -u ntpser01.bsg.com.cn' >> /var/spool/cron/root && crontab -l"
ansible all -m systemd -a "name=crond state=restarted"
```

Make the journal persistent

```bash
sed -i 's/#Storage=auto/Storage=auto/g' /etc/systemd/journald.conf
mkdir -p /var/log/journal
systemd-tmpfiles --create --prefix /var/log/journal
systemctl restart systemd-journald.service
ls -al /var/log/journal

ansible all -m shell -a "sed -i 's/#Storage=auto/Storage=auto/g' /etc/systemd/journald.conf && mkdir -p /var/log/journal && systemd-tmpfiles --create --prefix /var/log/journal"
ansible all -m systemd -a "name=systemd-journald.service state=restarted"
```

Configure history timestamps

```bash
echo 'export HISTTIMEFORMAT="%Y-%m-%d %T "' >> ~/.bashrc
source ~/.bashrc

ansible all -m shell -a "echo 'export HISTTIMEFORMAT=\"%Y-%m-%d %T \"' >> ~/.bashrc && source ~/.bashrc"
```

Install dependency packages

```bash
yum -y install openssl-devel libnl libnl-3 libnl-devel.x86_64 gcc gcc-c++ autoconf automake make zlib zlib-devel unzip conntrack ipvsadm nfs-utils
```

Install Docker

Install Docker on all hosts:

```bash
wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.15.tgz

# Copy the package to the other nodes
scp docker-19.03.15.tgz 192.168.23.52:/root/

# Unpack and install
tar -xf docker-19.03.15.tgz
cp docker/* /usr/bin
mkdir /etc/docker
cat > /usr/lib/systemd/system/docker.service <<EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF
```

Add the local registry address to the configuration file:

```bash
cat > /etc/docker/daemon.json <<EOF
{
  "insecure-registries": ["harbor.bsgchina.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

systemctl daemon-reload; systemctl start docker; systemctl enable docker
systemctl status docker
docker login -u admin -p Bsgchina2023 harbor.bsgchina.com
docker pull harbor.bsgchina.com/k8s-public/busybox:1.28
```

Figure 2: Replication test. After a `docker push` to one Harbor instance, open the replication management page and check that a new sync task appears.

Batch:

```bash
ansible all -m copy -a "src=docker dest=/root/"
ansible all -m shell -a "chmod -R 755 /root/docker"
ansible all -m shell -a "cp -a /root/docker/* /usr/bin/"
ansible all -m shell -a "mkdir /etc/docker"
ansible all -m copy -a "src=/usr/lib/systemd/system/docker.service dest=/usr/lib/systemd/system/"
ansible all -m copy -a "src=/etc/docker/daemon.json dest=/etc/docker/"
ansible all -m shell -a "systemctl daemon-reload; systemctl start docker; systemctl enable docker; systemctl status docker"
ansible all -m shell -a "docker pull harbor.bsgchina.com/library/busybox:1.28"
```

Note: Harbor is assumed to be installed already.

Upgrade OpenSSL

Download: https://www.openssl.org/source/openssl-1.1.1v.tar.gz

```bash
cat /etc/redhat-release
openssl version
mv /usr/bin/openssl /usr/bin/openssl.bak
mv /usr/include/openssl /usr/include/openssl.bak
tar zxvf openssl-1.1.1v.tar.gz
cd openssl-1.1.1v/
./config --prefix=/usr/local/openssl
make && make install
ln -s /usr/local/openssl/bin/openssl /usr/bin/openssl
ln -s /usr/local/openssl/include/openssl /usr/include/openssl
echo /usr/local/openssl/lib >> /etc/ld.so.conf
ldconfig -v
openssl version
```
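To confirm the upgraded OpenSSL is the one actually resolved at runtime, a quick check can be run; the paths below follow the `--prefix` used above:

```bash
# Verify the symlinked binary and its shared-library resolution.
openssl version          # should now report 1.1.1v
ls -l /usr/bin/openssl   # should point at /usr/local/openssl/bin/openssl
ldd /usr/local/openssl/bin/openssl | grep -E 'libssl|libcrypto'
# libssl/libcrypto should resolve under /usr/local/openssl/lib thanks to the
# ld.so.conf entry added above.
```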
2. Test Harbor high-availability pull and push

```bash
# Log in to the Harbor servers (this hostname is the VIP fronting the 157/158 instances)
docker login harbor.bsgchina.com
docker pull httpd
docker tag docker.io/library/httpd:latest harbor.bsgchina.com/library/httpd:latest
docker push harbor.bsgchina.com/library/httpd:latest

# Pull the freshly pushed image from another machine
[root@a-t-k8s-node02 docker]# docker pull harbor.bsgchina.com/library/httpd:latest
latest: Pulling from library/httpd
33847f680f63: Already exists
d74938eee980: Pull complete
# The pull succeeds, so the Harbor HA setup is complete. You can also stop one
# Harbor server and run the test again.
```

3.1.19 Image preparation

Upload the images required by K8s and the monitoring stack to the registry. Prepared media:

```text
.
├── docker.io_bats_bats_v1.4.1.tar
├── docker.io_coredns_coredns_1.9.1.tar
├── docker.io_library_busybox_1.31.1.tar
├── images.sh
├── images.txt
├── kubernetesui_dashboard_v2.7.0.tar
├── kubernetesui_metrics-scraper_v1.0.8.tar
├── registry.k8s.io_ingress-nginx_controller_v1.5.1.tar
├── registry.k8s.io_ingress-nginx_kube-webhook-certgen_v20220916-gd32f8c343.tar
├── registry.k8s.io_ingress-nginx_kube-webhook-certgen_v20221220-controller-v1.5.1-58-g787ea74b6.tar
├── registry.k8s.io_kube-state-metrics_kube-state-metrics_v2.9.2.tar
├── registry.k8s.io_pause_3.9.tar
└── siriuszg_addon-resizer_1.8.4.tar
```

```text
$ cat images.txt
quay.io/calico/node:v3.24.5
quay.io/calico/pod2daemon-flexvol:v3.24.5
quay.io/calico/cni:v3.24.5
quay.io/calico/kube-controllers:v3.24.5
registry.k8s.io/ingress-nginx/controller:v1.5.1
registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343
kubernetesui/dashboard:v2.7.0
kubernetesui/metrics-scraper:v1.0.8
coredns/coredns:1.9.1
registry.k8s.io/pause:3.9
registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.9.2
registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20221220-controller-v1.5.1-58-g787ea74b6
docker.io/library/busybox:1.31.1
```

Load the image tarballs and push them to the registry:

```bash
docker login -u admin -p 'Harbor2021#!' harbor.bsgchina.com
./images.sh harbor.bsgchina.com
```

Resulting images in Harbor:

```text
harbor.bsgchina.com/k8s-public/calico/node:v3.24.5
harbor.bsgchina.com/k8s-public/calico/pod2daemon-flexvol:v3.24.5
harbor.bsgchina.com/k8s-public/calico/cni:v3.24.5
harbor.bsgchina.com/k8s-public/calico/kube-controllers:v3.24.5
harbor.bsgchina.com/k8s-public/ingress/ingress-nginx_controller:v0.48.1
harbor.bsgchina.com/k8s-public/kubernetesui/dashboard:v2.7.0
harbor.bsgchina.com/k8s-public/kubernetesui/metrics-scraper:v1.0.8
harbor.bsgchina.com/k8s-public/coredns/coredns:v1.9.1
harbor.bsgchina.com/k8s-public/ingress/kube-webhook-certgen:v1.5.1
harbor.bsgchina.com/k8s-public/kubernetesui/metrics-scraper:v1.0.6
harbor.bsgchina.com/k8s-public/ingress/nginx-ingress-controller:0.24.1
harbor.bsgchina.com/k8s-public/busybox:v1.28.4
harbor.bsgchina.com/k8s-public/kubernetesui/pause-amd64:v3.0
```
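The `images.sh` invoked above is not reproduced in the original. A minimal sketch of what such a loader might look like, assuming the tarballs sit in the current directory, `images.txt` lists the source image names, and images land in a `k8s-public` project (the retagging scheme is a hypothetical reconstruction, not the author's exact script):

```bash
#!/bin/bash
# images.sh <registry> - load local image tarballs and push them to Harbor.
REGISTRY=${1:?usage: ./images.sh <registry>}

# Load every tarball in the current directory into the local Docker daemon.
for tarball in *.tar; do
    docker load -i "${tarball}"
done

# Retag each source image under the private registry and push it.
while read -r image; do
    [ -z "${image}" ] && continue
    case "${image%%/*}" in
        *.*) target="${REGISTRY}/k8s-public/${image#*/}" ;;  # strip a registry prefix like quay.io/
        *)   target="${REGISTRY}/k8s-public/${image}" ;;     # Docker Hub short names stay as-is
    esac
    docker tag "${image}" "${target}"
    docker push "${target}"
done < images.txt
```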
Install etcd

Note: the etcd setup is performed on kube-master01.

Configure the etcd certificates

Create the etcd working directories:

```bash
mkdir -p /etc/etcd/ssl/
mkdir -p /etc/etcd/cfg
```

Install the cfssl certificate tooling:

```bash
mkdir cfssl
cd cfssl
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssl_1.6.4_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssljson_1.6.4_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssl-certinfo_1.6.4_linux_amd64
for i in $(ls cfssl*); do mv $i ${i%%_*}; done
cp -a cfssl* /usr/local/bin/
chmod +x /usr/local/bin/cfssl*
echo 'export PATH=/usr/local/bin:$PATH' >> /etc/profile
```

Configure the etcd CA certificate

1. Self-signed CA:

```bash
cd /etc/etcd/ssl
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "438000h"
    },
    "profiles": {
      "www": {
        "expiry": "438000h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF
cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Guangzhou",
      "ST": "Guangdong"
    }
  ]
}
EOF
```

2. Generate the CA:

```bash
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
$ ls *pem
ca-key.pem  ca.pem
```

Use the self-signed CA to issue the etcd HTTPS certificate

1. Create the certificate signing request. The IPs in the hosts field must cover the internal peer address of every etcd node; do not miss any, and add a few spare IPs to make future expansion easier:

```bash
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.23.51",
    "192.168.23.52",
    "192.168.23.53"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Guangzhou",
      "ST": "Guangdong"
    }
  ]
}
EOF
```

2. Generate the certificate:

```bash
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www etcd-csr.json | cfssljson -bare etcd

# A certificate and key are produced
ls etcd*pem
etcd-key.pem  etcd.pem
```

Deploy the etcd cluster

Perform the following on etcd node 1, then copy the generated files to the other etcd cluster hosts.

Download: https://github.com/etcd-io/etcd/releases/download/v3.5.3/etcd-v3.5.3-linux-amd64.tar.gz

Install the etcd tools:

```bash
wget https://github.com/etcd-io/etcd/releases/download/v3.5.3/etcd-v3.5.3-linux-amd64.tar.gz
tar zxvf etcd-v3.5.3-linux-amd64.tar.gz
cp -a etcd-v3.5.3-linux-amd64/{etcd,etcdctl} /usr/local/bin/
```

Create the etcd configuration file:

```bash
cat > /etc/etcd/cfg/etcd.conf <<EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.23.51:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.23.51:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.23.51:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.23.51:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.23.51:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
```

Create the service unit:

```bash
cat > /usr/lib/systemd/system/etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/etc/etcd/cfg/etcd.conf
ExecStart=/usr/local/bin/etcd \\
  --cert-file=/etc/etcd/ssl/etcd.pem \\
  --key-file=/etc/etcd/ssl/etcd-key.pem \\
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \\
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \\
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload; systemctl start etcd; systemctl enable etcd; systemctl status etcd
```

Check the etcd status:

```bash
ETCDCTL_API=3 /usr/local/bin/etcdctl --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.23.51:2379 endpoint health
# Output ("successfully" means healthy):
https://192.168.23.51:2379 is healthy: successfully committed proposal: took = 34.533591ms
```

etcd installation is complete.

Deploy the K8s master components

Note: the master components are set up on kube-master01.

Download the Kubernetes components

Download the binary package. GitHub changelog: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG

```bash
wget https://dl.k8s.io/v1.23.17/kubernetes-server-linux-amd64.tar.gz
mkdir -p /etc/kubernetes/bin
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kubectl kube-apiserver kube-scheduler kube-controller-manager /usr/local/bin/
```

Deploy the apiserver component

Create the working directories (the generated certificates live under /etc/kubernetes/ssl and the configuration files under /etc/kubernetes/cfg):

```bash
mkdir -p /etc/kubernetes/ssl
mkdir /var/log/kubernetes
```

Generate the kube-apiserver certificate

1. Self-signed certificate authority (CA):

```bash
cd /etc/kubernetes/ssl
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "438000h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "438000h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Guangzhou",
      "ST": "Guangdong",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
```

2. Generate the CA certificate:

```bash
cd /etc/kubernetes/ssl
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ls *pem
ca-key.pem  ca.pem
```

3. Use the self-signed CA to issue the kube-apiserver HTTPS certificate. The hosts field must list every cluster member's internal IP, without exception; add a few spare IPs for future expansion:

```bash
cd /etc/kubernetes/ssl
cat > kube-apiserver-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.0.0.1",
    "10.255.0.1",
    "192.168.23.50",
    "192.168.23.51",
    "192.168.23.52",
    "192.168.23.53",
    "192.168.23.54",
    "192.168.23.55",
    "192.168.23.56",
    "192.168.23.57",
    "192.168.23.58",
    "192.168.23.59",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Guangzhou",
      "ST": "Guangdong",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
EOF
```

4. Generate the certificate:

```bash
cd /etc/kubernetes/ssl
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver

ls kube-apiserver*pem
kube-apiserver-key.pem  kube-apiserver.pem
```
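Since a missing SAN here is a common cause of x509 errors later, it is worth inspecting the freshly issued certificate before wiring it into the apiserver. cfssl-certinfo was installed alongside cfssl above, so a quick check might look like this:

```bash
# Inspect the SAN list and validity window of the new certificate (JSON output).
cfssl-certinfo -cert kube-apiserver.pem

# Or with plain openssl, show only the Subject Alternative Name block:
openssl x509 -in kube-apiserver.pem -noout -text | grep -A1 'Subject Alternative Name'
```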
Create the token.csv file

```bash
mkdir -p /etc/kubernetes/cfg/
cd /etc/kubernetes/cfg/
cat > token.csv <<EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
```

Create the api-server configuration file

```bash
cat > /etc/kubernetes/cfg/kube-apiserver.conf <<EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--etcd-servers=https://192.168.23.51:2379 \\
--bind-address=192.168.23.51 \\
--secure-port=6443 \\
--advertise-address=192.168.23.51 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.255.0.0/16 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/etc/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-61000 \\
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \\
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \\
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \\
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \\
--client-ca-file=/etc/kubernetes/ssl/ca.pem \\
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--etcd-cafile=/etc/etcd/ssl/ca.pem \\
--etcd-certfile=/etc/etcd/ssl/etcd.pem \\
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
--requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \\
--proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \\
--proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \\
--requestheader-allowed-names=kubernetes \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/kubernetes/k8s-audit.log"
EOF
```

Create the service startup file

```bash
cat > /usr/lib/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
```

Enable at boot and start:

```bash
systemctl daemon-reload; systemctl start kube-apiserver; systemctl enable kube-apiserver; systemctl status kube-apiserver
```

Deploy the kubectl component

Create the CSR request file. Note: the O field serves as the Group ("O": "system:masters") and must be system:masters, otherwise the later `kubectl create clusterrolebinding` fails. With O set to system:masters, the cluster's built-in cluster-admin ClusterRoleBinding binds the system:masters group to the cluster-admin ClusterRole.

```bash
cd /etc/kubernetes/ssl/
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Guangzhou",
      "L": "Guangzhou",
      "O": "system:masters",
      "OU": "system"
    }
  ]
}
EOF
```

Generate the client certificate:

```bash
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
```

Configure the security context:

```bash
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.23.51:6443 --kubeconfig=kube.config
kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config
kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
kubectl config use-context kubernetes --kubeconfig=kube.config
mkdir ~/.kube -p
cp kube.config ~/.kube/config
cp kube.config /etc/kubernetes/cfg/admin.conf
```
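At this point the apiserver can also be probed directly over TLS, a useful check that is independent of kubectl. A sketch using the admin client certificate generated above:

```bash
# Probe the secure port directly with the admin client certificate.
cd /etc/kubernetes/ssl
curl --cacert ca.pem --cert admin.pem --key admin-key.pem \
     https://192.168.23.51:6443/healthz
# Expected output: ok
```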
Bind the kubelet API admin role to the apiserver user:

```bash
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
```

Check the cluster component status:

```bash
$ kubectl cluster-info
Kubernetes control plane is running at https://192.168.23.51:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                           ERROR
scheduler            Unhealthy   ... connect: connection refused
controller-manager   Unhealthy   ... connect: connection refused
etcd-1               Healthy     {"health":"true","reason":""}
etcd-2               Healthy     {"health":"true","reason":""}
etcd-0               Healthy     {"health":"true","reason":""}
$ kubectl get all --all-namespaces
NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.255.0.1   <none>        443/TCP   2m
```

The scheduler and controller-manager report Unhealthy because they have not been deployed yet.

Deploy the kube-controller-manager component

Create the kube-controller-manager CSR request file. Notes: set the hosts IPs as your environment requires; the hosts list must contain the IPs of all kube-controller-manager nodes. CN is system:kube-controller-manager and O is system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs.

```bash
cd /etc/kubernetes/ssl
cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [
    "127.0.0.1",
    "10.0.0.1",
    "10.255.0.1",
    "192.168.23.50",
    "192.168.23.51",
    "192.168.23.52",
    "192.168.23.53",
    "192.168.23.54",
    "192.168.23.55",
    "192.168.23.56",
    "192.168.23.57",
    "192.168.23.58",
    "192.168.23.59",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Guangzhou",
      "ST": "Guangdong",
      "O": "system:kube-controller-manager",
      "OU": "system"
    }
  ]
}
EOF
```

Generate the kube-controller-manager certificate:

```bash
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
```

Create the kube-controller-manager kubeconfig:

```bash
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.23.51:6443 --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
```

Create the kube-controller-manager configuration file:

```bash
mv /etc/kubernetes/ssl/kube-controller-manager.kubeconfig /etc/kubernetes/cfg/
cat > /etc/kubernetes/cfg/kube-controller-manager.conf <<EOF
KUBE_CONTROLLER_MANAGER_OPTS="--port=10252 \\
--bind-address=127.0.0.1 \\
--kubeconfig=/etc/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--service-cluster-ip-range=10.255.0.0/16 \\
--cluster-name=kubernetes \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.0.0.0/16 \\
--leader-elect=true \\
--feature-gates=RotateKubeletServerCertificate=true \\
--controllers=*,bootstrapsigner,tokencleaner \\
--horizontal-pod-autoscaler-sync-period=10s \\
--use-service-account-credentials=true \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/etc/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=438000h0m0s \\
--v=2"
EOF
```
Create the kube-controller-manager service startup file:

```bash
cat > /usr/lib/systemd/system/kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
```

Enable kube-controller-manager at boot and start it:

```bash
systemctl daemon-reload; systemctl start kube-controller-manager; systemctl enable kube-controller-manager; systemctl status kube-controller-manager
```

Deploy the kube-scheduler component

Create the kube-scheduler CSR request. Notes: set the hosts IPs as your environment requires; the hosts list must contain the IPs of all kube-scheduler nodes. CN is system:kube-scheduler and O is system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.

```bash
cd /etc/kubernetes/ssl
cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "10.0.0.1",
    "10.255.0.1",
    "192.168.23.50",
    "192.168.23.51",
    "192.168.23.52",
    "192.168.23.53",
    "192.168.23.54",
    "192.168.23.55",
    "192.168.23.56",
    "192.168.23.57",
    "192.168.23.58",
    "192.168.23.59",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Guangzhou",
      "ST": "Guangdong",
      "O": "system:kube-scheduler",
      "OU": "system"
    }
  ]
}
EOF
```

Generate the kube-scheduler certificate:

```bash
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
```

Create the kube-scheduler kubeconfig file:

```bash
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.23.51:6443 --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
```

Create the kube-scheduler configuration file:

```bash
mv /etc/kubernetes/ssl/kube-scheduler.kubeconfig /etc/kubernetes/cfg/
cat > /etc/kubernetes/cfg/kube-scheduler.conf <<EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--kubeconfig=/etc/kubernetes/cfg/kube-scheduler.kubeconfig \\
--log-dir=/var/log/kubernetes \\
--leader-elect \\
--bind-address=127.0.0.1"
EOF
```

Create the kube-scheduler service startup file:

```bash
cat > /usr/lib/systemd/system/kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
```

Enable kube-scheduler at boot and start it:

```bash
systemctl daemon-reload; systemctl start kube-scheduler; systemctl enable kube-scheduler; systemctl status kube-scheduler
```

Check the cluster status

All master components have started; use kubectl to check the current component status:

```bash
kubectl get cs

NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
```

Deploy the K8s worker components

The following steps are still performed on the master node, which doubles as a worker node (a master can run workloads; it just carries a do-not-schedule taint by default).

Create the working directories and copy the binaries

Create the working directories on all worker nodes:

```bash
mkdir -p /etc/kubernetes/{bin,ssl,cfg}
mkdir -p /var/log/kubernetes
mkdir -p /var/lib/kubelet
```

Copy from the master node:

```bash
# Still operating on the master
cd kubernetes/server/bin
cp kubelet kube-proxy /usr/local/bin
scp kubelet kube-proxy root@192.168.23.52:/usr/local/bin/
```

Deploy kubelet

Perform the following on master1.
Create the kubelet configuration file. Parameter notes:

- --hostname-override: display name, unique within the cluster
- --network-plugin: enable CNI
- --kubeconfig: empty path; generated automatically and later used to connect to the apiserver
- --bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
- --config: configuration parameter file
- --cert-dir: directory where kubelet certificates are generated
- --pod-infra-container-image: image of the infrastructure container that manages the Pod network
- --cgroup-driver: use systemd

```bash
cat > /etc/kubernetes/cfg/kubelet.conf <<EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--hostname-override=kube-master01 \\
--network-plugin=cni \\
--kubeconfig=/etc/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/etc/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/etc/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/etc/kubernetes/ssl \\
--cgroup-driver=systemd \\
--pod-infra-container-image=harbor.bsgchina.com/library/pause:3.9"
EOF
```

Configure the kubelet parameter file:

```bash
cat > /etc/kubernetes/cfg/kubelet-config.yml <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: systemd
clusterDNS:
- 10.255.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
```

Generate the bootstrap.kubeconfig file:

```bash
export KUBE_APISERVER="https://192.168.23.51:6443"
export TOKEN=$(awk -F, '{print $1}' /etc/kubernetes/cfg/token.csv)

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-credentials kubelet-bootstrap \
  --token=${TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
mv bootstrap.kubeconfig /etc/kubernetes/cfg/
```

Configure the kubelet startup file:

```bash
cat > /usr/lib/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kubelet.conf
ExecStart=/usr/local/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```

Enable at boot and start:

```bash
systemctl daemon-reload; systemctl start kubelet; systemctl enable kubelet; systemctl status kubelet
```

Approve the kubelet certificate request and join the cluster:

```bash
kubectl get csr
# Output:
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-X2Ez6ppownEMadnJQIegR2Pdo6L6HQIK3zih83Hk_tc   25s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approve the request
kubectl certificate approve node-csr-X2Ez6ppownEMadnJQIegR2Pdo6L6HQIK3zih83Hk_tc

# Check the node; it is not Ready yet because no network component or plugin is deployed
kubectl get node
```
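Approving CSRs one by one gets tedious as nodes are added. A hedged one-liner to approve everything still pending (use with care: it approves all pending requests indiscriminately, so only run it when you trust every joining node):

```bash
# Batch-approve all pending kubelet bootstrap CSRs.
kubectl get csr | awk '/Pending/ {print $1}' | xargs -r kubectl certificate approve
```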
Deploy kube-proxy

Create the kube-proxy configuration file:

```bash
cat > /etc/kubernetes/cfg/kube-proxy.conf <<EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes/ \\
--config=/etc/kubernetes/cfg/kube-proxy-config.yml"
EOF
```

Configure the parameter file:

```bash
cat > /etc/kubernetes/cfg/kube-proxy-config.yml <<EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /etc/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: kube-master01
clusterCIDR: 10.0.0.0/16
mode: ipvs
EOF
```

Generate the kube-proxy.kubeconfig file

1. Generate the kube-proxy certificate:

```bash
cd /etc/kubernetes/ssl
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Guangzhou",
      "ST": "Guangdong",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

ls kube-proxy*pem
```

2. Generate the kubeconfig file:

```bash
cd /etc/kubernetes/cfg/
KUBE_APISERVER="https://192.168.23.51:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
```

Create the kube-proxy service startup file:

```bash
cat > /usr/lib/systemd/system/kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-proxy.conf
ExecStart=/usr/local/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```

Enable at boot and start:

```bash
systemctl daemon-reload; systemctl start kube-proxy; systemctl enable kube-proxy; sleep 2; systemctl status kube-proxy
```
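Because kube-proxy was configured with `mode: ipvs`, it should now program IPVS virtual servers rather than iptables chains. A quick verification sketch (ipvsadm was installed in the dependency step; the /proxyMode endpoint is served on the metrics port configured above):

```bash
# The kubernetes service VIP (10.255.0.1:443) should appear as an IPVS virtual server.
ipvsadm -Ln

# kube-proxy reports its active proxier on the metrics port (10249 per the config above).
curl -s 127.0.0.1:10249/proxyMode
# Expected output: ipvs
```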
3.4.4.6 Configure kubectl command completion

Run on all master nodes:

```bash
yum install -y bash-completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
```

Deploy the CNI network

3.4.5.1 Download the cni-plugins

First obtain the CNI binaries.

Download: https://github.com/containernetworking/plugins/releases/download/v1.2.0/cni-plugins-linux-amd64-v1.2.0.tgz

Unpack the package into the default working directory:

```bash
mkdir -p /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v1.2.0.tgz -C /opt/cni/bin
```

Download the calico plugin

```bash
wget https://github.com/projectcalico/calico/releases/download/v3.24.5/release-v3.24.5.tgz
tar -zxvf release-v3.24.5.tgz
cd release-v3.24.5/manifests/
```

3.4.5.3 Edit the calico configuration file

Download: wget https://github.com/projectcalico/calico/archive/v3.24.5.tar.gz

In the manifests/ directory, edit calico-etcd.yaml:

```bash
cp calico-etcd.yaml calico-etcd.yaml_bak
vi calico-etcd.yaml
```

```yaml
...
data:
  # Populate the following with the etcd TLS configuration. Uncomment the three
  # keys below and replace null with the base64-encoded contents of each file
  # (cat <file> | base64 -w 0). The file paths are the etcd paths referenced in
  # /etc/kubernetes/cfg/kube-apiserver.conf:
  #   cat /etc/etcd/ssl/etcd-key.pem | base64 -w 0
  #   cat /etc/etcd/ssl/etcd.pem     | base64 -w 0
  #   cat /etc/etcd/ssl/ca.pem       | base64 -w 0
  etcd-key: null
  etcd-cert: null
  etcd-ca: null
...
data:
  # Configure this with the location of your etcd cluster; copy the
  # --etcd-servers value from /etc/kubernetes/cfg/kube-apiserver.conf:
  etcd_endpoints: "https://192.168.23.51:2379"
  # Mount paths of the three files inside the container; uncomment them and
  # keep the defaults:
  etcd_ca: "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"
...
        # Uncomment the following two lines and change the value to the
        # --cluster-cidr from /etc/kubernetes/cfg/kube-controller-manager.conf
        # (the default in the manifest is 192.168.0.0/16):
        - name: CALICO_IPV4POOL_CIDR
          value: "10.0.0.0/16"
        # Below that, insert these two lines to pin the host interface; a
        # wildcard such as ens.* works, or name a specific NIC:
        - name: IP_AUTODETECTION_METHOD
          value: "interface=ens.*"
...
        # IPIP mode is enabled by default; setting it to Never switches Calico
        # to BGP mode. BGP is more efficient, but all nodes must share one L2
        # subnet; for a cluster spanning subnets keep the default IPIP mode.
        - name: CALICO_IPV4POOL_IPIP
          value: "Never"   # change Always to Never
...
```

Finally, replace every image referenced in the yaml with its copy in the local registry:

```text
harbor.bsgchina.com/library/calico/cni:v3.24.5
harbor.bsgchina.com/library/calico/node:v3.24.5
harbor.bsgchina.com/library/calico/kube-controllers:v3.24.5
```

Log in to the image registry on each node to verify access, then apply:

```bash
docker login -u admin -p 'Harbor2021#!' harbor.bsgchina.com
kubectl apply -f calico-etcd.yaml
kubectl get pods -n kube-system
# Even if a pod is briefly abnormal, continue to the next step once the node is Ready.
kubectl get node
NAME            STATUS   ROLES    AGE         VERSION
kube-master01   Ready    <none>   <invalid>   v1.23.17
```

Authorize the apiserver to access the kubelet

```bash
cat > apiserver-to-kubelet-rbac.yaml <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

kubectl apply -f apiserver-to-kubelet-rbac.yaml
```

Add worker nodes

Copy the deployed node files to the new node

On the master node, copy the files the worker node needs to the new node (192.168.23.52):

```bash
scp -r /etc/kubernetes/ root@192.168.23.52:/etc/
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.23.52:/usr/lib/systemd/system
scp -r /opt/cni/ root@192.168.23.52:/opt/
```

Remove the kubelet certificate and kubeconfig files

On all node hosts (these files were generated for the master and will be re-issued):

```bash
rm -rf /etc/kubernetes/cfg/kubelet.kubeconfig
rm -f /etc/kubernetes/ssl/kubelet*
rm -f /etc/kubernetes/cfg/{kube-apiserver,kube-controller-manager,kube-scheduler}.kubeconfig
rm -rf /etc/kubernetes/cfg/{kube-controller-manager.conf,kube-scheduler.conf,kube-apiserver.conf}
mkdir -p /var/log/kubernetes
```

3.4.6.3 Change the hostname

Set the hostname override to each node's own name:

```bash
vi /etc/kubernetes/cfg/kubelet.conf
--hostname-override=kube-node01

vi /etc/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: kube-node01
```

Enable at boot and start:

```bash
systemctl daemon-reload; systemctl start kubelet; systemctl enable kubelet; systemctl start kube-proxy; systemctl enable kube-proxy; systemctl status kubelet; systemctl status kube-proxy
```

Approve the new node's kubelet certificate request on the master:

```bash
kubectl get csr
# Same as before
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-8tQMJx_zBLGfmPbbkm6eusU9LYpm95LdFBZAsFfQPxM   41m     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
node-csr-LPeOESRPGxxFrrM6uUhHFp22Ick-bjJ3oIYsvlYnhzs   3m48s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

kubectl certificate approve node-csr-LPeOESRPGxxFrrM6uUhHFp22Ick-bjJ3oIYsvlYnhzs
```

Check the node status:

```bash
kubectl get node
```

Add master nodes

A new master is configured exactly like the already deployed master1, so copy all of master1's K8s files over, then change the server IPs and hostname and start the services.

Copy the files (on master1)

Copy all K8s files and the etcd certificates from master1 to the master2~master3 nodes:

```bash
scp -r /etc/kubernetes root@192.168.23.52:/etc
scp -r /opt/cni/ root@192.168.23.52:/opt
scp /usr/lib/systemd/system/kube* root@192.168.23.52:/usr/lib/systemd/system
scp /usr/local/bin/kube* root@192.168.23.52:/usr/local/bin/
```

Remove the certificate files

On the newly added master node, delete the kubelet certificate and kubeconfig files:

```bash
rm -f /etc/kubernetes/cfg/kubelet.kubeconfig
rm -f /etc/kubernetes/ssl/kubelet*
mkdir -p /var/log/kubernetes
```
Modify the IPs and hostnames in the configuration files

On the master2~master3 control nodes, change the apiserver, kubelet, and kube-proxy configuration files to the local IP address and hostname:

```bash
vi /etc/kubernetes/cfg/kube-apiserver.conf
...
--bind-address=192.168.23.$i \
--advertise-address=192.168.23.$i \
...

vi /etc/kubernetes/cfg/kubelet.conf
--hostname-override=kube-master02

vi /etc/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: kube-master02
```

Enable at boot and start:

```bash
systemctl daemon-reload; systemctl start kube-apiserver; systemctl start kube-controller-manager; systemctl start kube-scheduler
systemctl start kubelet; systemctl start kube-proxy
systemctl enable kube-apiserver; systemctl enable kube-controller-manager; systemctl enable kube-scheduler; systemctl enable kubelet; systemctl enable kube-proxy
systemctl status kubelet; systemctl status kube-proxy; systemctl status kube-apiserver; systemctl status kube-controller-manager; systemctl status kube-scheduler
```

Check the cluster status:

```bash
kubectl get cs

NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
```

3.4.7.6 Approve the kubelet certificate request

```bash
kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-JDJFNav36F0SfcRl8weU_tuebqj9OV3yIHSJkVRxnq4   79s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

kubectl certificate approve node-csr-JDJFNav36F0SfcRl8weU_tuebqj9OV3yIHSJkVRxnq4
kubectl get node
NAME          STATUS   ROLES    AGE   VERSION
k8s-master    Ready    <none>   34h   v1.23.17
k8s-master2   Ready    <none>   83m   v1.23.17
k8s-node1     Ready    <none>   33h   v1.23.17
k8s-node2     Ready    <none>   33h   v1.23.17
```

The K8s node deployment is now complete.
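As a final smoke test, it is worth scheduling a pod and exercising cluster DNS end to end. A minimal sketch using the busybox image pushed to Harbor earlier; the tag below assumes the k8s-public project used during image preparation, and the DNS lookup assumes CoreDNS has been deployed (its image was staged above, but its deployment is outside this walkthrough):

```bash
# Run a test pod and resolve the kubernetes service through cluster DNS.
kubectl run dns-test --image=harbor.bsgchina.com/k8s-public/busybox:1.28 \
    --restart=Never -- sleep 3600
kubectl wait --for=condition=Ready pod/dns-test --timeout=120s
kubectl exec dns-test -- nslookup kubernetes.default.svc.cluster.local
kubectl delete pod dns-test
```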