Building a Highly Available Kubernetes Cluster with keepalived and HAProxy

Contents

1. Introduction to the Kubernetes high-availability project
2. Project architecture design
   2.1 Host information
   2.2 Architecture diagram
   2.3 Implementation approach
3. Implementation
   3.1 System initialization
   3.2 Configuring and deploying keepalived
   3.3 Configuring and deploying haproxy
   3.4 Configuring and deploying Docker
   3.5 Installing the kubelet, kubeadm and kubectl tools
   3.6 Deploying the Kubernetes masters
   3.7 Installing the cluster network
   3.8 Adding the remaining master nodes
   3.9 Joining the Kubernetes worker nodes
   3.10 Testing the Kubernetes cluster
4. Project summary

1. Introduction to the Kubernetes high-availability project

A cluster with a single master node is not reliable enough for a real production environment. A highly available Kubernetes cluster means keeping the API Server on the master nodes highly available: the API Server is the only entry point for creating, reading, updating and deleting every kind of Kubernetes resource object, and it acts as the data bus and data hub of the whole system. Connecting several master nodes behind a load balancer lets the container platform serve business traffic reliably.

2. Project architecture design

2.1 Host information

Prepare six virtual machines, three master nodes and three worker nodes, keeping the number of masters at the odd value 3.

Hardware: 2 CPU cores, 2 GB of RAM, a 20 GB disk, virtualization enabled
Network: all machines can reach each other and the Internet

OS               IP address        Role     Hostname
CentOS7-x86-64   192.168.147.137   master   k8s-master1
CentOS7-x86-64   192.168.147.139   master   k8s-master2
CentOS7-x86-64   192.168.147.140   master   k8s-master3
CentOS7-x86-64   192.168.147.141   node     k8s-node1
CentOS7-x86-64   192.168.147.142   node     k8s-node2
CentOS7-x86-64   192.168.147.143   node     k8s-node3
                 192.168.147.154   VIP      master.k8s.io

2.2 Architecture diagram

The goal is a multi-master, load-balanced Kubernetes cluster. The official documentation offers two topologies, stacked control plane nodes and an external etcd cluster; this article builds the first one.

(Figures omitted from the original: the "stacked control plane node" and "external etcd node" topology diagrams.)

2.3 Implementation approach

A master node must run four services: etcd, apiserver, controller-manager and scheduler. Of these, etcd, controller-manager and scheduler already implement high availability on their own: with multiple masters, every master runs all three, and only one instance is active at any moment. Making Kubernetes highly available therefore reduces to making the apiserver service highly available.

keepalived is a high-performance server high-availability (hot-standby) solution that prevents a single server failure from interrupting a service. It operates in master/backup mode and needs at least two servers. For example, keepalived can join three servers into one cluster that exposes a single IP address; in normal operation, only one server carries that IP on its virtual interface. If that server fails, keepalived immediately moves the IP to one of the remaining two servers so that the address stays usable.

haproxy is a free, fast and reliable proxy that provides high availability, load balancing and proxying for TCP (layer 4) and HTTP (layer 7) applications, with virtual-host support. Using haproxy to balance load across the backend apiserver instances is what makes the apiserver service highly available.

This article uses the keepalived + haproxy combination: keepalived supplies a stable external entry point, and haproxy balances the load internally. Since haproxy runs on the master nodes, it would stop along with a failed master; to avoid this, haproxy is deployed on every master node, making the haproxy service itself highly available. Finally, because multiple masters hold leader elections, the number of masters should be odd to avoid tied votes.
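To make the traffic path concrete, here is a minimal client-side sanity check, not part of the original build steps. It assumes sections 3.1 to 3.6 below have been completed, that /etc/hosts maps master.k8s.io to the VIP 192.168.147.154, and that haproxy listens on 16443 as configured later:

# Sanity-check the HA entry path: client -> VIP (keepalived) -> haproxy:16443 -> one apiserver
getent hosts master.k8s.io                     # expect: 192.168.147.154  master.k8s.io
curl -k https://master.k8s.io:16443/version    # any of the three apiservers may answer; -k skips TLS verification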
3. Implementation

3.1 System initialization

On all hosts, disable the firewall, SELinux and swap:

[root@client2 ~]# systemctl stop firewalld
[root@client2 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@client2 ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@client2 ~]# setenforce 0
[root@client2 ~]# swapoff -a
[root@client2 ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab

Set the hostname on each machine according to its role:

[root@client2 ~]# hostnamectl set-hostname k8s-master1
[root@client2 ~]# hostnamectl set-hostname k8s-master2
[root@client2 ~]# hostnamectl set-hostname k8s-master3
[root@k8s-node1 ~]# hostnamectl set-hostname k8s-node1
[root@k8s-node1 ~]# hostnamectl set-hostname k8s-node2
[root@k8s-node1 ~]# hostnamectl set-hostname k8s-node3

[root@k8s-node1 ~]# vim /etc/hosts
192.168.147.137 master1.k8s.io k8s-master1
192.168.147.139 master2.k8s.io k8s-master2
192.168.147.140 master3.k8s.io k8s-master3
192.168.147.141 node1.k8s.io k8s-node1
192.168.147.142 node2.k8s.io k8s-node2
192.168.147.143 node3.k8s.io k8s-node3
192.168.147.154 master.k8s.io k8s-vip
[root@k8s-node1 ~]# scp /etc/hosts 192.168.147.137:/etc/hosts
[root@k8s-node1 ~]# scp /etc/hosts 192.168.147.139:/etc/hosts
[root@k8s-node1 ~]# scp /etc/hosts 192.168.147.140:/etc/hosts
[root@k8s-node1 ~]# scp /etc/hosts 192.168.147.142:/etc/hosts
[root@k8s-node1 ~]# scp /etc/hosts 192.168.147.143:/etc/hosts

Pass bridged IPv4 traffic to the iptables chains:

[root@k8s-master1 ~]# cat <<EOF >> /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@k8s-master1 ~]# modprobe br_netfilter
[root@k8s-master1 ~]# sysctl -p

Synchronize the time:

[root@k8s-master1 ~]# yum install ntpdate -y
[root@k8s-master1 ~]# ntpdate time.windows.com

3.2 Configuring and deploying keepalived

Install keepalived (all master hosts):

[root@k8s-master1 ~]# yum install -y keepalived

k8s-master1 configuration:

[root@k8s-master1 ~]# cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
   router_id k8s
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.147.154
    }
    track_script {
        check_haproxy
    }
}
EOF

k8s-master2 and k8s-master3 use the same file with only two changes: state BACKUP on both, and priority 90 on k8s-master2 and priority 80 on k8s-master3.
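Since the three configurations differ only in state and priority, they could be rendered from one template. This is a convenience sketch, not part of the original procedure; the interface, VIP and VRRP values are copied from the configuration above:

#!/bin/bash
# gen-keepalived.sh - render /etc/keepalived/keepalived.conf for one node.
# Usage: ./gen-keepalived.sh MASTER 100   (k8s-master1)
#        ./gen-keepalived.sh BACKUP 90    (k8s-master2)
#        ./gen-keepalived.sh BACKUP 80    (k8s-master3)
STATE=$1
PRIORITY=$2
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
   router_id k8s
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state ${STATE}
    interface ens33
    virtual_router_id 51
    priority ${PRIORITY}
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.147.154
    }
    track_script {
        check_haproxy
    }
}
EOF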
Start and check the service (run on all master nodes):

[root@k8s-master1 ~]# systemctl start keepalived
[root@k8s-master1 ~]# systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.

Check the startup status:

[root@k8s-master1 ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since 二 2023-08-15 13:38:02 CST; 10s ago
 Main PID: 18740 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─18740 /usr/sbin/keepalived -D
           ├─18741 /usr/sbin/keepalived -D
           └─18742 /usr/sbin/keepalived -D

8月 15 13:38:04 k8s-master1 Keepalived_vrrp[18742]: Sending gratuitous ARP on ens33 for 192.168.147.154
8月 15 13:38:04 k8s-master1 Keepalived_vrrp[18742]: Sending gratuitous ARP on ens33 for 192.168.147.154
8月 15 13:38:04 k8s-master1 Keepalived_vrrp[18742]: Sending gratuitous ARP on ens33 for 192.168.147.154
8月 15 13:38:04 k8s-master1 Keepalived_vrrp[18742]: Sending gratuitous ARP on ens33 for 192.168.147.154
8月 15 13:38:09 k8s-master1 Keepalived_vrrp[18742]: Sending gratuitous ARP on ens33 for 192.168.147.154
8月 15 13:38:09 k8s-master1 Keepalived_vrrp[18742]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens33 f....154
8月 15 13:38:09 k8s-master1 Keepalived_vrrp[18742]: Sending gratuitous ARP on ens33 for 192.168.147.154
8月 15 13:38:09 k8s-master1 Keepalived_vrrp[18742]: Sending gratuitous ARP on ens33 for 192.168.147.154
8月 15 13:38:09 k8s-master1 Keepalived_vrrp[18742]: Sending gratuitous ARP on ens33 for 192.168.147.154
8月 15 13:38:09 k8s-master1 Keepalived_vrrp[18742]: Sending gratuitous ARP on ens33 for 192.168.147.154
Hint: Some lines were ellipsized, use -l to show in full.

After startup, check the interfaces on master1; the VIP is bound to ens33:

[root@k8s-master1 ~]# ip a s ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:c7:3f:d6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.147.137/24 brd 192.168.147.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.147.154/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::bd67:1ba:506d:b021/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::146a:2496:1fdc:4014/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::5d98:c5e3:98f8:181/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever

3.3 Configuring and deploying haproxy

Install haproxy on all master hosts:

[root@k8s-master1 ~]# yum install -y haproxy

The configuration is identical on every master node. It declares each master apiserver as a backend server and sets the haproxy port to 16443, so port 16443 is the entry point of the cluster.

[root@k8s-master1 ~]# cat > /etc/haproxy/haproxy.cfg <<EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the listen and backend sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode                 tcp
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server      master1.k8s.io   192.168.147.137:6443 check
    server      master2.k8s.io   192.168.147.139:6443 check
    server      master3.k8s.io   192.168.147.140:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind                 *:1080
    stats auth           admin:awesomePassword
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /admin?stats
EOF
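Because 16443 is the cluster entry point, it is worth probing both the backends and the frontend once the control plane from section 3.6 exists. A minimal sketch, assuming curl is installed (-k / -sk skip certificate verification):

# Probe each apiserver backend directly, then the haproxy frontend on this master
for ip in 192.168.147.137 192.168.147.139 192.168.147.140; do
    printf '%s: ' "$ip"; curl -sk "https://$ip:6443/healthz"; echo
done
printf 'frontend: '; curl -sk https://127.0.0.1:16443/healthz; echo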
Start and check the service (run on all master nodes):

[root@k8s-master1 ~]# systemctl start haproxy
[root@k8s-master1 ~]# systemctl enable haproxy

Check the startup status:

[root@k8s-master1 ~]# systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
   Active: active (running) since 二 2023-08-15 13:43:11 CST; 15s ago
 Main PID: 18812 (haproxy-systemd)
   CGroup: /system.slice/haproxy.service
           ├─18812 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
           ├─18814 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
           └─18818 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds

8月 15 13:43:11 k8s-master1 systemd[1]: Started HAProxy Load Balancer.
8月 15 13:43:11 k8s-master1 haproxy-systemd-wrapper[18812]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f... -Ds
8月 15 13:43:11 k8s-master1 haproxy-systemd-wrapper[18812]: [WARNING] 226/134311 (18814) : config : option forward...ode.
8月 15 13:43:11 k8s-master1 haproxy-systemd-wrapper[18812]: [WARNING] 226/134311 (18814) : config : option forward...ode.
Hint: Some lines were ellipsized, use -l to show in full.

Check the listening ports:

[root@k8s-master1 ~]# netstat -lntup | grep haproxy
tcp        0      0 0.0.0.0:1080       0.0.0.0:*     LISTEN      18818/haproxy
tcp        0      0 0.0.0.0:16443      0.0.0.0:*     LISTEN      18818/haproxy
udp        0      0 0.0.0.0:40763      0.0.0.0:*                 18814/haproxy
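One subtlety worth testing at this point: with weight -2, a haproxy failure only lowers the MASTER's effective VRRP priority from 100 to 98, which is still above the BACKUPs' 90 and 80, so killing haproxy alone will not move the VIP with these particular values; stopping keepalived (or losing the node) will. A quick failover drill, sketched against the hosts above:

# On k8s-master1: give up the VIP
systemctl stop keepalived
# On k8s-master2: the VIP should appear within a few advertisement intervals
ip a s ens33 | grep 192.168.147.154
# On k8s-master1: take the VIP back (the higher-priority MASTER preempts on recovery)
systemctl start keepalived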
3.4 Configuring and deploying Docker

Deploy a Docker environment on every host; Kubernetes container orchestration requires Docker support.

[root@k8s-master ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2

When installing Docker with YUM, the Aliyun repository is recommended:

[root@k8s-master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-master ~]# yum clean all && yum makecache fast
[root@k8s-master ~]# yum -y install docker-ce
[root@k8s-master ~]# systemctl start docker
[root@k8s-master ~]# systemctl enable docker

Configure a registry mirror (all hosts):

[root@k8s-master ~]# cat <<END > /etc/docker/daemon.json
{
    "registry-mirrors": ["https://nyakyfun.mirror.aliyuncs.com"]
}
END
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart docker

3.5 Installing the kubelet, kubeadm and kubectl tools

Use the Aliyun YUM repository for Kubernetes as well. All hosts:

[root@k8s-master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

[root@k8s-node3 ~]# ls /etc/yum.repos.d/
backup  CentOS-Base.repo  CentOS-Media.repo  docker-ce.repo  kubernetes.repo

Install kubelet, kubeadm and kubectl (all hosts):

[root@k8s-master ~]# yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0
[root@k8s-master ~]# systemctl enable kubelet

3.6 Deploying the Kubernetes masters

Operate on the master that holds the VIP; here that is k8s-master1.

Create the kubeadm-config.yaml file:

[root@k8s-master1 ~]# cat > kubeadm-config.yaml <<EOF
apiServer:
  certSANs:
    - k8s-master1
    - k8s-master2
    - k8s-master3
    - master.k8s.io
    - 192.168.147.137
    - 192.168.147.139
    - 192.168.147.140
    - 192.168.147.154
    - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "master.k8s.io:6443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.1.0.0/16
scheduler: {}
EOF

List the images the deployment needs:

[root@k8s-master1 ~]# kubeadm config images list --config kubeadm-config.yaml
W0815 13:55:35.933304   19444 common.go:77] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta1". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.20.0
registry.aliyuncs.com/google_containers/pause:3.2
registry.aliyuncs.com/google_containers/etcd:3.4.13-0
registry.aliyuncs.com/google_containers/coredns:1.7.0
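The warning means the v1beta1 config schema is deprecated in kubeadm 1.20. It still works here, but the file can be converted with the command the warning suggests; a sketch (the new filename is arbitrary):

kubeadm config migrate --old-config kubeadm-config.yaml --new-config kubeadm-config-new.yaml
# Review the migrated file, then it can be used for the init below instead:
# kubeadm init --config kubeadm-config-new.yaml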
Upload the required image tarballs and import them (all master hosts):

[root@k8s-master1 ~]# mkdir master
[root@k8s-master1 ~]# cd master/
[root@k8s-master1 master]# rz -E
rz waiting to receive.
[root@k8s-master1 master]# ls
coredns_1.7.0.tar  kube-apiserver_v1.20.0.tar           kube-proxy_v1.20.0.tar     pause_3.2.tar
etcd_3.4.13-0.tar  kube-controller-manager_v1.20.0.tar  kube-scheduler_v1.20.0.tar
[root@k8s-master1 master]# ls | while read line
> do
>     docker load < $line
> done
225df95e717c: Loading layer 336.4kB/336.4kB
96d17b0b58a7: Loading layer 45.02MB/45.02MB

[root@k8s-master1 ~]# scp master/* 192.168.147.139:/root/master
[root@k8s-master1 ~]# scp master/* 192.168.147.140:/root/master
[root@k8s-master2/3 master]# ls | while read line; do docker load < $line; done

Initialize Kubernetes with kubeadm:

[root@k8s-master1 ~]# kubeadm init --config kubeadm-config.yaml

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join master.k8s.io:6443 --token zus2jc.brtsxszpyv03a57j \
    --discovery-token-ca-cert-hash sha256:20a551796d33309f20ad7579c710ea766ef39b64b98c37a4a4029a903f23300a \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join master.k8s.io:6443 --token zus2jc.brtsxszpyv03a57j \
    --discovery-token-ca-cert-hash sha256:20a551796d33309f20ad7579c710ea766ef39b64b98c37a4a4029a903f23300a

If the initialization fails with:

[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1

run the following command and then rerun the init command:

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables

Follow the instructions from the init output:

[root@k8s-master1 ~]# mkdir -p $HOME/.kube
[root@k8s-master1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the cluster component status:

[root@k8s-master1 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0               Healthy     {"health":"true"}

Note: the errors above appear because kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests/ set the insecure port to 0 by default; the fix is to comment out the corresponding port line.

Edit kube-controller-manager.yaml and kube-scheduler.yaml:

[root@k8s-master1 ~]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
    - --leader-elect=true
#    - --port=0
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt

[root@k8s-master1 ~]# vim /etc/kubernetes/manifests/kube-scheduler.yaml
    - --leader-elect=true
#    - --port=0
    image: registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.0
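Editing both manifests by hand on every master is error-prone; the same change can be made with one sed invocation per node. A sketch, assuming the stock manifest layout shown above (the kubelet restarts the static pods automatically when the files change):

sed -i 's/^\( *\)- --port=0/\1#- --port=0/' \
    /etc/kubernetes/manifests/kube-controller-manager.yaml \
    /etc/kubernetes/manifests/kube-scheduler.yaml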
Check the component status again:

[root@k8s-master1 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}

Check the pods:

[root@k8s-master1 ~]# kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-hdvw8              0/1     Pending   0          5m51s
coredns-7f89b7bc75-jbn4h              0/1     Pending   0          5m51s
etcd-k8s-master1                      1/1     Running   0          6m
kube-apiserver-k8s-master1            1/1     Running   0          6m
kube-controller-manager-k8s-master1   1/1     Running   0          2m56s
kube-proxy-x25rz                      1/1     Running   0          5m51s
kube-scheduler-k8s-master1            1/1     Running   0          2m17s

Check the nodes:

[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS     ROLES                  AGE     VERSION
k8s-master1   NotReady   control-plane,master   6m34s   v1.20.0

3.7 Installing the cluster network

On the k8s-master1 node:

[root@k8s-master1 ~]# rz -E
rz waiting to receive.
[root@k8s-master1 ~]# ll
总用量 52512
-rw-r--r--. 1 root root 53746688 12月 16 2020 flannel_v0.12.0-amd64.tar
-rw-r--r--. 1 root root    14366 11月 13 2020 kube-flannel.yml
[root@k8s-master1 ~]# docker load < flannel_v0.12.0-amd64.tar
256a7af3acb1: Loading layer 5.844MB/5.844MB
d572e5d9d39b: Loading layer 10.37MB/10.37MB
57c10be5852f: Loading layer 2.249MB/2.249MB
7412f8eefb77: Loading layer 35.26MB/35.26MB
05116c9ff7bf: Loading layer 5.12kB/5.12kB
Loaded image: quay.io/coreos/flannel:v0.12.0-amd64
[root@k8s-master1 ~]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/flannel created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

Check the nodes again:

[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS     ROLES                  AGE     VERSION
k8s-master1   NotReady   control-plane,master   8m27s   v1.20.0

The node is still not Ready, so install the CNI network plugins as well:

[root@k8s-master1 ~]# tar xf cni-plugins-linux-amd64-v0.8.6.tgz
[root@k8s-master1 ~]# cp flannel /opt/cni/bin/
[root@k8s-master1 ~]# kubectl apply -f kube-flannel.yml
configmap/kube-flannel-cfg unchanged
daemonset.apps/kube-flannel-ds-amd64 unchanged
daemonset.apps/kube-flannel-ds-arm64 unchanged
daemonset.apps/kube-flannel-ds-arm unchanged
daemonset.apps/kube-flannel-ds-ppc64le unchanged
daemonset.apps/kube-flannel-ds-s390x unchanged
[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES                  AGE   VERSION
k8s-master1   Ready    control-plane,master   11m   v1.20.0

3.8 Adding the remaining master nodes

Create the directories on the k8s-master2 and k8s-master3 nodes:

[root@k8s-master2 ~]# mkdir -p /etc/kubernetes/pki/etcd
[root@k8s-master3 ~]# mkdir -p /etc/kubernetes/pki/etcd

On k8s-master1, copy the keys and related files to k8s-master2 and k8s-master3:

[root@k8s-master1 ~]# scp /etc/kubernetes/admin.conf root@192.168.147.139:/etc/kubernetes
[root@k8s-master1 ~]# scp /etc/kubernetes/admin.conf root@192.168.147.140:/etc/kubernetes
[root@k8s-master1 ~]# scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@192.168.147.139:/etc/kubernetes/pki
[root@k8s-master1 ~]# scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@192.168.147.140:/etc/kubernetes/pki
[root@k8s-master1 ~]# scp /etc/kubernetes/pki/etcd/ca.* root@192.168.147.139:/etc/kubernetes/pki/etcd
[root@k8s-master1 ~]# scp /etc/kubernetes/pki/etcd/ca.* root@192.168.147.140:/etc/kubernetes/pki/etcd
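The per-file scp commands above can be collapsed into one loop; a sketch using the same files and target hosts:

# Push the CA and service-account material every additional control-plane node needs
for host in 192.168.147.139 192.168.147.140; do
    scp /etc/kubernetes/admin.conf root@$host:/etc/kubernetes
    scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@$host:/etc/kubernetes/pki
    scp /etc/kubernetes/pki/etcd/ca.* root@$host:/etc/kubernetes/pki/etcd
done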
Join the other masters to the cluster. Note that tokens generated by kubeadm init are only valid for one day, so generate a non-expiring token:

[root@k8s-master1 ~]# kubeadm token create --ttl 0 --print-join-command
kubeadm join master.k8s.io:6443 --token 4vd7c0.x8z96hhh4808n4fv --discovery-token-ca-cert-hash sha256:20a551796d33309f20ad7579c710ea766ef39b64b98c37a4a4029a903f23300a
[root@k8s-master1 ~]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS
p9u7gb.o9naimgqjauiuzr6   <forever> <never>                     authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token
xhfagw.6wkdnkdrd2rhkbe9   23h       2023-08-16T14:03:32+08:00   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token

Both k8s-master2 and k8s-master3 need to join:

[root@k8s-master3 master]# kubeadm join master.k8s.io:6443 --token zus2jc.brtsxszpyv03a57j --discovery-token-ca-cert-hash sha256:20a551796d33309f20ad7579c710ea766ef39b64b98c37a4a4029a903f23300a --control-plane
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.5. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-master3 master]# mkdir -p $HOME/.kube
[root@k8s-master3 master]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master3 master]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

If master2/3 fail to join with:

[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists

the message means /etc/kubernetes/pki/ca.crt already exists; delete the file and rerun the join command:

[root@k8s-master2 master]# rm -rf /etc/kubernetes/pki/ca.crt

Install the CNI plugins on master2/3:

[root@k8s-master3 master]# tar -xf cni-plugins-linux-amd64-v0.8.6.tgz
[root@k8s-master3 master]# rz -E
rz waiting to receive.
[root@k8s-master3 master]# cp flannel /opt/cni/bin/
[root@k8s-master3 master]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged configured
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/flannel unchanged
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
daemonset.apps/kube-flannel-ds-amd64 unchanged
daemonset.apps/kube-flannel-ds-arm64 unchanged
daemonset.apps/kube-flannel-ds-arm unchanged
daemonset.apps/kube-flannel-ds-ppc64le unchanged
daemonset.apps/kube-flannel-ds-s390x unchanged

Check the nodes from master1:

[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES                  AGE     VERSION
k8s-master1   Ready    control-plane,master   8m14s   v1.20.0
k8s-master2   Ready    control-plane,master   48s     v1.20.0
k8s-master3   Ready    control-plane,master   13s     v1.20.0
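As an aside, kubeadm can distribute the control-plane certificates itself instead of the manual scp used in this section: uploading them as an encrypted Secret yields a certificate key that joining masters consume. A sketch (angle-bracket values are placeholders; the uploaded certs expire after roughly two hours):

# On k8s-master1: upload the control-plane certs and print the certificate key
kubeadm init phase upload-certs --upload-certs
# On a joining master, no file copying is needed:
# kubeadm join master.k8s.io:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash> \
#     --control-plane --certificate-key <key-printed-above>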
3.9 Joining the Kubernetes worker nodes

On each node server, simply run the worker join command printed by the successful kubeadm init on k8s-master1:

[root@k8s-node3 ~]# kubeadm join master.k8s.io:6443 --token zus2jc.brtsxszpyv03a57j \
    --discovery-token-ca-cert-hash sha256:20a551796d33309f20ad7579c710ea766ef39b64b98c37a4a4029a903f23300a

[root@k8s-node1 ~]# docker load < flannel_v0.12.0-amd64.tar
Loaded image: quay.io/coreos/flannel:v0.12.0-amd64

Install the CNI plugins the same way as on the masters.

Check the nodes:

[root@k8s-master1 demo]# kubectl get nodes
NAME          STATUS   ROLES                  AGE   VERSION
k8s-master1   Ready    control-plane,master   45m   v1.20.0
k8s-master2   Ready    control-plane,master   37m   v1.20.0
k8s-master3   Ready    control-plane,master   37m   v1.20.0
k8s-node1     Ready    <none>                 32m   v1.20.0
k8s-node2     Ready    <none>                 31m   v1.20.0
k8s-node3     Ready    <none>                 31m   v1.20.0

3.10 Testing the Kubernetes cluster

Import the test image on all node hosts:

[root@k8s-node1 ~]# docker load < nginx-1.19.tar
[root@k8s-node1 ~]# docker tag nginx nginx:1.19.6

Create a pod in the Kubernetes cluster to verify that it runs normally:

[root@k8s-master1 ~]# mkdir demo
[root@k8s-master1 ~]# cd demo
[root@k8s-master1 demo]# vim nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.6
        ports:
        - containerPort: 80

After writing the Deployment manifest, use create to run it; get pods shows that the Pod containers were created automatically:

[root@k8s-master1 demo]# kubectl create -f nginx-deployment.yaml
deployment.apps/nginx-deployment created

[root@k8s-master1 demo]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-76ccf9dd9d-dhcl8   1/1     Running   0          11m
nginx-deployment-76ccf9dd9d-psn8p   1/1     Running   0          11m
nginx-deployment-76ccf9dd9d-xllhp   1/1     Running   0          11m

[root@k8s-master1 demo]# kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
nginx-deployme-596f5df7f-8mhzz   1/1     Running   0          5m10s   10.244.4.4   k8s-node3   <none>           <none>
nginx-deployme-596f5df7f-ql7l7   1/1     Running   0          5m10s   10.244.4.3   k8s-node3   <none>           <none>
nginx-deployme-596f5df7f-x6pgv   1/1     Running   0          5m10s   10.244.4.2   k8s-node3   <none>           <none>
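Even before a Service exists, the pods are reachable inside the cluster at their pod IPs over the flannel overlay. A quick sketch, run from any cluster node; the pod IP is taken from the -o wide output above and will differ per cluster:

curl -s http://10.244.4.4 | grep -i '<title>'   # expect: <title>Welcome to nginx!</title>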
Create the Service manifest. The nginx-service manifest defines a Service named nginx-service with the label selector app: nginx and type NodePort, which lets external traffic reach the containers. The ports list publishes port 80 externally and targets port 80 inside the containers.

[root@k8s-master1 demo]# vim nginx-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

[root@k8s-master1 demo]# kubectl create -f nginx-service.yaml
service/nginx-service created
[root@k8s-master1 demo]# kubectl get svc
NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   10.1.0.1      <none>        443/TCP        52m
nginx-service   NodePort    10.1.39.231   <none>        80:31418/TCP   14s

Access nginx through a browser at http://master.k8s.io:31418, using either the domain name or the VIP address:

[root@k8s-master1 demo]# elinks --dump http://master.k8s.io:31418
                             Welcome to nginx!

   If you see this page, the nginx web server is successfully installed and
   working. Further configuration is required.

   For online documentation and support please refer to [1]nginx.org.
   Commercial support is available at [2]nginx.com.

   Thank you for using nginx.

References

   Visible links
   1. http://nginx.org/
   2. http://nginx.com/

Suspend the k8s-master1 node and refresh the page: nginx is still reachable, which shows that the high-availability cluster works. Checking the interfaces confirms that the VIP has moved to the k8s-master2 node:

[root@k8s-master2 master]# ip a s ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:ae:1d:c6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.147.139/24 brd 192.168.147.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.147.154/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::bd67:1ba:506d:b021/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::146a:2496:1fdc:4014/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::5d98:c5e3:98f8:181/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

At this point the enterprise-grade highly available Kubernetes environment is complete.

4. Project summary

1. As long as one master node in the cluster is running normally, it can keep serving business traffic.
2. To use kubectl-related commands on a master node, at least two master nodes must be running normally; otherwise you will see errors such as "Unable to connect to the server: net/http: TLS handshake timeout".
3. Pods migrate automatically when a node fails: per the controller-manager --pod-eviction-timeout setting (default 5 minutes), Kubernetes marks the pods on a dead node as Unknown after 5 minutes and starts replacements on other nodes. When the failed node recovers, Kubernetes deletes its Unknown pods. To force the migration immediately, use kubectl drain nodename, as shown below.
4. For high availability, deploy at least three master nodes and three worker nodes, and keep the master count odd (3, 5, 7, 9).
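For point 3, a sketch of forcing the migration immediately and returning the node to service afterwards, using a node name from this cluster:

# Evict the pods right away instead of waiting ~5 minutes for eviction
kubectl drain k8s-node1 --ignore-daemonsets
# Once the node is healthy again, allow scheduling on it
kubectl uncordon k8s-node1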