## Table of Contents

- k8s concepts
- Installation, first version
  - Passwordless SSH, hosts, disabling swap, enabling IPv4 forwarding
  - Pre-install script
  - Enabling ip_vs
  - Installing a pinned Docker version
  - Installing kubeadm / kubectl / kubelet (base template)
  - One master + one worker deployment
  - Three-master deployment (if a keepalived load balancer is unavailable, experiment on a single node and skip the load-balancing steps)
  - Creating the virtual load-balancer IP
  - Reference link for the first version
- Installing a Kubernetes v1.23.5 cluster
  - Pre-install preparation
  - Installing containerd on all nodes
  - apiserver high availability
  - Kubeadm installation and configuration
  - Installing kubectl
  - Master node configuration
  - Node configuration
- Reference link for the second version

## k8s concepts

- **K8sMaster**: manages the K8sNodes.
- **K8sNode**: a worker node with a Docker environment and the k8s components (kubelet, kube-proxy) that carries the container workloads.
- **Controller-manager**: the brain of k8s. It watches and manages the state of the whole cluster through the API Server and drives the cluster toward its desired state.
- **API Server**: exposes the HTTP REST interface for create/read/update/delete and watch operations on all k8s resource objects (Pod, RC, Service, and so on); it is the data bus and data hub of the entire system.
- **etcd**: a highly available, strongly consistent service-discovery store; in a Kubernetes cluster etcd is mainly used for configuration sharing and service discovery.
- **Scheduler**: finds the most suitable node in the cluster for newly created Pods and schedules them onto that K8sNode.
- **kubelet**: the bridge between the Kubernetes master and each node; it carries out the tasks the master sends down to its node and manages Pods and the containers inside them.
- **k-proxy (kube-proxy)**: a network proxy component that runs on every worker node and maintains that node's network rules. These rules allow network sessions from inside or outside the cluster to communicate with Pods. It watches the API Server for resource changes and configures load balancing for Services on the backend.
- **Pod**: a packaged environment for a group of containers. In a Kubernetes cluster the Pod is the basis of every workload type and the smallest unit k8s manages; it is a combination of one or more containers that share storage, network, namespaces, and a spec for how to run. As an analogy: k8s is the school, a Pod is a class, and containers are the students.

## Installation, first version

### Passwordless SSH, hosts, disabling swap, enabling IPv4 forwarding

First download the files:

```shell
cd /root
yum install -y git
git clone https://gitee.com/hanfeng_edu/mastering_kubernetes.git
...(output omitted)...
```
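As an aside to the concept list above, the "Pod = one or more containers sharing storage and network" idea can be made concrete with a minimal manifest. This is a sketch; the names and image are illustrative, not taken from the original post:

```shell
# Write a minimal Pod manifest: one "class" (the Pod) holding one
# "student" (a single nginx container).
cat > demo-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod          # illustrative name
spec:
  containers:
  - name: web
    image: nginx:1.20     # any image works; nginx keeps the example small
    ports:
    - containerPort: 80
EOF
echo "manifest written: demo-pod.yaml"
```

Once a cluster from either walkthrough below is up, `kubectl apply -f demo-pod.yaml` creates the Pod and `kubectl get pod demo-pod -o wide` shows which node the Scheduler picked for it.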
SSH key files: public key (`id_dsa.pub`), private key (`id_dsa`), authorized list file (`authorized_keys`). Note these settings in `/etc/ssh/sshd_config`:

```
ChallengeResponseAuthentication no
PermitRootLogin yes        # usually this one alone is enough
PasswordAuthentication yes
PubkeyAuthentication yes
```

`/etc/hosts`:

```
192.168.100.8  k8sMaster-1
192.168.100.9  k8sNode-1
192.168.100.10 k8sNode-2
```

### Pre-install script

```shell
#!/bin/bash

################# System environment setup ####################

# Disable SELinux / firewalld
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

# Disable the swap partition
swapoff -a
cp /etc/{fstab,fstab.bak}
grep -v swap /etc/fstab.bak > /etc/fstab

# sysctl settings for iptables
cat >> /etc/sysctl.conf <<EOF
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
modprobe br_netfilter
sysctl -p

# Time synchronization
yum install -y ntpdate
ln -nfsv /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
```

### Enabling ip_vs

```shell
#!/bin/bash

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
    /sbin/modinfo -F filename \${kernel_module} >/dev/null 2>&1
    if [ \$? -eq 0 ]; then
        /sbin/modprobe \${kernel_module}
    fi
done
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep ip_vs
```

### Installing a pinned Docker version

Reference: https://docs.docker.com/engine/install/centos/

Remove old versions:

```shell
yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-engine
```

Install the required dependencies:

```shell
yum install -y yum-utils device-mapper-persistent-data lvm2
```

Add the repository:

```shell
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
```

Refresh the cache and install Docker CE:

```shell
yum makecache fast
yum install docker-ce-18.06.3.ce-3.el7 docker-ce-cli-18.06.3.ce-3.el7 containerd.io -y
```

Configure a Docker registry mirror:

```shell
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
    "registry-mirrors": ["https://xxxxxx.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```

### Installing kubeadm, kubectl, kubelet (base template)

```shell
#!/bin/bash

# Dependencies the packages may need
yum install -y yum-utils device-mapper-persistent-data lvm2

# Use the Aliyun repository to install the Kubernetes tools
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF

# Install kubeadm, kubelet, kubectl
yum -y install kubeadm-1.17.0 kubectl-1.17.0 kubelet-1.17.0

# Allow forwarding through the firewall
sed -i '13i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT' /usr/lib/systemd/system/docker.service

# Create the directory
if [ ! -d /etc/docker ];then
    mkdir -p /etc/docker
fi

# Docker startup parameters
cat > /etc/docker/daemon.json <<EOF
{
    "registry-mirrors": ["https://xxxx.mirror.aliyuncs.com"],
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m"
    },
    "storage-driver": "overlay2"
}
EOF

# Enable on boot
systemctl enable docker
systemctl enable kubelet
systemctl daemon-reload
systemctl restart docker
```

After the installation completes it looks as in the screenshot (shown in the original post).

### One master + one worker deployment

1. Create the K8S configuration file on the master node:

```shell
cat /etc/kubernetes/kubeadm-config.yaml
```

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
controlPlaneEndpoint: 192.168.100.8:6443
apiServer:
  certSANs:
  - 192.168.100.8
networking:
  podSubnet: 10.244.0.0/16
imageRepository: registry.aliyuncs.com/google_containers
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
```

`192.168.100.8` in the file above is the master's address.

2. Initialize the cluster:

```shell
# kubeadm init --config /etc/kubernetes/kubeadm-config.yaml
# mkdir -p $HOME/.kube
# cp -f /etc/kubernetes/admin.conf ${HOME}/.kube/config
# curl -fsSL https://docs.projectcalico.org/v3.9/manifests/calico.yaml | sed 's#192.168.0.0/16#10.244.0.0/16#g' | kubectl apply -f -
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
```

3. Join the worker node to the master's cluster:

```shell
# kubeadm join 192.168.100.8:6443 --token hrz6jc.8oahzhyv74yrpem5 \
    --discovery-token-ca-cert-hash sha256:25f51d27d64c55ea9d89d5af839b97d37dfaaf0413d00d481f7f59bd6556ee43
```

4. Check the cluster state:

```shell
# kubectl get nodes
```

### Three-master deployment

If a keepalived load balancer is unavailable, you can experiment on a single node and skip the load-balancing steps.

#### Creating the virtual load-balancer IP

1. Install keepalived on the three master nodes:

```shell
# yum install -y socat keepalived ipvsadm conntrack
```

Then load the ip_vs kernel modules with the same script as in "Enabling ip_vs" above.

2. Create the following keepalived configuration file:

```
# cat /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 80
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass just0kk
    }
    virtual_ipaddress {
        192.168.100.199
    }
}

virtual_server 192.168.100.199 6443 {
    delay_loop 6
    lb_algo loadbalance
    lb_kind DR
    net_mask 255.255.255.0
    persistence_timeout 0
    protocol TCP

    real_server 192.168.100.13 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.100.12 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.100.14 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
```

Distribute this configuration file to the other two machines:

```shell
# scp /etc/keepalived/keepalived.conf root@192.168.100.12:/etc/keepalived/
keepalived.conf                100% 1257     1.7MB/s   00:00
# scp /etc/keepalived/keepalived.conf root@192.168.100.13:/etc/keepalived/
```

Then change the `priority 100` parameter so each machine has a different value, and start all three. If the virtual address answers ping, it works:

```shell
# systemctl start keepalived
# ping 192.168.100.199
PING 192.168.100.199 (192.168.100.199) 56(84) bytes of data.
64 bytes from 192.168.100.199: icmp_seq=1 ttl=64 time=0.064 ms
--- 192.168.100.199 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms
```

3. Create the cluster initialization configuration file (on one master node only):

```yaml
$ cat /etc/kubernetes/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
controlPlaneEndpoint: 192.168.100.199:6443
apiServer:
  certSANs:
  - 192.168.100.12
  - 192.168.100.13
  - 192.168.100.14
  - 192.168.100.199
networking:
  podSubnet: 10.244.0.0/16
imageRepository: registry.aliyuncs.com/google_containers
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
```

4. Initialize the k8s cluster:

```shell
# kubeadm init --config /etc/kubernetes/kubeadm-config.yaml
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# curl -fsSL https://docs.projectcalico.org/v3.9/manifests/calico.yaml | sed 's#192.168.0.0/16#10.244.0.0/16#g' | kubectl apply -f -
```

Check the container status with `kubectl get pods -n kube-system`:

```shell
# kubectl get pods -n kube-system
NAME                                            READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7cc97544d-lx8zn         1/1     Running   0          67s
calico-node-v8zql                               1/1     Running   0          67s
coredns-9d85f5447-s5q64                         1/1     Running   0          77s
coredns-9d85f5447-wv4c5                         1/1     Running   0          77s
etcd-gcp-honkong-k8s-doc04                      1/1     Running   0          94s
kube-apiserver-gcp-honkong-k8s-doc04            1/1     Running   0          94s
kube-controller-manager-gcp-honkong-k8s-doc04   1/1     Running   0          94s
kube-proxy-mg6ns                                1/1     Running   0          77s
kube-scheduler-gcp-honkong-k8s-doc04            1/1     Running   0          94s
```

5. Set up passwordless SSH between the masters, then run:

```shell
# cat k8s-cluster-other-init.sh
#!/bin/bash
IPS=(192.168.100.12 192.168.100.13)
JOIN_CMD=$(kubeadm token create --print-join-command 2>/dev/null)

for index in 0 1; do
    ip=${IPS[${index}]}
    ssh $ip "mkdir -p /etc/kubernetes/pki/etcd; mkdir -p ~/.kube/"
    scp /etc/kubernetes/pki/ca.crt $ip:/etc/kubernetes/pki/ca.crt
    scp /etc/kubernetes/pki/ca.key $ip:/etc/kubernetes/pki/ca.key
    scp /etc/kubernetes/pki/sa.key $ip:/etc/kubernetes/pki/sa.key
    scp /etc/kubernetes/pki/sa.pub $ip:/etc/kubernetes/pki/sa.pub
    scp /etc/kubernetes/pki/front-proxy-ca.crt $ip:/etc/kubernetes/pki/front-proxy-ca.crt
    scp /etc/kubernetes/pki/front-proxy-ca.key $ip:/etc/kubernetes/pki/front-proxy-ca.key
    scp /etc/kubernetes/pki/etcd/ca.crt $ip:/etc/kubernetes/pki/etcd/ca.crt
    scp /etc/kubernetes/pki/etcd/ca.key $ip:/etc/kubernetes/pki/etcd/ca.key
    scp /etc/kubernetes/admin.conf $ip:/etc/kubernetes/admin.conf
    scp /etc/kubernetes/admin.conf $ip:~/.kube/config
    ssh ${ip} "${JOIN_CMD} --control-plane"
done
```

### Reference link for the first version

https://gitee.com/hanfeng_edu/mastering_kubernetes.git

## Installing a Kubernetes v1.23.5 cluster

### Pre-install preparation

```shell
hostnamectl set-hostname k8s-01   # run on every machine with its own name
bash                              # reload the hostname
cat >> /etc/hosts <<EOF
192.168.100.18 k8s-01
192.168.100.19 k8s-02
192.168.100.20 k8s-03
192.168.100.21 k8s-04
192.168.100.22 k8s-05
EOF
```

Set k8s-01 as the distribution machine (the following only needs to run on k8s-01):

```shell
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y expect

# Adjust /etc/ssh/sshd_config on the servers:
#   PermitRootLogin yes        # usually this one alone is enough
#   PasswordAuthentication yes
sed -i 's/#PermitRootLogin no/PermitRootLogin yes/g' /etc/ssh/sshd_config
sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/g' /etc/ssh/sshd_config

# Distribute the public key
ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
for i in k8s-01 k8s-02 k8s-03 k8s-04 k8s-05;do
expect -c "
spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$i
expect {
    \"*yes/no*\" {send \"yes\r\"; exp_continue}
    \"*password*\" {send \"123456\r\"; exp_continue}
    \"*Password*\" {send \"123456\r\";}
}"
done
```

The server password here is 123456; change it to your own.

```shell
systemctl stop firewalld
systemctl disable firewalld
iptables -F
iptables -X
iptables -F -t nat
iptables -X -t nat
iptables -P FORWARD ACCEPT
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all
yum makecache
yum -y install gcc gcc-c++ make autoconf libtool-ltdl-devel gd-devel freetype-devel libxml2-devel libjpeg-devel libpng-devel openssh-clients openssl-devel curl-devel bison patch libmcrypt-devel libmhash-devel ncurses-devel binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel glibc glibc-common glibc-devel libgcj libtiff pam-devel libicu libicu-devel gettext-devel libaio-devel libaio libgcc libstdc++ libstdc++-devel unixODBC unixODBC-devel numactl-devel glibc-headers sudo bzip2 mlocate flex lrzsz sysstat lsof setuptool system-config-network-tui system-config-firewall-tui ntsysv ntp pv lz4 dos2unix unix2dos rsync dstat iotop innotop mytop telnet iftop expect cmake nc gnuplot screen xorg-x11-utils xorg-x11-xinit rdate bc expat-devel compat-expat1 tcpdump sysstat man nmap curl lrzsz elinks finger bind-utils traceroute mtr ntpdate zip unzip vim wget net-tools

modprobe br_netfilter
modprobe ip_conntrack
cat >> /etc/rc.sysinit <<EOF
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
    [ -x \$file ] && \$file
done
EOF
echo "modprobe br_netfilter" > /etc/sysconfig/modules/br_netfilter.modules
echo "modprobe ip_conntrack" > /etc/sysconfig/modules/ip_conntrack.modules
chmod 755 /etc/sysconfig/modules/br_netfilter.modules
chmod 755 /etc/sysconfig/modules/ip_conntrack.modules
```

Kernel tuning:

```shell
cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0              # never use swap; only allowed when the system OOMs
vm.overcommit_memory=1       # do not check whether physical memory is sufficient
vm.panic_on_oom=0            # enable OOM handling
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf

# Distribute to all nodes
for i in k8s-02 k8s-03 k8s-04 k8s-05
do
    scp kubernetes.conf root@$i:/etc/sysctl.d/
    ssh root@$i sysctl -p /etc/sysctl.d/kubernetes.conf
    ssh root@$i "echo 1 > /proc/sys/net/ipv4/ip_forward"
done
```

bridge-nf lets netfilter filter IPv4/ARP/IPv6 packets crossing a Linux bridge. For example, after setting `net.bridge.bridge-nf-call-iptables=1`, packets forwarded by a layer-2 bridge are also filtered by the iptables FORWARD rules. The commonly used options are:

- `net.bridge.bridge-nf-call-arptables`: whether to filter the bridge's ARP packets in arptables' FORWARD chain
- `net.bridge.bridge-nf-call-ip6tables`: whether to filter IPv6 packets in the ip6tables chains
- `net.bridge.bridge-nf-call-iptables`: whether to filter IPv4 packets in the iptables chains
- `net.bridge.bridge-nf-filter-vlan-tagged`: whether to filter VLAN-tagged packets in iptables/arptables

Run on all nodes:

```shell
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack
yum install ipset -y
yum install ipvsadm -y

timedatectl set-timezone Asia/Shanghai
# Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
# Restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond
```

Upgrade the kernel:

```shell
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# Installs the latest kernel by default
yum --enablerepo=elrepo-kernel install kernel-ml
# Adjust the kernel boot order
grub2-set-default 0
grub2-mkconfig -o /etc/grub2.cfg
# Confirm the default kernel points at the one just installed;
# the output should be the upgraded kernel
grubby --default-kernel
reboot

yum update -y
```

### Installing containerd on all nodes

```shell
[root@k8s-03 ~]# rpm -qa | grep libseccomp
libseccomp-2.3.1-4.el7.x86_64
[root@k8s-03 ~]# rpm -e libseccomp-2.3.1-4.el7.x86_64 --nodeps
wget http://rpmfind.net/linux/centos/8-stream/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm
rpm -ivh libseccomp-2.5.1-1.el8.x86_64.rpm

# Create the containerd directory
mkdir /etc/containerd -p

# Install containerd
wget https://github.com/containerd/containerd/releases/download/v1.6.4/cri-containerd-cni-1.6.4-linux-amd64.tar.gz
# Extract over / so everything lands in the matching directories
tar zxvf cri-containerd-cni-1.6.4-linux-amd64.tar.gz -C /
etc/
etc/systemd/
etc/systemd/system/
etc/systemd/system/containerd.service
etc/crictl.yaml
etc/cni/
etc/cni/net.d/
etc/cni/net.d/10-containerd-net.conflist
usr/
usr/local/
usr/local/sbin/
usr/local/sbin/runc
usr/local/bin/
usr/local/bin/crictl
usr/local/bin/ctd-decoder
usr/local/bin/ctr
usr/local/bin/containerd-shim
usr/local/bin/containerd
usr/local/bin/containerd-shim-runc-v1
usr/local/bin/critest
usr/local/bin/containerd-shim-runc-v2
usr/local/bin/containerd-stress
opt/
opt/containerd/
opt/containerd/cluster/
opt/containerd/cluster/version
opt/containerd/cluster/gce/
opt/containerd/cluster/gce/cni.template
opt/containerd/cluster/gce/env
opt/containerd/cluster/gce/configure.sh
opt/containerd/cluster/gce/cloud-init/
opt/containerd/cluster/gce/cloud-init/node.yaml
opt/containerd/cluster/gce/cloud-init/master.yaml
opt/cni/
opt/cni/bin/
opt/cni/bin/firewall
opt/cni/bin/portmap
opt/cni/bin/host-local
opt/cni/bin/ipvlan
opt/cni/bin/host-device
opt/cni/bin/sbr
opt/cni/bin/vrf
opt/cni/bin/static
opt/cni/bin/tuning
opt/cni/bin/bridge
opt/cni/bin/macvlan
opt/cni/bin/bandwidth
opt/cni/bin/vlan
opt/cni/bin/dhcp
opt/cni/bin/loopback
opt/cni/bin/ptp
```

After installation, add the binaries to PATH (see https://www.orchome.com/16586 for reference):

```shell
echo "export PATH=$PATH:/usr/local/bin" >> /etc/profile
. /etc/profile

containerd config default > /etc/containerd/config.toml
# Replace the default pause image address
```

### apiserver high availability

```shell
# First add hosts entries on top of the existing ones; master nodes only
cat >> /etc/hosts <<EOF
192.168.100.18 k8s-master-01
192.168.100.19 k8s-master-02
192.168.100.20 k8s-master-03
192.168.100.199 apiserver.cc100.cn
EOF
```

Compile and install nginx:

```shell
# Install the dependencies
yum install pcre pcre-devel openssl openssl-devel gcc gcc-c++ automake autoconf libtool make wget vim lrzsz -y
wget https://nginx.org/download/nginx-1.20.2.tar.gz
tar xf nginx-1.20.2.tar.gz
cd nginx-1.20.2/
useradd nginx -s /sbin/nologin -M
./configure --prefix=/opt/nginx/ --with-pcre --with-http_ssl_module --with-http_stub_status_module --with-stream --with-http_stub_status_module --with-http_gzip_static_module
make && make install
```

Manage it with systemctl and enable it on boot:

```shell
cat > /usr/lib/systemd/system/nginx.service <<EOF
# /usr/lib/systemd/system/nginx.service
[Unit]
Description=The nginx HTTP and reverse proxy server
After=network.target sshd-keygen.service

[Service]
Type=forking
EnvironmentFile=/etc/sysconfig/sshd
ExecStartPre=/opt/nginx/sbin/nginx -t -c /opt/nginx/conf/nginx.conf
ExecStart=/opt/nginx/sbin/nginx -c /opt/nginx/conf/nginx.conf
ExecReload=/opt/nginx/sbin/nginx -s reload
ExecStop=/opt/nginx/sbin/nginx -s stop
Restart=on-failure
RestartSec=42s

[Install]
WantedBy=multi-user.target
EOF

# Enable and start
[root@k8s-01 nginx-1.20.2]# systemctl enable nginx --now
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
```

Edit `nginx.conf`:

```
user nginx nginx;
worker_processes auto;
events {
    worker_connections 20240;
    use epoll;
}
error_log /var/log/nginx_error.log info;

stream {
    upstream kube-servers {
        hash $remote_addr consistent;
        server k8s-master-01:6443 weight=5 max_fails=1 fail_timeout=3s;   # IPs work here too
        server k8s-master-02:6443 weight=5 max_fails=1 fail_timeout=3s;
        server k8s-master-03:6443 weight=5 max_fails=1 fail_timeout=3s;
    }
    server {
        listen 8443 reuseport;
        proxy_connect_timeout 3s;
        # raise the timeout
        proxy_timeout 3000s;
        proxy_pass kube-servers;
    }
}
```

Distribute to the other master nodes:

```shell
for i in k8s-02 k8s-03
do
    scp nginx.conf root@$i:/opt/nginx/conf/
    ssh root@$i systemctl restart nginx
done
```

Adjust the keepalived configuration: `router_id` is the node IP, `mcast_src_ip` is the node IP, `virtual_ipaddress` is the VIP. Change them to match your environment:

```shell
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id 192.168.100.20    # node IP; each master uses its own
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 8443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 192.168.100.20   # node IP
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.100.199           # VIP
    }
}
EOF
```

Write the health-check script (`vim /etc/keepalived/check_port.sh`):

```shell
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
    PORT_PROCESS=$(ss -lt | grep $CHK_PORT | wc -l)
    if [ $PORT_PROCESS -eq 0 ];then
        echo "Port $CHK_PORT Is Not Used,End."
        exit 1
    fi
else
    echo "Check Port Cant Be Empty!"
fi
```

Start keepalived:

```shell
systemctl enable --now keepalived
```

Test whether the VIP responds:

```shell
ping vip
ping apiserver.cc.com   # our domain
```

### Kubeadm installation and configuration

First configure the kubeadm repository on k8s-01; the kubeadm operations below only need to run on k8s-01.

Replace the repository:

```shell
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.23.5 kubeadm-1.23.5 kubectl-1.23.5 --disableexcludes=kubernetes
systemctl enable --now kubelet
```

Print the default configuration:

```shell
kubeadm config print init-defaults > kubeadm-init.yaml
```

Customize the following:

```yaml
[root@k8s-01 ~]# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.100.18   # k8s-01 IP address
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-01
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
  extraArgs:
    etcd-servers: https://192.168.100.18:2379,https://192.168.100.19:2379,https://192.168.100.20:2379
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: 1.23.5
controlPlaneEndpoint: apiserver.cc100.cn:8443   # HA address; the VIP in this setup
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs   # kube-proxy mode
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
cgroupDriver: systemd   # configure the cgroup driver
logging: {}
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
```

Check the syntax:

```shell
kubeadm init --config kubeadm-init.yaml --dry-run
```

Pre-pull the images:

```shell
kubeadm config images list --config kubeadm-init.yaml
```
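If the nodes have direct registry access, the list that `kubeadm config images list` prints can also be pre-pulled with containerd's `ctr` instead of importing the tarball used below. A sketch; `pull_kubeadm_images` is my own helper name, and `ctr` is assumed to be on PATH from the containerd section:

```shell
# Hypothetical helper: pull every image listed in a file into the
# k8s.io containerd namespace that the kubelet/CRI uses.
# Usage:
#   kubeadm config images list --config kubeadm-init.yaml > images.txt
#   pull_kubeadm_images images.txt
pull_kubeadm_images() {
    while IFS= read -r image; do
        [ -z "$image" ] && continue          # skip blank lines
        echo "pulling $image"
        ctr -n k8s.io images pull "$image"
    done < "$1"
}
```

The `-n k8s.io` flag matters: images pulled into containerd's default namespace are invisible to the kubelet, which is also why the tarball import below uses `ctr -n k8s.io i import`.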
```shell
wget https://d.frps.cn/file/kubernetes/image/k8s_all_1.23.5.tar
ctr -n k8s.io i import k8s_all_1.23.5.tar

# Copy to the other nodes
for i in k8s-02 k8s-03 k8s-04 k8s-05;do
    scp k8s_all_1.23.5.tar root@$i:/root/
    ssh root@$i ctr -n k8s.io i import k8s_all_1.23.5.tar
done
```

### Installing kubectl

kubeadm does not install or manage kubelet or kubectl, so make sure their versions match the kubeadm and Kubernetes versions; otherwise you risk version skew. A small version skew between the kubelet and k8s is supported, but the kubelet version may never exceed the API Server version.

```shell
# Download the 1.23.5 kubectl binary
[root@k8s-01 ~]# curl -LO https://dl.k8s.io/release/v1.23.5/bin/linux/amd64/kubectl
[root@k8s-01 ~]# chmod +x kubectl && mv kubectl /usr/local/bin/
# Check the kubectl version
[root@k8s-01 ~]# kubectl version --client --output=yaml
clientVersion:
  buildDate: "2022-03-16T15:58:47Z"
  compiler: gc
  gitCommit: c285e781331a3785a7f436042c65c5641ce8a9e9
  gitTreeState: clean
  gitVersion: v1.23.5
  goVersion: go1.17.8
  major: "1"
  minor: "23"
  platform: linux/amd64

# Copy kubectl to the other master nodes
for i in k8s-02 k8s-03;do
    scp /usr/local/bin/kubectl root@$i:/usr/local/bin/kubectl
    ssh root@$i chmod +x /usr/local/bin/kubectl
done
```

Initialize:

```shell
kubeadm init --config kubeadm-init.yaml --upload-certs
```

```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join apiserver.cc100.cn:8443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:97660eefdb5f8f4c809fe6d063e77b596aae307864045195db3609a83b54415a \
        --control-plane --certificate-key b67f07fdb20c39157f9a052902f48a83e1624dc4682ea9cd6db752bf31028a8d

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join apiserver.cc100.cn:8443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:97660eefdb5f8f4c809fe6d063e77b596aae307864045195db3609a83b54415a
```

Save the token printed by init. Copy the kubectl kubeconfig (the default path is `~/.kube/config`):

```shell
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
```

The initialization configuration is saved in a configmap:

```shell
kubectl -n kube-system get cm kubeadm-config -o yaml
```

Check the version:

```shell
[root@k8s-01 kubernetes]# kubectl get node
NAME     STATUS   ROLES                  AGE    VERSION
k8s-01   Ready    control-plane,master   2m1s   v1.23.5
```

### Master node configuration

```shell
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```

Install the components:

```shell
yum install -y kubelet-1.23.5 kubeadm-1.23.5 kubectl-1.23.5 --disableexcludes=kubernetes
systemctl enable --now kubelet
```

Join the other masters to the primary cluster:

```shell
kubeadm join apiserver.cc100.cn:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:97660eefdb5f8f4c809fe6d063e77b596aae307864045195db3609a83b54415a \
    --control-plane --certificate-key b67f07fdb20c39157f9a052902f48a83e1624dc4682ea9cd6db752bf31028a8d
```

After it completes, it prints:

```
...(output omitted)...
To start administering your cluster from this node, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
```

```shell
[root@k8s-03 ~]# mkdir -p $HOME/.kube
[root@k8s-03 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-03 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-03 ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
k8s-01   Ready    control-plane,master   7m18s   v1.23.5
k8s-02   Ready    control-plane,master   65s     v1.23.5
k8s-03   Ready    control-plane,master   15s     v1.23.5
```

### Node configuration

```shell
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubeadm-1.23.5 --disableexcludes=kubernetes
systemctl enable kubelet.service

kubeadm join apiserver.cc100.cn:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:97660eefdb5f8f4c809fe6d063e77b596aae307864045195db3609a83b54415a
```

To join more nodes later, print a fresh join command on the k8s-01 master:

```shell
kubeadm token create --print-join-command
```

### Reference link for the second version

https://i4t.com/5488.html
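One closing note on the join commands used in both walkthroughs: the `--discovery-token-ca-cert-hash` value does not have to be copied out of the original `kubeadm init` output; it can be recomputed at any time from the cluster CA certificate. A sketch, assuming the default kubeadm certificate path (adjust if yours differs); `ca_cert_hash` is my own helper name:

```shell
# Recompute the sha256 hash of the CA's public key, in the format
# kubeadm join expects for --discovery-token-ca-cert-hash.
ca_cert_hash() {
    local cert="${1:-/etc/kubernetes/pki/ca.crt}"
    openssl x509 -pubkey -in "$cert" \
        | openssl pkey -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex \
        | awk '{print "sha256:" $NF}'
}
```

`kubeadm token create --print-join-command` (shown above) prints the same hash, so the two can be cross-checked when in doubt.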