I. Installing the load balancer

(See the official kubeadm high-availability guide.)

1. Prepare three machines

| Node name | IP |
| --- | --- |
| master-1 | 192.168.1.11 |
| master-2 | 192.168.1.12 |
| master-3 | 192.168.1.13 |

2. On each of the three machines, install haproxy and keepalived as the load balancer

```shell
# Install haproxy
sudo dnf install haproxy -y

# Install keepalived
sudo yum install epel-release -y
sudo yum install keepalived -y

# Confirm the installation
sudo dnf info haproxy
sudo dnf info keepalived
```

3. Load-balancer configuration files (see the official guide)

Substitute your own machine IPs and ports as needed. `192.168.1.9` is the virtual IP served by keepalived; any unoccupied IP will do. On the backup nodes, change `state MASTER` to `state BACKUP` and lower `priority 101` to `100`, so that the MASTER keeps the highest priority.

3.1 /etc/keepalived/keepalived.conf

```
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 101
    authentication {
        auth_type PASS
        auth_pass 42
    }
    virtual_ipaddress {
        192.168.1.9
    }
    track_script {
        check_apiserver
    }
}
```

3.2 /etc/keepalived/check_apiserver.sh

```shell
#!/bin/sh

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

curl -sfk --max-time 2 https://localhost:6553/healthz -o /dev/null || errorExit "Error GET https://localhost:6553/healthz"
```

3.3 Make the script executable

```shell
chmod +x /etc/keepalived/check_apiserver.sh
```

3.4 /etc/haproxy/haproxy.cfg

```
# /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log stdout format raw local0
    daemon

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 1
    timeout http-request    10s
    timeout queue           20s
    timeout connect         5s
    timeout client          35s
    timeout server          35s
    timeout http-keep-alive 10s
    timeout check           10s

#---------------------------------------------------------------------
# apiserver frontend which proxys to the control plane nodes
#---------------------------------------------------------------------
frontend apiserver
    bind *:6553
    mode tcp
    option tcplog
    default_backend apiserverbackend

#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserverbackend
    option httpchk
    http-check connect ssl
    http-check send meth GET uri /healthz
    http-check expect status 200
    mode tcp
    balance roundrobin
    server master-1 192.168.1.11:6443 check verify none
    server master-2 192.168.1.12:6443 check verify none
    server master-3 192.168.1.13:6443 check verify none
    # [...]
```

3.5 Check haproxy.cfg for syntax errors, then restart both services

```shell
haproxy -c -f /etc/haproxy/haproxy.cfg

systemctl restart haproxy
systemctl restart keepalived
```

II. Installing the k8s cluster

For the base node configuration, follow my previous single-control-plane article on each node.

1. Stacked etcd topology

Simply run the init command directly.

Pros: simple to operate and requires fewer nodes.
Cons: a stacked cluster carries the risk of coupled failures. If a node goes down, both its etcd member and its control-plane instance are lost, and redundancy suffers.

```shell
kubeadm init --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
    --apiserver-advertise-address=192.168.1.11 \
    --control-plane-endpoint=192.168.1.9:6553 \
    --pod-network-cidr=10.244.0.0/16 \
    --service-cidr=10.244.0.0/12 \
    --kubernetes-version=v1.23.8 \
    --upload-certs \
    --v=6
```

2. External etcd topology

Pros: this topology decouples the control plane from the etcd members. It therefore provides an HA setup in which losing a control-plane instance or an etcd member has less impact and does not affect cluster redundancy the way the stacked HA topology does.
Cons: it needs twice as many hosts as the stacked HA topology. An HA cluster with this topology requires at least three hosts for control-plane nodes and three hosts for etcd nodes. (See the official guide.)

2.1 Prepare three machines

| Node name | IP |
| --- | --- |
| etcd-1 | 192.168.1.3 |
| etcd-2 | 192.168.1.4 |
| etcd-3 | 192.168.1.5 |

2.2 On every etcd node, create /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf

```
[Service]
ExecStart=
# Replace "systemd" below with the cgroup driver of your container runtime;
# kubelet's default is "cgroupfs".
# If needed, set --container-runtime-endpoint to a different container runtime.
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd
Restart=always
```

2.3 Start kubelet

```shell
systemctl daemon-reload
systemctl restart kubelet

# Check the kubelet status; it should be "running"
systemctl status kubelet
```

2.4 Generate the per-node config files with the following script (replace the IPs and host names with your own). Run it on etcd-1:

```shell
# Replace HOST0, HOST1 and HOST2 with the IPs of your hosts
export HOST0=192.168.1.3
export HOST1=192.168.1.4
export HOST2=192.168.1.5

# Replace NAME0, NAME1 and NAME2 with your host names
export NAME0="etcd-1"
export NAME1="etcd-2"
export NAME2="etcd-3"

# Create temp directories to store files that will be distributed to the other hosts
mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/

HOSTS=(${HOST0} ${HOST1} ${HOST2})
NAMES=(${NAME0} ${NAME1} ${NAME2})

for i in "${!HOSTS[@]}"; do
HOST=${HOSTS[$i]}
NAME=${NAMES[$i]}
cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
---
apiVersion: "kubeadm.k8s.io/v1beta3"
kind: InitConfiguration
nodeRegistration:
    name: ${NAME}
localAPIEndpoint:
    advertiseAddress: ${HOST}
---
apiVersion: "kubeadm.k8s.io/v1beta3"
kind: ClusterConfiguration
etcd:
    local:
        serverCertSANs:
        - "${HOST}"
        peerCertSANs:
        - "${HOST}"
        extraArgs:
            initial-cluster: ${NAMES[0]}=https://${HOSTS[0]}:2380,${NAMES[1]}=https://${HOSTS[1]}:2380,${NAMES[2]}=https://${HOSTS[2]}:2380
            initial-cluster-state: new
            name: ${NAME}
            listen-peer-urls: https://${HOST}:2380
            listen-client-urls: https://${HOST}:2379
            advertise-client-urls: https://${HOST}:2379
            initial-advertise-peer-urls: https://${HOST}:2380
EOF
done
```

2.5 Generate the CA on any one etcd node

```shell
kubeadm init phase certs etcd-ca
# This creates the following two files:
# /etc/kubernetes/pki/etcd/ca.crt
# /etc/kubernetes/pki/etcd/ca.key
```

2.6 Create certificates for each member

```shell
kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST2}/
# Clean up certs that must not be reused
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete

kubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST1}/
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete

kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
# No need to move the certs because they are for HOST0

# Clean up certs that should not be copied off this host
find /tmp/${HOST2} -name ca.key -type f -delete
find /tmp/${HOST1} -name ca.key -type f -delete
```

2.7 The certificates have now been generated and must be moved to their respective hosts: copy each node's `pki` directory from its `/tmp/<host-ip>/` directory to that node's `/etc/kubernetes/`.

2.8 Run the following on each etcd node (pick what you need and substitute your own etcd node IP):

```shell
# Image handling
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 k8s.gcr.io/pause:3.6

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0 k8s.gcr.io/etcd:3.5.1-0

sudo systemctl daemon-reload
sudo systemctl restart kubelet

kubeadm init phase etcd local --config=/tmp/192.168.1.3/kubeadmcfg.yaml
#kubeadm init phase etcd local --config=/tmp/192.168.1.4/kubeadmcfg.yaml
#kubeadm init phase etcd local --config=/tmp/192.168.1.5/kubeadmcfg.yaml
```

2.9 Verify the etcd cluster

```shell
# Verify cluster health
docker run --rm -it \
    --net host \
    -v /etc/kubernetes:/etc/kubernetes registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0 etcdctl \
    --cert /etc/kubernetes/pki/etcd/peer.crt \
    --key /etc/kubernetes/pki/etcd/peer.key \
    --cacert /etc/kubernetes/pki/etcd/ca.crt \
    --endpoints https://192.168.1.3:2379 endpoint health --cluster
```

3. With the etcd cluster in place, create the cluster bootstrap config on the first k8s node

kubeadm-config.yaml:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
localAPIEndpoint:
  advertiseAddress: 192.168.1.11
uploadCerts: true
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kubernetesVersion: v1.23.8
controlPlaneEndpoint: 192.168.1.9:6553
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.244.0.0/12
etcd:
  external:
    endpoints:
      - https://192.168.1.3:2379
      - https://192.168.1.4:2379
      - https://192.168.1.5:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```

4. Copy the /etc/kubernetes/pki directory from any etcd node to the k8s node that will initialize the cluster, then run:

```shell
kubeadm init --config kubeadm-config.yaml --upload-certs --v=6

# Join additional control-plane nodes
kubeadm join 192.168.1.9:6553 --token a26srm.c7sssutz83mz94lq \
    --discovery-token-ca-cert-hash sha256:560139f5ea4b8d3a279de53d9d5d503d41c29394c3ba46a4f312f361708b8b71 \
    --control-plane --certificate-key b6e4df72059c9893d2be4d0e5b7fa2e7c466e0400fe39bd244d0fbf7f3e9c04c

# Join worker nodes
kubeadm join 192.168.1.9:6553 --token a26srm.c7sssutz83mz94lq \
    --discovery-token-ca-cert-hash sha256:560139f5ea4b8d3a279de53d9d5d503d41c29394c3ba46a4f312f361708b8b71
```

Install the flannel network plugin (save the manifest below as kube-flannel.yml):

```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
  name: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "EnableNFTables": false,
      "Backend": {
        "Type": "vxlan"
      }
    }
kind: ConfigMap
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-cfg
  namespace: kube-flannel
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-ds
  namespace: kube-flannel
spec:
  selector:
    matchLabels:
      app: flannel
      k8s-app: flannel
  template:
    metadata:
      labels:
        app: flannel
        k8s-app: flannel
        tier: node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        command:
        - /opt/bin/flanneld
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        image: registry.cn-hangzhou.aliyuncs.com/1668334351/flannel:v0.26.4
        name: kube-flannel
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
          privileged: false
        volumeMounts:
        - mountPath: /run/flannel
          name: run
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      initContainers:
      - args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        command:
        - cp
        image: registry.cn-hangzhou.aliyuncs.com/1668334351/flannel-cni-plugin:v1.6.2
        name: install-cni-plugin
        volumeMounts:
        - mountPath: /opt/cni/bin
          name: cni-plugin
      - args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        command:
        - cp
        image: registry.cn-hangzhou.aliyuncs.com/1668334351/flannel:v0.26.4
        name: install-cni
        volumeMounts:
        - mountPath: /etc/cni/net.d
          name: cni
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
      priorityClassName: system-node-critical
      serviceAccountName: flannel
      tolerations:
      - effect: NoSchedule
        operator: Exists
      volumes:
      - hostPath:
          path: /run/flannel
        name: run
      - hostPath:
          path: /opt/cni/bin
        name: cni-plugin
      - hostPath:
          path: /etc/cni/net.d
        name: cni
      - configMap:
          name: kube-flannel-cfg
        name: flannel-cfg
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock
```

```shell
kubectl apply -f kube-flannel.yml
```
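As a sanity check for step 2.4: the `initial-cluster` value expanded into each kubeadmcfg.yaml is easy to get wrong by hand, so the sketch below rebuilds the same string locally from the example host and name arrays (no cluster or root access required) so you can eyeball it before distributing the configs. The IPs and names are the example values from this article; substitute your own.

```shell
# Rebuild the etcd initial-cluster string from the step 2.4 arrays
# (example IPs/names from this article; substitute your own).
HOSTS=(192.168.1.3 192.168.1.4 192.168.1.5)
NAMES=(etcd-1 etcd-2 etcd-3)

INITIAL_CLUSTER=""
for i in "${!HOSTS[@]}"; do
    INITIAL_CLUSTER+="${NAMES[$i]}=https://${HOSTS[$i]}:2380,"
done
INITIAL_CLUSTER="${INITIAL_CLUSTER%,}"  # drop the trailing comma
echo "${INITIAL_CLUSTER}"
```

The printed string should match the `initial-cluster:` line inside each generated /tmp/&lt;host&gt;/kubeadmcfg.yaml exactly; if it does not, fix the arrays before running the certificate steps.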
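The `find ... -delete` cleanup in step 2.6 is destructive, so it is worth understanding before pointing it at /etc/kubernetes/pki. Here is a harmless local dry run: it applies the same expression to a scratch directory populated with dummy files (the file names are illustrative only) and shows that only `ca.crt` and `ca.key` survive.

```shell
# Dry run of the step 2.6 cleanup in a scratch directory, NOT /etc/kubernetes/pki.
PKI=$(mktemp -d)
touch "${PKI}/ca.crt" "${PKI}/ca.key" "${PKI}/server.crt" "${PKI}/peer.key"

# Same expression as step 2.6: delete every regular file except ca.crt/ca.key.
find "${PKI}" -not -name ca.crt -not -name ca.key -type f -delete

REMAINING=$(echo $(ls "${PKI}"))
echo "${REMAINING}"  # only the CA pair should remain
rm -r "${PKI}"
```

This also explains why step 2.6 runs the per-host cert generation in HOST2, HOST1, HOST0 order with a cleanup between each: the CA pair is the only state carried from one round to the next.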