Contents
Installation file preparation
Host preparation
Host configuration
Set hostnames (run on each of the three nodes)
Configure hosts (all nodes)
Disable firewalld, SELinux, swap, and dnsmasq (all nodes)
Install dependency packages (all nodes)
System parameter settings (all nodes)
Time synchronization (all nodes)
Configure IPVS (all nodes)
Install Docker (all nodes)
Remove old versions
Install Docker
Install dependencies
Install
Test startup
Add a systemd service
Configure the cgroup driver
K8s preparation and installation
Prepare images (all nodes)
Retag image versions (all nodes)
Install kubeadm, kubelet, and kubectl (all nodes)
Install the master (master node)
Install Kubernetes nodes (node nodes)
Install the Calico network plugin (master node)
Using and testing Kubernetes
Install Kuboard
Install with the built-in user store
Access Kuboard v3.x
Install via Kubernetes
Access Kuboard
Uninstall
References and common errors: see References

Installation file preparation

Host preparation
Host configuration
172.171.16.147 crawler-k8s-master
172.171.16.148 crawler-k8s-node1
172.171.16.149 crawler-k8s-node2

Set hostnames (run on each of the three nodes)
172.171.16.147
hostnamectl set-hostname crawler-k8s-master
172.171.16.148
hostnamectl set-hostname crawler-k8s-node1
172.171.16.149
hostnamectl set-hostname crawler-k8s-node2

Check the hostname:
hostnamectl  # check the hostname
Configure hosts (all nodes)
Edit the /etc/hosts file:
cat >> /etc/hosts << EOF
172.171.16.147 crawler-k8s-master
172.171.16.148 crawler-k8s-node1
172.171.16.149 crawler-k8s-node2
EOF

Disable firewalld, SELinux, swap, and dnsmasq (all nodes)
Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld

Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

Disable swap (Kubernetes requires swap to be off, for performance):
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent (comments out the swap line)
swapoff -a  # temporary
// Disable dnsmasq (otherwise Docker containers may fail to resolve domain names)
service dnsmasq stop
systemctl disable dnsmasq
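A quick way to confirm these settings took effect, as a minimal sketch assuming standard CentOS tooling:

getenforce                      # should print Permissive now (Disabled after a reboot)
free -m | grep -i swap          # the Swap row should show 0 after swapoff
systemctl is-active firewalld   # should print inactive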
Install dependency packages (all nodes)
yum -y update
yum install wget -y
yum install vim -y
yum -y install conntrack ipvsadm ipset jq sysstat curl iptables libseccomp
System parameter settings (all nodes)
// Create the config file that sets the bridge parameters
mkdir -p /etc/sysctl.d  # the directory usually already exists
vim /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
vm.overcommit_memory = 1
vm.panic_on_oom = 0
fs.inotify.max_user_watches = 89100
// Apply the file
sysctl -p /etc/sysctl.d/kubernetes.conf
If this errors:
[root@crawler-k8s-master ~]# sysctl -p /etc/sysctl.d/kubernetes.conf
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
// Load the bridge netfilter module
modprobe br_netfilter
Then run it again:
sysctl -p /etc/sysctl.d/kubernetes.conf
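Note that modprobe br_netfilter does not survive a reboot. One common way to make it persistent, a sketch assuming systemd's modules-load mechanism (present on CentOS 7+):

cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF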
Time synchronization (all nodes)
// Install the time synchronization service
yum -y install chrony
// Start and enable the service
systemctl start chronyd
systemctl enable chronyd
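To confirm the clock is actually syncing, chrony ships client commands for this (output depends on the configured time servers):

chronyc sources    # lists time sources; the line starting with ^* is the one in use
chronyc tracking   # shows the current offset from the reference clock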
Configure IPVS (all nodes)
In Kubernetes, a Service has two proxy modes: one based on iptables and one based on ipvs. Comparing the two, ipvs performs noticeably better, but using it requires loading the ipvs kernel modules manually.
// Write the modules to be loaded into a script file
vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
// Make the script executable
chmod +x /etc/sysconfig/modules/ipvs.modules
// Run the script
/bin/bash /etc/sysconfig/modules/ipvs.modules
Note: if this errors, you may need to change modprobe -- nf_conntrack_ipv4 to modprobe -- nf_conntrack (on kernels 4.19 and later the module was merged into nf_conntrack).
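To verify the modules actually loaded, a quick check (on newer kernels expect nf_conntrack rather than nf_conntrack_ipv4):

lsmod | grep -e ip_vs -e nf_conntrack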
Install Docker (all nodes)
Remove old versions
yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine
Install Docker
The Docker packages are in the docker directory of the prepared files; upload them to the server.
Install dependencies
Copy the dependency packages to every node.
Install the dependencies:
rpm -ivh containerd.io-1.6.10-3.1.el7.x86_64.rpm --force --nodeps
rpm -ivh container-selinux-2.138.0-1.p01.ky10.noarch.rpm --force --nodeps
rpm -ivh docker-ce-20.10.21-3.el7.x86_64.rpm --force --nodeps
rpm -ivh docker-ce-cli-20.10.21-3.el7.x86_64.rpm --force --nodeps
rpm -ivh docker-compose-1.22.0-4.ky10.noarch.rpm --force --nodeps
rpm -ivh docker-scan-plugin-0.21.0-3.el7.x86_64.rpm --force --nodeps
rpm -ivh libsodium-1.0.16-7.ky10.x86_64.rpm --force --nodeps
rpm -ivh python3-bcrypt-3.1.4-8.ky10.x86_64.rpm --force --nodeps
rpm -ivh python3-cached_property-1.5.1-1.ky10.noarch.rpm --force --nodeps
rpm -ivh python3-docker-4.0.2-1.ky10.noarch.rpm --force --nodeps
rpm -ivh python3-dockerpty-0.4.1-1.ky10.noarch.rpm --force --nodeps
rpm -ivh python3-docker-pycreds-0.4.0-1.1.ky10.noarch.rpm --force --nodeps
rpm -ivh python3-docopt-0.6.2-11.ky10.noarch.rpm --force --nodeps
rpm -ivh python3-ipaddress-1.0.23-1.ky10.noarch.rpm --force --nodeps
rpm -ivh python3-jsonschema-2.6.0-6.ky10.noarch.rpm --force --nodeps
rpm -ivh python3-paramiko-2.4.3-1.ky10.ky10.noarch.rpm --force --nodeps
rpm -ivh python3-pyasn1-0.3.7-8.ky10.noarch.rpm --force --nodeps
rpm -ivh python3-pyyaml-5.3.1-4.ky10.x86_64.rpm --force --nodeps
rpm -ivh python3-texttable-1.4.0-2.ky10.noarch.rpm --force --nodeps
rpm -ivh python3-websocket-client-0.47.0-6.ky10.noarch.rpm --force --nodeps
rpm -ivh fuse-overlayfs-0.7.2-6.el7_8.x86_64.rpm
rpm -ivh slirp4netns-0.4.3-4.el7_8.x86_64.rpm
Install
tar xf docker-20.10.9.tgz
mv docker/* /usr/bin/
Test startup
dockerd
Add a systemd service
Edit Docker's systemd service file:
vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

Enable at boot:
systemctl start docker
systemctl enable docker

Configure the cgroup driver
vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
// Set to start on boot
systemctl start docker
systemctl enable docker
// Restart Docker
systemctl daemon-reload
systemctl restart docker
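The kubelet expects Docker's cgroup driver to match its own (systemd here), so it is worth confirming the daemon.json took effect after the restart:

docker info | grep -i 'cgroup driver'   # should print: Cgroup Driver: systemd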
K8s preparation and installation
The packages are under docker-images in the prepared files; upload them to /home on the server.

Prepare images (all nodes)
Extract:
cd /home/docker-images/
tar -zxvf kubeadm-images-1.18.0.tar.gz -C /home/docker-images/kubeadm-images-1.18.0

Create the image-loading script:
vim load-image.sh
#!/bin/bash
ls /home/docker-images/kubeadm-images-1.18.0 > /home/images-list.txt
cd /home/docker-images/kubeadm-images-1.18.0
docker load -i /home/docker-images/cni.tar
docker load -i /home/docker-images/node.tar
docker load -i /home/docker-images/kuboard.tar
for i in $(cat /home/images-list.txt)
do
  docker load -i $i
done

Then load the images:
chmod +x load-image.sh
./load-image.sh
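To spot-check that the images landed in the local Docker store, a quick sketch (exact names depend on the contents of the tar files):

docker images | grep registry.aliyuncs.com/google_containers
docker images | grep -e calico -e kuboard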
Retag image versions (all nodes)
Retag the images to the versions required by K8S 1.23.7.
Command format: docker tag <image ID or name> <image name>:<tag>
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.25.4 registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.7
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.25.4 registry.aliyuncs.com/google_containers/kube-proxy:v1.23.7
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.25.4 registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.7
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.25.4 registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.7
docker tag registry.aliyuncs.com/google_containers/etcd:3.5.5-0 registry.aliyuncs.com/google_containers/etcd:3.5.5-0
docker tag registry.aliyuncs.com/google_containers/pause:3.8 registry.aliyuncs.com/google_containers/pause:3.8
docker tag registry.aliyuncs.com/google_containers/coredns:v1.9.3 registry.aliyuncs.com/google_containers/coredns:v1.9.3

(Note: the last three commands tag the images to themselves, as in the original write-up. kubeadm 1.23.7 by default expects etcd:3.5.1-0, pause:3.6, and coredns:v1.8.6, so if kubeadm init complains about missing images, retag to those versions instead.)

All images are now prepared.

Install kubeadm, kubelet, and kubectl (all nodes)
The packages are under k8s in the prepared files; upload them to /home on the server.

Tool descriptions:
kubeadm: the command used to bootstrap the cluster
kubelet: the component that runs on every machine in the cluster and manages the lifecycle of pods and containers
kubectl: the cluster management command-line tool

Configure the Aliyun repository
Install
cd /home/k8s
rpm -ivh *.rpm

Enable at boot:
systemctl start kubelet
systemctl enable kubelet
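Before initializing the cluster, a quick sanity check that the tools installed correctly:

kubeadm version
kubelet --version
kubectl version --client
# kubelet will restart in a loop until kubeadm init/join runs; that is expected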
Install the master (master node)

kubeadm init --apiserver-advertise-address=172.171.16.147 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.23.7 --service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16

The log output:
[init] Using Kubernetes version: v1.23.7
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using kubeadm config images pull
[certs] Using certificateDir folder /etc/kubernetes/pki
[certs] Generating ca certificate and key
[certs] Generating apiserver certificate and key
[certs] apiserver serving cert is signed for DNS names [crawler-k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.171.16.147]
[certs] Generating apiserver-kubelet-client certificate and key
[certs] Generating front-proxy-ca certificate and key
[certs] Generating front-proxy-client certificate and key
[certs] Generating etcd/ca certificate and key
[certs] Generating etcd/server certificate and key
[certs] etcd/server serving cert is signed for DNS names [crawler-k8s-master localhost] and IPs [172.171.16.147 127.0.0.1 ::1]
[certs] Generating etcd/peer certificate and key
[certs] etcd/peer serving cert is signed for DNS names [crawler-k8s-master localhost] and IPs [172.171.16.147 127.0.0.1 ::1]
[certs] Generating etcd/healthcheck-client certificate and key
[certs] Generating apiserver-etcd-client certificate and key
[certs] Generating sa key and public key
[kubeconfig] Using kubeconfig folder /etc/kubernetes
[kubeconfig] Writing admin.conf kubeconfig file
[kubeconfig] Writing kubelet.conf kubeconfig file
[kubeconfig] Writing controller-manager.conf kubeconfig file
[kubeconfig] Writing scheduler.conf kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file /var/lib/kubelet/kubeadm-flags.env
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder /etc/kubernetes/manifests
[control-plane] Creating static Pod manifest for kube-apiserver
[control-plane] Creating static Pod manifest for kube-controller-manager
[control-plane] Creating static Pod manifest for kube-scheduler
[etcd] Creating static Pod manifest for local etcd in /etc/kubernetes/manifests
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory /etc/kubernetes/manifests. This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.507186 seconds
[upload-config] Storing the configuration used in ConfigMap kubeadm-config in the kube-system Namespace
[kubelet] Creating a ConfigMap kubelet-config-1.23 in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The kubelet-config-1.23 naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just kubelet-config. Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node crawler-k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node crawler-k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: i4dp7i.7t1j8ezmgwkj1gio
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the cluster-info ConfigMap in the kube-public namespace
[kubelet-finalize] Updating /etc/kubernetes/kubelet.conf to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.171.16.147:6443 --token i4dp7i.7t1j8ezmgwkj1gio \
        --discovery-token-ca-cert-hash sha256:9fb74686ff3bea5769e5ed466dbb2c32ed3fc920374ff2175b39b8162ac27f8f

On the master, run the commands suggested in the log above:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
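With the kubeconfig in place, verify that the control plane answers (the master will typically report NotReady until the Calico network plugin is installed below):

kubectl get nodes
kubectl get pods -n kube-system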
Install Kubernetes nodes (node nodes)

Add the nodes to the cluster:
kubeadm join 172.171.16.147:6443 --token i4dp7i.7t1j8ezmgwkj1gio \
        --discovery-token-ca-cert-hash sha256:9fb74686ff3bea5769e5ed466dbb2c32ed3fc920374ff2175b39b8162ac27f8f
The log output:
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with kubectl -n kube-system get cm kubeadm-config -o yaml
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
[kubelet-start] Writing kubelet environment file with flags to file /var/lib/kubelet/kubeadm-flags.env
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
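Back on the master, the new node should now appear (nodes remain NotReady until the network plugin is installed in the next step):

kubectl get nodes -o wide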
Install the Calico network plugin (master node)
The manifest is at k8s/calico.yaml in the prepared files; upload it to /home on the server.
Download the Calico manifest: https://docs.projectcalico.org/manifests/calico.yaml
Modify the image addresses in the file:
grep image calico.yaml
sed -i 's#docker.io/##g' calico.yaml
kubectl apply -f calico.yaml

Possible issues:
1) Set the CALICO_IPV4POOL_CIDR parameter to the value of --pod-network-cidr used in kubeadm init:

kubeadm init --apiserver-advertise-address=172.171.16.147 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.23.7 --service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16

i.e. 10.244.0.0/16 here.

2) Set the value of IP_AUTODETECTION_METHOD to the network interface name (if this parameter is absent, no change is needed).
ip a  # check interface names
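Once calico.yaml is applied, one way to watch the rollout and confirm the nodes go Ready (pod names will vary):

kubectl get pods -n kube-system | grep calico   # wait for calico-node and calico-kube-controllers to be Running
kubectl get nodes                               # all nodes should move to Ready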
Using and testing Kubernetes

kubectl create deployment nginx --image=nginx  # deploy nginx
kubectl expose deployment nginx --port=80 --type=NodePort  # expose the port
kubectl get pod,svc  # check pod and service status
Deployment complete.
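To exercise the service from outside the cluster, look up the randomly assigned NodePort and curl it; a sketch, where the 31234 port is purely illustrative:

kubectl get svc nginx              # note the port after the colon, e.g. 80:31234/TCP
curl http://172.171.16.147:31234   # should return the nginx welcome page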
Install Kuboard

The image archive is at docker-images/kuboard.tar in the prepared files; upload it to /home on the server.
The Kuboard manifests are under kuboard in the prepared files; upload them to /home on the server.
Both directories mentioned above are provided.

cd /home/docker-images
docker load -i kuboard.tar

Install with the built-in user store
Official install guide: Install Kuboard v3 - built-in user store | Kuboard

sudo docker run -d \
  --restart=unless-stopped \
  --name=kuboard \
  -p 80:80/tcp \
  -p 10081:10081/tcp \
  -e KUBOARD_ENDPOINT="http://172.171.16.147:80" \
  -e KUBOARD_AGENT_SERVER_TCP_PORT="10081" \
  -v /root/kuboard-data:/data \
  eipwork/kuboard:v3
# You can also use the image swr.cn-east-2.myhuaweicloud.com/kuboard/kuboard:v3 for a faster download.
# Do not use 127.0.0.1 or localhost as the internal IP.
# Kuboard does not need to be on the same subnet as K8S; the Kuboard Agent can even reach the Kuboard Server through a proxy.

WARNING
The KUBOARD_ENDPOINT parameter tells the kuboard-agent deployed into Kubernetes how to reach the Kuboard Server. KUBOARD_ENDPOINT may also use a public IP; Kuboard does not need to be on the same subnet as K8S, and the Kuboard Agent can even reach the Kuboard Server through a proxy. Using a domain name in KUBOARD_ENDPOINT is recommended; if you use one, it must resolve correctly via DNS. Configuring it only in the host's /etc/hosts file will not work.
Parameter explanation:
Save this command as a shell script (e.g. start-kuboard.sh); when you later upgrade or restore Kuboard, it tells you which parameters were used for the original install.
Line 4 maps Kuboard's web port 80 to port 80 on the host; you may choose a different host port.
Line 5 maps the Kuboard Agent Server port 10081/tcp to port 10081 on the host; you may choose a different host port.
Line 6 sets KUBOARD_ENDPOINT to http://<internal IP>. If you change this parameter later, you must remove any Kubernetes clusters already imported into Kuboard and re-import them.
Line 7 sets the KUBOARD_AGENT_SERVER port to 10081; it must match the host port in line 5. Changing this parameter does not change the port the container listens on (10081). For example, if line 5 were -p 30081:10081/tcp, line 7 should become -e KUBOARD_AGENT_SERVER_TCP_PORT=30081.
Line 8 maps the persistent /data directory to /root/kuboard-data on the host; adjust the host path to your environment.
Other parameters
Adding the environment variable KUBOARD_ADMIN_DERAULT_PASSWORD to the startup command sets the initial default password for the admin user (the variable name, including its spelling, is as documented by Kuboard).
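For example, a sketch of the same docker run with a custom initial admin password (the password value is a placeholder):

sudo docker run -d \
  --restart=unless-stopped \
  --name=kuboard \
  -p 80:80/tcp \
  -p 10081:10081/tcp \
  -e KUBOARD_ENDPOINT="http://172.171.16.147:80" \
  -e KUBOARD_AGENT_SERVER_TCP_PORT="10081" \
  -e KUBOARD_ADMIN_DERAULT_PASSWORD="YourPassword123" \
  -v /root/kuboard-data:/data \
  eipwork/kuboard:v3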
Access Kuboard v3.x
Open http://172.171.16.147:80 in a browser to reach the Kuboard v3.x interface. Log in with:
Username: admin
Password: Kuboard123

Install via Kubernetes
The manifests are under kuboard in the prepared files; upload them to /home on the server.
Reference: Install Kuboard v3 - kubernetes | Kuboard

Install Kuboard v3 into K8S:

kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml
# You can also use the following command; the only difference is that it serves the required Kuboard images from Huawei Cloud's registry instead of Docker Hub:
# kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3-swr.yaml

Wait for Kuboard v3 to be ready
Run watch kubectl get pods -n kuboard and wait until all Pods in the kuboard namespace are ready, as shown below.
If no kuboard-etcd-xxxxx containers appear in the output, see the note about the missing Master Role under Common Errors.

[root@node1 ~]# kubectl get pods -n kuboard
NAME READY STATUS RESTARTS AGE
kuboard-agent-2-65bc84c86c-r7tc4 1/1 Running 2 28s
kuboard-agent-78d594567-cgfp4 1/1 Running 2 28s
kuboard-etcd-fh9rp 1/1 Running 0 67s
kuboard-etcd-nrtkr 1/1 Running 0 67s
kuboard-etcd-ader3 1/1 Running 0 67s
kuboard-v3-645bdffbf6-sbdxb        1/1     Running   0          67s

Access Kuboard

Open http://your-node-ip-address:30080 in a browser and log in with the initial credentials:
Username: admin
Password: Kuboard123
Browser compatibility
Use a browser such as Chrome / Firefox / Safari / Edge; IE and IE-based browsers are not supported.
Adding a new cluster
Kuboard v3 supports managing multiple Kubernetes clusters. On the Kuboard v3 home page, click the Add Cluster button and follow the wizard. When adding a new Kubernetes cluster to Kuboard v3, make sure the new cluster can reach ports 30080/TCP, 30081/TCP, and 30081/UDP on the internal IP of the current cluster's master node. If the cluster you want to add is not on the same LAN as the current cluster, contact the Kuboard team for help.
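A quick reachability check from a machine in the new cluster toward the current cluster's master, as a sketch (assumes nc from the nmap-ncat package is available; the UDP check is indicative only):

nc -vz 172.171.16.147 30080    # Kuboard web NodePort
nc -vz 172.171.16.147 30081    # agent server, TCP
nc -vzu 172.171.16.147 30081   # agent server, UDP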
Uninstall

Uninstall Kuboard v3:

kubectl delete -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml

Clean up leftover data: on the master node and on any node labeled k8s.kuboard.cn/role=etcd, run:

rm -rf /usr/share/kuboard

References and common errors: see the references below.
Kubeadm deployment of a k8s cluster
Kubernetes installation and trial
kube-flannel.yml (image download source modified)
Linux advanced: k8s setup using the Calico network plugin