0 Environment Preparation
Number of nodes: 3 CentOS 7 virtual machines
Hardware: 2 GB of RAM or more, 2 or more CPUs, at least 30 GB of disk
Network: all nodes can reach each other, and every node has Internet access
1 Cluster Planning
k8s-node1: 10.0.0.32
k8s-node2: 10.0.3.231
k8s-node3: 10.0.1.149
2 Set the Hostnames
hostnamectl set-hostname k8s-node1  
hostnamectl set-hostname k8s-node2
hostnamectl set-hostname k8s-node3
3 Sync the hosts File
If DNS cannot resolve these hostnames, add the hostname-to-IP mappings to /etc/hosts on every machine:
cat >> /etc/hosts << EOF
10.0.0.32 k8s-node1
10.0.3.231 k8s-node2
10.0.1.149 k8s-node3
EOF
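A quick check, not part of the original steps, to confirm that each hostname now resolves from every machine (assuming the hosts entries above were applied on all three nodes):
# Each hostname should answer from its expected IP
ping -c 1 k8s-node1
ping -c 1 k8s-node2
ping -c 1 k8s-node3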
4 Disable the Firewall
systemctl stop firewalld && systemctl disable firewalld
5 Disable SELinux
Note: do not run this step on the ARM architecture; doing so will cause the node to fail to obtain an IP address!
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
6 Disable the swap Partition
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
7 Sync Time
yum install ntpdate -y
ntpdate time.windows.com
modprobe br_netfilter
echo "modprobe br_netfilter" >> /etc/profile
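To confirm the module actually loaded (an extra check, not in the original write-up):
# The module should appear in the loaded-module list
lsmod | grep br_netfilter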
tee /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
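The tee above only writes the file; the settings are not applied until the next boot. A small optional step (not in the original) to load them immediately and check the values:
# Load everything under /etc/sysctl.d/ now, then print the two keys
sysctl --system
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables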
8 Install containerd
wget https://github.com/containerd/containerd/releases/download/v1.7.11/cri-containerd-1.7.11-linux-amd64.tar.gz
tar xf cri-containerd-1.7.11-linux-amd64.tar.gz  -C /
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
vim /etc/containerd/config.toml   # edit the config file and set: sandbox_image = "registry.aliyuncs.com/k8sxio/pause:3.9"
# Enable and start at boot
systemctl enable --now containerd
# Verify the version
containerd --version
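An optional sanity check, not in the original steps, that the containerd service is running and that the client tools shipped in the cri-containerd tarball can reach it:
systemctl is-active containerd    # should print "active"
ctr version                       # both client and server versions should be reported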
8.1 Install libseccomp
wget https://github.com/opencontainers/runc/releases/download/v1.1.5/libseccomp-2.5.4.tar.gz
tar xf libseccomp-2.5.4.tar.gz
cd libseccomp-2.5.4/
yum install gperf -y
./configure
make && make install
find / -name libseccomp.so
8.2 Install runc
wget https://github.com/opencontainers/runc/releases/download/v1.1.9/runc.amd64
chmod +x runc.amd64
# Find the runc binary that was installed along with containerd, then replace it
which runc
# Replace the runc that came with the containerd installation
mv runc.amd64 /usr/local/sbin/runc
# Run the runc command; if it prints its help text, the installation is working
runc
If running runc reports "runc: error while loading shared libraries: libseccomp.so.2: cannot open shared object file: No such file or directory", runc cannot find libseccomp; check that libseccomp is installed. With the installation above it should be found by default.
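If that error does appear, two checks (not part of the original steps) can help narrow it down:
# Is libseccomp.so.2 visible in the dynamic-linker cache?
ldconfig -p | grep libseccomp
# Which libseccomp (if any) does the runc binary resolve at load time?
ldd $(which runc) | grep libseccomp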
8.3 Install Docker
Configure a Docker registry mirror:
mkdir /etc/docker/
cat << EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://q3rmdln3.mirror.aliyuncs.com"],
  "insecure-registries": ["http://192.168.100.20:5000"]
}
EOF
Install Docker:
yum -y install yum-utils device-mapper-persistent-data lvm2
yum -y install yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum list docker-ce --showduplicates | sort -r
yum install docker-ce -y
systemctl enable docker && systemctl start docker
docker --version
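A quick way (not in the original) to confirm that the daemon picked up the registry mirror and insecure registry configured in /etc/docker/daemon.json:
docker info | grep -A 2 "Registry Mirrors"
docker info | grep -A 2 "Insecure Registries"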
9 Add Yum Repositories
View the current repositories:
$ yum repolist
Back up the local repo files:
mkdir /etc/yum.repos.d/bak
mv /etc/yum.repos.d/CentOS-* /etc/yum.repos.d/bak/
curl -o /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
Fetch the Aliyun yum repo configuration:
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
Configure the Kubernetes repo:
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[k8s]
name=k8s
enabled=1
gpgcheck=0
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
EOF
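Optionally verify that the new repo is visible to yum before installing anything (not in the original steps):
# The [k8s] repo defined above should show up in the repo list
yum repolist | grep -i k8s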
Set the sandbox_image source to the Aliyun google_containers mirror (all nodes):
# Export the default configuration; the config.toml file does not exist by default
containerd config default > /etc/containerd/config.toml
grep sandbox_image  /etc/containerd/config.toml
sudo sed -i 's#k8s.gcr.io/pause#registry.aliyuncs.com/google_containers/pause#g' /etc/containerd/config.toml
grep sandbox_image /etc/containerd/config.toml
Configure the containerd cgroup driver as systemd (all nodes).
Since v1.24.0, Kubernetes no longer uses dockershim; containerd is used as the container runtime endpoint instead, which is why containerd is required. When installing on top of Docker, containerd was already installed automatically along with Docker above; Docker here only acts as a client, and the container engine is still containerd.
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
# After applying all changes, restart containerd
systemctl restart containerd
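After the restart, an optional check (not in the original) that the edited values survived and that the CRI endpoint answers; crictl ships in the cri-containerd tarball installed earlier:
# Both settings should now show the new values
grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
# The CRI endpoint should respond
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock info > /dev/null && echo "CRI OK"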
Refresh the yum cache:
yum clean all    # clear all of the system's yum caches
yum makecache    # rebuild the yum cache
11 Install k8s
# Install the latest version
$ yum install -y kubelet kubeadm kubectl
# Install a specific version
# yum install -y kubelet-1.26.0 kubectl-1.26.0 kubeadm-1.26.0
# Start kubelet
$ sudo systemctl enable kubelet && sudo systemctl start kubelet && sudo systemctl status kubelet
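Before initializing, a quick sanity check (not in the original) that the same versions landed on every node; kubelet restarting in a loop at this point is expected, since it has no configuration until kubeadm init or kubeadm join runs:
kubeadm version -o short
kubelet --version
kubectl version --client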
12 Initialize the Cluster
Note: cluster initialization only needs to be run on the master node!
kubeadm init \
  --apiserver-advertise-address=10.0.0.32 \
  --image-repository registry.aliyuncs.com/google_containers \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.244.0.0/16 \
  --v=5
12.1 Cluster Configuration File
rm -rf $HOME/.kube
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/root/.kube/config
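With the kubeconfig in place, kubectl should now be able to reach the API server from the master; a minimal check (not in the original steps):
kubectl get nodes
kubectl get pods -n kube-system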
12.2 Nodes Join the Cluster
kubeadm join 10.0.0.32:6443 --token sorvas.aogvsfw5ok3n7agc \
  --discovery-token-ca-cert-hash sha256:fa4449876b266e9767a47deee6ba1eec0dc3532f62a1c9dffcd543639cbf696c \
  --ignore-preflight-errors=all \
  --cri-socket unix:///var/run/containerd/containerd.sock
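The token and hash above come from the kubeadm init output and are specific to this cluster; if they are lost or expired, a fresh join command can be generated on the master (a standard kubeadm feature, not shown in the original):
kubeadm token create --print-join-command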
13 Configure the Cluster Network
Method 0:
# Needs manual creation of namespace to avoid helm error
kubectl create ns kube-flannel
kubectl label --overwrite ns kube-flannel pod-security.kubernetes.io/enforce=privileged
helm repo add flannel https://flannel-io.github.io/flannel/
helm install flannel --set podCidr="10.244.0.0/16" --namespace kube-flannel flannel/flannel
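Before checking the final state, an optional wait (not in the original) for the flannel DaemonSet to finish rolling out; the DaemonSet name matches the pods shown below:
kubectl -n kube-flannel rollout status daemonset/kube-flannel-ds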
The final result:
[root@k8s-node1 k8s]# kubectl get pod -A
NAMESPACE      NAME                                READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-ffvvm               1/1     Running   0          18m
kube-flannel   kube-flannel-ds-g4n6k               1/1     Running   0          18m
kube-flannel   kube-flannel-ds-l2f4b               1/1     Running   0          18m
kube-system    coredns-66f779496c-lrzwk            1/1     Running   0          20m
kube-system    coredns-66f779496c-mtdx5            1/1     Running   0          20m
kube-system    etcd-k8s-node1                      1/1     Running   8          20m
kube-system    kube-apiserver-k8s-node1            1/1     Running   5          20m
kube-system    kube-controller-manager-k8s-node1   1/1     Running   2          20m
kube-system    kube-proxy-m7z2m                    1/1     Running   0          19m
kube-system    kube-proxy-mv8p8                    1/1     Running   0          19m
kube-system    kube-proxy-zvfdg                    1/1     Running   0          20m
kube-system    kube-scheduler-k8s-node1            1/1     Running   6          20m
[root@k8s-node1 k8s]# kubectl get node -owide
NAME        STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                 CONTAINER-RUNTIME
k8s-node1   Ready    control-plane   21m   v1.28.2   10.0.0.32     <none>        CentOS Linux 7 (Core)   3.10.0-1160.102.1.el7.x86_64   containerd://1.7.11
k8s-node2   Ready    <none>          20m   v1.28.2   10.0.3.231    <none>        CentOS Linux 7 (Core)   3.10.0-1160.102.1.el7.x86_64   containerd://1.7.11
k8s-node3   Ready    <none>          20m   v1.28.2   10.0.1.149    <none>        CentOS Linux 7 (Core)   3.10.0-1160.102.1.el7.x86_64   containerd://1.7.11
[root@k8s-node1 k8s]#