1. Environment preparation

System requirements: at least 2 GB RAM (4 GB or more recommended) and a working network connection.

Nodes: at least 3 machines — 1 Master node and 2 Worker nodes.

Install sudo if it is missing:

```shell
apt update
apt install sudo
```

Set the hostname on each machine:

```shell
sudo hostnamectl set-hostname <hostname>
```

Replace `<hostname>` with `k8s-master`, `k8s-node1`, or `k8s-node2`.

Configure /etc/hosts on all nodes: add every node's IP address and hostname to the /etc/hosts file.

```shell
root@k8s-node1:~# echo "192.168.0.147 k8s-master" >> /etc/hosts
root@k8s-node1:~# echo "192.168.0.217 k8s-node1" >> /etc/hosts
```

Update the system:

```shell
sudo apt update
sudo apt upgrade -y
```

2. Install containerd

Run the following on all nodes:

```shell
sudo apt install -y containerd
```

2.2 Upgrade containerd to the latest 1.7 release

The packaged version is 1.4. If you skip the upgrade, `kubeadm init` later fails with:

```
[ERROR CRI]: container runtime is not running: output: time="2024-02-03T22:17:09+08:00" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService", error: exit status 1
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
```
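The /etc/hosts step above can be sketched as an idempotent script that is safe to re-run. This is only an illustration: `HOSTS_FILE` is a temp stand-in for /etc/hosts, and the `k8s-node2` address (192.168.0.218) is a made-up example, since only the master and node1 addresses appear above.

```shell
# Idempotent hosts-file setup sketch. HOSTS_FILE stands in for /etc/hosts;
# the 192.168.0.218 entry for k8s-node2 is hypothetical -- use your own IPs.
HOSTS_FILE=$(mktemp)

add_hosts() {
  for entry in \
      "192.168.0.147 k8s-master" \
      "192.168.0.217 k8s-node1" \
      "192.168.0.218 k8s-node2"; do
    # append only if this exact line is not already present
    grep -qxF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
  done
}

add_hosts
add_hosts   # a second run adds nothing, so re-running the setup is harmless
cat "$HOSTS_FILE"
```

Running it against the real /etc/hosts would simply mean dropping the `mktemp` line and setting `HOSTS_FILE=/etc/hosts` (as root).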
Download the latest release from https://github.com/containerd/containerd/releases and extract it. Open a terminal, change to the directory containing the download, and run:

```shell
tar xzvf containerd-1.7.13-linux-amd64.tar.gz
sudo mv bin/* /usr/bin/
containerd --version   # should now report 1.7.x
```

2.3 Configure containerd

Generate the default configuration file:

```shell
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
```

Edit the configuration file:

```shell
nano /etc/containerd/config.toml
```

Change `sandbox_image` as follows, because the `kubeadm init` step later uses the Aliyun image repository:

```toml
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
```

and enable the systemd cgroup driver:

```toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  # Important: without this, the k8s control plane starts and then stops,
  # and `kubectl get pods -n kube-system` fails to reach port 6443.
  SystemdCgroup = true
```

Enable and start containerd:

```shell
sudo systemctl restart containerd
sudo systemctl enable containerd
sudo systemctl status containerd   # check the status
```

3. Install Kubernetes

Run the following on all nodes.

Install the required packages — make sure apt-transport-https, ca-certificates, and curl are present:

```shell
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
```

Add the Kubernetes GPG key:

```shell
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
```

Add the Kubernetes repository:

```shell
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
```

Update the package list:

```shell
sudo apt-get update
```

Install kubeadm, kubelet, and kubectl, and pin their versions:

```shell
sudo apt-get install -y kubelet=1.28.2-00 kubeadm=1.28.2-00 kubectl=1.28.2-00
sudo apt-mark hold kubelet kubeadm kubectl
```

Load the br_netfilter module:

```shell
sudo modprobe br_netfilter
```

Make sure IP forwarding is enabled:

```shell
echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```
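A quick way to confirm that the two config.toml edits from section 2.3 took effect is to grep for them before restarting containerd. A minimal sketch, using a temp file as a stand-in for /etc/containerd/config.toml:

```shell
# Sanity-check sketch for the section 2.3 edits. CONFIG is a temp stand-in;
# on a real node, point it at /etc/containerd/config.toml instead.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

grep -q 'SystemdCgroup = true' "$CONFIG" && echo "systemd cgroup driver: ok"
grep -q 'registry.aliyuncs.com/google_containers/pause:3.9' "$CONFIG" && echo "sandbox image: ok"
```

If either `ok` line is missing, re-edit the file before running `systemctl restart containerd`.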
4. Initialize the Kubernetes cluster (on the Master node)

4.2 Initialize the cluster (Master node)

Running the plain command fails:

```shell
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```

with an image-pull error:

```
[ERROR ImagePull]: failed to pull image registry.k8s.io/kube-apiserver:v1.28.6: output: E0212 19:15:37.560180 22897 remote_image.go:171] ...
```

Run this instead:

```shell
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 \
  --image-repository=registry.aliyuncs.com/google_containers \
  --kubernetes-version=v1.28.2
```

Output of the `kubeadm init` command:

```
root@ecs-2144:~# sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.28.6
[init] Using Kubernetes version: v1.28.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.147]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.147 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.147 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 4.001658 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: c5ir0f.h8x43oj54kb1gppe
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.147:6443 --token c5ir0f.h8x43oj54kb1gppe \
        --discovery-token-ca-cert-hash sha256:42dc8386b03f8c6c415e06153c4b978e2020ca48d19b7b8b383d1c5d311a36e7
```

5. Set up kubectl (Master node only)

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

6. Install the network plugin (on the Master node)

```shell
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml --request-timeout=0
# Without --request-timeout=0 the request may time out.
```

If you get connection errors on the API port, run `journalctl -u kubelet` to read the logs. If you see:

```
err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, err=..."
```

the file is probably not accessible to kubelet; fix its ownership and permissions:

```shell
sudo chown root:root /var/lib/kubelet/config.yaml
sudo chmod 644 /var/lib/kubelet/config.yaml
```

Then restart kubelet:

```shell
sudo systemctl restart kubelet   # in testing, this one line was enough
sudo systemctl status kubelet
sudo systemctl restart containerd
sudo systemctl status containerd
```

and rerun the `kubectl apply ...` command above. You may still see:

```
Unable to connect to the server: net/http: TLS handshake timeout
```

Running the command once more usually succeeds.

7. Join the Worker nodes to the cluster

On every Worker node, run the `kubeadm join` command printed when the Master node was initialized. It looks like this:

```
root@ecs-7d63:~# kubeadm join 192.168.0.147:6443 --token lj3ooj.2x39tu70gyx5uj3v --discovery-token-ca-cert-hash sha256:7ce5191c1581dfcee7b33457bdd9341fa1ee128a19ac248c8daf9e69a57a8b18
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
```
```
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```

Verify the cluster state: on the Master node, run:

```shell
kubectl get nodes
```

You should see every node with status `Ready`. That completes the base installation; the next step is configuring k8s.

Open ports

- API Server — 6443: the Kubernetes API server. This is the most critical port, used for all cluster-management communication.
- etcd — 2379-2380: the etcd server client API. Only the Kubernetes API server needs to reach etcd, so these ports only need to be open between Master nodes.
- Kubelet — 10250: the Kubelet API, which the API server uses to query node and Pod information.
- Controller Manager and Scheduler: although they communicate mainly with the API Server, they also listen on their own ports, chiefly for health checks and metrics collection. These ports are for in-cluster use, not external access:
  - Kubernetes Controller Manager — default port 10252 for non-secure HTTP health checks and metrics. A secure port can be configured (HTTPS with authentication and authorization), but this requires manual setup, including certificates and related security settings.
  - Kubernetes Scheduler — default port 10251 for non-secure HTTP health checks and metrics.

Troubleshooting

List all system pods:

```shell
kubectl get pods -n kube-system
```

```
calico-kube-controllers-7ddc4f45bc-snh9v   1/1   Running   1 (2m10s ago)   158m
calico-node-5mnpd                          1/1   Running   1 (2m10s ago)   158m
calico-node-s6w74                          1/1   Running   0               156m
coredns-66f779496c-cvwjx                   1/1   Running   1 (2m10s ago)   171m
coredns-66f779496c-qx7fr                   1/1   Running   1 (2m10s ago)   171m
etcd-k8s-master                            1/1   Running   1 (2m10s ago)   171m
kube-apiserver-k8s-master                  1/1   Running   1 (2m10s ago)   171m
kube-controller-manager-k8s-master         1/1   Running   1 (2m10s ago)   171m
kube-proxy-k7c6l                           1/1   Running   1 (2m10s ago)   171m
kube-proxy-stft6                           1/1   Running   0               156m
kube-scheduler-k8s-master                  1/1   Running   1 (2m10s ago)   171m
```

Once you have a pod's name, read its logs with `kubectl logs -n kube-system <pod-name>`.

If `kubectl get pods -n kube-system` responds with:

```
root@k8s-master:~# kubectl get pods -n kube-system
The connection to the server 192.168.0.147:6443 was refused - did you specify the right host or port?
```

the kubelet has stopped; restart it with `sudo systemctl restart kubelet`. Use `journalctl -u kubelet` to inspect the kubelet logs.
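When scanning the kube-system pods during troubleshooting, it helps to filter out everything that is already Running. A small awk sketch over `kubectl get pods -n kube-system --no-headers`-style output — here a canned here-doc stands in for live kubectl output:

```shell
# Print the names of pods whose STATUS column is not "Running".
# The here-doc is a stand-in for: kubectl get pods -n kube-system --no-headers
NOT_RUNNING=$(awk '$3 != "Running" {print $1}' <<'EOF'
calico-node-5mnpd          1/1   Running            1   158m
coredns-66f779496c-cvwjx   0/1   CrashLoopBackOff   5   171m
kube-proxy-k7c6l           1/1   Running            1   171m
EOF
)
echo "$NOT_RUNNING"
```

On a real cluster this becomes `kubectl get pods -n kube-system --no-headers | awk '$3 != "Running" {print $1}'`, and each reported name can then be fed to `kubectl logs -n kube-system`.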