Node selectors; taints, tolerations, and affinity

Node selectors
  nodeName: run a pod on a specific node
  nodeSelector: schedule a pod onto nodes carrying certain labels
Affinity
  Node affinity
    Using requiredDuringSchedulingIgnoredDuringExecution (hard affinity)
    Using preferredDuringSchedulingIgnoredDuringExecution (soft affinity)
    weight
  Pod affinity: pod-to-pod affinity has two forms; define two pods, the first as the baseline, the second following it
  Pod anti-affinity: define two pods, the first as the baseline, the second scheduled to the opposite node
  topologyKey: the topology key

Node selectors

When we create a pod, the scheduler decides its placement, and by default it lands on an arbitrary worker node. What if we want the pod to run on a specific node, or on nodes that share some characteristic? We can use the pod's nodeName or nodeSelector field to specify the target node.

nodeName: run a pod on a specific node

Upload tomcat.tar.gz to k8snode1 and k8snode2 and import it manually:

ctr -nk8s.io images import tomcat.tar.gz

Upload busybox.tar.gz to k8snode1 and k8snode2 and import it manually:

ctr -nk8s.io images import busybox.tar.gz

vim pod-node.yaml

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: default
  labels:
    app: myapp
    env: dev
spec:
  nodeName: k8snode1
  containers:
  - name: tomcat-pod-java
    ports:
    - containerPort: 8080
    image: tomcat:8.5-jre8-alpine
    imagePullPolicy: IfNotPresent
  - name: busybox
    image: busybox:latest
    command:
    - /bin/sh
    - -c
    - sleep 3600

kubectl apply -f pod-node.yaml

Check which node the pod was scheduled to:

kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   NODE
demo-pod   1/1     Running   0          k8snode1

nodeSelector: schedule a pod onto nodes carrying certain labels

If a pod defined in a YAML file sets both nodeName and nodeSelector, both conditions must be satisfied; if either one is not, scheduling fails.

Label a node, giving it disk=ceph:

kubectl label nodes k8snode2 disk=ceph

When defining the pod, require it to be scheduled to a node carrying the disk=ceph label:

vim pod-1.yaml

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod-1
  namespace: default
  labels:
    app: myapp
    env: dev
spec:
  nodeSelector:
    disk: ceph
  containers:
  - name: tomcat-pod-java
    ports:
    - containerPort: 8080
    image: tomcat:8.5-jre8-alpine
    imagePullPolicy: IfNotPresent

kubectl apply -f pod-1.yaml

Check which node the pod was scheduled to:

kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   NODE
demo-pod-1   1/1     Running   0          k8snode2

After this experiment, delete every pod in the default namespace (kubectl delete pods <pod-name>):

kubectl delete pods demo-pod-1

Remove the label from the node:

kubectl label nodes k8snode2 disk-

Affinity
Node affinity

Node affinity scheduling: nodeAffinity. Check the official reference:

kubectl explain pods.spec.affinity
KIND:     Pod
VERSION:  v1
RESOURCE: affinity <Object>
DESCRIPTION:
     If specified, the pod's scheduling constraints
     Affinity is a group of affinity scheduling rules.
FIELDS:
   nodeAffinity    <Object>    # node affinity
   podAffinity     <Object>    # pod affinity
   podAntiAffinity <Object>    # pod anti-affinity

kubectl explain pods.spec.affinity.nodeAffinity
KIND:     Pod
VERSION:  v1
RESOURCE: nodeAffinity <Object>
DESCRIPTION:
     Describes node affinity scheduling rules for the pod.
     Node affinity is a group of node affinity scheduling rules.
FIELDS:
   preferredDuringSchedulingIgnoredDuringExecution <[]Object>
   requiredDuringSchedulingIgnoredDuringExecution  <Object>

preferred means the scheduler tries to place the pod on a node that satisfies the rule, but this is not mandatory (soft affinity).
required means some node must satisfy the rule; it is a hard condition (hard affinity).

Using requiredDuringSchedulingIgnoredDuringExecution (hard affinity)

Upload myapp-v1.tar.gz to k8snode1 and k8snode2 and import it manually:

ctr -nk8s.io images import myapp-v1.tar.gz

vim pod-nodeaffinity-demo.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: In
            values:
            - foo
            - bar
  containers:
  - name: myapp
    image: docker.io/ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent

This checks whether any current node carries a zone label whose value is foo or bar; only such a node can receive the pod.

kubectl apply -f pod-nodeaffinity-demo.yaml
kubectl get pods -o wide | grep pod-node
pod-node-affinity-demo   0/1   Pending   0   <none>

The status is Pending, meaning scheduling did not complete: no node has a zone label with value foo or bar, and since this is hard affinity, the condition must be met before the pod can be scheduled.

Label k8snode1 with zone=foo, then check again:

kubectl label nodes k8snode1 zone=foo
kubectl get pods -o wide
pod-node-affinity-demo   1/1   Running   0   k8snode1

Delete pod-nodeaffinity-demo.yaml:

kubectl delete -f pod-nodeaffinity-demo.yaml
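The hard-affinity rule above used operator: In. matchExpressions also supports NotIn, Exists, DoesNotExist, Gt and Lt. As a sketch, a rule that only requires the zone label to exist, regardless of its value, could look like this:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: zone
          operator: Exists   # matches any node that has a zone label, whatever its value
```

With Exists (and DoesNotExist), the values list must be empty; with In and NotIn it must be non-empty.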
Using preferredDuringSchedulingIgnoredDuringExecution (soft affinity)

vim pod-nodeaffinity-demo-2.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo-2
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: docker.io/ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:
          - key: zone1
            operator: In
            values:
            - foo1
            - bar1
        weight: 10
      - preference:
          matchExpressions:
          - key: zone2
            operator: In
            values:
            - foo2
            - bar2
        weight: 20

kubectl apply -f pod-nodeaffinity-demo-2.yaml
kubectl get pods -o wide | grep demo-2
pod-node-affinity-demo-2   1/1   Running   0   k8snode1

This shows that with soft affinity the pod can still run, even though no node carries the zone1 label the rule asks for.

Node affinity concerns the relationship between a pod and a node: the conditions matched when the pod is scheduled onto the node.

After testing, delete pod-nodeaffinity-demo-2.yaml:

kubectl delete -f pod-nodeaffinity-demo-2.yaml

weight

weight is a relative weight: the higher the weight, the more likely the pod is scheduled to the matching node.

Suppose we label both k8snode1 and k8snode2:

kubectl label nodes k8snode1 zone1=foo1
kubectl label nodes k8snode2 zone2=foo2

vim pod-nodeaffinity-demo-2.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo-2
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: docker.io/ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:
          - key: zone1
            operator: In
            values:
            - foo1
            - bar1
        weight: 10
      - preference:
          matchExpressions:
          - key: zone2
            operator: In
            values:
            - foo2
            - bar2
        weight: 20

kubectl apply -f pod-nodeaffinity-demo-2.yaml

When this pod's node affinity is evaluated, both k8snode1 and k8snode2 satisfy a preference and either could host the pod, since both carry a matching label; but because the preference matching zone2=foo2 has the higher weight, the pod is scheduled to k8snode2 first.

Delete the corresponding labels and pods:

kubectl label nodes k8snode1 zone1-
kubectl label nodes k8snode2 zone2-
kubectl delete -f pod-nodeaffinity-demo.yaml
kubectl delete -f pod-nodeaffinity-demo-2.yaml

Check whether any pods are left undeleted:

kubectl get pods -o wide
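The weight mechanism above can be sketched in a few lines. This is an illustration only, not the real scheduler logic; the node names and labels follow the example:

```python
# Illustrative sketch: how preferred node-affinity weights are summed per node.
# A node earns the weight of every preference whose matchExpressions it satisfies;
# the node with the highest total is preferred.

def score_node(node_labels, preferences):
    """Sum the weights of all preferences this node's labels match (operator: In)."""
    score = 0
    for pref in preferences:
        if node_labels.get(pref["key"]) in pref["values"]:
            score += pref["weight"]
    return score

nodes = {
    "k8snode1": {"zone1": "foo1"},
    "k8snode2": {"zone2": "foo2"},
}
preferences = [
    {"key": "zone1", "values": ["foo1", "bar1"], "weight": 10},
    {"key": "zone2", "values": ["foo2", "bar2"], "weight": 20},
]

scores = {name: score_node(labels, preferences) for name, labels in nodes.items()}
best = max(scores, key=scores.get)
print(scores, best)  # k8snode2 wins: its matching preference carries weight 20
```

In the real scheduler this score is only one signal combined with other priority functions, which is why weight expresses a preference rather than a guarantee.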
Pod affinity

Pod-to-pod affinity scheduling has two forms:

podAffinity: pods prefer to stick together, placing related pods in nearby locations (the same region, the same rack) so that they can communicate more efficiently. Say there are two data centers, each running a cluster of 1000 hosts; if we want nginx and tomcat to communicate efficiently, we deploy both onto nodes in the same location.

podAntiAffinity: pods prefer to stay apart. If we deploy two independent applications, anti-affinity keeps them from affecting each other.

The first pod is scheduled to a random node; it then serves as the reference for judging whether subsequent pods may run on the node where it runs. This is pod affinity. How do we decide which nodes are the same location and which are not? Defining pod affinity requires a standard for "location": which pods count as being in the same place? If node name is the standard, nodes with the same name are the same location, and nodes with different names are different locations.
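The nginx-and-tomcat scenario above can be sketched as two pods, the first as the baseline and the second following it. Names and labels here are illustrative; the fields involved are explained in the sections that follow:

```yaml
# Baseline pod: tomcat, carrying a label the nginx pod will select on.
apiVersion: v1
kind: Pod
metadata:
  name: tomcat
  labels:
    app: tomcat
spec:
  containers:
  - name: tomcat
    image: tomcat:8.5-jre8-alpine
---
# nginx follows tomcat: it may only run in a location (here topologyKey
# kubernetes.io/hostname, i.e. the same node) that already runs a pod
# labeled app=tomcat.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["tomcat"]}
        topologyKey: kubernetes.io/hostname
  containers:
  - name: nginx
    image: nginx:latest
```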
kubectl explain pods.spec.affinity.podAffinity
KIND:     Pod
VERSION:  v1
RESOURCE: podAffinity <Object>
DESCRIPTION:
     Describes pod affinity scheduling rules (e.g. co-locate this pod in the
     same node, zone, etc. as some other pod(s)).
     Pod affinity is a group of inter pod affinity scheduling rules.
FIELDS:
   preferredDuringSchedulingIgnoredDuringExecution <[]Object>
   requiredDuringSchedulingIgnoredDuringExecution  <[]Object>

requiredDuringSchedulingIgnoredDuringExecution: hard affinity
preferredDuringSchedulingIgnoredDuringExecution: soft affinity

kubectl explain pods.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution
KIND:     Pod
VERSION:  v1
RESOURCE: requiredDuringSchedulingIgnoredDuringExecution <[]Object>
DESCRIPTION:
FIELDS:
   labelSelector <Object>
   namespaces    <[]string>
   topologyKey   <string> -required-

topologyKey
The topology key, i.e. the location key; this field is required. How do we judge whether nodes are the same location? Suppose nodes carry labels such as:

rack=rack1
row=row1

Using rack as the key, nodes with the same rack value are the same location; using row as the key, nodes with the same row value are the same location.
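The rack/row judgment can be sketched in a few lines (node names and labels here are hypothetical, purely for illustration):

```python
# Sketch: topologyKey partitions nodes into "locations" -- nodes that share
# the same VALUE for the chosen label KEY count as one location.

def locations(nodes, topology_key):
    """Group node names by the value of the chosen topology label."""
    groups = {}
    for name, labels in nodes.items():
        groups.setdefault(labels.get(topology_key), []).append(name)
    return groups

nodes = {
    "node-a": {"rack": "rack1", "row": "row1"},
    "node-b": {"rack": "rack2", "row": "row1"},
}

print(locations(nodes, "rack"))  # two locations: rack1 and rack2
print(locations(nodes, "row"))   # one location: both nodes share row1
```

This is why the same pair of nodes can count as one location under one key and as different locations under another, which matters for the anti-affinity experiment later in this article.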
labelSelector

To decide which other pods our pod should be affine to, we rely on labelSelector: it selects the group of pods that serve as the affinity reference.

namespaces

The group of pods selected by labelSelector lives in some namespace, specified through namespaces; if namespaces is not given, it defaults to the namespace of the pod being created.

Define two pods; the first pod is the baseline, and the second follows it

Check which pods exist in the default namespace, and delete them so the namespace holds no pods:

kubectl get pods

The pod we create must be on the same node as a pod carrying the app2=myapp2 label:

vim pod-required-affinity-demo-1.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app2: myapp2
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
kubectl apply -f pod-required-affinity-demo-1.yaml

vim pod-required-affinity-demo-2.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: backend
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "sleep 3600"]
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app2, operator: In, values: ["myapp2"]}
        topologyKey: kubernetes.io/hostname
kubectl apply -f pod-required-affinity-demo-2.yaml

Wherever the first pod is scheduled, the second pod follows; this is pod affinity.

kubectl get pods -o wide
pod-first    Running   k8snode2
pod-second   Running   k8snode2

Delete the test pods:

kubectl delete -f pod-required-affinity-demo-1.yaml
kubectl delete -f pod-required-affinity-demo-2.yaml

Pod anti-affinity
Define two pods; the first is the baseline, and the second is scheduled to the opposite node.

vim pod-required-anti-affinity-demo-1.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app1: myapp1
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent

kubectl apply -f pod-required-anti-affinity-demo-1.yaml

vim pod-required-anti-affinity-demo-2.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: backend
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "sleep 3600"]
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app1, operator: In, values: ["myapp1"]}
        topologyKey: kubernetes.io/hostname

kubectl apply -f pod-required-anti-affinity-demo-2.yaml

The output shows the two pods are not on the same node; this is pod anti-affinity.

kubectl get pods -o wide
pod-first    Running   k8snode1
pod-second   Running   k8snode2

Delete the test pods:

kubectl delete -f pod-required-anti-affinity-demo-1.yaml
kubectl delete -f pod-required-anti-affinity-demo-2.yaml

topologyKey: the topology key
kubectl label nodes k8snode2 zone=foo
kubectl label nodes k8snode1 zone=foo

vim pod-first-required-anti-affinity-demo-1.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app3: myapp3
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent

kubectl apply -f pod-first-required-anti-affinity-demo-1.yaml

vim pod-second-required-anti-affinity-demo-1.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: backend
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "sleep 3600"]
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app3, operator: In, values: ["myapp3"]}
        topologyKey: zone

kubectl apply -f pod-second-required-anti-affinity-demo-1.yaml
kubectl get pods -o wide

The output shows:

pod-first    Running   k8snode1
pod-second   Pending   <none>

The second pod is Pending: with zone as the topology key, both nodes count as the same location, so no node in a different location exists, and since we demanded anti-affinity the pod cannot be placed. If required were changed to preferred in the anti-affinity rule, the pod would still run.

kubectl delete -f pod-first-required-anti-affinity-demo-1.yaml
kubectl delete -f pod-second-required-anti-affinity-demo-1.yaml

Remove the labels:

kubectl label nodes k8snode1 zone-
kubectl label nodes k8snode2 zone-

Summary:
podAffinity: pod affinity; which pods a pod prefers to run with.
podAntiAffinity: pod anti-affinity; which pods a pod prefers to avoid.
nodeAffinity: node affinity; which nodes a pod prefers.
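In practice, these rules are usually attached to workload controllers rather than bare pods. As a sketch (names are illustrative, not from the examples above), a Deployment can point podAntiAffinity at its own label so that its replicas spread across nodes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp            # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - {key: app, operator: In, values: ["myapp"]}
            topologyKey: kubernetes.io/hostname   # at most one replica per node
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
```

With required anti-affinity, extra replicas beyond the node count stay Pending, just like pod-second above; switching to preferred trades strict spreading for schedulability.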