# Docker Operations Management

Contents

1. Swarm Cluster Management — 1.1 Core Swarm concepts (1.1.1 Cluster · 1.1.2 Nodes · 1.1.3 Services and tasks · 1.1.4 Load balancing) · 1.2 Installing Swarm (preparation, creating the cluster, adding worker nodes, publishing a service, scaling services, removing a service, passwordless SSH)
2. Docker Compose — Using Compose together with Swarm
3. Configuring a private registry: Harbor — 3.1 Environment preparation · 3.2 Installing Docker · 3.3 Installing docker-compose · 3.4 Preparing Harbor · 3.5 Configuring certificates · 3.6 Deploying and configuring Harbor · 3.7 Configuring a startup service · 3.8 Customizing the local registry · 3.9 Testing the local registry

## 1. Swarm Cluster Management

docker swarm is the container orchestration system that ships with Docker itself — the official container cluster platform from Docker, Inc., implemented in Go. Its architecture is as follows (architecture diagram from the original post omitted).

### 1.1 Core Swarm concepts

#### 1.1.1 Cluster

A cluster consists of multiple Docker hosts that run in swarm mode and act as managers and workers.

#### 1.1.2 Nodes

A swarm is a collection of nodes; a node can be a bare-metal machine or a virtual machine. A node can play one or both of two roles: manager or worker.

Manager nodes

A Docker Swarm cluster needs at least one manager node; managers coordinate with each other using the Raft consensus protocol. Normally the first node that enables docker swarm becomes the leader and nodes that join later are followers; if the current leader goes down, the remaining managers elect a new leader. Every manager holds a complete copy of the current cluster state, which gives the manager layer high availability.

Worker nodes

Worker nodes are where the containers of the actual application services run. In theory a manager node can also act as a worker, but this is not recommended in production. Worker nodes communicate over the control plane using the gossip protocol, and this communication is asynchronous.

#### 1.1.3 Services and tasks

Service: a swarm service is an abstraction — a description of the desired state of an application running on the swarm cluster. It is essentially a checklist that records the service name, which image is used to create the containers, how many replicas to run, which network the service's containers attach to, and which ports should be published.

Task: in Docker Swarm a task is the smallest unit of deployment; a task maps one-to-one to a container.

Stack: a stack describes a collection of related services; a stack is defined in a single YAML file.

#### 1.1.4 Load balancing

The cluster manager can automatically assign a published port to a service, or you can configure one yourself using any unused port. If no port is specified, the swarm manager assigns a port in the range 30000–32767.

### 1.2 Installing Swarm

Preparation

```bash
[root@docker ~]# hostnamectl hostname manager
[root@docker ~]# exit
[root@manager ~]# nmcli c modify ens160 ipv4.addresses <IP-address>/24
[root@manager ~]# init 6
[root@manager ~]# cat >> /etc/hosts << EOF
192.168.98.47 manager1
192.168.98.48 worker1
192.168.98.49 worker2
EOF
[root@manager ~]# ping -c 3 worker1
PING worker1 (192.168.98.48) 56(84) bytes of data.
64 bytes from worker1 (192.168.98.48): icmp_seq=1 ttl=64 time=0.678 ms
64 bytes from worker1 (192.168.98.48): icmp_seq=2 ttl=64 time=0.461 ms
64 bytes from worker1 (192.168.98.48): icmp_seq=3 ttl=64 time=0.353 ms

--- worker1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2035ms
rtt min/avg/max/mdev = 0.353/0.497/0.678/0.135 ms
[root@manager ~]# ping -c 3 worker2
PING worker2 (192.168.98.49) 56(84) bytes of data.
64 bytes from worker2 (192.168.98.49): icmp_seq=1 ttl=64 time=0.719 ms
64 bytes from worker2 (192.168.98.49): icmp_seq=2 ttl=64 time=0.300 ms
64 bytes from worker2 (192.168.98.49): icmp_seq=3 ttl=64 time=0.417 ms

--- worker2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2089ms
rtt min/avg/max/mdev = 0.300/0.478/0.719/0.176 ms
[root@manager ~]# ping -c 3 manager1
PING manager1 (192.168.98.47) 56(84) bytes of data.
64 bytes from manager1 (192.168.98.47): icmp_seq=1 ttl=64 time=0.034 ms
64 bytes from manager1 (192.168.98.47): icmp_seq=2 ttl=64 time=0.038 ms
64 bytes from manager1 (192.168.98.47): icmp_seq=3 ttl=64 time=0.035 ms

--- manager1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2042ms
rtt min/avg/max/mdev = 0.034/0.035/0.038/0.001 ms
```
Verify name resolution from the workers as well:

```bash
[root@worker1 ~]# ping -c 3 worker1
PING worker1 (192.168.98.48) 56(84) bytes of data.
64 bytes from worker1 (192.168.98.48): icmp_seq=1 ttl=64 time=0.023 ms
64 bytes from worker1 (192.168.98.48): icmp_seq=2 ttl=64 time=0.024 ms
64 bytes from worker1 (192.168.98.48): icmp_seq=3 ttl=64 time=0.035 ms

--- worker1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2056ms
rtt min/avg/max/mdev = 0.023/0.027/0.035/0.005 ms
[root@worker1 ~]# ping -c 3 worker2
PING worker2 (192.168.98.49) 56(84) bytes of data.
64 bytes from worker2 (192.168.98.49): icmp_seq=1 ttl=64 time=0.405 ms
64 bytes from worker2 (192.168.98.49): icmp_seq=2 ttl=64 time=0.509 ms
64 bytes from worker2 (192.168.98.49): icmp_seq=3 ttl=64 time=0.381 ms

--- worker2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2065ms
rtt min/avg/max/mdev = 0.381/0.431/0.509/0.055 ms

[root@worker2 ~]# ping -c 3 worker1
PING worker1 (192.168.98.48) 56(84) bytes of data.
64 bytes from worker1 (192.168.98.48): icmp_seq=1 ttl=64 time=0.304 ms
64 bytes from worker1 (192.168.98.48): icmp_seq=2 ttl=64 time=0.346 ms
64 bytes from worker1 (192.168.98.48): icmp_seq=3 ttl=64 time=0.460 ms

--- worker1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2030ms
rtt min/avg/max/mdev = 0.304/0.370/0.460/0.065 ms
[root@worker2 ~]# ping -c 3 worker2
PING worker2 (192.168.98.49) 56(84) bytes of data.
64 bytes from worker2 (192.168.98.49): icmp_seq=1 ttl=64 time=0.189 ms
64 bytes from worker2 (192.168.98.49): icmp_seq=2 ttl=64 time=0.079 ms
64 bytes from worker2 (192.168.98.49): icmp_seq=3 ttl=64 time=0.055 ms

--- worker2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2038ms
rtt min/avg/max/mdev = 0.055/0.107/0.189/0.058 ms

# Check the version
docker run --rm swarm -v
swarm version 1.2.9 (527a849)
```

Load the swarm image:

```bash
[root@manager ~]# ls
anaconda-ks.cfg
[root@manager ~]# ls
anaconda-ks.cfg  swarm_1.2.9.tar.gz
[root@manager ~]# docker load -i swarm_1.2.9.tar.gz
6104cec23b11: Loading layer  12.44MB/12.44MB
9c4e304108a9: Loading layer  281.1kB/281.1kB
a8731583ab53: Loading layer  2.048kB/2.048kB
Loaded image: swarm:1.2.9
[root@manager ~]# docker images
REPOSITORY   TAG     IMAGE ID       CREATED       SIZE
swarm        1.2.9   1a5eb59a410f   4 years ago   12.7MB
[root@manager ~]# ls
anaconda-ks.cfg  swarm_1.2.9.tar.gz

# Copy the archive to worker1 and worker2
scp swarm_1.2.9.tar.gz root@192.168.98.48:~
[root@worker1 ~]# ls
anaconda-ks.cfg  swarm_1.2.9.tar.gz
```

Creating the cluster

Syntax:

```bash
docker swarm init --advertise-addr <MANAGER-IP>   # IP address of the manager host
```

On the manager:

```bash
[root@manager ~]# docker swarm init --advertise-addr 192.168.98.47
# --advertise-addr is the address that the swarm worker nodes use to contact this manager
Swarm initialized: current node (bsicfg4mvo18a0tv0z17crtnh) is now a manager.

To add a worker to this swarm, run the following command:

    # join token — the token is time-limited, not valid forever
    docker swarm join --token SWMTKN-1-2xidivy00h9ts8veuk602dqwwwuhvoerzgklk3tvwcin92wnm9-67wyr4mfs1vwsncmnsco6fmqb 192.168.98.47:2377
    # run this command on the worker hosts

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

# Check node status — all nodes are Ready
[root@manager ~]# docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
bsicfg4mvo18a0tv0z17crtnh *   manager    Ready    Active         Leader           28.0.4
xtc6pax5faoobzqo341vf72rv     worker1    Ready    Active                          28.0.4
ik0tqz8axejwu82mmukwjo47m     worker2    Ready    Active                          28.0.4

# Check cluster status
docker info
# Force a node to leave the cluster
docker swarm leave --force

[root@manager ~]# docker service ls
ID        NAME      MODE      REPLICAS   IMAGE     PORTS
[root@manager ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

# Show node information
docker node ls
# Check cluster status
docker info
# Force a node to leave the cluster
docker swarm leave --force
```
Adding worker nodes to the cluster

Once the cluster has a manager node, worker nodes can be added.

Add worker node worker1:

```bash
[root@worker1 ~]# docker swarm join --token SWMTKN-1-2xidivy00h9ts8veuk602dqwwwuhvoerzgklk3tvwcin92wnm9-67wyr4mfs1vwsncmnsco6fmqb 192.168.98.47:2377
This node joined a swarm as a worker.
```

Add worker node worker2:

```bash
[root@worker2 ~]# docker swarm join --token SWMTKN-1-2xidivy00h9ts8veuk602dqwwwuhvoerzgklk3tvwcin92wnm9-67wyr4mfs1vwsncmnsco6fmqb 192.168.98.47:2377
This node joined a swarm as a worker.
```

If you forget the value of the token, run the following on the management node 192.168.98.47 (manager). The token is time-limited, not valid forever:

```bash
# Run on the manager node
[root@manager ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-2xidivy00h9ts8veuk602dqwwwuhvoerzgklk3tvwcin92wnm9-67wyr4mfs1vwsncmnsco6fmqb 192.168.98.47:2377
```

Publishing a service to the cluster

```bash
# Check node status — all nodes are Ready
[root@manager ~]# docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
bsicfg4mvo18a0tv0z17crtnh *   manager    Ready    Active         Leader           28.0.4
xtc6pax5faoobzqo341vf72rv     worker1    Ready    Active                          28.0.4
ik0tqz8axejwu82mmukwjo47m     worker2    Ready    Active                          28.0.4
```

On the manager, create the service (`--replicas 1` is the number of replicas, `nginx2` the service name, `nginx:1.27.4` the image):

```bash
# Run on the management node 192.168.98.47 (manager)
[root@manager ~]# docker service create --replicas 1 --name nginx2 -p 80:80 nginx:1.27.4
```

This creates a container as a service named nginx2, much like `docker run`. If the `nginx:1.27.4` image is not present it is pulled automatically; if the automatic pull fails, load the archive on all three hosts:

```bash
[root@manager ~]# docker load -i nginx_1.27.4.tar.gz
7914c8f600f5: Loading layer  77.83MB/77.83MB
9574fd0ae014: Loading layer  118.3MB/118.3MB
17129ef2de1a: Loading layer  3.584kB/3.584kB
320c70dd6b6b: Loading layer  4.608kB/4.608kB
2ef6413cdcb5: Loading layer  2.56kB/2.56kB
d6266720b0a6: Loading layer  5.12kB/5.12kB
1fb7f1e96249: Loading layer  7.168kB/7.168kB
Loaded image: nginx:1.27.4
[root@manager ~]# docker images
REPOSITORY   TAG      IMAGE ID       CREATED        SIZE
nginx        1.27.4   97662d24417b   2 months ago   192MB
swarm        1.2.9    1a5eb59a410f   4 years ago    12.7MB
[root@manager ~]# docker service ls
ID             NAME     MODE         REPLICAS   IMAGE          PORTS
o9atmzaj9on8   nginx2   replicated   1/1        nginx:1.27.4   *:80->80/tcp

[root@manager ~]# docker service create --replicas 1 --name nginx2 -p 80:80 nginx:1.27.4
z0j2olxqwrlpm84uho1w1m20p
overall progress: 1 out of 1 tasks
1/1: running
verify: Service z0j2olxqwrlpm84uho1w1m20p converged

# On the workers, copy the image archive from the manager (either form works)
scp root@manager:~/nginx_1.27.4.tar.gz .
scp root@192.168.98.47:~/nginx_1.27.4.tar.gz .
```

Inspect the service:

```bash
# --pretty: print in a human-readable format
[root@manager ~]# docker service inspect --pretty nginx2

ID:             o9atmzaj9on8tf9ffmp8k7bun      # service ID
Name:           nginx2                         # service name
Service Mode:   Replicated                     # mode: Replicated
 Replicas:      1                              # number of replicas
Placement:
UpdateConfig:
 Parallelism:   1                              # update one task at a time
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Update order:      stop-first
RollbackConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Rollback order:    stop-first
ContainerSpec:                                 # image name
 Image:         nginx:1.27.4@sha256:09369da6b10306312cd908661320086bf87fbae1b6b0c49a1f50ba531fef2eab
 Init:          false
Resources:
Endpoint Mode:  vip
Ports:
 PublishedPort = 80
  Protocol = tcp
  TargetPort = 80
  PublishMode = ingress

[root@manager ~]# docker service ps nginx2
ID             NAME       IMAGE          NODE      DESIRED STATE   CURRENT STATE           ERROR   PORTS
id3xjfro1bdl   nginx2.1   nginx:1.27.4   manager   Running         Running 5 minutes ago
# ID, name, image, the node it runs on, whether it is running, and for how long
```
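Because the port is published in ingress mode (see the Ports section of the inspect output above), the swarm routing mesh answers on port 80 of every node, not only the node that actually runs the task. A quick way to confirm that — a minimal sketch, assuming the host names and the published port used above:

```bash
# Hit each node once; all three should return the same nginx welcome page,
# no matter which node the nginx2.1 task is scheduled on.
for host in manager1 worker1 worker2; do
    echo "== ${host} =="
    curl -s -o /dev/null -w "HTTP %{http_code}\n" "http://${host}:80/"
done
```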
Scaling one or more services

```bash
# scale = change the number of replicas
[root@manager ~]# docker service scale nginx2=5
nginx2 scaled to 5
overall progress: 5 out of 5 tasks
1/5: running
2/5: running
3/5: running
4/5: running
5/5: running
verify: Service nginx2 converged
[root@manager ~]# docker service ps nginx2
ID             NAME       IMAGE          NODE      DESIRED STATE   CURRENT STATE           ERROR   PORTS
rbmlfw6oiyot   nginx2.1   nginx:1.27.4   worker1   Running         Running 9 minutes ago
uvko24qqf1ps   nginx2.2   nginx:1.27.4   manager   Running         Running 2 minutes ago
kwjys4eldv2d   nginx2.3   nginx:1.27.4   manager   Running         Running 2 minutes ago
onnmzewg4ou4   nginx2.4   nginx:1.27.4   worker2   Running         Running 2 minutes ago
v0we91sbj7p5   nginx2.5   nginx:1.27.4   worker1   Running         Running 2 minutes ago
[root@manager ~]# docker service scale nginx2=1
nginx2 scaled to 1
overall progress: 1 out of 1 tasks
1/1: running
verify: Service nginx2 converged
[root@manager ~]# docker service ls
ID             NAME     MODE         REPLICAS   IMAGE          PORTS
z0j2olxqwrlp   nginx2   replicated   1/1        nginx:1.27.4   *:80->80/tcp
[root@manager ~]# docker service scale nginx2=3
nginx2 scaled to 3
overall progress: 3 out of 3 tasks
1/3: running
2/3: running
3/3: running
verify: Service nginx2 converged
[root@manager ~]# docker service ls
ID             NAME     MODE         REPLICAS   IMAGE          PORTS
o9atmzaj9on8   nginx2   replicated   3/3        nginx:1.27.4   *:80->80/tcp
[root@manager ~]# docker service ps nginx2
ID             NAME       IMAGE          NODE      DESIRED STATE   CURRENT STATE            ERROR   PORTS
id3xjfro1bdl   nginx2.1   nginx:1.27.4   manager   Running         Running 15 minutes ago
hx7sqjr88ac8   nginx2.2   nginx:1.27.4   worker2   Running         Running 2 minutes ago
p7x2lyo6vdk8   nginx2.5   nginx:1.27.4   worker1   Running         Running 2 minutes ago
```

Updating the service:

```bash
[root@manager ~]# docker service update --publish-rm 80:80 --publish-add 88:80 nginx2
nginx2
overall progress: 3 out of 3 tasks
1/3: running
2/3: running
3/3: running
verify: Service nginx2 converged
[root@manager ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS    NAMES
78adf323559b   nginx:1.27.4   "/docker-entrypoint.…"   20 seconds ago   Up 17 seconds   80/tcp   nginx2.1.ds6dy33b6m62kzajs9frvdw4g
[root@manager ~]# docker service ps nginx2
ID             NAME           IMAGE          NODE      DESIRED STATE   CURRENT STATE             ERROR   PORTS
ds6dy33b6m62   nginx2.1       nginx:1.27.4   manager   Running         Running 19 seconds ago
id3xjfro1bdl    \_ nginx2.1   nginx:1.27.4   manager   Shutdown        Shutdown 20 seconds ago
rpnodskwk4f7   nginx2.2       nginx:1.27.4   worker2   Running         Running 23 seconds ago
hx7sqjr88ac8    \_ nginx2.2   nginx:1.27.4   worker2   Shutdown        Shutdown 24 seconds ago
shdko7kq7vvw   nginx2.5       nginx:1.27.4   worker1   Running         Running 16 seconds ago
p7x2lyo6vdk8    \_ nginx2.5   nginx:1.27.4   worker1   Shutdown        Shutdown 16 seconds ago
[root@manager ~]# docker service ls
ID             NAME     MODE         REPLICAS   IMAGE          PORTS
o9atmzaj9on8   nginx2   replicated   3/3        nginx:1.27.4   *:88->80/tcp
```

On worker1:

```bash
[root@worker1 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED              STATUS              PORTS    NAMES
21c117b44558   nginx:1.27.4   "/docker-entrypoint.…"   About a minute ago   Up About a minute   80/tcp   nginx2.5.shdko7kq7vvwqsmzgtm5790s2
```

On worker2:

```bash
[root@worker2 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED              STATUS              PORTS    NAMES
a761908c5f2d   nginx:1.27.4   "/docker-entrypoint.…"   About a minute ago   Up About a minute   80/tcp   nginx2.2.rpnodskwk4f79yxpf0tctb5n8
```

Removing a service from the cluster

```bash
[root@manager ~]# docker service rm nginx2
nginx2
[root@manager ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@manager ~]# docker service ls
ID        NAME      MODE      REPLICAS   IMAGE     PORTS

[root@worker1 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

[root@worker2 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
```
Passwordless SSH login

```bash
ssh-keygen
ssh-copy-id root@worker1
ssh-copy-id root@worker2
ssh worker1
exit
```

```bash
[root@manager ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):    # press Enter
Enter passphrase (empty for no passphrase):                 # press Enter
Enter same passphrase again:                                # press Enter
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:6/aBJGgBhXrJ5l541rWVPguq1ao5QLDqe6LfztIxBAw root@manager
The key's randomart image is:
+---[RSA 3072]----+
|Eo.o.            |
|. .              |
| o. .            |
|o * .o .       o |
|. oo...S         |
|. .* .ooo        |
|.. * o....o      |
| oooo.o. ..      |
|oooB.....        |
+----[SHA256]-----+

[root@manager ~]# ssh-copy-id root@worker1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@worker1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@worker1'"
and check to make sure that only the key(s) you wanted were added.

[root@manager ~]# ssh-copy-id root@worker2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'worker2 (192.168.98.49)' can't be established.
ED25519 key fingerprint is SHA256:I3/lsrnTEnXOE3LFvTLRUXAJAhSVrIEWtqTnleRz9w.
This host key is known by the following other names/addresses:
    ~/.ssh/known_hosts:1: manager
    ~/.ssh/known_hosts:4: worker1
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@worker2's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@worker2'"
and check to make sure that only the key(s) you wanted were added.

[root@manager ~]# ssh worker1
Register this system with Red Hat Insights: rhc connect

Example:
# rhc connect --activation-key <key> --organization <org>

The rhc client and Red Hat Insights will enable analytics and additional
management capabilities on your system.
View your connected systems at https://console.redhat.com/insights

You can learn more about how to register your system
using rhc at https://red.ht/registration
Last login: Thu May  8 16:36:42 2025 from 192.168.98.1
[root@worker1 ~]# exit
logout
Connection to worker1 closed.
```

## 2. Docker Compose

- Download the docker-compose-linux-x86_64 binary
- Create a docker-compose.yaml file

```bash
[root@docker ~]# mv docker-compose-linux-x86_64 /usr/bin/docker-compose
[root@docker ~]# ll
total 4
-rw-------. 1 root root 989 Feb 27 16:19 anaconda-ks.cfg
[root@docker ~]# chmod +x /usr/bin/docker-compose
[root@docker ~]# ll /usr/bin/docker-compose
-rwxr-xr-x. 1 root root 73699264 May 10 09:55 /usr/bin/docker-compose
[root@docker ~]# docker-compose --version
Docker Compose version v2.35.1
[root@docker ~]# vim docker-compose.yaml
[root@docker ~]# ls
anaconda-ks.cfg  docker-compose.yaml
```

In a second session, create the host directories the containers will need:

```bash
[root@docker ~]# mkdir -p /opt/{nginx,mysql,redis}
[root@docker ~]# mkdir /opt/nginx/{conf,html}
[root@docker ~]# mkdir /opt/mysql/data
[root@docker ~]# mkdir /opt/redis/data
[root@docker ~]# tree /opt
```
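The contents of the docker-compose.yaml written above are not shown at this point in the original walkthrough. Judging from the mount error that follows and the corrected file printed later, the first version presumably looked roughly like this (note the still-active my.cnf bind mount and the obsolete version key, both removed later):

```yaml
# Assumed first draft of /root/docker-compose.yaml (reconstructed, not shown in the original post)
version: "3.9"
services:
  nginx:
    image: nginx:1.27.5
    container_name: nginx
    ports:
      - 80:80
    volumes:
      - /opt/nginx/conf:/etc/nginx/conf.d
      - /opt/nginx/html:/usr/share/nginx/html
  mysql:
    image: mysql:9.3.0
    container_name: mysql
    restart: always
    ports:
      - 3306:3306
    volumes:
      - /opt/mysql/data:/var/lib/mysql
      - /opt/mysql/my.cnf:/etc/my.cnf   # this bind mount triggers the error below
    command:
      - --character-set-server=utf8mb4
      - --collation-server=utf8mb4_general_ci
  redis:
    image: redis:8.0.0
    container_name: redis
    ports:
      - 6379:6379
    volumes:
      - /opt/redis/data:/data
```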
Run it:

```bash
[root@docker ~]# docker-compose up -d
WARN[0000] /root/docker-compose.yaml: the attribute `version` is obsolete, it will be ignored, please remove it to avoid potential confusion
[+] Running 26/26
 ✔ mysql Pulled                  91.7s
   ✔ c2eb5d06bfea Pull complete  16.4s
   ✔ ba361f0ba5e7 Pull complete  16.4s
   ✔ 0e83af98b000 Pull complete  16.4s
   ✔ 770e931107be Pull complete  16.6s
   ✔ a2be1b721112 Pull complete  16.6s
   ✔ 68c594672ed3 Pull complete  16.6s
   ✔ cfd201189145 Pull complete  55.2s
   ✔ e9f009c5b388 Pull complete  55.3s
   ✔ 61a291920391 Pull complete  87.0s
   ✔ c8604ede059a Pull complete  87.0s
 ✔ redis Pulled                  71.0s
   ✔ cd07ede39ddc Pull complete  40.5s
   ✔ 63df650ee4e0 Pull complete  43.1s
   ✔ c175c1c9487d Pull complete  55.0s
   ✔ 91cf9601b872 Pull complete  55.0s
   ✔ 4f4fb700ef54 Pull complete  55.0s
   ✔ c70d7dc4bd70 Pull complete  55.0s
 ✔ nginx Pulled                  53.6s
   ✔ 254e724d7786 Pull complete  20.5s
   ✔ 913115292750 Pull complete  31.3s
   ✔ 3e544d53ce49 Pull complete  32.7s
   ✔ 4f21ed9ac0c0 Pull complete  36.7s
   ✔ d38f2ef2d6f2 Pull complete  39.7s
   ✔ 40a6e9f4e456 Pull complete  42.7s
   ✔ d3dc5ec71e9d Pull complete  45.3s
[+] Running 3/4
 ✔ Network root_default  Created   0.0s
 ✔ Container mysql       Starting  0.3s
 ✔ Container redis       Started   0.3s
 ✔ Container nginx       Started   0.3s
Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/opt/mysql/my.cnf" to rootfs at "/etc/my.cnf": create mountpoint for /etc/my.cnf mount: cannot create subdirectories in "/var/lib/docker/overlay2/1a5867fcf9c1f650da4bc51387cbd7621f1464eb2a1ce8d90f90f43733b34602/merged/etc/my.cnf": not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
[root@docker ~]# vim docker-compose.yaml
[root@docker ~]# cat docker-compose.yaml
version: "3.9"
services:
  nginx:
    image: nginx:1.27.5
    container_name: nginx
    ports:
      - 80:80
    volumes:
      - /opt/nginx/conf:/etc/nginx/conf.d
      - /opt/nginx/html:/usr/share/nginx/html
  mysql:
    image: mysql:9.3.0
    container_name: mysql
    restart: always
    ports:
      - 3306:3306
    volumes:
      - /opt/mysql/data:/var/lib/mysql
      #- /opt/mysql/my.cnf:/etc/my.cnf
    command:
      - --character-set-server=utf8mb4
      - --collation-server=utf8mb4_general_ci
  redis:
    image: redis:8.0.0
    container_name: redis
    ports:
      - 6379:6379
    volumes:
      - /opt/redis/data:/data

[root@docker ~]# docker-compose up -d
WARN[0000] /root/docker-compose.yaml: the attribute `version` is obsolete, it will be ignored, please remove it to avoid potential confusion
[+] Running 3/3
 ✔ Container redis   Running  0.0s
 ✔ Container mysql   Started  0.2s
 ✔ Container nginx   Running
```
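The error happens because /opt/mysql/my.cnf does not exist on the host, so Docker creates it as a directory and then cannot bind-mount a directory onto the file /etc/my.cnf inside the container. Commenting the mount out, as above, sidesteps the problem; if you do want a custom my.cnf, a sketch of the alternative fix is to create the file on the host first (same paths as above; the contents are only an example):

```bash
# Create a real file (not a directory) before declaring the bind mount
mkdir -p /opt/mysql
cat > /opt/mysql/my.cnf << 'EOF'
[mysqld]
character-set-server=utf8mb4
collation-server=utf8mb4_general_ci
EOF
# then re-enable the line in docker-compose.yaml:
#   - /opt/mysql/my.cnf:/etc/my.cnf
# and recreate the container
docker-compose up -d mysql
```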
```bash
[root@docker ~]# docker images
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
redis        8.0.0    d62dbaef1b81   5 days ago    128MB
nginx        1.27.5   a830707172e8   3 weeks ago   192MB
mysql        9.3.0    2c849dee4ca9   3 weeks ago   859MB
[root@docker ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED              STATUS                           PORTS                                         NAMES
4390422bf4b1   mysql:9.3.0    "docker-entrypoint.s…"   About a minute ago   Restarting (127) 7 seconds ago                                                 mysql
18713204332c   nginx:1.27.5   "/docker-entrypoint.…"   2 minutes ago        Up 2 minutes                     0.0.0.0:80->80/tcp, [::]:80->80/tcp           nginx
be2489a46e42   redis:8.0.0    "docker-entrypoint.s…"   2 minutes ago        Up 2 minutes                     0.0.0.0:6379->6379/tcp, [::]:6379->6379/tcp   redis
```

Access the site:

```bash
[root@docker ~]# curl localhost
curl: (56) Recv failure: Connection reset by peer
[root@docker ~]# echo index.html > /opt/nginx/html/index.html
[root@docker ~]# curl http://192.168.98.149
curl: (7) Failed to connect to 192.168.98.149 port 80: Connection refused

[root@docker ~]# cd /opt/nginx/conf/
[root@docker conf]# ls
[root@docker conf]# vim web.conf
[root@docker conf]# cat web.conf
server {
    listen 80;
    server_name 192.168.98.149;
    root /opt/nginx/html;
}

[root@docker conf]# curl http://192.168.98.149
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.27.5</center>
</body>
</html>
```

The 404 occurs because /opt/nginx/html is a path on the host; inside the container the pages are mounted at /usr/share/nginx/html, so the document root must use the container path:

```bash
[root@docker conf]# vim web.conf
[root@docker conf]# cat web.conf
server {
    listen 80;
    server_name 192.168.98.149;
    root /usr/share/nginx/html;
}
[root@docker conf]# docker restart nginx
nginx
[root@docker conf]# curl http://192.168.98.149
index.html
```

docker-compose commands must be run from the directory that contains docker-compose.yaml:

```bash
[root@docker ~]# docker-compose ps
WARN[0000] /root/docker-compose.yaml: the attribute `version` is obsolete, it will be ignored, please remove it to avoid potential confusion
NAME    IMAGE          COMMAND                  SERVICE   CREATED          STATUS                            PORTS
mysql   mysql:9.3.0    "docker-entrypoint.s…"   mysql     21 minutes ago   Restarting (127) 52 seconds ago
nginx   nginx:1.27.5   "/docker-entrypoint.…"   nginx     23 minutes ago   Up 11 minutes                     0.0.0.0:80->80/tcp, [::]:80->80/tcp
redis   redis:8.0.0    "docker-entrypoint.s…"   redis     23 minutes ago   Up 23 minutes                     0.0.0.0:6379->6379/tcp, [::]:6379->6379/tcp
```

To get rid of the warning, delete the first line of docker-compose.yaml (version: "3.9"):

```bash
[root@docker ~]# vim docker-compose.yaml
[root@docker ~]# cat docker-compose.yaml
services:
  nginx:
    image: nginx:1.27.5
    container_name: nginx
    ports:
      - 80:80
    volumes:
      - /opt/nginx/conf:/etc/nginx/conf.d
      - /opt/nginx/html:/usr/share/nginx/html
  mysql:
    image: mysql:9.3.0
    container_name: mysql
    restart: always
    ports:
      - 3306:3306
    volumes:
      - /opt/mysql/data:/var/lib/mysql
      #- /opt/mysql/my.cnf:/etc/my.cnf
    command:
      - --character-set-server=utf8mb4
      - --collation-server=utf8mb4_general_ci
  redis:
    image: redis:8.0.0
    container_name: redis
    ports:
      - 6379:6379
    volumes:
      - /opt/redis/data:/data
[root@docker ~]# docker-compose ps
NAME    IMAGE          COMMAND                  SERVICE   CREATED          STATUS                           PORTS
mysql   mysql:9.3.0    "docker-entrypoint.s…"   mysql     24 minutes ago   Restarting (127) 3 seconds ago
nginx   nginx:1.27.5   "/docker-entrypoint.…"   nginx     26 minutes ago   Up 41 seconds                    0.0.0.0:80->80/tcp, [::]:80->80/tcp
redis   redis:8.0.0    "docker-entrypoint.s…"   redis     26 minutes ago   Up 26 minutes                    0.0.0.0:6379->6379/tcp, [::]:6379->6379/tcp
[root@docker ~]# mkdir composetest
[root@docker ~]# cd composetest/
[root@docker composetest]# ls
[root@docker composetest]# vim app.py
[root@docker composetest]# cat app.py
```

```python
import time

import redis
from flask import Flask

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)


def get_hit_count():
    retries = 5
    while True:
        try:
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)


@app.route('/')
def hello():
    count = get_hit_count()
    return 'Hello World! I have been seen {} times.\n'.format(count)


if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)
```
```bash
[root@docker composetest]# pip freeze > requirements.txt
[root@docker composetest]# cat requirements.txt
flask
redis
```

pip itself had to be installed first (local ISO repository; the docker-ce-stable metadata errors clear up on retry):

```bash
[root@docker yum.repos.d]# mount /dev/sr0 /mnt
mount: /mnt: WARNING: source write-protected, mounted read-only.
[root@docker yum.repos.d]# dnf install pip -y
Updating Subscription Management repositories.
Unable to read consumer identity

This system is not registered with an entitlement server. You can use rhc or subscription-manager to register.

BaseOS                         2.7 MB/s | 2.7 kB   00:00
AppStream                      510 kB/s | 3.2 kB   00:00
Docker CE Stable - x86_64      4.2 kB/s | 3.5 kB   00:00
Docker CE Stable - x86_64       31  B/s |  55  B   00:01
Errors during downloading metadata for repository 'docker-ce-stable':
  - Curl error (35): SSL connect error for https://download.docker.com/linux/rhel/9/x86_64/stable/repodata/e0bf6fe4688b6f32c32f16998b1da1027a1ebfcec723b7c6c09e032effdb248a-primary.xml.gz [OpenSSL SSL_connect: Connection reset by peer in connection to download.docker.com:443 ]
  - Curl error (35): SSL connect error for https://download.docker.com/linux/rhel/9/x86_64/stable/repodata/65c4f66e2808d328890505c3c2f13bb35a96f457d1c21a6346191c4dc07e6080-updateinfo.xml.gz [OpenSSL SSL_connect: Connection reset by peer in connection to download.docker.com:443 ]
  - Curl error (35): SSL connect error for https://download.docker.com/linux/rhel/9/x86_64/stable/repodata/2fb592ad5a8fa5136a1d5992ce0a70f344c84f1601f0792d9573f8c5de73dffa-filelists.xml.gz [OpenSSL SSL_connect: Connection reset by peer in connection to download.docker.com:443 ]
Error: Failed to download metadata for repo 'docker-ce-stable': Yum repo downloading error: Downloading error(s): repodata/e0bf6fe4688b6f32c32f16998b1da1027a1ebfcec723b7c6c09e032effdb248a-primary.xml.gz - Cannot download, all mirrors were already tried without success; repodata/2fb592ad5a8fa5136a1d5992ce0a70f344c84f1601f0792d9573f8c5de73dffa-filelists.xml.gz - Cannot download, all mirrors were already tried without success
[root@docker yum.repos.d]# dnf install pip -y
Updating Subscription Management repositories.
Unable to read consumer identity

This system is not registered with an entitlement server. You can use rhc or subscription-manager to register.

Docker CE Stable - x86_64      4.5 kB/s | 3.5 kB   00:00
Docker CE Stable - x86_64
......
Complete!
```

Stop the earlier stack, then create the Dockerfile and a new Compose file for the Flask + Redis example:

```bash
[root@docker ~]# ls
anaconda-ks.cfg  composetest  docker-compose.yaml
[root@docker ~]# docker-compose down
[+] Running 4/4
 ✔ Container redis       Removed  0.1s
 ✔ Container nginx       Removed  0.1s
 ✔ Container mysql       Removed  0.0s
 ✔ Network root_default  Removed  0.1s
[root@docker ~]# docker-compose ps
NAME      IMAGE     COMMAND   SERVICE   CREATED   STATUS    PORTS
[root@docker ~]# cd composetest/
[root@docker composetest]# ls
[root@docker composetest]# vim Dockerfile
[root@docker composetest]# vim docker-compose.yaml
[root@docker composetest]# cat Dockerfile
FROM python:3.12-alpine
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
[root@docker composetest]# cat docker-compose.yaml
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: redis:alpine
```

From a second session, access the service once it is up:

```bash
[root@docker composetest]# curl http://192.168.98.149:5000
Hello World! I have been seen 1 times.
```

Run the command below to start the stack in the foreground; after the access from the second session succeeds, exit it.
```bash
[root@docker composetest]# docker-compose up
Compose can now delegate builds to bake for better performance.
 To do so, set COMPOSE_BAKE=true.
[+] Building 50.1s (10/10) FINISHED                                                               docker:default
 => [web internal] load build definition from Dockerfile                                          0.0s
 => => transferring dockerfile: 208B                                                              0.0s
 => [web internal] load metadata for docker.io/library/python:3.12-alpine                         3.6s
 => [web internal] load .dockerignore                                                             0.0s
 => => transferring context: 2B                                                                   0.0s
 => [web internal] load build context                                                             0.0s
 => => transferring context: 392B                                                                 0.0s
 => [web 1/4] FROM docker.io/library/python:3.12-alpine@sha256:a664b68d141849d28fd0992b9aa88b4eab7a21d258e2e7ddb9d1b   6.9s
 => => resolve docker.io/library/python:3.12-alpine@sha256:a664b68d141849d28fd0992b9aa88b4eab7a21d258e2e7ddb9d1b59fe   0.0s
 => => sha256:d96ff845ea26c45b2499dfa5fb9ef5b354ba49ee6025d35c4ba81403072c10ad 252B / 252B        5.2s
 => => sha256:a664b68d141849d28fd0992b9aa88b4eab7a21d258e2e7ddb9d1b59fe07535a5 9.03kB / 9.03kB    0.0s
 => => sha256:b18b7c0ecd765d0608e439663196f2cfe4b572dccfffe1490530f7398df4b271 1.74kB / 1.74kB    0.0s
 => => sha256:81ce3817028c37c988e38333dd7283eab7662ce7dd43b5f019f86038c71f75ba 5.33kB / 5.33kB    0.0s
 => => sha256:d39e98ccbc35a301f215a15d73fd647ad7044124c3835da293d82cbb765cab03 460.20kB / 460.20kB  2.2s
 => => sha256:a597717b83500ddd11c9b9eddc7094663b2a376c4c5cb0078b433e11a82e70d0 13.66MB / 13.66MB  6.2s
 => => extracting sha256:d39e98ccbc35a301f215a15d73fd647ad7044124c3835da293d82cbb765cab03         0.1s
 => => extracting sha256:a597717b83500ddd11c9b9eddc7094663b2a376c4c5cb0078b433e11a82e70d0         0.6s
 => => extracting sha256:d96ff845ea26c45b2499dfa5fb9ef5b354ba49ee6025d35c4ba81403072c10ad         0.0s
 => [web 2/4] ADD . /code                                                                         0.1s
 => [web 3/4] WORKDIR /code                                                                       0.0s
 => [web 4/4] RUN pip install -r requirements.txt                                                39.3s
 => [web] exporting to image                                                                      0.1s
 => => exporting layers                                                                           0.1s
 => => writing image sha256:32303d47b57e0c674340d90a76bb82776024626e64d95e04e0ff5d7e893caa80      0.0s
 => => naming to docker.io/library/composetest-web                                                0.0s
 => [web] resolving provenance for metadata file                                                  0.0s
[+] Running 4/4
 ✔ web                            Built    0.0s
 ✔ Network composetest_default    Created  0.0s
 ✔ Container composetest-redis-1  Created  0.0s
 ✔ Container composetest-web-1    Created  0.0s
Attaching to redis-1, web-1
redis-1  | Starting Redis Server
redis-1  | 1:C 10 May 2025 08:28:20.677 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis-1  | 1:C 10 May 2025 08:28:20.680 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis-1  | 1:C 10 May 2025 08:28:20.680 * Redis version=8.0.0, bits=64, commit=00000000, modified=1, pid=1, just started
redis-1  | 1:C 10 May 2025 08:28:20.680 * Configuration loaded
redis-1  | 1:M 10 May 2025 08:28:20.683 * monotonic clock: POSIX clock_gettime
redis-1  | 1:M 10 May 2025 08:28:20.693 * Running mode=standalone, port=6379.
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> RedisBloom version 7.99.90 (Git=unknown)
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> Registering configuration options: [
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> 	{ bf-error-rate       :      0.01 }
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> 	{ bf-initial-size     :       100 }
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> 	{ bf-expansion-factor :         2 }
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> 	{ cf-bucket-size      :         2 }
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> 	{ cf-initial-size     :      1024 }
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> 	{ cf-max-iterations   :        20 }
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> 	{ cf-expansion-factor :         1 }
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> 	{ cf-max-expansions   :        32 }
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> ]
redis-1  | 1:M 10 May 2025 08:28:20.709 * Module 'bf' loaded from /usr/local/lib/redis/modules//redisbloom.so
redis-1  | 1:M 10 May 2025 08:28:20.799 * <search> Redis version found by RedisSearch : 8.0.0 - oss
redis-1  | 1:M 10 May 2025 08:28:20.800 * <search> RediSearch version 8.0.0 (Git=HEAD-61787b7)
redis-1  | 1:M 10 May 2025 08:28:20.803 * <search> Low level api version 1 initialized successfully
redis-1  | 1:M 10 May 2025 08:28:20.808 * <search> gc: ON, prefix min length: 2, min word length to stem: 4, prefix max expansions: 200, query timeout (ms): 500, timeout policy: fail, cursor read size: 1000, cursor max idle (ms): 300000, max doctable size: 1000000, max number of search results: 1000000,
redis-1  | 1:M 10 May 2025 08:28:20.810 * <search> Initialized thread pools!
redis-1  | 1:M 10 May 2025 08:28:20.810 * <search> Disabled workers threadpool of size 0
redis-1  | 1:M 10 May 2025 08:28:20.815 * <search> Subscribe to config changes
redis-1  | 1:M 10 May 2025 08:28:20.815 * <search> Enabled role change notification
redis-1  | 1:M 10 May 2025 08:28:20.815 * <search> Cluster configuration: AUTO partitions, type: 0, coordinator timeout: 0ms
redis-1  | 1:M 10 May 2025 08:28:20.816 * <search> Register write commands
redis-1  | 1:M 10 May 2025 08:28:20.816 * Module 'search' loaded from /usr/local/lib/redis/modules//redisearch.so
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> RedisTimeSeries version 79991, git_sha=de1ad5089c15c42355806bbf51a0d0cf36f223f6
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> Redis version found by RedisTimeSeries : 8.0.0 - oss
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> Registering configuration options: [
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> 	{ ts-compaction-policy   :              }
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> 	{ ts-num-threads         :            3 }
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> 	{ ts-retention-policy    :            0 }
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> 	{ ts-duplicate-policy    :        block }
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> 	{ ts-chunk-size-bytes    :         4096 }
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> 	{ ts-encoding            :   compressed }
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> 	{ ts-ignore-max-time-diff:            0 }
redis-1  | 1:M 10 May 2025 08:28:20.829 * <timeseries> 	{ ts-ignore-max-val-diff :     0.000000 }
redis-1  | 1:M 10 May 2025 08:28:20.829 * <timeseries> ]
redis-1  | 1:M 10 May 2025 08:28:20.833 * <timeseries> Detected redis oss
redis-1  | 1:M 10 May 2025 08:28:20.837 * Module 'timeseries' loaded from /usr/local/lib/redis/modules//redistimeseries.so
redis-1  | 1:M 10 May 2025 08:28:20.886 * <ReJSON> Created new data type 'ReJSON-RL'
redis-1  | 1:M 10 May 2025 08:28:20.900 * <ReJSON> version: 79990 git sha: unknown branch: unknown
redis-1  | 1:M 10 May 2025 08:28:20.900 * <ReJSON> Exported RedisJSON_V1 API
redis-1  | 1:M 10 May 2025 08:28:20.900 * <ReJSON> Exported RedisJSON_V2 API
redis-1  | 1:M 10 May 2025 08:28:20.900 * <ReJSON> Exported RedisJSON_V3 API
redis-1  | 1:M 10 May 2025 08:28:20.900 * <ReJSON> Exported RedisJSON_V4 API
redis-1  | 1:M 10 May 2025 08:28:20.900 * <ReJSON> Exported RedisJSON_V5 API
redis-1  | 1:M 10 May 2025 08:28:20.900 * <ReJSON> Enabled diskless replication
redis-1  | 1:M 10 May 2025 08:28:20.904 * <ReJSON> Initialized shared string cache, thread safe: false.
redis-1  | 1:M 10 May 2025 08:28:20.904 * Module 'ReJSON' loaded from /usr/local/lib/redis/modules//rejson.so
redis-1  | 1:M 10 May 2025 08:28:20.904 * <search> Acquired RedisJSON_V5 API
redis-1  | 1:M 10 May 2025 08:28:20.907 * Server initialized
redis-1  | 1:M 10 May 2025 08:28:20.910 * Ready to accept connections tcp
web-1    |  * Serving Flask app 'app'
web-1    |  * Debug mode: on
web-1    | WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
web-1    |  * Running on all addresses (0.0.0.0)
web-1    |  * Running on http://127.0.0.1:5000
web-1    |  * Running on http://172.18.0.3:5000
web-1    | Press CTRL+C to quit
web-1    |  * Restarting with stat
web-1    |  * Debugger is active!
web-1    |  * Debugger PIN: 102-900-685
web-1    | 192.168.98.149 - - [10/May/2025 08:31:06] "GET / HTTP/1.1" 200 -
w Enable Watch
```

One build attempt had failed at the pip install step; pip on the host was then checked and upgraded:

```bash
failed to solve: process "/bin/sh -c pip install -r requirements.txt" did not complete successfully: exit code: 1

[root@docker composetest]# pip install --upgrade pip
Requirement already satisfied: pip in /usr/lib/python3.9/site-packages (21.3.1)
```

### Using Compose with Swarm

With `docker service create` only one service can be deployed at a time; with a docker-compose.yml, several related services can be started at once.

```bash
[root@docker ~]# ls
anaconda-ks.cfg  composetest  docker-compose.yaml
[root@docker ~]# mkdir dcswarm
[root@docker ~]# ls
anaconda-ks.cfg  composetest  dcswarm  docker-compose.yaml

[root@docker ~]# cd dcswarm/
[root@docker dcswarm]# vim docker-compose.yaml
[root@docker dcswarm]# cat docker-compose.yaml
services:
  nginx:
    image: nginx:1.27.5
    ports:
      - 80:80
      - 443:443
    volumes:
      - /opt/nginx/conf:/etc/nginx/conf.d
      - /opt/nginx/html:/usr/share/nginx/html
    deploy:
      mode: replicated
      replicas: 2
  mysql:
    image: mysql:9.3.0
    ports:
      - 3306:3306
    command:
      - --character-set-server=utf8mb4
      - --collation-server=utf8mb4_general_ci
    environment:
      MYSQL_ROOT_PASSWORD: 123456
    volumes:
      - /opt/mysql/data:/var/lib/mysql
    deploy:
      mode: replicated
      replicas: 2

[root@docker dcswarm]# docker stack deploy -c docker-compose.yaml web
```

The target hosts must already form a swarm (`docker swarm init` on the manager, `docker swarm join --token …` on the workers). Copy the file to the manager and deploy the stack there:

```bash
[root@docker ~]# scp dcswarm/* root@192.168.98.47:~
root@192.168.98.47's password:
docker-compose.yaml

[root@manager ~]# ls
anaconda-ks.cfg  docker-compose.yaml  nginx_1.27.4.tar.gz  swarm_1.2.9.tar.gz
[root@manager ~]# mkdir dcswarm
[root@manager ~]# mv docker-compose.yaml dcswarm/
[root@manager ~]# cd dcswarm/
[root@manager dcswarm]# ls
docker-compose.yaml
[root@manager dcswarm]# docker stack deploy -c docker-compose.yaml web
Since --detach=false was not specified, tasks will be created in the background.
In a future release, --detach=false will become the default.
Creating network web_default
Creating service web_mysql
Creating service web_nginx
```
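To confirm that the stack really came up with two replicas of each service, the usual follow-up commands on the manager are `docker stack services` and `docker stack ps` — a quick sketch, assuming the stack name web used above:

```bash
# List the services belonging to the stack and their replica counts
docker stack services web

# Show where each task (container) was scheduled and its current state
docker stack ps web

# Remove the whole stack again when finished
docker stack rm web
```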
## 3. Configuring a Private Registry: Harbor

### 3.1 Environment preparation

Harbor is an enterprise-class Registry server for storing and distributing Docker images.

- Change the host name and IP address
- Enable IP forwarding (/etc/sysctl.conf: net.ipv4.ip_forward = 1)
- Configure host name mapping (/etc/hosts)

```bash
# Change the host name and IP address
[root@docker ~]# hostnamectl hostname harbor
[root@docker ~]# ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:f5:e5:24 brd ff:ff:ff:ff:ff:ff
    altname enp3s0
    inet 192.168.86.132/24 brd 192.168.86.255 scope global dynamic noprefixroute ens160
       valid_lft 1672sec preferred_lft 1672sec
    inet6 fe80::20c:29ff:fef5:e524/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:f5:e5:2e brd ff:ff:ff:ff:ff:ff
    altname enp19s0
    inet 192.168.98.159/24 brd 192.168.98.255 scope global dynamic noprefixroute ens224
       valid_lft 1672sec preferred_lft 1672sec
    inet6 fe80::fd7d:606d:1a1b:d3cc/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether ae:a7:3e:3a:39:bb brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
[root@docker ~]# nmcli c show
NAME                UUID                                  TYPE      DEVICE
Wired connection 1  110c742f-bd12-3ba3-b671-1972a75aa2e6  ethernet  ens224
ens160              d622d6da-1540-371d-8def-acd3db9bd38d  ethernet  ens160
lo                  d20cef01-6249-4012-908c-f775efe44118  loopback  lo
docker0             b023990a-e131-4a68-828c-710158f77a50  bridge    docker0
[root@docker ~]# nmcli c m "Wired connection 1" connection.id ens224
[root@docker ~]# nmcli c show
NAME     UUID                                  TYPE      DEVICE
ens224   110c742f-bd12-3ba3-b671-1972a75aa2e6  ethernet  ens224
ens160   d622d6da-1540-371d-8def-acd3db9bd38d  ethernet  ens160
lo       d20cef01-6249-4012-908c-f775efe44118  loopback  lo
docker0  b023990a-e131-4a68-828c-710158f77a50  bridge    docker0
[root@docker ~]# nmcli c m ens224 ipv4.method manual ipv4.addresses 192.168.98.20/24 ipv4.gateway 192.168.98.2 ipv4.dns 223.5.5.5 connection.autoconnect yes
[root@docker ~]# nmcli c up ens224
[root@harbor ~]# nmcli c m ens160 ipv4.method manual ipv4.addresses 192.168.86.20/24 ipv4.gateway 192.168.86.200 ipv4.dns "223.5.5.5 8.8.8.8" connection.autoconnect yes
[root@harbor ~]# nmcli c up ens160
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6)
[root@harbor ~]# ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:f5:e5:24 brd ff:ff:ff:ff:ff:ff
    altname enp3s0
    inet 192.168.86.20/24 brd 192.168.86.255 scope global noprefixroute ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fef5:e524/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:f5:e5:2e brd ff:ff:ff:ff:ff:ff
    altname enp19s0
    inet 192.168.98.20/24 brd 192.168.98.255 scope global noprefixroute ens224
       valid_lft forever preferred_lft forever
    inet6 fe80::fd7d:606d:1a1b:d3cc/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether ae:a7:3e:3a:39:bb brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
```
```bash
# Enable IP forwarding
[root@harbor ~]# echo net.ipv4.ip_forward=1 >> /etc/sysctl.conf
[root@harbor ~]# sysctl -p
net.ipv4.ip_forward = 1
# Configure host name mapping
[root@harbor ~]# vim /etc/hosts
[root@harbor ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.86.11 k8s-master01 m1
192.168.86.12 k8s-node01 n1
192.168.86.13 k8s-node02 n2
192.168.86.20 harbor.registry.com harbor
```

### 3.2 Installing Docker

Add the Docker repository, install Docker, configure it, start it, and verify it. The relevant settings in /etc/docker/daemon.json (JSON does not allow inline comments, so they are listed here):

- default-ipc-mode: shareable — enable the shareable IPC mode
- data-root: /data/docker — the directory where Docker keeps its data
- exec-opts: native.cgroupdriver=systemd — use systemd as the cgroup driver
- log-driver / log-opts: JSON log files, rotated at 100 MB, at most 50 files
- insecure-registries: the address of our own private registry
- registry-mirrors: public mirrors used when pulling images

```bash
[root@harbor ~]# vim /etc/docker/daemon.json
[root@harbor ~]# cat /etc/docker/daemon.json
{
  "default-ipc-mode": "shareable",
  "data-root": "/data/docker",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "50"
  },
  "insecure-registries": ["https://harbor.registry.com"],
  "registry-mirrors": [
    "https://docker.m.daocloud.io",
    "https://docker.imgdb.de",
    "https://docker-0.unsee.tech",
    "https://docker.hlmirror.com",
    "https://docker.1ms.run",
    "https://func.ink",
    "https://lispy.org",
    "https://docker.xiaogenban1993.com"
  ]
}
[root@harbor ~]# mkdir -p /data/docker
[root@harbor ~]# systemctl restart docker
[root@harbor ~]# ls /data/docker/
buildkit  containers  engine-id  image  network  overlay2  plugins  runtimes  swarm  tmp  volumes
```

### 3.3 Installing docker-compose

Download, install, make executable, verify:

```bash
[root@harbor ~]# mv docker-compose-linux-x86_64 /usr/bin/docker-compose
[root@harbor ~]# chmod +x /usr/bin/do<Tab>
docker            dockerd                        dockerd-rootless.sh   domainname
docker-compose    dockerd-rootless-setuptool.sh  docker-proxy
[root@harbor ~]# chmod +x /usr/bin/docker-compose
[root@harbor ~]# docker-compose --version
Docker Compose version v2.35.1
[root@harbor ~]# cd /data/
[root@harbor data]# mv /root/harbor-offline-installer-v2.13.0.tgz .
```
```bash
[root@harbor data]# ls
docker  harbor-offline-installer-v2.13.0.tgz
[root@harbor data]# tar -xzf harbor-offline-installer-v2.13.0.tgz
[root@harbor data]# ls
docker  harbor  harbor-offline-installer-v2.13.0.tgz
[root@harbor data]# rm -f *.tgz
[root@harbor data]# ls
docker  harbor
[root@harbor data]# cd harbor/
[root@harbor harbor]# ls
common.sh  harbor.v2.13.0.tar.gz  harbor.yml.tmpl  install.sh  LICENSE  prepare
```

Overview of the remaining steps:

- 3.4 Preparing Harbor — download Harbor, unpack the archive
- 3.5 Configuring certificates — generate a CA certificate, generate a server certificate, provide the certificates to Harbor and Docker
- 3.6 Deploying and configuring Harbor — configure Harbor, load the Harbor images, check the installation environment, start Harbor, list the running containers
- 3.7 Configuring a startup service — stop Harbor, write the service file, start the Harbor service
- 3.8 Customizing the local registry — configure the host mapping, configure the registry
- 3.9 Testing the local registry — pull an image, tag the image, log in to the registry, push the image, pull the image

### Configuring certificates

Reference: https://goharbor.io/docs/2.13.0/install-config/installation-prereqs/

```bash
[root@harbor harbor]# mkdir ssl
[root@harbor harbor]# cd ssl
[root@harbor ssl]# openssl genrsa -out ca.key 4096
[root@harbor ssl]# ls
ca.key
[root@harbor ssl]# openssl req -x509 -new -nodes -sha512 -days 3650 \
    -subj "/C=CN/ST=Chongqing/L=Banan/O=example/OU=Personal/CN=MyPersonal Root CA" \
    -key ca.key \
    -out ca.crt
[root@harbor ssl]# ls
ca.crt  ca.key

[root@harbor ssl]# openssl genrsa -out harbor.registry.com.key 4096
[root@harbor ssl]# ls
ca.crt  ca.key  harbor.registry.com.key
[root@harbor ssl]# openssl req -sha512 -new \
    -subj "/C=CN/ST=Chongqing/L=Banan/O=example/OU=Personal/CN=harbor.registry.com" \
    -key harbor.registry.com.key \
    -out harbor.registry.com.csr
[root@harbor ssl]# ls
ca.crt  ca.key  harbor.registry.com.csr  harbor.registry.com.key

[root@harbor ssl]# cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1=harbor.registry.com
DNS.2=harbor.registry
DNS.3=harbor
EOF
[root@harbor ssl]# ls
ca.crt  ca.key  harbor.registry.com.csr  harbor.registry.com.key  v3.ext

[root@harbor ssl]# openssl x509 -req -sha512 -days 3650 \
    -extfile v3.ext \
    -CA ca.crt -CAkey ca.key -CAcreateserial \
    -in harbor.registry.com.csr \
    -out harbor.registry.com.crt
Certificate request self-signature ok
subject=C = CN, ST = Chongqing, L = Banan, O = example, OU = Personal, CN = harbor.registry.com
[root@harbor ssl]# ls
ca.crt  ca.key  ca.srl  harbor.registry.com.crt  harbor.registry.com.csr  harbor.registry.com.key  v3.ext
[root@harbor ssl]# mkdir /data/cert
[root@harbor ssl]# cp harbor.registry.com.crt /data/cert/
[root@harbor ssl]# cp harbor.registry.com.key /data/cert/
[root@harbor ssl]# ls /data/cert/
harbor.registry.com.crt  harbor.registry.com.key
[root@harbor ssl]# openssl x509 -inform PEM -in harbor.registry.com.crt -out harbor.registry.com.cert
[root@harbor ssl]# ls
ca.crt  ca.key  ca.srl  harbor.registry.com.cert  harbor.registry.com.crt  harbor.registry.com.csr  harbor.registry.com.key  v3.ext

[root@harbor ssl]# mkdir -p /etc/docker/certs.d/harbor.registry.com:443
[root@harbor ssl]# cp harbor.registry.com.cert /etc/docker/certs.d/harbor.registry.com:443/
[root@harbor ssl]# cp harbor.registry.com.key /etc/docker/certs.d/harbor.registry.com:443/
[root@harbor ssl]# cp ca.crt /etc/docker/certs.d/harbor.registry.com:443/
[root@harbor ssl]# systemctl restart docker
[root@harbor ssl]# systemctl status docker
● docker.service - Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; preset: disabled)
     Active: active (running) since Sun 2025-05-11 10:42:37 CST; 1min 25s ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 3322 (dockerd)
      Tasks: 10
     Memory: 29.7M
        CPU: 353ms
     CGroup: /system.slice/docker.service
             └─3322 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

May 11 10:42:36 harbor dockerd[3322]: time="2025-05-11T10:42:36.466441227+08:00" level=info msg="Creating a contai…
```
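Before wiring the certificate into Harbor it is worth checking that the SANs from v3.ext really ended up in the signed certificate, since both Docker and Harbor match on those names. A small verification sketch using the files generated above:

```bash
# Print the subject and the Subject Alternative Name extension of the signed certificate
openssl x509 -noout -subject -in harbor.registry.com.crt
openssl x509 -noout -text -in harbor.registry.com.crt | grep -A1 "Subject Alternative Name"

# Confirm the certificate chains back to our private CA
openssl verify -CAfile ca.crt harbor.registry.com.crt
```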
### Deploying and configuring Harbor

```bash
[root@harbor ssl]# cd ..
[root@harbor harbor]# ls
common.sh  harbor.v2.13.0.tar.gz  harbor.yml.tmpl  install.sh  LICENSE  prepare  ssl
[root@harbor harbor]# cp harbor.yml.tmpl harbor.yml
[root@harbor harbor]# vim harbor.yml
[root@harbor harbor]# cat harbor.yml
# Configuration file of Harbor

# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: harbor.registry.com                        # changed

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80

# https related config
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /data/cert/harbor.registry.com.crt    # changed
  private_key: /data/cert/harbor.registry.com.key    # changed
...............
[root@harbor harbor]# docker images
REPOSITORY   TAG       IMAGE ID   CREATED   SIZE
[root@harbor harbor]# docker load -i harbor.v2.13.0.tar.gz
874b37071853: Loading layer [==================================================>]
........
832349ff3d50: Loading layer [==================================================>]  38.95MB/38.95MB
Loaded image: goharbor/harbor-exporter:v2.13.0
[root@harbor harbor]# docker images
REPOSITORY                      TAG       IMAGE ID       CREATED       SIZE
goharbor/harbor-exporter        v2.13.0   0be56feff492   4 weeks ago   127MB
goharbor/redis-photon           v2.13.0   7c0d9781ab12   4 weeks ago   166MB
goharbor/trivy-adapter-photon   v2.13.0   f2b4d5497558   4 weeks ago   381MB
goharbor/harbor-registryctl     v2.13.0   bbd957df71d6   4 weeks ago   162MB
goharbor/registry-photon        v2.13.0   fa23989bf194   4 weeks ago   85.9MB
goharbor/nginx-photon           v2.13.0   c922d86a7218   4 weeks ago   151MB
goharbor/harbor-log             v2.13.0   463b8f469e21   4 weeks ago   164MB
goharbor/harbor-jobservice      v2.13.0   112a1616822d   4 weeks ago   174MB
goharbor/harbor-core            v2.13.0   b90fcb27fd54   4 weeks ago   197MB
goharbor/harbor-portal          v2.13.0   858f92a0f5f9   4 weeks ago   159MB
goharbor/harbor-db              v2.13.0   13a2b78e8616   4 weeks ago   273MB
goharbor/prepare                v2.13.0   2380b5a4f127   4 weeks ago   205MB
[root@harbor harbor]# ls
common.sh  harbor.v2.13.0.tar.gz  harbor.yml  harbor.yml.tmpl  install.sh  LICENSE  prepare  ssl
```
```bash
[root@harbor harbor]# ./prepare
prepare base dir is set to /data/harbor
Generated configuration file: /config/portal/nginx.conf
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/registryctl/config.yml
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
copy /data/secret/tls/harbor_internal_ca.crt to shared trust ca dir as name harbor_internal_ca.crt ...
ca file /hostfs/data/secret/tls/harbor_internal_ca.crt is not exist
copy  to shared trust ca dir as name storage_ca_bundle.crt ...
copy None to shared trust ca dir as name redis_tls_ca.crt ...
Generated and saved secret to file: /data/secret/keys/secretkey
Successfully called func: create_root_cert
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir

# Start Harbor
[root@harbor harbor]# ./install.sh

[Step 0]: checking if docker is installed ...

Note: docker version: 28.0.4

[Step 1]: checking docker-compose is installed ...

Note: Docker Compose version v2.34.0

[Step 2]: loading Harbor images ...
Loaded image: goharbor/harbor-db:v2.13.0
Loaded image: goharbor/harbor-jobservice:v2.13.0
Loaded image: goharbor/harbor-registryctl:v2.13.0
Loaded image: goharbor/redis-photon:v2.13.0
Loaded image: goharbor/trivy-adapter-photon:v2.13.0
Loaded image: goharbor/nginx-photon:v2.13.0
Loaded image: goharbor/registry-photon:v2.13.0
Loaded image: goharbor/prepare:v2.13.0
Loaded image: goharbor/harbor-portal:v2.13.0
Loaded image: goharbor/harbor-core:v2.13.0
Loaded image: goharbor/harbor-log:v2.13.0
Loaded image: goharbor/harbor-exporter:v2.13.0

[Step 3]: preparing environment ...

[Step 4]: preparing harbor configs ...
prepare base dir is set to /data/harbor
Clearing the configuration file: /config/portal/nginx.conf
.......
Generated configuration file: /config/portal/nginx.conf
.......
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
copy /data/secret/tls/harbor_internal_ca.crt to shared trust ca dir as name harbor_internal_ca.crt ...
ca file /hostfs/data/secret/tls/harbor_internal_ca.crt is not exist
copy  to shared trust ca dir as name storage_ca_bundle.crt ...
copy None to shared trust ca dir as name redis_tls_ca.crt ...
loaded secret from file: /data/secret/keys/secretkey
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir

Note: stopping existing Harbor instance ...

[Step 5]: starting Harbor ...
[+] Running 10/10        # all 10 containers come up successfully
 ✔ Network harbor_harbor  Created  0.0s
 .........                         1.3s
 ✔
----Harbor has been installed and started successfully.----
```
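At this point the Harbor UI should be reachable at https://harbor.registry.com (log in as admin with the harbor_admin_password set in harbor.yml; the template's default is Harbor12345 unless changed). A quick sanity check from the harbor host itself — a sketch using the CA generated earlier:

```bash
# All Harbor containers should be up (healthy)
docker ps --format "table {{.Names}}\t{{.Status}}"

# The reverse proxy should answer over HTTPS with our certificate
curl --cacert /data/harbor/ssl/ca.crt -I https://harbor.registry.com
```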
### Configuring the startup service

```bash
# This removes the containers but keeps the images
[root@harbor harbor]# docker-compose down
[+] Running 10/10
 ✔ Container harbor-jobservice  Removed   0.1s
 ✔ Container registryctl        Removed   0.1s
 ✔ Container nginx              Removed   0.1s
 ✔ Container harbor-portal      Removed   0.1s
 ✔ Container harbor-core        Removed   0.1s
 ✔ Container registry           Removed   0.1s
 ✔ Container redis              Removed   0.1s
 ✔ Container harbor-db          Removed   0.2s
 ✔ Container harbor-log         Removed  10.1s
 ✔ Network harbor_harbor        Removed   0.1s
[root@harbor harbor]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@harbor harbor]# docker images
REPOSITORY                      TAG       IMAGE ID       CREATED       SIZE
goharbor/harbor-exporter        v2.13.0   0be56feff492   4 weeks ago   127MB
goharbor/redis-photon           v2.13.0   7c0d9781ab12   4 weeks ago   166MB
goharbor/trivy-adapter-photon   v2.13.0   f2b4d5497558   4 weeks ago   381MB
goharbor/harbor-registryctl     v2.13.0   bbd957df71d6   4 weeks ago   162MB
goharbor/registry-photon        v2.13.0   fa23989bf194   4 weeks ago   85.9MB
goharbor/nginx-photon           v2.13.0   c922d86a7218   4 weeks ago   151MB
goharbor/harbor-log             v2.13.0   463b8f469e21   4 weeks ago   164MB
goharbor/harbor-jobservice      v2.13.0   112a1616822d   4 weeks ago   174MB
goharbor/harbor-core            v2.13.0   b90fcb27fd54   4 weeks ago   197MB
goharbor/harbor-portal          v2.13.0   858f92a0f5f9   4 weeks ago   159MB
goharbor/harbor-db              v2.13.0   13a2b78e8616   4 weeks ago   273MB
goharbor/prepare                v2.13.0   2380b5a4f127   4 weeks ago   205MB
```

Write the systemd service file; all three sections ([Unit], [Service], [Install]) are required:

```bash
[root@harbor harbor]# vim /usr/lib/systemd/system/harbor.service
[root@harbor harbor]# cat /usr/lib/systemd/system/harbor.service
[Unit]
# Start-up dependencies and ordering, plus a description of the service
Description=Harbor
# Start after these units (ordering only, not a hard dependency)
After=docker.service systemd-networkd.service systemd-resolved.service
# Hard dependency: docker.service must be started first
Requires=docker.service
Documentation=http://github.com/vmware/harbor

[Service]
# How the service is started, stopped and restarted
Type=simple
# Restart on failure, waiting 5 seconds between attempts
Restart=on-failure
RestartSec=5
# Commands executed on start and stop
ExecStart=/usr/bin/docker-compose --file /data/harbor/docker-compose.yml up
ExecStop=/usr/bin/docker-compose --file /data/harbor/docker-compose.yml down

[Install]
WantedBy=multi-user.target
```

/usr/bin/docker-compose is the path where docker-compose was installed earlier.
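With the unit file in place, the usual systemd workflow applies — a short sketch of how the "start the Harbor service" step from the outline above would then look:

```bash
# Pick up the new unit file, then start Harbor now and on every boot
systemctl daemon-reload
systemctl enable --now harbor.service

# Check that the stack came back up
systemctl status harbor.service
docker ps
```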
The generated /data/harbor/docker-compose.yml that the service file points to:

```bash
[root@harbor ~]# cd /data/harbor/
[root@harbor harbor]# ls
common     docker-compose.yml     harbor.yml       install.sh  prepare
common.sh  harbor.v2.13.0.tar.gz  harbor.yml.tmpl  LICENSE     ssl
[root@harbor harbor]# cat docker-compose.yml
```

```yaml
services:
  log:
    image: goharbor/harbor-log:v2.13.0
    container_name: harbor-log
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - DAC_OVERRIDE
      - SETGID
      - SETUID
    volumes:
      - /var/log/harbor/:/var/log/docker/:z
      - type: bind
        source: ./common/config/log/logrotate.conf
        target: /etc/logrotate.d/logrotate.conf
      - type: bind
        source: ./common/config/log/rsyslog_docker.conf
        target: /etc/rsyslog.d/rsyslog_docker.conf
    ports:
      - 127.0.0.1:1514:10514
    networks:
      - harbor
  registry:
    image: goharbor/registry-photon:v2.13.0
    container_name: registry
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
    volumes:
      - /data/registry:/storage:z
      - ./common/config/registry/:/etc/registry/:z
      - type: bind
        source: /data/secret/registry/root.crt
        target: /etc/registry/root.crt
      - type: bind
        source: ./common/config/shared/trust-certificates
        target: /harbor_cust_cert
    networks:
      - harbor
    depends_on:
      - log
    logging:
      driver: syslog
      options:
        syslog-address: tcp://localhost:1514
        tag: registry
  registryctl:
    image: goharbor/harbor-registryctl:v2.13.0
    container_name: registryctl
    env_file:
      - ./common/config/registryctl/env
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
    volumes:
      - /data/registry:/storage:z
      - ./common/config/registry/:/etc/registry/:z
      - type: bind
        source: ./common/config/registryctl/config.yml
        target: /etc/registryctl/config.yml
      - type: bind
        source: ./common/config/shared/trust-certificates
        target: /harbor_cust_cert
    networks:
      - harbor
    depends_on:
      - log
    logging:
      driver: syslog
      options:
        syslog-address: tcp://localhost:1514
        tag: registryctl
  postgresql:
    image: goharbor/harbor-db:v2.13.0
    container_name: harbor-db
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - DAC_OVERRIDE
      - SETGID
      - SETUID
    volumes:
      - /data/database:/var/lib/postgresql/data:z
    networks:
      harbor:
    env_file:
      - ./common/config/db/env
    depends_on:
      - log
    logging:
      driver: syslog
      options:
        syslog-address: tcp://localhost:1514
        tag: postgresql
    shm_size: 1gb
  core:
    image: goharbor/harbor-core:v2.13.0
    container_name: harbor-core
    env_file:
      - ./common/config/core/env
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - SETGID
      - SETUID
    volumes:
      - /data/ca_download/:/etc/core/ca/:z
      - /data/:/data/:z
      - ./common/config/core/certificates/:/etc/core/certificates/:z
      - type: bind
        source: ./common/config/core/app.conf
        target: /etc/core/app.conf
      - type: bind
        source: /data/secret/core/private_key.pem
        target: /etc/core/private_key.pem
      - type: bind
        source: /data/secret/keys/secretkey
        target: /etc/core/key
      - type: bind
        source: ./common/config/shared/trust-certificates
        target: /harbor_cust_cert
    networks:
      harbor:
    depends_on:
      - log
      - registry
      - redis
      - postgresql
    logging:
      driver: syslog
      options:
        syslog-address: tcp://localhost:1514
        tag: core
  portal:
    image: goharbor/harbor-portal:v2.13.0
    container_name: harbor-portal
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
      - NET_BIND_SERVICE
    volumes:
      - type: bind
        source: ./common/config/portal/nginx.conf
        target: /etc/nginx/nginx.conf
    networks:
      - harbor
    depends_on:
      - log
    logging:
      driver: syslog
      options:
        syslog-address: tcp://localhost:1514
        tag: portal
  jobservice:
    image: goharbor/harbor-jobservice:v2.13.0
    container_name: harbor-jobservice
    env_file:
      - ./common/config/jobservice/env
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
    volumes:
      - /data/job_logs:/var/log/jobs:z
      - type: bind
        source: ./common/config/jobservice/config.yml
        target: /etc/jobservice/config.yml
      - type: bind
        source: ./common/config/shared/trust-certificates
        target: /harbor_cust_cert
    networks:
      - harbor
    depends_on:
      - core
    logging:
      driver: syslog
      options:
        syslog-address: tcp://localhost:1514
        tag: jobservice
  redis:
    image: goharbor/redis-photon:v2.13.0
    container_name: redis
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
    volumes:
      - /data/redis:/var/lib/redis
    networks:
      harbor:
    depends_on:
      - log
    logging:
      driver: syslog
      options:
        syslog-address: tcp://localhost:1514
        tag: redis
  proxy:
    image: goharbor/nginx-photon:v2.13.0
    container_name: nginx
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
      - NET_BIND_SERVICE
    volumes:
      - ./common/config/nginx:/etc/nginx:z
      - /data/secret/cert:/etc/cert:z
      - type: bind
        source: ./common/config/shared/trust-certificates
        target: /harbor_cust_cert
    networks:
      - harbor
    ports:
      - 80:8080
      - 443:8443
    depends_on:
      - registry
      - core
      - portal
      - log
    logging:
      driver: syslog
      options:
        syslog-address: tcp://localhost:1514
        tag: proxy
networks:
  harbor:
    external: false
```
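The original walkthrough ends here; the remaining outline items — customizing and testing the local registry (3.8 and 3.9) — amount to mapping harbor.registry.com on the client, trusting the CA, and then the usual login/tag/push/pull cycle. A sketch of those steps, assuming a project (for example Harbor's default library project) exists in the UI and the admin credentials from harbor.yml are used:

```bash
# On a client host: resolve the registry name and trust our private CA
echo "192.168.86.20 harbor.registry.com harbor" >> /etc/hosts
mkdir -p /etc/docker/certs.d/harbor.registry.com
scp root@harbor:/data/harbor/ssl/ca.crt /etc/docker/certs.d/harbor.registry.com/
systemctl restart docker

# Log in, tag a local image for the private registry, push it, then pull it back
docker login harbor.registry.com            # user admin, password from harbor.yml
docker pull nginx:1.27.4
docker tag nginx:1.27.4 harbor.registry.com/library/nginx:1.27.4
docker push harbor.registry.com/library/nginx:1.27.4
docker pull harbor.registry.com/library/nginx:1.27.4
```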