
Deploying a K8S Cluster on CentOS 7.x (with the containerd Runtime)

Author: 蜗牛也是牛 · 2022-12-04 · Shandong

Preface

The Kubernetes community has announced that dockershim will be gradually deprecated after version 1.20. dockershim is a Kubernetes component (a "shim") whose job is to drive Docker. Docker appeared in 2013, while Kubernetes did not arrive until 2014, so Docker was not designed with orchestration in mind and could not have anticipated a giant like Kubernetes. Kubernetes, however, was originally built with Docker as its container runtime, and much of its logic was written specifically against Docker. As the community matured and wanted to support more container runtimes, the Docker-specific logic was split out into a separate component: dockershim.

Precisely because of this, any change in either Kubernetes or Docker requires dockershim to be maintained in order to keep support intact. Yet driving Docker through dockershim ultimately still means driving Docker's own underlying runtime, containerd, and containerd itself already supports CRI (Container Runtime Interface). So why take the extra detour through Docker? Why not talk to containerd directly over CRI? This is probably one of the reasons the community decided to deprecate dockershim.

So what exactly is containerd?

containerd is a project that was split out of Docker. It is designed to provide a container runtime for Kubernetes and is responsible for managing the lifecycle of images and containers. containerd can also work entirely on its own, without Docker.

Its features include:

  • Supports the OCI image specification

  • Supports the OCI runtime specification (runc)

  • Supports pulling images

  • Supports container network management

  • Multi-tenant storage support

  • Supports container runtime and container lifecycle management

  • Supports managing network namespaces

The main differences between containerd and Docker in command-line usage are as follows:
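A rough, non-exhaustive sketch of common command equivalents (the crictl/ctr commands listed are the usual CRI-side counterparts, given here purely for illustration):

# List containers
docker ps                                          # Docker
crictl ps                                          # containerd (CRI)

# List images
docker images                                      # Docker
crictl images                                      # containerd (CRI)

# Pull an image
docker pull nginx:1.14.2                           # Docker
crictl pull nginx:1.14.2                           # containerd (CRI)
ctr images pull docker.io/library/nginx:1.14.2     # containerd (ctr)

# View logs
docker logs <container-id>                         # Docker
crictl logs <container-id>                         # containerd (CRI)

# Exec into a container
docker exec -it <container-id> sh                  # Docker
crictl exec -it <container-id> sh                  # containerd (CRI)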

As you can see, the usage is largely similar.

Below are the concrete steps for installing a K8S cluster with kubeadm, using containerd as the container runtime.

1. Environment Overview

Host nodes
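The three hosts used throughout this article (matching the /etc/hosts entries configured below) are:

  • master: 192.168.1.92
  • node1:  192.168.1.93
  • node2:  192.168.1.94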

Software
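The software versions used in the steps below are:

  • OS: CentOS 7.x
  • containerd.io: 1.6.10
  • kubeadm / kubelet / kubectl: 1.25.0
  • CNI plugin: Calico v3.23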

2. Environment Preparation

Note: execute on ALL nodes ------------------------ START ----------------------------

2.1 Set the hostname

# Run the matching command on its respective node:
hostnamectl set-hostname master     # on master
hostnamectl set-hostname node1      # on node1
hostnamectl set-hostname node2      # on node2


2.2 Configure host name resolution between the three machines (# modify on all nodes)

[root@master ~]# cat >> /etc/hosts <<-EOF
192.168.1.92    master
192.168.1.93    node1
192.168.1.94    node2
EOF


2.3 Disable the firewall

systemctl stop firewalld && systemctl disable firewalld && systemctl status firewalld


2.4 Disable SELinux

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config


2.5 Disable swap

  • Edit /etc/fstab and comment out the swap auto-mount entry, then use free -m to confirm that swap is off.

swapoff -a
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
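A quick check (output trimmed to the relevant line; the memory figures differ per machine, only the Swap line matters and it should be all zeros):

free -m
# Swap:             0           0           0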


2.6 Configure iptables ACCEPT rules

iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT


2.7 Set kernel parameters

  • Adjust the swappiness parameter; even though swap has been disabled, this parameter must still be added.

cat <<EOF > /etc/sysctl.d/k8s.conf
vm.swappiness = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF


2.8 Apply the changes

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
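Note that modprobe alone does not persist across reboots. A minimal sketch of loading br_netfilter automatically at boot, assuming the file name /etc/modules-load.d/k8s.conf (any *.conf file in that directory is read by systemd-modules-load):

cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF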


2.9 Load the IPVS kernel modules

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4


The script above creates /etc/sysconfig/modules/ipvs.modules so that the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check whether the required kernel modules have been loaded correctly.


[root@node1 ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
nf_conntrack_ipv4      15053  19
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  4
ip_vs                 145458  10 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          139264  10 ip_vs,nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,nf_nat_masquerade_ipv4,nf_nat_masquerade_ipv6,nf_conntrack_netlink,nf_conntrack_ipv4,nf_conntrack_ipv6
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack

3. Install ipset and ipvsadm, Synchronize Time

3.1 Install the ipset package

yum install ipset -y

To make it easier to inspect the IPVS proxy rules, it is best to also install the management tool ipvsadm:

yum install ipvsadm -y
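Once the cluster is up with kube-proxy running in IPVS mode (configured later in section 6.2), the rules kube-proxy maintains can be listed like this (a usage sketch; the actual output depends on your services):

# List the IPVS virtual server table
ipvsadm -Ln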

3.2 Synchronize server time

yum install chrony -y
systemctl enable chronyd
systemctl start chronyd
chronyc sources

4. Install containerd

4.1 Add the yum repository

wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo


4.2 Install containerd

# Check the installable version
[root@master ~]# yum list | grep containerd
containerd.io.x86_64                      1.6.10-3.1.el7               @docker-ce-stable
# Install
[root@master ~]# yum -y install containerd.io
# Verify
[root@master ~]# rpm -qa | grep containerd
containerd.io-1.6.10-3.1.el7.x86_64

4.3 Generate the containerd configuration file


# Create the config directory and generate the default configuration
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
# Adjust the configuration
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
sed -i 's#sandbox_image = "registry.k8s.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml
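A quick sanity check of the two edits (a sketch; the grep patterns assume the default config layout generated above):

grep -n 'SystemdCgroup' /etc/containerd/config.toml
grep -n 'sandbox_image' /etc/containerd/config.toml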


4.4 Start containerd


systemctl enable containerd
systemctl start containerd
systemctl status containerd


4.5 Verify

[root@master ~]# ctr version
Client:
  Version:  1.6.10
  Revision: 770bd0108c32f3fb5c73ae1264f7e503fe7b2661
  Go version: go1.18.8

Server:
  Version:  1.6.10
  Revision: 770bd0108c32f3fb5c73ae1264f7e503fe7b2661
  UUID: 10b91012-6b24-4059-bf92-d71d269a5fbc
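Optionally, pulling a small public image with ctr is a simple end-to-end smoke test of the runtime (a sketch; hello-world is just an arbitrary example image):

ctr images pull docker.io/library/hello-world:latest
ctr images ls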

5. Install the Three Components (kubelet, kubeadm, kubectl)

With containerd installed and the environment configuration above in place, we can now install kubeadm. Here we install it from an explicitly configured yum repository.


5.1 Add the Kubernetes yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

5.2 Install kubeadm, kubelet and kubectl (I pin version 1.25.0 here; set your own version if you have specific requirements)

yum install -y kubelet-1.25.0 kubeadm-1.25.0 kubectl-1.25.0

5.3 Point crictl at the containerd runtime endpoint

crictl config runtime-endpoint /run/containerd/containerd.sock
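The command above records the endpoint in /etc/crictl.yaml. The file should contain at least a runtime-endpoint entry like the sketch below (additional default keys may also be written, depending on the crictl version):

# /etc/crictl.yaml
runtime-endpoint: /run/containerd/containerd.sock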

5.4 As you can see, we installed v1.25.0; now enable kubelet to start on boot

systemctl daemon-reload
systemctl enable kubelet && systemctl start kubelet
systemctl status kubelet


Note: execute on ALL nodes ------------------------ END ----------------------------


6. Initialize the Cluster: initialize the master (run on the master)

6.1 Next, prepare the kubeadm initialization file on the master node. The default initialization configuration can be exported with the following command:

kubeadm config print init-defaults > kubeadm.yaml

Then modify the configuration to suit our needs: for example, change imageRepository, and set the kube-proxy mode to ipvs. Note that because we are using containerd as the runtime, the cgroupDriver must be set to systemd when initializing the node.


6.2 Changes to make:

  1. advertiseAddress: 192.168.1.92  # change to your own master node IP

  2. name: master  # change to the master host name

  3. imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers  # change to the Aliyun mirror registry

  4. kubernetesVersion: 1.25.0   # confirm this is the version you want to install; check it with: kubelet --version

  5. podSubnet: 172.16.0.0/16   # add the pod subnet under networking:

  6. scheduler: {}          # after this line, append a KubeProxyConfiguration block with mode: ipvs

  7. cgroupDriver: systemd     # append a KubeletConfiguration block that sets the cgroup driver to systemd

[root@master ~]# cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.92
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.25.0
networking:
  dnsDomain: cluster.local
  podSubnet: 172.16.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
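Before running init, the required images can also be pre-pulled (the init output below mentions this as well); a sketch using the config file we just wrote:

kubeadm config images pull --config kubeadm.yaml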

6.3 Initialize with the configuration file above

kubeadm init --config=kubeadm.yaml
# Note: the machine must have more than 1 CPU core, and swap must be disabled (both temporarily and permanently)
[root@master ~]# kubeadm init --config=kubeadm.yaml
[init] Using Kubernetes version: v1.25.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.1.92]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.1.92 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.1.92 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.502961 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.92:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:6a5e9054fa753fd48d7b43ec6923ffe514bd1988e0dbe00f3497111675cf1bc1
[root@master ~]#

6.4 Copy the kubeconfig file

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


6.5 Add the worker nodes (node1, node2)


Remember that the preparation and configuration steps above must already have been completed on the worker nodes. Copy the $HOME/.kube/config file from the master node to the corresponding path on the worker nodes (if you want to run kubectl there), install kubeadm, kubelet and kubectl, and then simply run the join command printed at the end of the initialization. If you have lost the join command, it can be regenerated with kubeadm token create --print-join-command, as shown in the sketch after this paragraph.
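A sketch of regenerating the join command on the master (the token and hash in the output will differ from the ones shown above):

[root@master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.1.92:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:<ca-cert-hash>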

node1:


[root@node1 ~]# kubeadm join 192.168.1.92:6443 --token abcdef.0123456789abcdef \
>         --discovery-token-ca-cert-hash sha256:6a5e9054fa753fd48d7b43ec6923ffe514bd1988e0dbe00f3497111675cf1bc1
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node1 ~]#

node2:

[root@node2 ~]# kubeadm join 192.168.1.92:6443 --token abcdef.0123456789abcdef \
>         --discovery-token-ca-cert-hash sha256:6a5e9054fa753fd48d7b43ec6923ffe514bd1988e0dbe00f3497111675cf1bc1
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node2 ~]#

6.6 Check the cluster status:

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
master   NotReady   control-plane   2m57s   v1.25.0
node1    NotReady   <none>          47s     v1.25.0
node2    NotReady   <none>          29s     v1.25.0

7. Install the Network Plugin

You can see the nodes are in the NotReady state. This is because no network plugin has been installed yet: you must deploy a CNI (Container Network Interface) based pod network add-on so that your pods can communicate with each other, and the cluster DNS (CoreDNS) will not start until a network is installed. Next we install a network plugin; pick one from either of the two addresses below (I used the second one). Here we install Calico.

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
https://projectcalico.docs.tigera.io/archive/v3.23/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico

7.1 Download the calico manifest

[root@master ~]# curl https://projectcalico.docs.tigera.io/archive/v3.23/manifests/calico.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  226k  100  226k    0     0   278k      0 --:--:-- --:--:-- --:--:--  278k

7.2 Edit calico.yaml:

Note: the default pod CIDR in the file is 192.168.0.0/16

- name: CALICO_IPV4POOL_CIDR   # we configured the 172 subnet during init, so this needs to be changed accordingly
  value: "172.16.0.0/16"

7.3 Install the calico network plugin

[root@master ~]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created

7.4 Watch the pod status (refreshes once per second)

[root@master ~]# watch -n 1 kubectl get pod -n kube-system
[root@master ~]# kubectl get pod -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-d8b9b6478-2qsz7   1/1     Running   0          2m10s
calico-node-26rrq                         1/1     Running   0          2m10s
calico-node-fqrls                         1/1     Running   0          2m10s
calico-node-rjrcp                         1/1     Running   0          2m10s
coredns-7f8cbcb969-hk67g                  1/1     Running   0          37m
coredns-7f8cbcb969-sv2bw                  1/1     Running   0          37m
etcd-master                               1/1     Running   0          37m
kube-apiserver-master                     1/1     Running   0          37m
kube-controller-manager-master            1/1     Running   0          37m
kube-proxy-rv7dt                          1/1     Running   0          37m
kube-proxy-wdx68                          1/1     Running   0          19m
kube-proxy-zdwdc                          1/1     Running   0          18m
kube-scheduler-master                     1/1     Running   0          37m
[root@master ~]#

7.5 Check the cluster status

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   38m   v1.25.0
node1    Ready    <none>          19m   v1.25.0
node2    Ready    <none>          19m   v1.25.0
[root@master ~]#

8. Testing

Use k8s to start a Deployment resource

[root@master ~]# mkdir nginx
[root@master ~]# cd nginx
[root@master nginx]# vim deploy-nginx.yaml
[root@master nginx]# cat deploy-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3 # tells the Deployment to run 3 pods matching this template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
[root@master nginx]# kubectl apply -f deploy-nginx.yaml
deployment.apps/nginx-deployment created
[root@master nginx]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7fb96c846b-4phqw   1/1     Running   0          44s
nginx-deployment-7fb96c846b-sllgf   1/1     Running   0          44s
nginx-deployment-7fb96c846b-wz622   1/1     Running   0          44s
[root@master nginx]#

Check the running status of all pods

[root@master nginx]# kubectl get pod -A -o wide
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
default       nginx-deployment-7fb96c846b-4phqw         1/1     Running   0          2m23s   172.16.104.4     node2    <none>           <none>
default       nginx-deployment-7fb96c846b-sllgf         1/1     Running   0          2m23s   172.16.166.130   node1    <none>           <none>
default       nginx-deployment-7fb96c846b-wz622         1/1     Running   0          2m23s   172.16.166.129   node1    <none>           <none>
kube-system   calico-kube-controllers-d8b9b6478-2qsz7   1/1     Running   0          7m47s   172.16.104.3     node2    <none>           <none>
kube-system   calico-node-26rrq                         1/1     Running   0          7m47s   192.168.1.92     master   <none>           <none>
kube-system   calico-node-fqrls                         1/1     Running   0          7m47s   192.168.1.93     node1    <none>           <none>
kube-system   calico-node-rjrcp                         1/1     Running   0          7m47s   192.168.1.94     node2    <none>           <none>
kube-system   coredns-7f8cbcb969-hk67g                  1/1     Running   0          43m     172.16.104.2     node2    <none>           <none>
kube-system   coredns-7f8cbcb969-sv2bw                  1/1     Running   0          43m     172.16.104.1     node2    <none>           <none>
kube-system   etcd-master                               1/1     Running   0          43m     192.168.1.92     master   <none>           <none>
kube-system   kube-apiserver-master                     1/1     Running   0          43m     192.168.1.92     master   <none>           <none>
kube-system   kube-controller-manager-master            1/1     Running   0          43m     192.168.1.92     master   <none>           <none>
kube-system   kube-proxy-rv7dt                          1/1     Running   0          43m     192.168.1.92     master   <none>           <none>
kube-system   kube-proxy-wdx68                          1/1     Running   0          25m     192.168.1.93     node1    <none>           <none>
kube-system   kube-proxy-zdwdc                          1/1     Running   0          24m     192.168.1.94     node2    <none>           <none>
kube-system   kube-scheduler-master                     1/1     Running   0          43m     192.168.1.92     master   <none>           <none>
[root@master nginx]#
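As a final connectivity check, the nginx pods can be reached directly on their pod IPs from any node (a sketch; 172.16.104.4 is one of the pod IPs in the output above and will differ in your cluster):

# Should return HTTP headers from nginx (Server: nginx/1.14.2)
curl -I 172.16.104.4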

