
Geek Time Ops Advanced Training Camp: Week 12 Assignment

Author: Starry · 2023-03-14, Beijing

1. Deploy a distributed Kubernetes cluster with kubeadm.

Kubernetes cluster component deployment modes

  1. Standalone component mode

  • Except for the Add-ons, each key component is deployed as a binary on the nodes and runs as a daemon

  • The Add-ons run as Pods.

  2. Static Pod mode

  • The control-plane components run as static Pod objects on the master hosts

  • kubelet and docker are deployed as binaries and run as daemons

  • kube-proxy and the like run as Pods

  • The image registry has moved from k8s.gcr.io to registry.k8s.io


kubeadm is the cluster-building tool provided by the community

  • It performs the essential steps to build a minimal viable cluster and bring it up

  • It is a full-lifecycle management tool for Kubernetes clusters: deployment, upgrade/downgrade, and teardown

  • kubeadm only concerns itself with initializing and bootstrapping a cluster; everything beyond that is out of its scope


Kubernetes cluster example environment

OS: Ubuntu 20.04.3 LTS

Docker: 20.10.23, CGroup driver: systemd

Kubernetes: v1.26.2; CRI: docker-ce + cri-dockerd; CNI: Flannel


Set up time synchronization

# apt update
# apt install chrony
# systemctl start chrony
# systemctl enable chrony
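Whether synchronization is actually working can be checked with chrony's client tool (assuming the distribution's default NTP pool is reachable):

# chronyc sources -v    # list NTP sources and their reachability
# chronyc tracking      # current offset, stratum, and reference clock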


Host name resolution

root@k8s-master01:~# cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 ubuntu2004

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

172.29.9.1 k8s-master01.magedu.com k8s-master01 kubeapi.magedu.com k8sapi.magedu.com kubeapi
172.29.9.2 k8s-master02.magedu.com k8s-master02
172.29.9.3 k8s-master03.magedu.com k8s-master03
172.29.9.11 k8s-node01.magedu.com k8s-node01
172.29.9.12 k8s-node02.magedu.com k8s-node02
172.29.9.13 k8s-node03.magedu.com k8s-node03


Disable the swap device

root@k8s-master01:~# systemctl --type swap
  UNIT          LOAD   ACTIVE SUB    DESCRIPTION
  swap.img.swap loaded active active /swap.img

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.

root@k8s-master01:~# free -h
              total        used        free      shared  buff/cache   available
Mem:          1.9Gi       252Mi       1.3Gi       1.0Mi       358Mi       1.5Gi
Swap:         2.0Gi          0B       2.0Gi
root@k8s-master01:~# systemctl mask swap.img.swap
Created symlink /etc/systemd/system/swap.img.swap → /dev/null.
root@k8s-master01:~# free -h
              total        used        free      shared  buff/cache   available
Mem:          1.9Gi       287Mi       1.3Gi       1.0Mi       359Mi       1.5Gi
Swap:         2.0Gi          0B       2.0Gi
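As the second free -h above shows, masking the unit only keeps systemd from activating the swap device again; it stays active until the next boot. To turn it off immediately and persistently, the following can be run as well (a sketch, assuming the default /swap.img entry in /etc/fstab):

# swapoff -a                               # turn off all active swap now
# sed -i '/\/swap.img/s/^/#/' /etc/fstab   # comment out the swap entry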


Disable the default firewall service

root@k8s-master01:~# ufw status
Status: inactive
root@k8s-master01:~# ufw disable
Firewall stopped and disabled on system startup
root@k8s-master01:~# ufw status
Status: inactive


Install the packages

Install and start Docker

# apt -y install apt-transport-https ca-certificates curl software-properties-common
# curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
# add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# apt update
# apt install docker-ce=5:20.10.23~3-0~ubuntu-focal

kubelet requires the Docker engine to use systemd as its cgroup driver (the default is cgroupfs), so we also need to edit Docker's configuration file /etc/docker/daemon.json and add the following, where registry-mirrors specifies the registry mirror service to use.

# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "200m"
  },
  "storage-driver": "overlay2"
}

# systemctl daemon-reload
# systemctl restart docker.service
# systemctl enable docker.service
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
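Whether the cgroup driver change took effect can be confirmed from docker info; its output should report systemd as the driver:

# docker info | grep -i 'cgroup driver'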


Install cri-dockerd

# curl -LO https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.0/cri-dockerd_0.3.0.3-0.ubuntu-focal_amd64.deb
# dpkg -i cri-dockerd_0.3.0.3-0.ubuntu-focal_amd64.deb
# systemctl status cri-docker
● cri-docker.service - CRI Interface for Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/cri-docker.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2023-03-07 20:44:21 UTC; 46s ago
TriggeredBy: ● cri-docker.socket
       Docs: https://docs.mirantis.com
   Main PID: 7294 (cri-dockerd)
      Tasks: 7
     Memory: 10.9M
     CGroup: /system.slice/cri-docker.service
             └─7294 /usr/bin/cri-dockerd --container-runtime-endpoint fd://

Mar 07 20:44:21 k8s-master01 cri-dockerd[7294]: time="2023-03-07T20:44:21Z" level=info msg="The binary conntrack is not installed, this can cause failures in network connection cleanup."
Mar 07 20:44:21 k8s-master01 cri-dockerd[7294]: time="2023-03-07T20:44:21Z" level=info msg="The binary conntrack is not installed, this can cause failures in network connection cleanup."
Mar 07 20:44:21 k8s-master01 cri-dockerd[7294]: time="2023-03-07T20:44:21Z" level=info msg="Loaded network plugin cni"
Mar 07 20:44:21 k8s-master01 cri-dockerd[7294]: time="2023-03-07T20:44:21Z" level=info msg="Docker cri networking managed by network plugin cni"
Mar 07 20:44:21 k8s-master01 cri-dockerd[7294]: time="2023-03-07T20:44:21Z" level=info msg="Docker Info: &{ID:DSWY:WCDM:UI5G:5LOI:DKZC:CF5G:44LH:7VW3:IJVL:5IN5:BASN:R7EX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersS>
Mar 07 20:44:21 k8s-master01 cri-dockerd[7294]: time="2023-03-07T20:44:21Z" level=info msg="Setting cgroupDriver systemd"
Mar 07 20:44:21 k8s-master01 cri-dockerd[7294]: time="2023-03-07T20:44:21Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
Mar 07 20:44:21 k8s-master01 cri-dockerd[7294]: time="2023-03-07T20:44:21Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
Mar 07 20:44:21 k8s-master01 cri-dockerd[7294]: time="2023-03-07T20:44:21Z" level=info msg="Start cri-dockerd grpc backend"
Mar 07 20:44:21 k8s-master01 systemd[1]: Started CRI Interface for Docker Application Container Engine.


Install kubelet, kubeadm, and kubectl

# apt update && apt install -y apt-transport-https curl
# curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
# apt update

Install the kubelet, kubeadm, and kubectl packages on every host, and set kubelet to start automatically at boot:

# apt install -y kubelet kubeadm kubectl
# systemctl enable kubelet

Integrate kubelet with cri-dockerd

Configure cri-dockerd

Configure cri-dockerd so that it can load the CNI plugins correctly. Edit the file /usr/lib/systemd/system/cri-docker.service and make sure the ExecStart value in its [Service] section looks like the following.

ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --cni-bin-dir=/opt/cni/bin --cni-cache-dir=/var/lib/cni/cache --cni-conf-dir=/etc/cni/net.d


Parameters to add (their values must match the actual paths of the CNI plugins deployed on the system):

  • --network-plugin: the type of network plugin specification to use; CNI here;

  • --cni-bin-dir: the search directory for CNI plugin binaries;

  • --cni-cache-dir: the cache directory used by the CNI plugins;

  • --cni-conf-dir: the directory from which CNI plugin configuration files are loaded.

After the changes, reload systemd and restart the cri-docker.service unit.

# systemctl daemon-reload && systemctl restart cri-docker.service


Configure kubelet

Configure kubelet with the path of the Unix socket that cri-dockerd opens locally, which defaults to "/run/cri-dockerd.sock". Edit the file /etc/sysconfig/kubelet and add the KUBELET_KUBEADM_ARGS parameter.

# mkdir /etc/sysconfig
# cat /etc/sysconfig/kubelet
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=/run/cri-dockerd.sock"

Note that this configuration can also be skipped; instead, the "--cri-socket unix:///run/cri-dockerd.sock" option can be passed directly to each of the kubeadm commands below.

After adding the parameter there is no need to restart kubelet, since kubeadm will bring kubelet up automatically in a moment.


Pull the required images

root@k8s-master01:~# kubeadm config images list
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
root@k8s-master01:~# kubeadm config images pull --cri-socket unix:///run/cri-dockerd.sock
[config/images] Pulled registry.k8s.io/kube-apiserver:v1.26.2
[config/images] Pulled registry.k8s.io/kube-controller-manager:v1.26.2
[config/images] Pulled registry.k8s.io/kube-scheduler:v1.26.2
[config/images] Pulled registry.k8s.io/kube-proxy:v1.26.2
[config/images] Pulled registry.k8s.io/pause:3.9
[config/images] Pulled registry.k8s.io/etcd:3.5.6-0
[config/images] Pulled registry.k8s.io/coredns/coredns:v1.9.3
root@k8s-master01:~# docker images
REPOSITORY                                 TAG       IMAGE ID       CREATED        SIZE
registry.k8s.io/kube-apiserver             v1.26.2   63d3239c3c15   13 days ago    134MB
registry.k8s.io/kube-controller-manager    v1.26.2   240e201d5b0d   13 days ago    123MB
registry.k8s.io/kube-scheduler             v1.26.2   db8f409d9a5d   13 days ago    56.3MB
registry.k8s.io/kube-proxy                 v1.26.2   6f64e7135a6e   13 days ago    65.6MB
registry.k8s.io/etcd                       3.5.6-0   fce326961ae2   3 months ago   299MB
registry.k8s.io/pause                      3.9       e6f181688397   4 months ago   744kB
registry.k8s.io/coredns/coredns            v1.9.3    5185b96f0bec   9 months ago   48.8MB


Snapshot the current state of k8s-master01, then create the other nodes from that snapshot.



Initialization method 1

root@k8s-master01:~# apt search kubeadm
Sorting... Done
Full Text Search... Done
kubeadm/kubernetes-xenial,now 1.26.2-00 amd64 [installed]
  Kubernetes Cluster Bootstrapping Tool
root@k8s-master01:~# kubeadm init --control-plane-endpoint="kubeapi.magedu.com" --kubernetes-version=v1.26.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --token-ttl=0 --cri-socket unix:///run/cri-dockerd.sock --upload-certs
[init] Using Kubernetes version: v1.26.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubeapi.magedu.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.29.9.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [172.29.9.1 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [172.29.9.1 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.003436 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
820cd5ea2e3e58cfcdf0b2d687d4e7c9e7cfbaa5df6347765b5463171aecf0ce
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: ifc0se.q5gas5s2sgh3rn2f
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join kubeapi.magedu.com:6443 --token ifc0se.q5gas5s2sgh3rn2f \
    --discovery-token-ca-cert-hash sha256:3a027ce57e2df021bc432431f3d639301c480cd6559f743627dcb9c888f599e3 \
    --control-plane --certificate-key 820cd5ea2e3e58cfcdf0b2d687d4e7c9e7cfbaa5df6347765b5463171aecf0ce

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join kubeapi.magedu.com:6443 --token ifc0se.q5gas5s2sgh3rn2f \
    --discovery-token-ca-cert-hash sha256:3a027ce57e2df021bc432431f3d639301c480cd6559f743627dcb9c888f599e3

root@k8s-master01:~# mkdir -p $HOME/.kube
root@k8s-master01:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@k8s-master01:~# kubectl get nodes
NAME           STATUS     ROLES           AGE     VERSION
k8s-master01   NotReady   control-plane   3m58s   v1.26.2

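The master reports NotReady until a Pod network plugin is deployed, which the next step takes care of. Note also that if the join command printed above is misplaced, an equivalent one can be regenerated on the control plane at any time:

root@k8s-master01:~# kubeadm token create --print-join-command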


Deploy the network plugin

First, download the flanneld binary matching the OS and hardware platform to every node and place it under /opt/bin/. We use flanneld-amd64 here; the latest version at the moment is v0.20.2. Run the following commands on every node in the cluster:

root@k8s-master01:~# curl -LO https://github.com/flannel-io/flannel/releases/download/v0.20.2/flanneld-amd64
root@k8s-master01:~# mkdir /opt/bin
root@k8s-master01:~# cp flanneld-amd64 /opt/bin/flanneld
root@k8s-master01:~# chmod +x /opt/bin/flanneld
root@k8s-master01:~# ls -l /opt/bin/flanneld
-rwxr-xr-x 1 root root 39358256 Mar 13 20:23 /opt/bin/flanneld
root@k8s-master01:~# scp /opt/bin/flanneld k8s-node01:/opt/bin/
root@k8s-master01:~# scp /opt/bin/flanneld k8s-node02:/opt/bin/
root@k8s-master01:~# scp /opt/bin/flanneld k8s-node03:/opt/bin/
Then, on k8s-master01, the first initialized master node, run the following command to deploy kube-flannel to Kubernetes.

root@k8s-master01:~# kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/v0.20.2/Documentation/kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

root@k8s-master01:~# kubectl get pods -n kube-flannel
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-9ldcx   1/1     Running   0          92s

root@k8s-master01:~# kubectl get nodes
NAME           STATUS   ROLES           AGE   VERSION
k8s-master01   Ready    control-plane   21m   v1.26.2


Join the nodes to the cluster

Run the following command on every worker node:

kubeadm join kubeapi.magedu.com:6443 --token ifc0se.q5gas5s2sgh3rn2f --discovery-token-ca-cert-hash sha256:3a027ce57e2df021bc432431f3d639301c480cd6559f743627dcb9c888f599e3 --cri-socket unix:///run/cri-dockerd.sock

root@k8s-node01:~# kubeadm join kubeapi.magedu.com:6443 --token ifc0se.q5gas5s2sgh3rn2f \
> --discovery-token-ca-cert-hash sha256:3a027ce57e2df021bc432431f3d639301c480cd6559f743627dcb9c888f599e3 --cri-socket unix:///run/cri-dockerd.sock
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

After all nodes have joined, check the node status:

root@k8s-master01:~# kubectl get nodes
NAME           STATUS   ROLES           AGE     VERSION
k8s-master01   Ready    control-plane   28m     v1.26.2
k8s-node01     Ready    <none>          3m36s   v1.26.2
k8s-node02     Ready    <none>          2m25s   v1.26.2
k8s-node03     Ready    <none>          2m14s   v1.26.2

All Pods are in the Running state; the cluster is fully operational.

root@k8s-master01:~# kubectl get pods -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-787d4945fb-mrmg4               1/1     Running   0          29m
coredns-787d4945fb-xbdcm               1/1     Running   0          29m
etcd-k8s-master01                      1/1     Running   0          30m
kube-apiserver-k8s-master01            1/1     Running   0          30m
kube-controller-manager-k8s-master01   1/1     Running   0          30m
kube-proxy-blh45                       1/1     Running   0          3m44s
kube-proxy-fxrmh                       1/1     Running   0          5m6s
kube-proxy-jrcsj                       1/1     Running   0          29m
kube-proxy-kc527                       1/1     Running   0          3m55s
kube-scheduler-k8s-master01            1/1     Running   0          30m


Installation reference:

https://mp.weixin.qq.com/s/ySnENeuIIq98FQNLpF7mYw

Extended assignment: deploy a distributed Kubernetes cluster with kubeasz; a sketch of the workflow follows below.
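This extension was not performed here. As a minimal sketch, kubeasz drives the whole deployment through its ezdown and ezctl wrappers around Ansible playbooks (the release number and cluster name below are assumptions):

# export release=3.5.0                      # assumed kubeasz release
# curl -LO https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
# chmod +x ezdown
# ./ezdown -D                               # download binaries and images into /etc/kubeasz
# /etc/kubeasz/ezctl new k8s-01             # generate inventory/config for a cluster named k8s-01
# vi /etc/kubeasz/clusters/k8s-01/hosts     # fill in the master/node/etcd inventory
# /etc/kubeasz/ezctl setup k8s-01 all       # run all the setup playbooks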

2. Orchestrate demoapp on the cluster, and use a Service for Pod discovery and service publishing.

The API Server supports many resource types out of the box

Group name/version maturity levels:
    stable: carries no level suffix
    betaN: public beta releases

root@k8s-master01:~# kubectl api-versions
admissionregistration.k8s.io/v1
apiextensions.k8s.io/v1
apiregistration.k8s.io/v1
apps/v1
authentication.k8s.io/v1
authorization.k8s.io/v1
autoscaling/v1
autoscaling/v2
batch/v1
certificates.k8s.io/v1
coordination.k8s.io/v1
discovery.k8s.io/v1
events.k8s.io/v1
flowcontrol.apiserver.k8s.io/v1beta2
flowcontrol.apiserver.k8s.io/v1beta3
networking.k8s.io/v1
node.k8s.io/v1
policy/v1
rbac.authorization.k8s.io/v1
scheduling.k8s.io/v1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
# List the resource types in a given group
root@k8s-master01:~# kubectl api-resources --api-group=apps
NAME                  SHORTNAMES   APIVERSION   NAMESPACED   KIND
controllerrevisions                apps/v1      true         ControllerRevision
daemonsets            ds           apps/v1      true         DaemonSet
deployments           deploy       apps/v1      true         Deployment
replicasets           rs           apps/v1      true         ReplicaSet
statefulsets          sts          apps/v1      true         StatefulSet


demoapp: a stateless, Flask-based web application

Create a Deployment to run the stateless application

~# kubectl create deployment --help

A few common options:

--image=IMAGE: the default image for the containers in the Pod template;

--replicas=Num: the number of Pod replicas to run;

--dry-run=client: only test whether the creation would succeed, without actually creating the resource.

root@k8s-master01:~# kubectl create deployment demoapp --image=ikubernetes/demoapp:v1.0 --replicas=3 --dry-run=client -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: demoapp
  name: demoapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demoapp
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: demoapp
    spec:
      containers:
      - image: ikubernetes/demoapp:v1.0
        name: demoapp
        resources: {}
status: {}

root@k8s-master01:~# kubectl create deployment demoapp --image=ikubernetes/demoapp:v1.0 --replicas=3
deployment.apps/demoapp created
# List all objects of a resource type (plural form of the type name)
root@k8s-master01:~# kubectl get deployments
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
demoapp   3/3     3            3           5m35s

# Singular form of the type name
root@k8s-master01:~# kubectl get deployment
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
demoapp   3/3     3            3           5m38s

# Short form of the type name
root@k8s-master01:~# kubectl get deploy
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
demoapp   3/3     3            3           5m40s
root@k8s-master01:~# kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
demoapp-75f59c894-fjl46   1/1     Running   0          6m11s
demoapp-75f59c894-fqpg7   1/1     Running   0          6m11s
demoapp-75f59c894-nrkr6   1/1     Running   0          6m11s

# Show a single named resource object
root@k8s-master01:~# kubectl get pods demoapp-75f59c894-fjl46
NAME                      READY   STATUS    RESTARTS   AGE
demoapp-75f59c894-fjl46   1/1     Running   0          10m

root@k8s-master01:~# kubectl get pods demoapp-75f59c894-fjl46 -o yaml
root@k8s-master01:~# kubectl get pods demoapp-75f59c894-fjl46 -o json

# Show the wide output, with Pod IP and node placement
root@k8s-master01:~# kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
demoapp-75f59c894-fjl46   1/1     Running   0          12m   10.244.3.3   k8s-node03   <none>           <none>
demoapp-75f59c894-fqpg7   1/1     Running   0          12m   10.244.2.3   k8s-node02   <none>           <none>
demoapp-75f59c894-nrkr6   1/1     Running   0          12m   10.244.1.4   k8s-node01   <none>           <none>
root@k8s-master01:~# curl 10.244.3.3
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-75f59c894-fjl46, ServerIP: 10.244.3.3!

After a Pod is deleted, the Deployment's ReplicaSet immediately creates a replacement (demoapp-75f59c894-xlpb7 below):

root@k8s-master01:~# kubectl delete pods demoapp-75f59c894-fjl46
pod "demoapp-75f59c894-fjl46" deleted
root@k8s-master01:~# kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
demoapp-75f59c894-fqpg7   1/1     Running   0          16m   10.244.2.3   k8s-node02   <none>           <none>
demoapp-75f59c894-nrkr6   1/1     Running   0          16m   10.244.1.4   k8s-node01   <none>           <none>
demoapp-75f59c894-xlpb7   1/1     Running   0          40s   10.244.3.4   k8s-node03   <none>           <none>

List all objects of a given resource type:

~# kubectl get TYPE [NAME] -o {json|yaml|wide}

Each resource object's definition:

must follow the format specification of the resource type it instantiates

JSON format

the api-server automatically converts YAML input into JSON
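The conversion is easy to observe: an object submitted as YAML can be read back in the JSON form the server keeps, e.g. with the demoapp Deployment created above:

root@k8s-master01:~# kubectl get deployment demoapp -o json | head -n 8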


Create the Service resource

root@k8s-master01:~# kubectl create service nodeport demoapp --tcp=80:80 --dry-run=client -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: demoapp
  name: demoapp
spec:
  ports:
  - name: 80-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: demoapp
  type: NodePort
status:
  loadBalancer: {}
Check the Pods' labels:

root@k8s-master01:~# kubectl get pods --show-labels
NAME                      READY   STATUS    RESTARTS   AGE     LABELS
demoapp-75f59c894-fqpg7   1/1     Running   0          20m     app=demoapp,pod-template-hash=75f59c894
demoapp-75f59c894-nrkr6   1/1     Running   0          20m     app=demoapp,pod-template-hash=75f59c894
demoapp-75f59c894-xlpb7   1/1     Running   0          4m25s   app=demoapp,pod-template-hash=75f59c894
Create the Service:

root@k8s-master01:~# kubectl create service nodeport demoapp --tcp=80:80
service/demoapp created
Check the Service resources:

root@k8s-master01:~# kubectl get services
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
demoapp      NodePort    10.105.135.180   <none>        80:31863/TCP   10s
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        78m

Repeated requests to the ClusterIP are load-balanced across the three Pods:

root@k8s-master01:~# curl 10.105.135.180
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-75f59c894-nrkr6, ServerIP: 10.244.1.4!
root@k8s-master01:~# curl 10.105.135.180
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-75f59c894-xlpb7, ServerIP: 10.244.3.4!
root@k8s-master01:~# curl 10.105.135.180
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-75f59c894-nrkr6, ServerIP: 10.244.1.4!
root@k8s-master01:~# curl 10.105.135.180
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-75f59c894-xlpb7, ServerIP: 10.244.3.4!
root@k8s-master01:~# curl 10.105.135.180
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-75f59c894-nrkr6, ServerIP: 10.244.1.4!
root@k8s-master01:~# curl 10.105.135.180
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-75f59c894-nrkr6, ServerIP: 10.244.1.4!
root@k8s-master01:~# curl 10.105.135.180
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-75f59c894-nrkr6, ServerIP: 10.244.1.4!
root@k8s-master01:~# curl 10.105.135.180
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-75f59c894-fqpg7, ServerIP: 10.244.2.3!
When a Service is created, an Endpoints object with the same name is created automatically:

root@k8s-master01:~# kubectl get endpoints
NAME         ENDPOINTS                                    AGE
demoapp      10.244.1.4:80,10.244.2.3:80,10.244.3.4:80    2m54s
kubernetes   172.29.9.1:6443                              81m
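The same Service-to-Pod wiring can also be read off the Service object itself (the grep is just to pick out the relevant line):

root@k8s-master01:~# kubectl describe service demoapp | grep -i endpoints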
The Pod resource specification can be looked up on the official site, or directly in the built-in documentation, e.g.:

root@k8s-master01:~# kubectl explain pods
root@k8s-master01:~# kubectl explain pods.spec


3. Using configuration files, orchestrate nginx on the cluster, and use a Service for Pod discovery and service publishing.

root@k8s-master01:~# cat deployment-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:alpine
        name: nginx

root@k8s-master01:~# kubectl create -f deployment-nginx.yaml
deployment.apps/nginx created
root@k8s-master01:~# kubectl get deployments
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
demoapp   3/3     3            3           41m
nginx     2/2     2            2           2m
root@k8s-master01:~# kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE    IP           NODE         NOMINATED NODE   READINESS GATES
demoapp-75f59c894-fqpg7   1/1     Running   0          41m    10.244.2.3   k8s-node02   <none>           <none>
demoapp-75f59c894-nrkr6   1/1     Running   0          41m    10.244.1.4   k8s-node01   <none>           <none>
demoapp-75f59c894-xlpb7   1/1     Running   0          25m    10.244.3.4   k8s-node03   <none>           <none>
nginx-6c557cc74d-65j7j    1/1     Running   0          2m6s   10.244.1.5   k8s-node01   <none>           <none>
nginx-6c557cc74d-m9z4p    1/1     Running   0          2m6s   10.244.2.4   k8s-node02   <none>           <none>


Create the Service

root@k8s-master01:~# cat service-nginx.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  ports:
  - name: 80-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: NodePort

root@k8s-master01:~# kubectl create -f service-nginx.yaml
service/nginx created
root@k8s-master01:~# kubectl get services
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
demoapp      NodePort    10.105.135.180   <none>        80:31863/TCP   23m
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        101m
nginx        NodePort    10.101.239.187   <none>        80:30960/TCP   18s
root@k8s-master01:~# curl 10.101.239.187
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
From outside the cluster, nginx can also be reached via node IP and node port: http://172.29.9.11:30960/
Check the Pod logs:

root@k8s-master01:~# kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
demoapp-75f59c894-fqpg7   1/1     Running   0          48m
demoapp-75f59c894-nrkr6   1/1     Running   0          48m
demoapp-75f59c894-xlpb7   1/1     Running   0          32m
nginx-6c557cc74d-65j7j    1/1     Running   0          8m30s
nginx-6c557cc74d-m9z4p    1/1     Running   0          8m30s
root@k8s-master01:~# kubectl logs nginx-6c557cc74d-65j7j
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/03/13 21:48:45 [notice] 1#1: using the "epoll" event method
2023/03/13 21:48:45 [notice] 1#1: nginx/1.23.3
2023/03/13 21:48:45 [notice] 1#1: built by gcc 12.2.1 20220924 (Alpine 12.2.1_git20220924-r4)
2023/03/13 21:48:45 [notice] 1#1: OS: Linux 5.4.0-144-generic
2023/03/13 21:48:45 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/03/13 21:48:45 [notice] 1#1: start worker processes
2023/03/13 21:48:45 [notice] 1#1: start worker process 30
2023/03/13 21:48:45 [notice] 1#1: start worker process 31
10.244.0.0 - - [13/Mar/2023:21:53:35 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.68.0" "-"
10.244.1.1 - - [13/Mar/2023:21:54:42 +0000] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36" "-"
2023/03/13 21:54:42 [error] 31#31: *2 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 10.244.1.1, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "172.29.9.11:30960", referrer: "http://172.29.9.11:30960/"
10.244.1.1 - - [13/Mar/2023:21:54:42 +0000] "GET /favicon.ico HTTP/1.1" 404 555 "http://172.29.9.11:30960/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36" "-"
# Scale out (manual scaling with kubectl scale)
root@k8s-master01:~# kubectl scale deployment nginx --replicas=6
deployment.apps/nginx scaled
root@k8s-master01:~# kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE     IP           NODE         NOMINATED NODE   READINESS GATES
demoapp-75f59c894-fqpg7   1/1     Running   0          53m     10.244.2.3   k8s-node02   <none>           <none>
demoapp-75f59c894-nrkr6   1/1     Running   0          53m     10.244.1.4   k8s-node01   <none>           <none>
demoapp-75f59c894-xlpb7   1/1     Running   0          37m     10.244.3.4   k8s-node03   <none>           <none>
nginx-6c557cc74d-2xm6r    1/1     Running   0          92s     10.244.1.6   k8s-node01   <none>           <none>
nginx-6c557cc74d-59xxz    1/1     Running   0          92s     10.244.3.5   k8s-node03   <none>           <none>
nginx-6c557cc74d-65j7j    1/1     Running   0          14m     10.244.1.5   k8s-node01   <none>           <none>
nginx-6c557cc74d-dmng8    1/1     Running   0          92s     10.244.2.5   k8s-node02   <none>           <none>
nginx-6c557cc74d-fwz7k    1/1     Running   0          92s     10.244.3.6   k8s-node03   <none>           <none>
nginx-6c557cc74d-m9z4p    1/1     Running   0          14m     10.244.2.4   k8s-node02   <none>           <none>

# Scale in
root@k8s-master01:~# kubectl scale deployment nginx --replicas=4
deployment.apps/nginx scaled
root@k8s-master01:~# kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE     IP           NODE         NOMINATED NODE   READINESS GATES
demoapp-75f59c894-fqpg7   1/1     Running   0          54m     10.244.2.3   k8s-node02   <none>           <none>
demoapp-75f59c894-nrkr6   1/1     Running   0          54m     10.244.1.4   k8s-node01   <none>           <none>
demoapp-75f59c894-xlpb7   1/1     Running   0          38m     10.244.3.4   k8s-node03   <none>           <none>
nginx-6c557cc74d-2xm6r    1/1     Running   0          2m12s   10.244.1.6   k8s-node01   <none>           <none>
nginx-6c557cc74d-59xxz    1/1     Running   0          2m12s   10.244.3.5   k8s-node03   <none>           <none>
nginx-6c557cc74d-65j7j    1/1     Running   0          14m     10.244.1.5   k8s-node01   <none>           <none>
nginx-6c557cc74d-m9z4p    1/1     Running   0          14m     10.244.2.4   k8s-node02   <none>           <none>


Extended assignment: using configuration files, orchestrate wordpress and mysql on the cluster, and use Services for Pod discovery and service publishing; a sketch of possible manifests follows below.

Hint: use environment variables to pass wordpress the mysql server address, database name, user name, and user password.
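This extension was not completed as part of the write-up. The following is a minimal sketch of what the manifests could look like, assuming the official wordpress and mysql images and their documented environment variables (WORDPRESS_DB_HOST, WORDPRESS_DB_NAME, WORDPRESS_DB_USER, WORDPRESS_DB_PASSWORD; MYSQL_ROOT_PASSWORD, MYSQL_DATABASE, MYSQL_USER, MYSQL_PASSWORD). The file name, image tags, and inline plain-text passwords are illustrative only; a real deployment would keep credentials in a Secret.

# wordpress-mysql.yaml -- hypothetical file name
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: magedu.com
        - name: MYSQL_DATABASE
          value: wordpress
        - name: MYSQL_USER
          value: wordpress
        - name: MYSQL_PASSWORD
          value: magedu.com
---
apiVersion: v1
kind: Service
metadata:
  name: mysql              # wordpress reaches the database via this DNS name
spec:
  selector:
    app: mysql
  ports:
  - port: 3306
    targetPort: 3306
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:6.1-apache
        env:
        - name: WORDPRESS_DB_HOST
          value: mysql     # the Service name defined above
        - name: WORDPRESS_DB_NAME
          value: wordpress
        - name: WORDPRESS_DB_USER
          value: wordpress
        - name: WORDPRESS_DB_PASSWORD
          value: magedu.com
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  type: NodePort           # publish outside the cluster, as with nginx above
  selector:
    app: wordpress
  ports:
  - port: 80
    targetPort: 80

Applied with kubectl create -f wordpress-mysql.yaml, the same verification steps as in section 3 would apply.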
