Required Tasks
Deploy a distributed Kubernetes cluster with kubeadm.
Extended task: deploy a distributed Kubernetes cluster with kubeasz.
Orchestrate demoapp on the cluster and use a Service for Pod discovery and service exposure.
Using configuration files, orchestrate nginx on the cluster and use a Service for Pod discovery and service exposure.
Extended task: using configuration files, orchestrate wordpress and mysql on the cluster and use Services for Pod discovery and service exposure.
Kubeadm
Server Specs
Cloud platform: AWS
Instance type: m5.large
Disk size: 100GB gp3
Update System Packages
$ yum update -y
Loaded plugins: extras_suggestions, langpacks, priorities, update-motd
amzn2-core                                      | 3.7 kB  00:00:00
Resolving Dependencies
--> Running transaction check
---> Package chrony.x86_64 0:4.0-3.amzn2.0.2 will be updated
---> Package chrony.x86_64 0:4.2-5.amzn2.0.2 will be an update
...
Installed:
  kernel.x86_64 0:5.10.135-122.509.amzn2
Updated:
  chrony.x86_64 0:4.2-5.amzn2.0.2
  dhclient.x86_64 12:4.2.5-79.amzn2.1.1
  dhcp-common.x86_64 12:4.2.5-79.amzn2.1.1
  dhcp-libs.x86_64 12:4.2.5-79.amzn2.1.1
  gnupg2.x86_64 0:2.0.22-5.amzn2.0.5
  kernel-tools.x86_64 0:5.10.135-122.509.amzn2
  tzdata.noarch 0:2022c-1.amzn2
Complete!
Install K8s yum repo
# cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
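A side note on the `\$basearch` escaping in the heredoc above: in an unquoted heredoc the shell expands `$variables` before `tee` ever sees them, so the backslash is what keeps the literal `$basearch` in the repo file for yum to substitute later. A minimal sketch (the demo file path is just for illustration):

```shell
# Simulate the difference between \$basearch and $basearch in an
# unquoted heredoc: the shell has no $basearch variable set, so the
# unescaped form expands to an empty string.
cat > /tmp/heredoc-demo.txt <<EOF
escaped=\$basearch
unescaped=$basearch
EOF
cat /tmp/heredoc-demo.txt
# escaped=$basearch
# unescaped=
```

This is also why the echoed output from `tee` above shows `$basearch` without the backslash: that is the file content after the shell has stripped the escape.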
Disable SELinux
# setenforce 0
setenforce: SELinux is disabled
# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
Install kubeadm, kubelet and kubectl
# yum install -y kubelet-1.25.0-0 kubeadm-1.25.0-0 kubectl-1.25.0-0 --disableexcludes=kubernetes
...
Installed:
  kubeadm.x86_64 0:1.25.0-0
  kubectl.x86_64 0:1.25.0-0
  kubelet.x86_64 0:1.25.0-0
Complete!
# systemctl enable --now kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Forwarding IPv4 and letting iptables see bridged traffic
# cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# modprobe overlay
# modprobe br_netfilter
# lsmod | egrep "overlay|netfilter"
br_netfilter           32768  0
bridge                258048  1 br_netfilter
overlay               151552  0
Set required system parameters
# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# sysctl --system
...
* Applying /etc/sysctl.conf ...
Install containerd
Download the containerd-<VERSION>-<OS>-<ARCH>.tar.gz archive from https://github.com/containerd/containerd/releases , verify its sha256sum, and extract it under /usr/local:
# tar Cxzvf /usr/local containerd-1.6.8-linux-amd64.tar.gz
bin/
bin/containerd-shim-runc-v2
bin/containerd-shim
bin/ctr
bin/containerd-shim-runc-v1
bin/containerd
bin/containerd-stress
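The checksum verification mentioned above is done with `sha256sum -c`. The real checksum file is published alongside the release archive on the GitHub releases page; the sketch below generates one locally just to illustrate the workflow (the file name here is a stand-in, not the real archive):

```shell
# Stand-in archive; on the real host this would be
# containerd-1.6.8-linux-amd64.tar.gz with the checksum file
# downloaded from the release page rather than generated locally.
echo "demo archive" > demo.tar.gz
sha256sum demo.tar.gz > demo.tar.gz.sha256sum
sha256sum -c demo.tar.gz.sha256sum
# demo.tar.gz: OK
```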
Install containerd systemd service
Download the containerd.service unit file from https://github.com/containerd/containerd/blob/main/containerd.service into /usr/lib/systemd/system/containerd.service, and run the following commands:
# cp containerd.service /usr/lib/systemd/system/
# systemctl daemon-reload
# systemctl enable --now containerd
Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /usr/lib/systemd/system/containerd.service.
Install runc
Download the runc.<ARCH> binary from https://github.com/opencontainers/runc/releases , verify its sha256sum, and install it as /usr/local/sbin/runc.
# install -m 755 runc.amd64 /usr/local/sbin/runc
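`install -m 755` combines the copy and the `chmod` into one step, which is why it is preferred over `cp` for dropping binaries into place. A quick local illustration with a stand-in file (the real binary is the downloaded `runc.amd64`):

```shell
# Stand-in for the downloaded runc.amd64 binary
printf '#!/bin/sh\n' > runc.demo
# Copy and set mode 755 in a single step
install -m 755 runc.demo ./runc-installed
stat -c '%a' ./runc-installed
# 755
```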
Install CNI plugins
Download the cni-plugins-<OS>-<ARCH>-<VERSION>.tgz archive from https://github.com/containernetworking/plugins/releases , verify its sha256sum, and extract it under /opt/cni/bin:
# mkdir -p /opt/cni/bin
# tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.1.1.tgz
./
./macvlan
./static
./vlan
./portmap
./host-local
./vrf
./bridge
./tuning
./firewall
./host-device
./sbr
./loopback
./dhcp
./ptp
./ipvlan
./bandwidth
Pull Required Images
# kubeadm config images pull --v=5
...
[config/images] Pulled registry.k8s.io/kube-apiserver:v1.25.0
[config/images] Pulled registry.k8s.io/kube-controller-manager:v1.25.0
[config/images] Pulled registry.k8s.io/kube-scheduler:v1.25.0
[config/images] Pulled registry.k8s.io/kube-proxy:v1.25.0
[config/images] Pulled registry.k8s.io/pause:3.8
[config/images] Pulled registry.k8s.io/etcd:3.5.4-0
[config/images] Pulled registry.k8s.io/coredns/coredns:v1.9.3
Initialize Control Plane
The control-plane node is the machine where the control plane components run, including etcd (the cluster database) and the API Server (which the kubectl command line tool communicates with).
To initialize the control-plane node run:
# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=172.31.83.51 --node-name "ip-172-31-83-51.ec2.internal" --v=9
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.31.83.51:6443 --token xxxx \
    --discovery-token-ca-cert-hash sha256:1bf0cf842e56f84bdb1156a2635910bda519d20203fcbc2f99127abbf2bd626c
Note that --apiserver-advertise-address must be the node's private IP.
Install Network Plugin
We will use Calico as the cluster network plugin. Download the tigera-operator.yaml and custom-resources.yaml manifests:
# curl https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/tigera-operator.yaml -O
# curl https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/custom-resources.yaml -O
Update the CIDR in custom-resources.yaml to match the --pod-network-cidr=10.244.0.0/16 passed to kubeadm init, instead of the default 192.168.0.0/16.
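One way to make that edit is a single `sed` substitution. The snippet below creates a minimal stand-in for the relevant `ipPools` section of custom-resources.yaml so it is self-contained; on the real host, run only the `sed` line against the downloaded manifest:

```shell
# Stand-in for the ipPools section of the downloaded custom-resources.yaml
cat > custom-resources.yaml <<'EOF'
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 192.168.0.0/16
EOF
# Replace the default pod CIDR with the one passed to kubeadm init
sed -i 's#192.168.0.0/16#10.244.0.0/16#' custom-resources.yaml
grep cidr custom-resources.yaml
#       cidr: 10.244.0.0/16
```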
Install tigera operator
# kubectl create -f tigera-operator.yaml
namespace/tigera-operator created
...
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
# kubectl get po -n tigera-operator
NAME                               READY   STATUS    RESTARTS   AGE
tigera-operator-6675dc47f4-fflhn   1/1     Running   0          44s
Install calico custom resource
# kubectl create -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
Watch the pod creation:
# watch kubectl get po -A
Final verification
# kubectl version --short
Client Version: v1.25.0
Kustomize Version: v4.5.7
Server Version: v1.25.0
# kubectl get node
NAME                           STATUS   ROLES           AGE   VERSION
ip-172-31-83-51.ec2.internal   Ready    control-plane   11m   v1.25.0
Kubeasz
Plan
Deployment node   k8s-1
etcd nodes        k8s-1  k8s-2  k8s-3
Master nodes      k8s-1  k8s-2
Worker nodes      k8s-1  k8s-2  k8s-3
Configure DNS
# All nodes
vim /etc/hosts
10.0.0.18 k8s-1
10.0.0.19 k8s-2
10.0.0.20 k8s-3
Download
cd /opt
# Download the tool script easzup, e.g. for kubeasz version 2.2.0
export release=2.2.0
curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/easzup
chmod +x ./easzup
# Download everything with the tool script
./easzup -D
Installation
# Run kubeasz in a container
cd /opt
./easzup -S
# Install the cluster
# Create Cluster Context
docker exec -it kubeasz easzctl checkout myk8s
# Modify the hosts file
cd /etc/ansible && cp example/hosts.multi-node hosts
# 'etcd' cluster should have odd member(s) (1,3,5,...)
# variable 'NODE_NAME' is the distinct name of a member in 'etcd' cluster
[etcd]
10.0.0.18 NODE_NAME=etcd1
10.0.0.19 NODE_NAME=etcd2
10.0.0.20 NODE_NAME=etcd3
# master node(s)
[kube-master]
10.0.0.18
10.0.0.19
# work node(s)
[kube-node]
10.0.0.18
10.0.0.19
10.0.0.20
# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'yes' to install a harbor server; 'no' to integrate with existed one
# 'SELF_SIGNED_CERT': 'no' you need put files of certificates named harbor.pem and harbor-key.pem in directory 'down'
[harbor]
#192.168.1.8 HARBOR_DOMAIN="harbor.yourdomain.com" NEW_INSTALL=no SELF_SIGNED_CERT=yes
# [optional] loadbalance for accessing k8s from outside
[ex-lb]
#192.168.1.6 LB_ROLE=backup EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=8443
#192.168.1.7 LB_ROLE=master EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=8443
# [optional] ntp server for the cluster
[chrony]
10.0.0.18
[all:vars]
# --------- Main Variables ---------------
# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"
# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="flannel"
# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"
# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.68.0.0/16"
# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="172.20.0.0/16"
# NodePort Range
NODE_PORT_RANGE="20000-40000"
# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="cluster.local."
# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/opt/kube/bin"
# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"
# Deploy Directory (kubeasz workspace)
base_dir="/etc/ansible"
# Configure the whole k8s cluster
docker exec -it kubeasz easzctl setup
Check cluster
# If you get "kubectl: command not found", log out and SSH back in so the updated environment takes effect
$ kubectl version                    # Verify cluster version
$ kubectl get componentstatus        # Verify status of components such as scheduler/controller-manager/etcd
$ kubectl get node                   # Verify node Ready status
$ kubectl get pod --all-namespaces   # Verify cluster pod status; the network plugin, coredns, metrics-server, etc. are installed by default
$ kubectl get svc --all-namespaces   # Verify cluster service status
DemoApp
# Dry run
$ kubectl create deployment demoapp --image=ikubernetes/demoapp:v1.0 --replicas=2 --dry-run=client -oyaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: demoapp
  name: demoapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demoapp
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: demoapp
    spec:
      containers:
      - image: ikubernetes/demoapp:v1.0
        name: demoapp
        resources: {}
status: {}

# Create
$ kubectl create deployment demoapp --image=ikubernetes/demoapp:v1.0 --replicas=2
deployment.apps/demoapp created

$ kubectl get po
NAME                       READY   STATUS    RESTARTS   AGE
demoapp-5748b7ccfc-ml6jp   1/1     Running   0          14s
demoapp-5748b7ccfc-r2fhg   1/1     Running   0          14s
$ kubectl create service nodeport demoapp --tcp=80:80 --dry-run=client -oyaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: demoapp
  name: demoapp
spec:
  ports:
  - name: 80-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: demoapp
  type: NodePort
status:
  loadBalancer: {}

# Check pod labels
$ kubectl get pods --show-labels
NAME                       READY   STATUS    RESTARTS   AGE     LABELS
demoapp-5748b7ccfc-ml6jp   1/1     Running   0          8m37s   app=demoapp,pod-template-hash=5748b7ccfc
demoapp-5748b7ccfc-r2fhg   1/1     Running   0          8m37s   app=demoapp,pod-template-hash=5748b7ccfc

# Create service
$ kubectl create service nodeport demoapp --tcp=80:80
service/demoapp created
$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
demoapp      NodePort    10.100.61.201   <none>        80:32452/TCP   6s
kubernetes   ClusterIP   10.100.0.1      <none>        443/TCP        49m

$ curl 10.100.61.201
iKubernetes demoapp v1.0 !! ClientIP: 192.168.53.241, ServerName: demoapp-5748b7ccfc-r2fhg, ServerIP: 192.168.47.54!
$ curl 10.100.61.201
iKubernetes demoapp v1.0 !! ClientIP: 192.168.46.101, ServerName: demoapp-5748b7ccfc-ml6jp, ServerIP: 192.168.53.241!
$ curl 10.100.61.201
iKubernetes demoapp v1.0 !! ClientIP: 192.168.46.101, ServerName: demoapp-5748b7ccfc-ml6jp, ServerIP: 192.168.53.241!
$ curl 10.100.61.201
iKubernetes demoapp v1.0 !! ClientIP: 192.168.53.241, ServerName: demoapp-5748b7ccfc-r2fhg, ServerIP: 192.168.47.54!
NGINX Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  minReadySeconds: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
        ports:
        - containerPort: 80
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
$ kubectl create -f nginx_deploy.yaml
$ kubectl get po
NAME                            READY   STATUS              RESTARTS   AGE
nginx-deploy-645549fcf7-gwmgj   0/1     ContainerCreating   0          4s
nginx-deploy-645549fcf7-sf97t   0/1     ContainerCreating   0          4s

$ kubectl get po
NAME                            READY   STATUS    RESTARTS   AGE
nginx-deploy-645549fcf7-gwmgj   1/1     Running   0          49s
nginx-deploy-645549fcf7-sf97t   1/1     Running   0          49s

$ kubectl get po -owide
NAME                            READY   STATUS    RESTARTS   AGE     IP             NODE                            NOMINATED NODE   READINESS GATES
nginx-deploy-645549fcf7-gwmgj   1/1     Running   0          2m18s   10.244.8.12    ip-10-230-40-133.ec2.internal   <none>           <none>
nginx-deploy-645549fcf7-sf97t   1/1     Running   0          2m18s   10.244.10.19   ip-10-230-40-41.ec2.internal    <none>           <none>
$ curl 10.244.10.19
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
...
$ curl 10.244.8.12
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   107d

$ kubectl create -f nginx_service.yaml
service/nginx-service created

$ kubectl get svc
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes      ClusterIP   10.96.0.1      <none>        443/TCP   107d
nginx-service   ClusterIP   10.100.164.8   <none>        80/TCP    13s

$ curl 10.100.164.8
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

$ kubectl describe svc nginx-service
Name:              nginx-service
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=nginx
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.100.164.8
IPs:               10.100.164.8
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.10.19:80,10.244.8.12:80
Session Affinity:  None
Events:            <none>
WordPress + MySQL
MySQL Install
---
apiVersion: v1
kind: Secret
metadata:
  name: mysql-user-password
data:
  # wpdb
  database.name: d3BkYg==
  root.password: dGVzdDEyMzQK
  # wpuser
  user.name: d3B1c2Vy
  user.password: dGVzdDEyMzQK
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mysql
  name: mysql
spec:
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
    targetPort: 3306
  selector:
    app: mysql
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mysql
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:8.0
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-user-password
              key: root.password
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: mysql-user-password
              key: user.name
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-user-password
              key: user.password
        - name: MYSQL_DATABASE
          valueFrom:
            secretKeyRef:
              name: mysql-user-password
              key: database.name
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql/
      volumes:
      - name: mysql-data
        emptyDir: {}
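The data values in the Secret are base64-encoded. For reference, this is how such values can be generated; note that plain `echo` appends a newline, which is why `dGVzdDEyMzQK` ends in `K` (an encoded `\n`) — use `echo -n` (or `printf`) when a trailing newline is not wanted:

```shell
echo -n wpdb     | base64       # d3BkYg==
echo -n wpuser   | base64       # d3B1c2Vy
echo    test1234 | base64       # dGVzdDEyMzQK (trailing newline included)
# Decoding shows the stored password is "test1234" plus a newline
echo dGVzdDEyMzQK | base64 -d   # test1234
```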
$ kubectl create -f 01-mysql-secret.yml
secret/mysql-user-password created
$ kubectl create -f 02-mysql-service.yml
service/mysql created
$ kubectl create -f 03-mysql-deployment.yml
deployment.apps/mysql created

$ kubectl get po
NAME                       READY   STATUS    RESTARTS       AGE
demoapp-5748b7ccfc-ml6jp   1/1     Running   0              6h6m
demoapp-5748b7ccfc-r2fhg   1/1     Running   0              6h6m
mysql-5b7db4c447-cfgrb     1/1     Running   1 (2m4s ago)   2m24s
Nginx Install
---
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: nginx-conf
data:
  nginx.conf: |
    server {
        listen 80;
        listen [::]:80;
        server_name tony.com www.tony.com;
        index index.php index.html index.htm;
        root /var/www/html;
        location ~ /.well-known/acme-challenge {
            allow all;
            root /var/www/html;
        }
        location / {
            try_files $uri $uri/ /index.php$is_args$args;
        }
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass wordpress:9000;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
        }
        location ~ /\.ht {
            deny all;
        }
        location = /favicon.ico {
            log_not_found off;
            access_log off;
        }
        location = /robots.txt {
            log_not_found off;
            access_log off;
            allow all;
        }
        location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
            expires max;
            log_not_found off;
        }
    }
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  ports:
  - name: http-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: NodePort
  externalIPs:
  - 54.82.55.154
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: ngxconf
        configMap:
          name: nginx-conf
      - name: wordpress-app-data
        emptyDir: {}
      containers:
      - image: nginx:1.20-alpine
        name: nginx
        volumeMounts:
        - name: ngxconf
          mountPath: /etc/nginx/conf.d/
        - name: wordpress-app-data
          mountPath: /var/www/html/
$ kubectl create -f 01-nginx-configmap.yml
configmap/nginx-conf created
$ kubectl create -f 02-nginx-service.yml
service/nginx created
$ kubectl create -f 03-nginx-deployment.yml
deployment.apps/nginx created
WordPress
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: wordpress
  name: wordpress
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: wordpress
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: wordpress
  name: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - image: wordpress:6.1-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: mysql
        - name: WORDPRESS_DB_USER
          valueFrom:
            secretKeyRef:
              name: mysql-user-password
              key: user.name
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-user-password
              key: user.password
        - name: WORDPRESS_DB_NAME
          valueFrom:
            secretKeyRef:
              name: mysql-user-password
              key: database.name
        volumeMounts:
        - name: wordpress-app-data
          mountPath: /var/www/html/
      volumes:
      - name: wordpress-app-data
        emptyDir: {}
$ kubectl create -f 01-wp-service.yml
service/wordpress created
$ kubectl create -f 02-wp-deploy.yml
deployment.apps/wordpress created
Check all service/pod status
$ kubectl get po
NAME                         READY   STATUS    RESTARTS      AGE
demoapp-5748b7ccfc-ml6jp     1/1     Running   0             6h31m
demoapp-5748b7ccfc-r2fhg     1/1     Running   0             6h31m
mysql-5b7db4c447-cfgrb       1/1     Running   1 (26m ago)   26m
nginx-c965c9c5f-85pt8        1/1     Running   0             7m10s
wordpress-65f98c4b87-c7nq2   1/1     Running   0             7m41s

$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP    PORT(S)        AGE
demoapp      NodePort    10.100.61.201    <none>         80:32452/TCP   6h21m
kubernetes   ClusterIP   10.100.0.1       <none>         443/TCP        7h10m
mysql        ClusterIP   10.100.216.124   <none>         3306/TCP       27m
nginx        NodePort    10.100.75.178    54.82.55.154   80:31980/TCP   21m
wordpress    NodePort    10.100.41.153    <none>         80:30969/TCP   9m17s