Adding a master node to a k8s cluster fails with a control plane error
By: Tianyi Cloud (天翼云) Developer Community
- 2025-08-08, Beijing
Word count: 2,151
Estimated reading time: about 7 minutes
This article is shared from the Tianyi Cloud Developer Community post "k8s cluster adding master node reports control plane error", author: SummerSnow
Background
While adding a new master node to a freshly deployed k8s cluster, the join failed with "error execution phase preflight: One or more conditions for hosting a new control plane instance is not satisfied". This article walks through diagnosing and fixing that error.
Environment
Kubernetes version
kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", GitCommit:"xxx", GitTreeState:"clean", BuildDate:"2022-03-16T15:58:47Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
Deployment method
kubeadm
Current nodes
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready control-plane,master 24h v1.23.5
Joining a master node
# Step 1: print the join command for the cluster
kubeadm token create --print-join-command
kubeadm join xx.xx.xx.xx:6443 --token xxxx.xxxx --discovery-token-ca-cert-hash sha256:xxx
# Step 2: upload the control-plane certificates and print the certificate key
kubeadm init phase upload-certs --upload-certs
I0520 14:51:22.848075 1096 version.go:255] remote version is much newer: v1.30.1; falling back to: stable-1.23
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
aaaaaxxxxxxxxxxxxxxxxxx
# Step 3: splice the two together (the join command from step 1 plus the certificate key from step 2)
kubeadm join xx.xx.xx.xx:6443 --token xxxx.xxxx --discovery-token-ca-cert-hash sha256:xxx --control-plane --certificate-key aaaaaxxxxxxxxxxxxxxxxxx
# Step 4: run the command from step 3 on the node to be added; it fails as follows:
[preflight] Running pre-flight checks
[WARNING Hostname]: hostname "xxx" could not be reached
[WARNING Hostname]: hostname "xxx": lookup xxx on 114.114.114.114:53: no such host
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
error execution phase preflight:
One or more conditions for hosting a new control plane instance is not satisfied.
unable to add a new control plane instance to a cluster that doesn't have a stable controlPlaneEndpoint address
Please ensure that:
* The cluster has a stable controlPlaneEndpoint address.
* The certificates that must be shared among control plane instances are provided.
To see the stack trace of this error execute with --v=5 or higher
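The preflight check fails because kubeadm cannot find a controlPlaneEndpoint in the cluster's kubeadm-config ConfigMap. As a rough sketch of the check (using a simulated config fragment rather than a live cluster; on a real cluster you would save the output of `kubectl -n kube-system get cm kubeadm-config -o yaml` instead of the heredoc):

```shell
# Simulated copy of the ClusterConfiguration; on a real cluster, fetch it with:
#   kubectl -n kube-system get cm kubeadm-config -o yaml > /tmp/kubeadm-config.yaml
cat > /tmp/kubeadm-config.yaml <<'EOF'
kind: ClusterConfiguration
kubernetesVersion: v1.23.5
networking:
  dnsDomain: cluster.local
EOF

# A cluster that can accept new control-plane nodes must have this key set.
if grep -q 'controlPlaneEndpoint:' /tmp/kubeadm-config.yaml; then
  echo "controlPlaneEndpoint is set"
else
  echo "controlPlaneEndpoint missing: joining with --control-plane will fail"
fi
```

The simulated fragment above mirrors the situation in this article: the key is absent, so the join aborts in preflight.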
Solutions
Method 1
# 1. Inspect the kubeadm-config ConfigMap (only part of the output is shown)
kubectl get cm kubeadm-config -n kube-system -oyaml
apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      extraArgs:
        authorization-mode: Node,RBAC
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta3
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controllerManager: {}
    dns: {}
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: xxx
    kind: ClusterConfiguration
    kubernetesVersion: v1.23.5
    networking:
      dnsDomain: cluster.local
      podSubnet: xx.xx.0.0/16
      serviceSubnet: xx.xx.0.0/16
    scheduler: {}
# 2. Add controlPlaneEndpoint
kubectl edit cm kubeadm-config -n kube-system
apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      extraArgs:
        authorization-mode: Node,RBAC
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta3
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controllerManager: {}
    dns: {}
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: xxx
    kind: ClusterConfiguration
    kubernetesVersion: v1.23.5
    # Add the following line here
    controlPlaneEndpoint: "xx.xx.xx.xx:port"  # replace with your own IP and port
    networking:
      dnsDomain: cluster.local
      podSubnet: xx.xx.0.0/16
      serviceSubnet: xx.xx.0.0/16
    scheduler: {}
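The same one-line change can also be scripted rather than done interactively. A minimal sketch, operating on a saved local copy of the configuration with a placeholder endpoint `10.0.0.10:6443` (substitute your own stable API server address and port); on the live cluster, the `kubectl edit` shown above achieves the same result:

```shell
# Work on a local copy of the ClusterConfiguration (placeholder content).
cat > /tmp/cluster-config.yaml <<'EOF'
kind: ClusterConfiguration
kubernetesVersion: v1.23.5
networking:
  dnsDomain: cluster.local
EOF

# Insert controlPlaneEndpoint right after kubernetesVersion (GNU sed).
# 10.0.0.10:6443 is a placeholder; use your own stable IP/VIP and port.
sed -i '/^kubernetesVersion:/a controlPlaneEndpoint: "10.0.0.10:6443"' /tmp/cluster-config.yaml

cat /tmp/cluster-config.yaml
```

A stable virtual IP or load-balancer address is preferable to a single node's IP here, since the point of controlPlaneEndpoint is to survive individual master failures.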
Method 2
If the cluster has only this single node, you can also reset it and re-initialize with a controlPlaneEndpoint set from the start:
# 1. Reset kubeadm and remove leftover files
kubeadm reset -f
rm -rf /etc/kubernetes
rm -rf ~/.kube
# 2. Re-initialize
kubeadm init --kubernetes-version 1.23.5 --control-plane-endpoint "xx.xx.xx.xx:port" --pod-network-cidr=xx.xx.xx.xx/16 --service-cidr=xx.xx.xx.xx/16 --upload-certs
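Instead of long flag lists, kubeadm also accepts the same settings from a config file via `kubeadm init --config`, which keeps controlPlaneEndpoint recorded from the very first init. A sketch of such a file; the endpoint and CIDRs below are placeholders to replace with your own:

```shell
# Write a kubeadm init configuration; all addresses/CIDRs are placeholders.
cat > /tmp/kubeadm-init.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.5
controlPlaneEndpoint: "10.0.0.10:6443"   # stable VIP/LB address for the API server
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/16
EOF

cat /tmp/kubeadm-init.yaml
# Then initialize with:
#   kubeadm init --config /tmp/kubeadm-init.yaml --upload-certs
```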