Building a k8s Cluster by Hand from 0 to 1 - Initializing the Master Node
- 2023-12-13, Jilin
There are already several mature tools for building a k8s cluster, such as kubekey and kubespray, but most of them are essentially wrappers around kubeadm. This article walks through a concrete example of building a k8s cluster step by step with kubeadm by hand.
The k8s version deployed in this article is 1.20.4; the procedure is largely the same for other versions.
This chapter covers initializing the first node of the cluster (the master). Later chapters will describe how additional master nodes and worker nodes join the cluster. If you only want to try out k8s, a single node is enough.
1. Overall Architecture
The cluster consists of three master nodes and two worker nodes:
master1:192.168.56.10
master2:192.168.56.11
master3:192.168.56.12
node1:192.168.56.13
node2:192.168.56.14
2. Deploying the master1 Node
2.1 Environment Initialization
Disable the firewall, swap, and SELinux
# Disable the firewall
sudo systemctl stop firewalld && sudo systemctl disable firewalld
sudo systemctl stop ufw && sudo systemctl disable ufw
# Disable swap (comment out the swap entry in /etc/fstab)
sudo swapoff -a
sudo sed -i '/^[^#]*swap/s/^/#/' /etc/fstab
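SELinux only exists on CentOS/RHEL-family hosts (Ubuntu does not enable it). Where it is present, the usual kubeadm prerequisite is to switch it to permissive mode, roughly as follows:
# Put SELinux into permissive mode (CentOS/RHEL only); the config change persists across reboots
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config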
Configure /etc/hosts with the cluster hostnames
192.168.56.10 master1
192.168.56.11 master2
192.168.56.12 master3
192.168.56.13 node1
192.168.56.14 node2
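One convenient way to append these entries on every node (a sketch; adjust the IPs to your environment):
cat <<EOF | sudo tee -a /etc/hosts
192.168.56.10 master1
192.168.56.11 master2
192.168.56.12 master3
192.168.56.13 node1
192.168.56.14 node2
EOF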
Deploy Docker
# One-command Docker install (using the Aliyun mirror)
curl -fsSL https://get.docker.com | sudo bash -s docker --mirror Aliyun
sudo systemctl enable docker && sudo systemctl restart docker
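Optionally, and as recommended by the Kubernetes documentation, switch Docker to the systemd cgroup driver; kubeadm detects Docker's driver during init and configures the kubelet to match. A minimal /etc/docker/daemon.json sketch:
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker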
Install the required dependencies
sudo apt-get install -y socat conntrack ebtables ipset ipvsadm
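kubeadm's preflight checks also expect bridged traffic to be visible to iptables and IP forwarding to be enabled. The following is the standard snippet from the upstream kubeadm prerequisites:
# Load br_netfilter and make bridged traffic visible to iptables; enable IP forwarding
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system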
Set the hostname
sudo hostnamectl set-hostname master1
2.2 Deploying the k8s Binaries
Copy kubelet, kubectl, and kubeadm to /usr/local/bin and make them executable, for example:
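Assuming the v1.20.4 binaries have already been downloaded into the current directory (the download location here is just an example):
# The source directory is hypothetical; adjust it to wherever the binaries were downloaded
sudo cp kubeadm kubelet kubectl /usr/local/bin/
sudo chmod +x /usr/local/bin/kubeadm /usr/local/bin/kubelet /usr/local/bin/kubectl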
Create the kubelet systemd unit /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/
[Service]
CPUAccounting=true
MemoryAccounting=true
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
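Create the drop-in directory before writing the configuration file in the next step:
sudo mkdir -p /etc/systemd/system/kubelet.service.d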
Create the kubelet drop-in configuration /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generate at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
Environment="KUBELET_EXTRA_ARGS=--node-ip=192.168.56.10 --hostname-override=master1"
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
Enable kubelet
sudo systemctl daemon-reload
sudo systemctl disable kubelet
sudo systemctl enable kubelet
sudo ln -snf /usr/local/bin/kubelet /usr/bin/kubelet
2.3 Initializing the Master Node
Initialize the master with kubeadm. This step deploys the control-plane components: etcd, kube-apiserver, kube-controller-manager, and kube-scheduler.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU --apiserver-advertise-address=192.168.56.10 --image-repository registry.aliyuncs.com/google_containers
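When kubeadm init finishes, it prints the commands for pointing kubectl at the new cluster (for a regular user, using kubeadm's default admin.conf path), along with a kubeadm join command that the later chapters will need. The kubectl setup looks like this:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config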
Deploy the flannel network plugin (or calico). Create network-plugin.yaml with the following content:
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
      k8s-app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
        k8s-app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      priorityClassName: system-node-critical
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        command:
        - cp
        image: flannel/flannel-cni-plugin:v1.1.2
        volumeMounts:
        - mountPath: /opt/cni/bin
          name: cni-plugin
      - name: install-cni
        image: flannel/flannel:v0.21.3
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: flannel/flannel:v0.21.3
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - mountPath: /run/xtables.lock
          name: xtables-lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
Apply the manifest to deploy flannel
kubectl apply -f network-plugin.yaml
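To verify the deployment, check that the flannel pod comes up and that master1 turns Ready:
kubectl get nodes -o wide
kubectl get pods -n kube-system
If you intend to run workloads on this single node, also remove the control-plane taint that kubeadm applies by default (for 1.20 the taint key is node-role.kubernetes.io/master):
kubectl taint nodes master1 node-role.kubernetes.io/master:NoSchedule-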