01 - Kubernetes Installation and Deployment (Manual)

绿星雪碧
Published: June 10, 2020

This article walks through a fully manual installation of a Kubernetes cluster on Linux machines, including the companion components that must be installed along the way, such as etcd and flannel. It deliberately avoids quick-install tools like kubeadm, the goal being to understand what each Kubernetes component actually does. The walkthrough uses three Linux machines: one as the master and two as worker nodes.



  1. Environment preparation

Prepare the three machines that will host the Kubernetes cluster.



Nodes and roles:

  • k8s-master (IP: 10.1.1.44): runs the Master components and etcd

  • k8s-node-1 (IP: 10.1.1.46)

  • k8s-node-2 (IP: 10.1.1.152)



Set the hostnames of the three machines:

On the master, run:

hostnamectl --static set-hostname  k8s-master

On node1, run:

hostnamectl --static set-hostname k8s-node-1

On node2, run:

hostnamectl --static set-hostname k8s-node-2



Configure /etc/hosts on all three machines by running:

echo '10.1.1.44 k8s-master
10.1.1.44 etcd
10.1.1.44 registry
10.1.1.46 k8s-node-1
10.1.1.152 k8s-node-2' >> /etc/hosts

If you add new node machines later, remember to add their hosts entries on the master as well.
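Adding a node's entry can be made idempotent, so that re-running the setup never duplicates lines in /etc/hosts. A minimal sketch (the helper name and the example node/IP are hypothetical, not part of the original setup):

```shell
# add_host_entry FILE IP NAME
# Appends "IP NAME" to FILE unless NAME already has an entry, so the
# script can be re-run safely on the master.
add_host_entry() {
    file=$1; ip=$2; name=$3
    if ! grep -q "[[:space:]]${name}\$" "$file" 2>/dev/null; then
        echo "$ip $name" >> "$file"
    fi
}

# Example for a hypothetical new node:
# add_host_entry /etc/hosts 10.1.1.200 k8s-node-3
```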



Disable the firewall on all three machines:

systemctl disable firewalld
systemctl stop firewalld




  2. Deploying etcd

On the master node, run:

yum install etcd -y

Edit the etcd configuration file and change the uncommented entries (ETCD_NAME, ETCD_LISTEN_CLIENT_URLS, and ETCD_ADVERTISE_CLIENT_URLS):

[root@localhost ~]# vi /etc/etcd/etcd.conf

 

[member]
ETCD_NAME=master
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""



Start etcd and verify its status:

[root@localhost ~]# systemctl enable etcd
[root@localhost ~]# systemctl start etcd
[root@localhost ~]# etcdctl set testdir/testkey0 0
0
[root@localhost ~]# etcdctl get testdir/testkey0
0
[root@localhost ~]# etcdctl -C http://etcd:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy
[root@localhost ~]# etcdctl -C http://etcd:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy
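When scripting this verification, it helps to gate later steps on the health verdict; a small sketch that checks the final line of the v2 etcdctl output shown above (the helper name is my own):

```shell
# etcd_is_healthy: reads `etcdctl cluster-health` output on stdin and
# succeeds only if the verdict line reports a healthy cluster.
etcd_is_healthy() {
    grep -q '^cluster is healthy$'
}

# Typical use on the master:
# etcdctl -C http://etcd:2379 cluster-health | etcd_is_healthy && echo "etcd OK"
```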




  3. Deploying the master

  • Install Docker

yum install docker

Modify the Docker configuration so that Docker can pull images. Edit the Docker service file:

vim /usr/lib/systemd/system/docker.service

Add the following under the [Service] section (replace the credentials, proxy IP, and port with your own):

Environment="HTTP_PROXY=http://username:password@<代理ip>:<代理port>"

To switch to the DaoCloud registry mirror (hosted in China), run:

echo "DOCKER_OPTS=\"\$DOCKER_OPTS --registry-mirror=http://f2d6cb40.m.daocloud.io\"" | sudo tee -a /etc/default/docker

Point Docker at your own private registry address:

vi /etc/sysconfig/docker
OPTIONS='--selinux-enabled --insecure-registry=<私服库ip>:<私服库port> --log-driver=json-file --signature-verification=false'
  • Install Kubernetes

yum install kubernetes

Edit the Kubernetes apiserver configuration (the uncommented entries below are the ones that need changing):

[root@k8s-master ~]# vim /etc/kubernetes/apiserver

 

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""
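Before restarting services it is worth confirming that the required keys are actually set in the file, not still commented out; a small grep-based sketch (the helper is illustrative, not part of Kubernetes):

```shell
# check_keys FILE KEY...
# Fails, naming the first offender, if any KEY is missing from FILE
# or only present as a commented-out line.
check_keys() {
    file=$1; shift
    for key in "$@"; do
        grep -q "^${key}=" "$file" || { echo "missing: $key"; return 1; }
    done
}

# Example usage on the master:
# check_keys /etc/kubernetes/apiserver KUBE_API_ADDRESS KUBE_API_PORT KUBE_ETCD_SERVERS
```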



Edit the Kubernetes common config (KUBE_MASTER is the entry that needs changing):

[root@k8s-master ~]# vim /etc/kubernetes/config

 

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://<master ip>:<master port>"


  4. Deploying the nodes

  • Install Docker

Follow the Docker installation steps in Section 3.

  • Install Kubernetes

Follow the Kubernetes installation steps in Section 3.

  • Configure and start Kubernetes

Edit the config file (KUBE_MASTER is the entry that needs changing):

[root@K8s-node-1 ~]# vim /etc/kubernetes/config

 

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"

Edit the kubelet file:

[root@K8s-node-1 ~]# vim /etc/kubernetes/kubelet

 

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s-node-1"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://<master ip>:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""


  5. Creating the overlay network with flannel

  • Install flannel

Install flannel on the master and on every node:

yum install flannel

  • Configure flannel

[root@k8s-master ~]# vi /etc/sysconfig/flanneld

 

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

  • Configure the flannel key in etcd

Flannel stores its configuration in etcd, which keeps multiple flannel instances consistent with each other, so the following key must be set in etcd. The key /atomic.io/network/config corresponds to the FLANNEL_ETCD_PREFIX setting in /etc/sysconfig/flanneld above; if the two do not match, flanneld will fail to start. Since etcd is installed only on the master, run this command on the master only:

[root@k8s-master ~]# etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'

{ "Network": "10.0.0.0/16" }
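One thing worth checking by hand: the flannel pod network (10.0.0.0/16 here) must not overlap the --service-cluster-ip-range configured in the apiserver (10.254.0.0/16 above). A pure-shell sketch for checking two CIDRs for overlap (an illustrative helper, assuming 64-bit shell arithmetic):

```shell
# ip2int: dotted-quad IPv4 address -> 32-bit integer
ip2int() {
    set -- $(echo "$1" | tr '.' ' ')
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# cidrs_overlap A.B.C.D/N W.X.Y.Z/M
# Two CIDR ranges intersect iff one contains the other, i.e. their
# network bits agree under the shorter of the two prefixes.
cidrs_overlap() {
    ip1=${1%/*}; len1=${1#*/}
    ip2=${2%/*}; len2=${2#*/}
    min=$(( len1 < len2 ? len1 : len2 ))
    mask=$(( (0xffffffff << (32 - min)) & 0xffffffff ))
    [ $(( $(ip2int "$ip1") & mask )) -eq $(( $(ip2int "$ip2") & mask )) ]
}

# cidrs_overlap 10.0.0.0/16 10.254.0.0/16 || echo "disjoint: safe to use"
```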


  6. Starting the services

Enable each service to start at boot, start them, and check their status. (The full list below applies to the master; on the nodes, only flanneld, docker, kubelet, and kube-proxy are relevant.)

systemctl enable flanneld.service docker kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy

systemctl start flanneld.service
systemctl start docker
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl start kubelet
systemctl start kube-proxy

Or start them all with a single command:

systemctl start flanneld.service docker kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy


systemctl status flanneld.service -l
systemctl status docker -l
systemctl status kube-apiserver -l
systemctl status kube-controller-manager -l
systemctl status kube-scheduler -l
systemctl status kubelet -l
systemctl status kube-proxy -l
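The repetitive enable/start commands above can be driven by one loop. A sketch that defaults to a dry run (printing the systemctl commands) so it can be previewed safely; set DRY_RUN=0 to execute for real:

```shell
# Enable and start every cluster service in order.
# DRY_RUN defaults to 1: commands are printed, not executed.
SERVICES="flanneld docker kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy"

run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "$*"
    else
        "$@"
    fi
}

for svc in $SERVICES; do
    run systemctl enable "$svc"
    run systemctl start "$svc"
done
```

On a node, trim SERVICES to `flanneld docker kubelet kube-proxy`.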

On the master, you can view the cluster's node information with:

kubectl get nodes
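If everything came up, both nodes should report a Ready status. A small sketch that counts the Ready entries in the `kubectl get nodes` output (the awk helper and the column layout it assumes are illustrative):

```shell
# ready_nodes: reads `kubectl get nodes` output on stdin and prints how
# many nodes have STATUS exactly "Ready" (skipping the header row).
ready_nodes() {
    awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }'
}

# kubectl get nodes | ready_nodes
```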


