01 - Kubernetes Installation and Deployment (Manual)
This article walks through a fully manual installation of a Kubernetes cluster on Linux machines, including the companion components that must be installed along with it, such as etcd and flannel. It deliberately avoids quick-install tools such as kubeadm, so that you can see what each Kubernetes component actually does and which tasks it is responsible for. The walkthrough uses three Linux machines: one as the master and two as node machines.
Environment Preparation
Prepare the three machines on which the Kubernetes cluster will be deployed.
Nodes and their roles:
k8s-master (IP: 10.1.1.44): runs the Master components and etcd
node1 (IP: 10.1.1.46)
node2 (IP: 10.1.1.152)
Set the hostnames of the three machines (a sketch of the commands follows below):
Run on the master
Run on node1
Run on node2
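On CentOS 7 (the yum-based setup assumed throughout this article), the hostnames can be set with hostnamectl. A minimal sketch, using the hostnames from the node list above:
[root@localhost ~]# hostnamectl set-hostname k8s-master    # run on the master
[root@localhost ~]# hostnamectl set-hostname node1         # run on node1
[root@localhost ~]# hostnamectl set-hostname node2         # run on node2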
Configure the hosts file on all three machines by running the following commands on each of them:
If node machines are added later, their hosts entries must also be added on the master.
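A minimal sketch of the hosts entries, using the IPs from the node list above. The etcd alias for the master is an assumption added here, because the etcd, apiserver, and flannel configurations later in this article all reference the hostname etcd:
[root@localhost ~]# cat >> /etc/hosts <<EOF
10.1.1.44 k8s-master
10.1.1.44 etcd
10.1.1.46 node1
10.1.1.152 node2
EOF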
Disable the firewall on all three machines.
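On CentOS 7 this means stopping and disabling firewalld (a minimal sketch):
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld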
Deploying etcd
Run the following on the master node:
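Assuming the yum-based setup used throughout this article, etcd can be installed from the default repositories:
[root@k8s-master ~]# yum install etcd -y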
Edit the etcd configuration file, changing the entries that are left uncommented below:
[root@localhost ~]# vi /etc/etcd/etcd.conf
[member]
ETCD_NAME=master
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
#if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
Start the etcd service and verify its status:
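A minimal sketch of the systemd commands; etcdctl cluster-health should report the single member as healthy:
[root@k8s-master ~]# systemctl enable etcd
[root@k8s-master ~]# systemctl start etcd
[root@k8s-master ~]# etcdctl cluster-health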
Deploying the Master
Install Docker
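Assuming the Docker package from the CentOS extras repository (a sketch):
[root@k8s-master ~]# yum install docker -y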
Modify the Docker configuration so that Docker can pull images: edit the Docker configuration file and add the registry settings, pointing either at the domestic DaoCloud mirror or at your own private registry address.
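A minimal sketch, assuming the Docker daemon reads /etc/docker/daemon.json; the mirror URL and the private registry address are placeholders, not values from the original:
[root@k8s-master ~]# vi /etc/docker/daemon.json
{
  "registry-mirrors": ["http://<your-mirror-id>.m.daocloud.io"],
  "insecure-registries": ["<your-private-registry>:5000"]
}
[root@k8s-master ~]# systemctl restart docker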
Install Kubernetes
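Assuming the kubernetes package from the CentOS extras repository, which provides the /etc/kubernetes/* configuration files edited below:
[root@k8s-master ~]# yum install kubernetes -y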
Modify the kube-apiserver configuration (only the uncommented entries below need attention):
[root@k8s-master ~]# vim /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""
Modify the Kubernetes config file (only the uncommented entries below need attention):
[root@k8s-master ~]# vim /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://<master ip>:<master port>"
Deploying the Nodes
Install Docker
Follow the Docker installation steps from Section 3 (Deploying the Master).
Install Kubernetes
Follow the Kubernetes installation steps from Section 3.
Configure and start Kubernetes
Modify the config file (only the uncommented entries below need attention):
[root@K8s-node-1 ~]# vim /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://10.1.1.44:8080"
Modify the kubelet file:
[root@K8s-node-1 ~]# vim /etc/kubernetes/kubelet
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s-node-1"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://10.1.1.44:8080"
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# Add your own!
KUBELET_ARGS=""
Creating the Overlay Network – flannel
Install flannel
Install flannel on both the master and the nodes:
yum install flannel
Configure flannel:
[root@k8s-master ~]# vi /etc/sysconfig/flanneld
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
Configure the flannel key in etcd
Flannel uses etcd to store its configuration, which keeps the configuration consistent across multiple flannel instances, so the following entry must be written into etcd. The key /atomic.io/network/config corresponds to the FLANNEL_ETCD_PREFIX setting in /etc/sysconfig/flanneld above; if the two do not match, flannel will fail to start. Since etcd is only installed on the master, run the following command on the master only:
[root@k8s-master ~]# etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'
{ "Network": "10.0.0.0/16" }
Starting the Services
Set each service to start on boot, start them, and check that they started successfully.
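A minimal sketch of the systemd commands, assuming the service names installed by the packages above; flanneld is restarted before docker so that docker picks up the flannel subnet. On the master:
[root@k8s-master ~]# systemctl enable etcd flanneld docker kube-apiserver kube-controller-manager kube-scheduler
[root@k8s-master ~]# systemctl restart flanneld
[root@k8s-master ~]# systemctl restart docker
[root@k8s-master ~]# systemctl restart kube-apiserver kube-controller-manager kube-scheduler
On each node:
[root@K8s-node-1 ~]# systemctl enable flanneld docker kubelet kube-proxy
[root@K8s-node-1 ~]# systemctl restart flanneld
[root@K8s-node-1 ~]# systemctl restart docker
[root@K8s-node-1 ~]# systemctl restart kubelet kube-proxy
Check the status of any service with, for example:
[root@k8s-master ~]# systemctl status kube-apiserver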
You can then check the cluster's node information by running the following command on the master:
kubectl get nodes
Copyright notice: this article is an original piece by InfoQ author 绿星雪碧.
Original link: http://xie.infoq.cn/article/edc2a96c857c1a387980567c0. Please contact the author before reprinting.