Nebula Operator Cloud Practice
Hi everyone! Nebula Operator has been open source for a while now, and we published an introductory blog post earlier, but there has not been a hands-on post yet. Well, here it is:
The practice guide has arrived!
About Nebula Operator
For an introduction to Nebula Operator, please refer to our earlier blog post: A Detailed Guide to Nebula Operator, the Cloud-Based Automated Cluster Deployment and Management Tool.
This post focuses on hands-on practice so that you can get started with Nebula Operator quickly and experience the fun of a graph database!
Nebula Operator Cloud Practice
Now let's get down to business. This walkthrough uses Alibaba Cloud for the Nebula Operator practice; the steps are similar on other cloud providers.
Install the Tools
This walkthrough requires a few basic tools on your workstation, namely kubectl, Helm, and Docker, which the commands below rely on.
For installation instructions for these tools, please refer to their official documentation.
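Before moving on, a quick sanity check of the tools never hurts; a minimal sketch (the exact output depends on the versions you installed):
# Confirm the basic tools are available on your PATH
$ kubectl version --client
$ helm version
$ docker version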
创建云上 Kubernetes
因为 Operator 是依托于 Kubernetes 的,所以在进行 Nebula Operator 实践之前,需要先准备好 Kubernetes 环境。
首先进入到阿里云的控制台,然后进入到容器服务 Kubernetes 版,再创建一个集群。此实践选择的是 ACK 托管版,相关的创建参数请按需选择。
注意: 为了方便外网访问 Kubernetes API Server ,本次实践勾选了使用 EIP 暴露 API Server,你可以根据自身情况选择是否启用,如果不开启,你需要打通操作电脑与 Kubernetes 的之间网络。其他参数请按需选择。
等待 Kubernetes 集群启动后,将集群的连接信息中公网访问中的内容复制到计算机$HOME/.kube/config
文件中。
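If you prefer doing this from the command line, here is a minimal sketch, assuming you saved the public-access kubeconfig content into a local file named ack-kubeconfig.yaml (the file name is only an example):
# Place the kubeconfig copied from the ACK console into the default location
$ mkdir -p $HOME/.kube
$ cp ack-kubeconfig.yaml $HOME/.kube/config
# Or keep an existing config untouched and point KUBECONFIG at the new file instead
$ export KUBECONFIG=$PWD/ack-kubeconfig.yaml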
Then you can verify the Kubernetes cluster with the following command:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
cn-beijing.192.168.250.13 Ready <none> 51m v1.20.4-aliyun.1
cn-beijing.192.168.250.185 Ready <none> 51m v1.20.4-aliyun.1
cn-beijing.192.168.250.89 Ready <none> 51m v1.20.4-aliyun.1
Install Nebula Operator Dependencies
Before installing Nebula Operator, a few dependencies need to be installed first.
Install CertManager
# Install CertManager
$ helm install cert-manager cert-manager --repo https://charts.jetstack.io \
--namespace cert-manager --create-namespace --version v1.3.1 \
--set installCRDs=true
# Wait a moment, then check that CertManager is running properly
$ kubectl -n cert-manager get pod
NAME READY STATUS RESTARTS AGE
cert-manager-7998c69865-jfw9x 1/1 Running 0 93s
cert-manager-cainjector-7b744d56fb-846w9 1/1 Running 0 93s
cert-manager-webhook-7d6d4c78bc-ssk4w 1/1 Running 0 93s
Install OpenKruise
# Install OpenKruise
$ helm install kruise \
https://github.com/openkruise/kruise/releases/download/v0.8.1/kruise-chart.tgz
# Wait a moment, then check that OpenKruise is running properly
$ kubectl -n kruise-system get pod
NAME READY STATUS RESTARTS AGE
kruise-controller-manager-6797f89d9b-ppv65 1/1 Running 0 49s
kruise-controller-manager-6797f89d9b-wlkbd 1/1 Running 0 49s
kruise-daemon-7rljq 1/1 Running 0 49s
kruise-daemon-8kd8d 1/1 Running 0 49s
kruise-daemon-n6tdw 1/1 Running 0 49s
Add the Nebula Operator Charts
# Add the Nebula Operator charts repo
$ helm repo add nebula-operator https://vesoft-inc.github.io/nebula-operator/charts
# Update the repo
$ helm repo update
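If you want to see which chart versions the repo offers before installing, helm search repo can list them; the versions you see may be newer than the 0.1.0 used below:
# List the charts available in the nebula-operator repo
$ helm search repo nebula-operator
# Show every published version of the operator chart
$ helm search repo nebula-operator/nebula-operator --versions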
Install Nebula Operator
Because images from gcr.io and k8s.gcr.io cannot be pulled on Alibaba Cloud, mirror images hosted in China have to be specified instead. In this walkthrough, kube-rbac-proxy and kube-scheduler are replaced with the kubesphere mirrors (kubesphere/kube-rbac-proxy:v0.8.0 and kubesphere/kube-scheduler:v1.18.8) via the --set image.* flags in the install command below.
You can view all the configurable parameters with the following command:
$ helm show values nebula-operator/nebula-operator
The install command used in this walkthrough is as follows:
# Install Nebula Operator
$ helm install nebula-operator nebula-operator/nebula-operator \
--namespace nebula-operator-system --create-namespace --version 0.1.0 \
--set image.kubeRBACProxy.image=kubesphere/kube-rbac-proxy:v0.8.0 \
--set image.kubeScheduler.image=kubesphere/kube-scheduler:v1.18.8
# Wait a moment, then check that Nebula Operator is running properly
$ kubectl -n nebula-operator-system get pod
NAME READY STATUS RESTARTS AGE
nebula-operator-controller-manager-deployment-6968547fff-k62b4 2/2 Running 0 19s
nebula-operator-controller-manager-deployment-6968547fff-lhpdx 2/2 Running 0 19s
nebula-operator-scheduler-deployment-7c5fc7945-hbkv8 2/2 Running 0 19s
nebula-operator-scheduler-deployment-7c5fc7945-sxc7w 2/2 Running 0 19s
If you have customized the cluster domain of your Kubernetes cluster, you need to modify the install command and additionally set kubernetesClusterDomain, as follows:
# Install Nebula Operator; replace <<YourCustomClusterDomain>> with your own cluster domain
$ helm install nebula-operator nebula-operator/nebula-operator \
--namespace nebula-operator-system --create-namespace --version 0.1.0 \
--set image.kubeRBACProxy.image=kubesphere/kube-rbac-proxy:v0.8.0 \
--set image.kubeScheduler.image=kubesphere/kube-scheduler:v1.18.8 \
--set kubernetesClusterDomain=<<YourCustomClusterDomain>>
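If you are not sure what cluster domain your Kubernetes cluster uses, one common way to check is to look at the DNS search domains from inside a throwaway pod (a quick sketch; the default domain is usually cluster.local):
# The cluster domain shows up in the search line of the pod's resolv.conf
$ kubectl run dns-check --image=busybox --restart=Never --rm -it -- cat /etc/resolv.conf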
Deploy a Nebula Cluster
At this point, Nebula Operator is ready. Next, let's install a Nebula Cluster and experience the graph database!
First, you need to find a StorageClass, which will be used to configure the storage that the Nebula Cluster uses.
$ kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
alicloud-disk-available diskplugin.csi.alibabacloud.com Delete Immediate true 100m
alicloud-disk-efficiency diskplugin.csi.alibabacloud.com Delete Immediate true 100m
alicloud-disk-essd diskplugin.csi.alibabacloud.com Delete Immediate true 100m
alicloud-disk-ssd diskplugin.csi.alibabacloud.com Delete Immediate true 100m
alicloud-disk-topology diskplugin.csi.alibabacloud.com Delete WaitForFirstConsumer true 100m
As shown above, Alibaba Cloud currently provides 5 StorageClasses. This walkthrough uses alicloud-disk-ssd. Other cloud providers have their own StorageClasses; choose one according to your actual situation. Note: each cloud provider may restrict the size of storage you can request. For example, Alibaba Cloud limits SSD disks to between 20 Gi and 32,768 Gi, so keep this in mind when creating the Nebula Cluster.
You can view all the configurable parameters with the following command:
$ helm show values nebula-operator/nebula-cluster
The install commands used in this walkthrough are as follows:
# Name of the Nebula Cluster to create
$ export NEBULA_CLUSTER_NAME=nebula
# Namespace for the Nebula Cluster
$ export NEBULA_CLUSTER_NAMESPACE=nebula
# StorageClass name for the Nebula Cluster; set to the alicloud-disk-ssd found earlier
$ export STORAGE_CLASS_NAME=alicloud-disk-ssd
# Storage size used by each component in the Nebula Cluster
$ export STORAGE_SIZE_GRAPHD=20Gi
$ export STORAGE_SIZE_METAD=20Gi
$ export STORAGE_SIZE_STORAGED=20Gi
# Create the Nebula Cluster
$ helm install ${NEBULA_CLUSTER_NAME} nebula-operator/nebula-cluster \
--namespace ${NEBULA_CLUSTER_NAMESPACE} --create-namespace --version 0.1.0 \
--set nameOverride=${NEBULA_CLUSTER_NAME} \
--set nebula.storageClassName="${STORAGE_CLASS_NAME}" \
--set nebula.graphd.storage="${STORAGE_SIZE_GRAPHD}" \
--set nebula.metad.storage="${STORAGE_SIZE_METAD}" \
--set nebula.storaged.storage="${STORAGE_SIZE_STORAGED}"
# Wait a moment, then check that the Nebula Cluster is running properly
$ kubectl -n ${NEBULA_CLUSTER_NAMESPACE} get nebulacluster
NAME GRAPHD-DESIRED GRAPHD-READY METAD-DESIRED METAD-READY STORAGED-DESIRED STORAGED-READY AGE
nebula 2 2 3 3 3 3 4m10s
$ kubectl -n ${NEBULA_CLUSTER_NAMESPACE} get pod
NAME READY STATUS RESTARTS AGE
nebula-graphd-0 1/1 Running 0 96s
nebula-graphd-1 1/1 Running 0 96s
nebula-metad-0 1/1 Running 0 97s
nebula-metad-1 1/1 Running 0 97s
nebula-metad-2 1/1 Running 0 97s
nebula-storaged-0 1/1 Running 0 97s
nebula-storaged-1 1/1 Running 0 97s
nebula-storaged-2 1/1 Running 0 97s
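You can also check the PersistentVolumeClaims provisioned through the alicloud-disk-ssd StorageClass to confirm that the storage sizes set above were applied (the PVC names follow the component pod names):
# List the PVCs created for the Nebula Cluster components
$ kubectl -n ${NEBULA_CLUSTER_NAMESPACE} get pvc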
Of course, you can also scale the Storaged instances up to 5 by running the following command:
# Upgrade the Nebula Cluster, scaling Storaged to 5 replicas
$ helm upgrade ${NEBULA_CLUSTER_NAME} nebula-operator/nebula-cluster \
--namespace ${NEBULA_CLUSTER_NAMESPACE} --create-namespace --version 0.1.0 \
--set nameOverride=${NEBULA_CLUSTER_NAME} \
--set nebula.storageClassName="${STORAGE_CLASS_NAME}" \
--set nebula.graphd.storage="${STORAGE_SIZE_GRAPHD}" \
--set nebula.metad.storage="${STORAGE_SIZE_METAD}" \
--set nebula.storaged.storage="${STORAGE_SIZE_STORAGED}" \
--set nebula.storaged.replicas=5
# Wait a moment, then check that the Nebula Cluster is running properly
$ kubectl -n ${NEBULA_CLUSTER_NAMESPACE} get nebulacluster
NAME GRAPHD-DESIRED GRAPHD-READY METAD-DESIRED METAD-READY STORAGED-DESIRED STORAGED-READY AGE
nebula 2 2 3 3 5 5 6m12s
$ kubectl -n ${NEBULA_CLUSTER_NAMESPACE} get pod
NAME READY STATUS RESTARTS AGE
nebula-graphd-0 1/1 Running 0 2m30s
nebula-graphd-1 1/1 Running 0 2m30s
nebula-metad-0 1/1 Running 0 2m30s
nebula-metad-1 1/1 Running 0 2m30s
nebula-metad-2 1/1 Running 0 2m30s
nebula-storaged-0 1/1 Running 0 2m30s
nebula-storaged-1 1/1 Running 0 2m30s
nebula-storaged-2 1/1 Running 0 2m30s
nebula-storaged-3 1/1 Running 0 52s
nebula-storaged-4 1/1 Running 0 52s
For detailed installation instructions, see: Install Nebula Operator with Helm.
Access the Nebula Cluster
Finally, the Nebula Cluster is up and running. Let's access it!
Access from Inside Kubernetes
First, start a Nebula Graph Console inside Kubernetes by running the following command:
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nebula-console
spec:
  containers:
  - name: nebula-console
    image: vesoft/nebula-console:v2-nightly
    command:
    - sleep
    - "1000000"
EOF
Then access the cluster through the Nebula Graph Console you just created, as follows:
$ kubectl exec -it nebula-console -- \
nebula-console -u u -p p --addr ${NEBULA_CLUSTER_NAME}-graphd-svc.${NEBULA_CLUSTER_NAMESPACE}.svc --port 9669
2021/06/23 06:21:22 [INFO] connection pool is initialized successfully
Welcome to Nebula Graph!
(u@nebula) [(none)]> show hosts
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| Host | Port | Status | Leader count | Leader distribution | Partition distribution |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "nebula-storaged-0.nebula-storaged-headless.nebula.svc.cluster.local" | 9779 | "ONLINE" | 0 | "No valid partition" | "No valid partition" |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "nebula-storaged-1.nebula-storaged-headless.nebula.svc.cluster.local" | 9779 | "ONLINE" | 0 | "No valid partition" | "No valid partition" |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "nebula-storaged-2.nebula-storaged-headless.nebula.svc.cluster.local" | 9779 | "ONLINE" | 0 | "No valid partition" | "No valid partition" |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "nebula-storaged-3.nebula-storaged-headless.nebula.svc.cluster.local" | 9779 | "ONLINE" | 0 | "No valid partition" | "No valid partition" |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "nebula-storaged-4.nebula-storaged-headless.nebula.svc.cluster.local" | 9779 | "ONLINE" | 0 | "No valid partition" | "No valid partition" |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "Total" | | | 0 | | |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
Got 4 rows (time spent 7669/9367 us)
Wed, 23 Jun 2021 06:21:26 UTC
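The console pod stays alive only because of the sleep command, so once you are done with it you can simply remove it (assuming the pod name nebula-console used above):
# Delete the temporary console pod when it is no longer needed
$ kubectl delete pod nebula-console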
Access from Outside Kubernetes
To reach a service running inside Kubernetes from outside the cluster, you can use hostPort, hostNetwork, Ingress, LoadBalancer, and so on. Here we take advantage of the cloud provider's convenience and simply use a LoadBalancer to access the cluster.
Note: this method exposes your Nebula cluster to the public internet. Do not use it in production.
First, change the type of the Graphd Service to LoadBalancer, then check its EXTERNAL-IP.
# Change the service type to LoadBalancer
$ kubectl patch -n ${NEBULA_CLUSTER_NAMESPACE} svc ${NEBULA_CLUSTER_NAME}-graphd-svc \
-p '{"spec": {"type": "LoadBalancer"}}'
# Get the EXTERNAL-IP; if it shows pending, wait a moment and try again
$ kubectl -n ${NEBULA_CLUSTER_NAMESPACE} get svc nebula-graphd-svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nebula-graphd-svc LoadBalancer 172.16.85.222 x.x.x.x 9669:31460/TCP,19669:32579/TCP,19670:31481/TCP 27m
Now you can access the cluster via the EXTERNAL-IP, which here is x.x.x.x.
$ export EXTERNAL_IP=x.x.x.x
$ docker run -it --rm vesoft/nebula-console:v2-nightly -u u -p p --addr ${EXTERNAL_IP} --port 9669
2021/06/23 06:42:17 [INFO] connection pool is initialized successfully
Welcome to Nebula Graph!
(u@nebula) [(none)]> show hosts
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| Host | Port | Status | Leader count | Leader distribution | Partition distribution |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "nebula-storaged-0.nebula-storaged-headless.nebula.svc.cluster.local" | 9779 | "ONLINE" | 0 | "No valid partition" | "No valid partition" |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "nebula-storaged-1.nebula-storaged-headless.nebula.svc.cluster.local" | 9779 | "ONLINE" | 0 | "No valid partition" | "No valid partition" |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "nebula-storaged-2.nebula-storaged-headless.nebula.svc.cluster.local" | 9779 | "ONLINE" | 0 | "No valid partition" | "No valid partition" |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "nebula-storaged-3.nebula-storaged-headless.nebula.svc.cluster.local" | 9779 | "ONLINE" | 0 | "No valid partition" | "No valid partition" |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "nebula-storaged-4.nebula-storaged-headless.nebula.svc.cluster.local" | 9779 | "ONLINE" | 0 | "No valid partition" | "No valid partition" |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "Total" | | | 0 | | |
+-----------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
Got 4 rows (time spent 3747/60433 us)
Wed, 23 Jun 2021 06:42:21 UTC
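Because the LoadBalancer exposes the cluster publicly, you may want to switch the Graphd Service back once you are done experimenting; a sketch mirroring the patch above (ClusterIP is the usual default type):
# Revert the service type so the cluster is no longer exposed externally
$ kubectl patch -n ${NEBULA_CLUSTER_NAMESPACE} svc ${NEBULA_CLUSTER_NAME}-graphd-svc \
    -p '{"spec": {"type": "ClusterIP"}}'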
Time to Enjoy
All done!
Go have fun exploring Nebula Graph!
Copyright notice: this is an original article by the InfoQ author [Nebula Graph].
Original link: [http://xie.infoq.cn/article/bd9a85600fd6c45ba127b9874].
This article is licensed under CC BY-SA; please keep the original link and this copyright notice when reposting.