
k8s in practice: deploying standalone Redis and Redis Cluster

  • 2023-06-29, Fujian

1. Deploying standalone Redis on k8s

1.1 Introduction to redis


Redis is an open-source, BSD-licensed, non-relational (NoSQL) database written in C, released in 2009 by the Italian developer Salvatore Sanfilippo. It is an in-memory store and currently one of the most popular key-value databases; in essence it provides memory shared remotely over the network as a service. Memcached offers similar functionality, but compared with memcached, Redis adds easy scaling, high performance, and data persistence. Its main use cases are: session sharing, commonly used to share sessions across Tomcat or PHP web servers in a web cluster; message queues, such as ELK log buffering and publish/subscribe for some business systems; counters, for count-related statistics such as access rankings and product view counts; and caching, for query results, e-commerce product information, news content, and so on. Unlike memcached, Redis supports persistence: in-memory data can be saved to disk, and after the Redis service or the server restarts, the data can be restored into memory from the backup file and used again.


1.2 PV/PVC and standalone Redis



Because the Redis data (mainly the Redis snapshot) lives on the external storage system, the data survives even if the Redis pod dies. When standalone Redis is deployed on k8s and the Redis pod fails, k8s rebuilds the pod; during the rebuild the corresponding PVC is mounted into the new pod and the snapshot is loaded, so the loss of a pod does not mean the loss of Redis data.


1.3 Building the redis image


root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# ll
total 1784
drwxr-xr-x  2 root root    4096 Jun  5 15:22 ./
drwxr-xr-x 11 root root    4096 Aug  9  2022 ../
-rw-r--r--  1 root root     717 Jun  5 15:20 Dockerfile
-rwxr-xr-x  1 root root     235 Jun  5 15:21 build-command.sh*
-rw-r--r--  1 root root 1740967 Jun 22  2021 redis-4.0.14.tar.gz
-rw-r--r--  1 root root   58783 Jun 22  2021 redis.conf
-rwxr-xr-x  1 root root      84 Jun  5 15:21 run_redis.sh*
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# cat Dockerfile
#Redis Image
# Import the custom CentOS base image
FROM harbor.ik8s.cc/baseimages/magedu-centos-base:7.9.2009

# Add the redis source tarball to /usr/local/src
ADD redis-4.0.14.tar.gz /usr/local/src
# Compile and install redis
RUN ln -sv /usr/local/src/redis-4.0.14 /usr/local/redis && cd /usr/local/redis && make && cp src/redis-cli /usr/sbin/ && cp src/redis-server /usr/sbin/ && mkdir -pv /data/redis-data

# Add the redis configuration file
ADD redis.conf /usr/local/redis/redis.conf

# Expose the redis service port
EXPOSE 6379

#ADD run_redis.sh /usr/local/redis/run_redis.sh
#CMD ["/usr/local/redis/run_redis.sh"]

# Add the startup script
ADD run_redis.sh /usr/local/redis/entrypoint.sh

# Start redis
ENTRYPOINT ["/usr/local/redis/entrypoint.sh"]
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# cat build-command.sh
#!/bin/bash
TAG=$1
#docker build -t harbor.ik8s.cc/magedu/redis:${TAG} .
#sleep 3
#docker push harbor.ik8s.cc/magedu/redis:${TAG}

nerdctl build -t harbor.ik8s.cc/magedu/redis:${TAG} .
nerdctl push harbor.ik8s.cc/magedu/redis:${TAG}
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# cat run_redis.sh
#!/bin/bash
# Redis startup command
/usr/sbin/redis-server /usr/local/redis/redis.conf
# Use tail -f to keep a long-running foreground process inside the pod
tail -f /etc/hosts
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# grep -v '^#\|^$' redis.conf
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile ""
databases 16
always-show-logo yes
save 900 1
save 5 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error no
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /data/redis-data
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
requirepass 123456
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
slave-lazy-flush no
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble no
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis#



1.3.1 Verify that the redis image has been pushed to harbor
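The original shows this check as a screenshot of the harbor web UI; a command-line alternative (a sketch, assuming the image was built with tag v4.0.14, the tag used by the deployment later on) is to pull it back from another node:

nerdctl pull harbor.ik8s.cc/magedu/redis:v4.0.14
nerdctl images | grep magedu/redis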



1.4 Testing the redis image


1.4.1 Run the redis image as a container and verify that it starts normally
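A sketch of such a smoke test (the container name and host port mapping are arbitrary choices):

nerdctl run -d --name redis-test -p 6379:6379 harbor.ik8s.cc/magedu/redis:v4.0.14
nerdctl ps | grep redis-test
nerdctl logs redis-test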



1.4.2 Connect to redis remotely and verify that the connection works
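For example, from any remote host with redis-cli installed (the host IP is a placeholder; 123456 is the requirepass value from redis.conf above):

redis-cli -h <host-running-the-container> -p 6379 -a 123456 ping
redis-cli -h <host-running-the-container> -p 6379 -a 123456 set k1 v1
redis-cli -h <host-running-the-container> -p 6379 -a 123456 get k1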



Being able to run the redis image as a container and to read and write data from a remote host shows that the redis image we built works correctly.


1.5 Creating the PV and PVC


1.5.1 Prepare the redis data directory on the NFS server


root@harbor:~# mkdir -pv /data/k8sdata/magedu/redis-datadir-1
mkdir: created directory '/data/k8sdata/magedu/redis-datadir-1'
root@harbor:~# cat /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/data/k8sdata/kuboard *(rw,no_root_squash)
/data/volumes *(rw,no_root_squash)
/pod-vol *(rw,no_root_squash)
/data/k8sdata/myserver *(rw,no_root_squash)
/data/k8sdata/mysite *(rw,no_root_squash)
/data/k8sdata/magedu/images *(rw,no_root_squash)
/data/k8sdata/magedu/static *(rw,no_root_squash)

/data/k8sdata/magedu/zookeeper-datadir-1 *(rw,no_root_squash)
/data/k8sdata/magedu/zookeeper-datadir-2 *(rw,no_root_squash)
/data/k8sdata/magedu/zookeeper-datadir-3 *(rw,no_root_squash)

/data/k8sdata/magedu/redis-datadir-1 *(rw,no_root_squash)
root@harbor:~# exportfs -av
exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/kuboard". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [2]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/volumes". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/pod-vol". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [4]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/myserver". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [5]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/mysite". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [7]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/images". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [8]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/static". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [11]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-1". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [12]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-2". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [13]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-3". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [16]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis-datadir-1". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exporting *:/data/k8sdata/magedu/redis-datadir-1
exporting *:/data/k8sdata/magedu/zookeeper-datadir-3
exporting *:/data/k8sdata/magedu/zookeeper-datadir-2
exporting *:/data/k8sdata/magedu/zookeeper-datadir-1
exporting *:/data/k8sdata/magedu/static
exporting *:/data/k8sdata/magedu/images
exporting *:/data/k8sdata/mysite
exporting *:/data/k8sdata/myserver
exporting *:/pod-vol
exporting *:/data/volumes
exporting *:/data/k8sdata/kuboard
root@harbor:~#


1.5.2 Create the PV


root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv# cat redis-persistentvolume.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-datadir-pv-1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/k8sdata/magedu/redis-datadir-1
    server: 192.168.0.42
root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv#



1.5.3 Create the PVC


root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv# cat redis-persistentvolumeclaim.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-datadir-pvc-1
  namespace: magedu
spec:
  volumeName: redis-datadir-pv-1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv#
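A minimal sketch of applying and checking both objects (the PVC should end up Bound to redis-datadir-pv-1):

kubectl apply -f redis-persistentvolume.yaml
kubectl apply -f redis-persistentvolumeclaim.yaml
kubectl get pv redis-datadir-pv-1
kubectl get pvc redis-datadir-pvc-1 -n magedu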



1.6 Deploying the redis service


root@k8s-master01:~/k8s-data/yaml/magedu/redis# cat redis.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: devops-redis
  name: deploy-devops-redis
  namespace: magedu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: devops-redis
  template:
    metadata:
      labels:
        app: devops-redis
    spec:
      containers:
        - name: redis-container
          image: harbor.ik8s.cc/magedu/redis:v4.0.14
          imagePullPolicy: Always
          volumeMounts:
          - mountPath: "/data/redis-data/"
            name: redis-datadir
      volumes:
        - name: redis-datadir
          persistentVolumeClaim:
            claimName: redis-datadir-pvc-1

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: devops-redis
  name: srv-devops-redis
  namespace: magedu
spec:
  type: NodePort
  ports:
  - name: http
    port: 6379
    targetPort: 6379
    nodePort: 36379
  selector:
    app: devops-redis
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
root@k8s-master01:~/k8s-data/yaml/magedu/redis#



The error above says the service port is out of range; this comes from the service node-port range that was specified when the k8s cluster was initialized.


1.6.1 Modify the NodePort port range



Edit /etc/systemd/system/kube-apiserver.service and change the value given to the --service-node-port-range option; the other two master nodes need the same change.
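For example, the relevant flag can be located and adjusted like this (the exact range is your choice, it just has to cover 36379):

# on each master node
grep -- '--service-node-port-range' /etc/systemd/system/kube-apiserver.service
# e.g. set it to something like: --service-node-port-range=30000-65535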


1.6.2 Reload systemd and restart kube-apiserver


root@k8s-master01:~# systemctl daemon-reload
root@k8s-master01:~# systemctl restart kube-apiserver.service
root@k8s-master01:~#


Deploy redis again
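That is, re-apply the same manifest and check that the pod and service come up, roughly:

kubectl apply -f redis.yaml
kubectl get pods -n magedu -o wide | grep devops-redis
kubectl get svc -n magedu srv-devops-redis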



1.7 Verifying redis data reads and writes


1.7.1 Connect to port 36379 on any k8s node and test reading and writing redis data
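For example (the node IP is a placeholder; 123456 is the requirepass value from redis.conf):

redis-cli -h <any-k8s-node-ip> -p 36379 -a 123456
# inside the prompt, e.g.: set key1 value1, then get key1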



1.8 Verify whether data is lost when the redis pod is rebuilt


1.8.1 Check whether the redis snapshot file is written to the backing storage


root@harbor:~# ll /data/k8sdata/magedu/redis-datadir-1
total 12
drwxr-xr-x 2 root root 4096 Jun  5 16:29 ./
drwxr-xr-x 8 root root 4096 Jun  5 15:53 ../
-rw-r--r-- 1 root root  116 Jun  5 16:29 dump.rdb
root@harbor:~#


We can see that after we wrote data to Redis, Redis noticed the key changes within the configured save window and took a snapshot. Because the Redis data directory is an NFS share mounted via the PV/PVC, the snapshot file is visible in the corresponding directory on the NFS server.


1.8.2 Delete the redis pod and wait for k8s to rebuild it
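A sketch of this step:

kubectl get pods -n magedu -l app=devops-redis
kubectl delete pod -n magedu -l app=devops-redis
kubectl get pods -n magedu -w    # wait until the replacement pod is Running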



1.8.3 Verify the data in the rebuilt redis pod
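For instance, exec into the rebuilt pod and read back a key written earlier (the key name is whatever was written in 1.7.1):

POD=$(kubectl get pod -n magedu -l app=devops-redis -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it -n magedu "$POD" -- redis-cli -a 123456 get key1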



We can see that the redis pod rebuilt by k8s still holds the previous pod's data, which shows that k8s mounted the previous pod's PVC during the rebuild.


2. Deploying a Redis cluster on k8s


2.1 PV/PVC and the Redis Cluster StatefulSet



A Redis cluster is a bit more involved than standalone Redis. As before, we use PV/PVC to keep the Redis cluster data on the storage system. Unlike standalone Redis, a Redis cluster runs CRC16 over each key and takes the result modulo 16384; the resulting number is the slot the key is stored in. The 16384 slots are distributed evenly across all master nodes in the cluster, so each master holds only part of the cluster's data. This raises a problem: if a master goes down, the data in its slots becomes unavailable. To avoid a single point of failure, each master gets a dedicated slave node that backs it up; if the master goes down, its slave takes over and keeps serving the cluster, which gives the Redis cluster masters high availability. As shown in the diagram above, we use a 3-master/3-slave Redis cluster: redis-0, 1 and 2 are masters, and redis-3, 4 and 5 are the slaves of 0, 1 and 2 respectively, each backing up its own master's data. All six pods store their data on the storage system through PV/PVC in the k8s cluster.
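The slot assignment described above can be inspected directly with redis-cli; for example (the node address is a placeholder):

# CLUSTER KEYSLOT returns CRC16(key) mod 16384, i.e. the slot the key maps to
redis-cli -h <any-cluster-node> -p 6379 cluster keyslot somekey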


2.2 Creating the PVs


2.2.1 Prepare the redis cluster data directories on NFS


root@harbor:~# mkdir -pv /data/k8sdata/magedu/redis{0,1,2,3,4,5}
mkdir: created directory '/data/k8sdata/magedu/redis0'
mkdir: created directory '/data/k8sdata/magedu/redis1'
mkdir: created directory '/data/k8sdata/magedu/redis2'
mkdir: created directory '/data/k8sdata/magedu/redis3'
mkdir: created directory '/data/k8sdata/magedu/redis4'
mkdir: created directory '/data/k8sdata/magedu/redis5'
root@harbor:~# tail -6 /etc/exports
/data/k8sdata/magedu/redis0 *(rw,no_root_squash)
/data/k8sdata/magedu/redis1 *(rw,no_root_squash)
/data/k8sdata/magedu/redis2 *(rw,no_root_squash)
/data/k8sdata/magedu/redis3 *(rw,no_root_squash)
/data/k8sdata/magedu/redis4 *(rw,no_root_squash)
/data/k8sdata/magedu/redis5 *(rw,no_root_squash)
root@harbor:~# exportfs -av
exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/kuboard". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [2]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/volumes". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/pod-vol". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [4]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/myserver". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [5]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/mysite". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [7]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/images". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [8]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/static". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [11]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-1". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [12]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-2". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [13]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-3". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [16]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis-datadir-1". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [18]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis0". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [19]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis1". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [20]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis2". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [21]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis3". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [22]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis4". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [23]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis5". Assuming default behaviour ('no_subtree_check'). NOTE: this default has changed since nfs-utils version 1.0.x
exporting *:/data/k8sdata/magedu/redis5
exporting *:/data/k8sdata/magedu/redis4
exporting *:/data/k8sdata/magedu/redis3
exporting *:/data/k8sdata/magedu/redis2
exporting *:/data/k8sdata/magedu/redis1
exporting *:/data/k8sdata/magedu/redis0
exporting *:/data/k8sdata/magedu/redis-datadir-1
exporting *:/data/k8sdata/magedu/zookeeper-datadir-3
exporting *:/data/k8sdata/magedu/zookeeper-datadir-2
exporting *:/data/k8sdata/magedu/zookeeper-datadir-1
exporting *:/data/k8sdata/magedu/static
exporting *:/data/k8sdata/magedu/images
exporting *:/data/k8sdata/mysite
exporting *:/data/k8sdata/myserver
exporting *:/pod-vol
exporting *:/data/volumes
exporting *:/data/k8sdata/kuboard
root@harbor:~#


2.2.2 Create the PVs


root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# cat pv/redis-cluster-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv0
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis0

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv1
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis1

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv2
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis2

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv3
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis3

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv4
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis4

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv5
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis5
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
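Applying and checking the six PVs could look like this (a sketch):

kubectl apply -f pv/redis-cluster-pv.yaml
kubectl get pv | grep redis-cluster-pv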



2.3 Deploying the redis cluster


2.3.1 Prepare the redis.conf file used to build the ConfigMap


root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# cat redis.conf
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#


2.3.2 Create the ConfigMap


root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl create cm redis-conf --from-file=./redis.conf -n magedu
configmap/redis-conf created
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl get cm -n magedu
NAME               DATA   AGE
kube-root-ca.crt   1      35h
redis-conf         1      6s
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#


2.3.3 Verify the ConfigMap


root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl describe cm redis-conf -n magedu
Name:         redis-conf
Namespace:    magedu
Labels:       <none>
Annotations:  <none>

Data
====
redis.conf:
----
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379


BinaryData
====

Events:  <none>
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#


2.3.4 Deploy the redis cluster


root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# cat redis.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: magedu
  labels:
    app: redis
spec:
  selector:
    app: redis
    appCluster: redis-cluster
  ports:
  - name: redis
    port: 6379
  clusterIP: None

---
apiVersion: v1
kind: Service
metadata:
  name: redis-access
  namespace: magedu
  labels:
    app: redis
spec:
  type: NodePort
  selector:
    app: redis
    appCluster: redis-cluster
  ports:
  - name: redis-access
    protocol: TCP
    port: 6379
    targetPort: 6379
    nodePort: 36379

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
  namespace: magedu
spec:
  serviceName: redis
  replicas: 6
  selector:
    matchLabels:
      app: redis
      appCluster: redis-cluster
  template:
    metadata:
      labels:
        app: redis
        appCluster: redis-cluster
    spec:
      terminationGracePeriodSeconds: 20
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - redis
              topologyKey: kubernetes.io/hostname
      containers:
      - name: redis
        image: redis:4.0.14
        command:
        - "redis-server"
        args:
        - "/etc/redis/redis.conf"
        - "--protected-mode"
        - "no"
        resources:
          requests:
            cpu: "500m"
            memory: "500Mi"
        ports:
        - containerPort: 6379
          name: redis
          protocol: TCP
        - containerPort: 16379
          name: cluster
          protocol: TCP
        volumeMounts:
        - name: conf
          mountPath: /etc/redis
        - name: data
          mountPath: /var/lib/redis
      volumes:
      - name: conf
        configMap:
          name: redis-conf
          items:
          - key: redis.conf
            path: redis.conf
  volumeClaimTemplates:
  - metadata:
      name: data
      namespace: magedu
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#


The manifest above mainly uses a StatefulSet controller to create 6 pod replicas. Each replica uses the configuration file from the ConfigMap as its Redis config, and the volumeClaimTemplates section makes each pod automatically bind a PV by creating a PVC in the magedu namespace: as long as free PVs are available in the cluster, each pod gets a PVC created from the template in that namespace. We could also use a storage class to provision PVCs automatically, or create the PVCs in advance, but with a StatefulSet the usual approach is to let the pods create their PVCs from a volumeClaimTemplate (provided k8s has enough PVs available).


Apply the manifest to deploy the redis cluster
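Roughly (a sketch; pod and PVC names follow the StatefulSet naming rules described below):

kubectl apply -f redis.yaml
kubectl get pods -n magedu -l app=redis
kubectl get pvc -n magedu | grep data-redis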



With a StatefulSet controller, pod names take the form <statefulset-name>-<ordinal>, and PVCs created from the volumeClaimTemplate are named <template-name>-<pod-name>, i.e. <template-name>-<statefulset-name>-<ordinal>.


2.4 Initializing the redis cluster


2.4.1 Create a temporary container on k8s and install the redis cluster initialization tools


root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl run -it ubuntu1804 --image=ubuntu:18.04 --restart=Never -n magedu bash
If you don't see a command prompt, try pressing enter.
root@ubuntu1804:/#
root@ubuntu1804:/# apt update
# Install the necessary tools
root@ubuntu1804:/# apt install python2.7 python-pip redis-tools dnsutils iputils-ping net-tools
# Upgrade pip
root@ubuntu1804:/# pip install --upgrade pip
# Use pip to install redis-trib, the redis cluster initialization tool
root@ubuntu1804:/# pip install redis-trib==0.5.1
root@ubuntu1804:/#


2.4.2 Initialize the redis cluster


root@ubuntu1804:/# redis-trib.py create \
  `dig +short redis-0.redis.magedu.svc.cluster.local`:6379 \
  `dig +short redis-1.redis.magedu.svc.cluster.local`:6379 \
  `dig +short redis-2.redis.magedu.svc.cluster.local`:6379



On k8s the pods are created by a StatefulSet, so their names are stable; when initializing the Redis cluster we can simply resolve each pod's name to its current IP address. On traditional VMs or physical machines we would initialize the cluster with IP addresses directly, because those addresses are fixed, whereas on k8s pod IPs are not fixed.
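For example, inside the temporary pod, the stable per-pod DNS names can be resolved at any time:

dig +short redis-0.redis.magedu.svc.cluster.local
dig +short redis-3.redis.magedu.svc.cluster.local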


2.4.3 Assign a slave to each master


  • Assign redis-3 as the slave of redis-0


root@ubuntu1804:/# redis-trib.py replicate \
  --master-addr `dig +short redis-0.redis.magedu.svc.cluster.local`:6379 \
  --slave-addr `dig +short redis-3.redis.magedu.svc.cluster.local`:6379



  • Assign redis-4 as the slave of redis-1


root@ubuntu1804:/# redis-trib.py replicate \
  --master-addr `dig +short redis-1.redis.magedu.svc.cluster.local`:6379 \
  --slave-addr `dig +short redis-4.redis.magedu.svc.cluster.local`:6379



  • Assign redis-5 as the slave of redis-2


root@ubuntu1804:/# redis-trib.py replicate \
  --master-addr `dig +short redis-2.redis.magedu.svc.cluster.local`:6379 \
  --slave-addr `dig +short redis-5.redis.magedu.svc.cluster.local`:6379



2.5 Verifying the redis cluster status


2.5.1 Exec into any redis cluster pod and check the cluster info
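For example, from any of the cluster pods (a sketch; redis-0 is used here):

kubectl exec -it redis-0 -n magedu -- redis-cli cluster info
# expect cluster_state:ok and cluster_known_nodes:6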



2.5.2 View the cluster nodes
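For example:

kubectl exec -it redis-0 -n magedu -- redis-cli cluster nodes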



The cluster node list records each master's ID and each slave's ID; a slave's entry carries its master's ID, indicating which master's data that slave backs up.


2.5.3 View the current node's info


127.0.0.1:6379> info
# Server
redis_version:4.0.14
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:165c932261a105d7
redis_mode:cluster
os:Linux 5.15.0-73-generic x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:8.3.0
process_id:1
run_id:aa8ef00d843b4f622374dbb643cf27cdbd4d5ba3
tcp_port:6379
uptime_in_seconds:4303
uptime_in_days:0
hz:10
lru_clock:8272053
executable:/data/redis-server
config_file:/etc/redis/redis.conf

# Clients
connected_clients:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0

# Memory
used_memory:2642336
used_memory_human:2.52M
used_memory_rss:5353472
used_memory_rss_human:5.11M
used_memory_peak:2682248
used_memory_peak_human:2.56M
used_memory_peak_perc:98.51%
used_memory_overhead:2559936
used_memory_startup:1444856
used_memory_dataset:82400
used_memory_dataset_perc:6.88%
total_system_memory:16740012032
total_system_memory_human:15.59G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:2.03
mem_allocator:jemalloc-4.0.3
active_defrag_running:0
lazyfree_pending_objects:0

# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1685992849
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:0
rdb_current_bgsave_time_sec:-1
rdb_last_cow_size:245760
aof_enabled:1
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0
aof_current_size:0
aof_base_size:0
aof_pending_rewrite:0
aof_buffer_length:0
aof_rewrite_buffer_length:0
aof_pending_bio_fsync:0
aof_delayed_fsync:0

# Stats
total_connections_received:7
total_commands_processed:17223
instantaneous_ops_per_sec:1
total_net_input_bytes:1530962
total_net_output_bytes:108793
instantaneous_input_kbps:0.04
instantaneous_output_kbps:0.00
rejected_connections:0
sync_full:1
sync_partial_ok:0
sync_partial_err:1
expired_keys:0
expired_stale_perc:0.00
expired_time_cap_reached_count:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:853
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0

# Replication
role:master
connected_slaves:1
slave0:ip=10.200.155.175,port=6379,state=online,offset=1120,lag=1
master_replid:60381a28fee40b44c409e53eeef49215a9d3b0ff
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:1120
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:1120

# CPU
used_cpu_sys:12.50
used_cpu_user:7.51
used_cpu_sys_children:0.01
used_cpu_user_children:0.00

# Cluster
cluster_enabled:1

# Keyspace
127.0.0.1:6379>


2.5.4 Verify that the redis cluster reads and writes data correctly


2.5.4.1 Connect to the redis cluster manually and read/write data



When connecting manually to a cluster master to read and write data, there is one catch: after the key is hashed with CRC16 and taken modulo 16384, its slot may not live on the current node, in which case Redis tells us where the key should be written. From the screenshot above we can see that the Redis cluster reads and writes data normally.
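Passing -c to redis-cli makes the client follow these MOVED redirects automatically; a sketch (the node IP is a placeholder):

redis-cli -c -h <any-k8s-node-ip> -p 36379
# inside the prompt, e.g.: set testkey testvalue, then get testkey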


2.5.4.2 Use a Python script to connect to the redis cluster and read/write data


root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# cat redis-client-test.py
#!/usr/bin/env python
#coding:utf-8
#Author:Zhang ShiJie
#python 2.7/3.8
#pip install redis-py-cluster

import sys,time
from rediscluster import RedisCluster

def init_redis():
    startup_nodes = [
        {'host': '192.168.0.34', 'port': 36379},
        {'host': '192.168.0.35', 'port': 36379},
        {'host': '192.168.0.36', 'port': 36379},
        {'host': '192.168.0.34', 'port': 36379},
        {'host': '192.168.0.35', 'port': 36379},
        {'host': '192.168.0.36', 'port': 36379},
    ]
    try:
        conn = RedisCluster(startup_nodes=startup_nodes,
                            # add the password here if one is set
                            decode_responses=True,
                            password='')
        print('Connected successfully!', conn)
        #conn.set("key-cluster","value-cluster")
        for i in range(100):
            conn.set("key%s" % i, "value%s" % i)
            time.sleep(0.1)
            data = conn.get("key%s" % i)
            print(data)

        #return conn

    except Exception as e:
        print("connect error ", str(e))
        sys.exit(1)

init_redis()
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#


Run the script to write data into the redis cluster


root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# python redis-client-test.py
Traceback (most recent call last):
  File "/root/k8s-data/yaml/magedu/redis-cluster/redis-client-test.py", line 8, in <module>
    from rediscluster import RedisCluster
ModuleNotFoundError: No module named 'rediscluster'
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#


The error says the rediscluster module cannot be found; the fix is simply to install the redis-py-cluster module with pip.


Install the redis-py-cluster module
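For example (any redis-py-cluster release compatible with the cluster's Redis 4.x should do):

pip install redis-py-cluster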



Run the script to connect to the redis cluster and read/write data



Connect to the redis pods and verify that the data was written
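One way to spot-check this on the three masters (a sketch; DBSIZE reports the number of keys held by that node):

kubectl exec -it redis-0 -n magedu -- redis-cli dbsize
kubectl exec -it redis-1 -n magedu -- redis-cli dbsize
kubectl exec -it redis-2 -n magedu -- redis-cli dbsize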





From the screenshots above we can see that each of the three Redis cluster master pods holds part of the keys rather than all of them, which shows that the Python script wrote the data into the Redis cluster correctly.


Verify whether data can be read normally on a slave node



From the screenshot above we can see that the data cannot be read directly on the slave node.


Read the data from the slave's corresponding master node



This verification shows that in the Redis cluster only the masters serve reads and writes; a slave only keeps a backup of its master's data, and data cannot be read from or written to the slave.


2.6 Verifying redis cluster high availability


2.6.1 Push the redis:4.0.14 image from a k8s node to the local harbor


  • Re-tag the image


root@k8s-node01:~# nerdctl tag redis:4.0.14 harbor.ik8s.cc/redis-cluster/redis:4.0.14


  • Push the redis image to the local harbor


root@k8s-node01:~# nerdctl push harbor.ik8s.cc/redis-cluster/redis:4.0.14
INFO[0000] pushing as a reduced-platform image (application/vnd.docker.distribution.manifest.list.v2+json, sha256:1ae9e0f790001af4b9f83a2b3d79c593c6f3e9a881b754a99527536259fb6625)
WARN[0000] skipping verifying HTTPS certs for "harbor.ik8s.cc"
index-sha256:1ae9e0f790001af4b9f83a2b3d79c593c6f3e9a881b754a99527536259fb6625:    done           |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:5bd4fe08813b057df2ae55003a75c39d80a4aea9f1a0fbc0fbd7024edf555786: done           |++++++++++++++++++++++++++++++++++++++|
config-sha256:191c4017dcdd3370f871a4c6e7e1d55c7d9abed2bebf3005fb3e7d12161262b8:   done           |++++++++++++++++++++++++++++++++++++++|
elapsed: 1.4 s                                                                    total:  8.5 Ki (6.1 KiB/s)
root@k8s-node01:~#


2.6.2 Modify the image and imagePullPolicy in the redis cluster manifest



Switching the image to the copy in the local harbor and adjusting the pull policy makes it convenient to test the high availability of the redis cluster.


2.6.3 Re-apply the redis cluster manifest


root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl apply -f redis.yaml
service/redis unchanged
service/redis-access unchanged
statefulset.apps/redis configured
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#


This is effectively an update of the redis cluster; the cluster relationships between the nodes survive, because the cluster topology configuration is stored on the remote storage.


  • Verify that all pods are Running



  • Verify the cluster status and the master/slave relationships



Unlike before, redis-0 has now become a slave and redis-3 has become a master. The screenshots also show that after the redis cluster pods are rebuilt on k8s (and their IP addresses change), the cluster relationships do not change: each master/slave pair only ever switches roles between its own two pods, and that is exactly the high availability we want.


2.6.4 Stop the local harbor, delete a redis master pod, and check whether its slave is promoted to master


  • Stop the harbor service


root@harbor:~# systemctl stop harbor


  • Delete redis-3 and check whether redis-0 is promoted to master (a sketch of this check follows)
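At this point redis-3 is the master of the redis-0/redis-3 pair; with harbor stopped and imagePullPolicy set to Always, the replacement pod cannot pull its image and stays down, so a rough check is:

kubectl delete pod redis-3 -n magedu
kubectl get pods -n magedu -l app=redis
kubectl exec -it redis-0 -n magedu -- redis-cli cluster nodes | grep myself   # should now report redis-0 as master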



We can see that after deleting redis-3 (equivalent to the master going down), its slave was promoted to master.


2.6.5 Restore the harbor service and check whether redis-3, once recovered, comes back as the slave of redis-0


  • Restore the harbor service



  • Verify that the redis-3 pod has recovered



After deleting redis-3 once more, the pod is rebuilt normally and reaches the Running state.


  • Verify the master/slave relationship of redis-3



We can see that after redis-3 recovered, it automatically rejoined the cluster as the slave of redis-0.


This article is reposted from: Linux-1874

Original article: https://www.cnblogs.com/qiuhom-1874/p/17459116.html
