
Cloud Native (34) | Kubernetes: Platform Storage System in Practice

Author: Lansonli
  • September 3, 2022 · Guangdong


Kubernetes Platform Storage System in Practice

I. Block Storage (RBD)

RBD: RADOS Block Device


RADOS: Reliable, Autonomic Distributed Object Store


RBD volumes cannot be used in RWX mode.

1. Configuration

RWO: ReadWriteOnce


Reference: Ceph Docs


Commonly used as block storage, in RWO mode. Note that when a StatefulSet is deleted, its PVCs are not deleted automatically and must be cleaned up manually.


apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host  # failure domain: host or osd
  replicated:
    size: 2  # number of data replicas
---
apiVersion: storage.k8s.io/v1
kind: StorageClass  # storage driver
metadata:
  name: rook-ceph-block
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  # clusterID is the namespace where the rook cluster is running
  clusterID: rook-ceph
  # Ceph pool into which the RBD image shall be created
  pool: replicapool

  # (optional) mapOptions is a comma-separated list of map options.
  # For krbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options
  # For nbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options
  # mapOptions: lock_on_read,queue_depth=1024

  # (optional) unmapOptions is a comma-separated list of unmap options.
  # For krbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options
  # For nbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options
  # unmapOptions: force

  # RBD image format. Defaults to "2".
  imageFormat: "2"

  # RBD image features. Available for imageFormat: "2". CSI RBD currently supports only `layering` feature.
  imageFeatures: layering

  # The secrets contain Ceph admin credentials.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph

  # Specify the filesystem type of the volume. If not specified, csi-provisioner
  # will set default as `ext4`. Note that `xfs` is not recommended due to potential deadlock
  # in hyperconverged settings where the volume is mounted on the same node as the osds.
  csi.storage.k8s.io/fstype: ext4

# Delete the rbd volume when a PVC is deleted
reclaimPolicy: Delete
allowVolumeExpansion: true
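After applying the two manifests above, the pool and the StorageClass can be verified with kubectl; a minimal sketch using the names from this section:

kubectl -n rook-ceph get cephblockpool replicapool    # pool created by the Rook operator
kubectl get storageclass rook-ceph-block              # provisioner: rook-ceph.rbd.csi.ceph.com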

II. StatefulSet (STS) Hands-on Example

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-nginx
  namespace: default
spec:
  selector:
    matchLabels:
      app: sts-nginx # has to match .spec.template.metadata.labels
  serviceName: "sts-nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: sts-nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: sts-nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "rook-ceph-block"
      resources:
        requests:
          storage: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: sts-nginx
  namespace: default
spec:
  selector:
    app: sts-nginx
  type: ClusterIP
  ports:
  - name: sts-nginx
    port: 80
    targetPort: 80
    protocol: TCP


Test: create the StatefulSet, modify the nginx data, delete the StatefulSet, then recreate it. Check whether each replica's data survives and whether it is shared between replicas.
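A sketch of that test flow with kubectl; it assumes the manifest above has been saved as sts-nginx.yaml:

kubectl apply -f sts-nginx.yaml        # create the StatefulSet and Service
kubectl exec sts-nginx-0 -- sh -c 'echo pod-0 > /usr/share/nginx/html/index.html'

kubectl delete sts sts-nginx           # Pods are removed ...
kubectl get pvc                        # ... but the www-sts-nginx-* PVCs remain

kubectl apply -f sts-nginx.yaml        # recreate the StatefulSet
kubectl exec sts-nginx-0 -- cat /usr/share/nginx/html/index.html   # still prints pod-0

Each replica gets its own PVC (www-sts-nginx-0, -1, -2), so the data survives recreation but is not shared between replicas; the leftover PVCs have to be deleted manually once they are no longer needed.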

III. File Storage (CephFS)

1. Configuration

Commonly used as file storage, in RWX mode; for example, 10 Pods reading and writing the same location.


Reference: Ceph Docs


apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph # namespace:cluster
spec:
  # The metadata pool spec. Must use replication.
  metadataPool:
    replicated:
      size: 3
      requireSafeReplicaSize: true
    parameters:
      # Inline compression mode for the data pool
      # Further reference: https://docs.ceph.com/docs/nautilus/rados/configuration/bluestore-config-ref/#inline-compression
      compression_mode: none
      # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
      # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
      #target_size_ratio: ".5"
  # The list of data pool specs. Can use replication or erasure coding.
  dataPools:
    - failureDomain: host
      replicated:
        size: 3
        # Disallow setting pool with replica 1, this could lead to data loss without recovery.
        # Make sure you're *ABSOLUTELY CERTAIN* that is what you want
        requireSafeReplicaSize: true
      parameters:
        # Inline compression mode for the data pool
        # Further reference: https://docs.ceph.com/docs/nautilus/rados/configuration/bluestore-config-ref/#inline-compression
        compression_mode: none
        # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
        # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
        #target_size_ratio: ".5"
  # Whether to preserve filesystem after CephFilesystem CRD deletion
  preserveFilesystemOnDelete: true
  # The metadata service (mds) configuration
  metadataServer:
    # The number of active MDS instances
    activeCount: 1
    # Whether each active MDS instance will have an active standby with a warm metadata cache for faster failover.
    # If false, standbys will be available, but will not have a warm cache.
    activeStandby: true
    # The affinity rules to apply to the mds deployment
    placement:
      #  nodeAffinity:
      #    requiredDuringSchedulingIgnoredDuringExecution:
      #      nodeSelectorTerms:
      #      - matchExpressions:
      #        - key: role
      #          operator: In
      #          values:
      #          - mds-node
      #  topologySpreadConstraints:
      #  tolerations:
      #  - key: mds-node
      #    operator: Exists
      #  podAffinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  values:
                    - rook-ceph-mds
            # topologyKey: kubernetes.io/hostname will place MDS across different hosts
            topologyKey: kubernetes.io/hostname
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - rook-ceph-mds
              # topologyKey: */zone can be used to spread MDS across different AZ
              # Use <topologyKey: failure-domain.beta.kubernetes.io/zone> in k8s cluster if your cluster is v1.16 or lower
              # Use <topologyKey: topology.kubernetes.io/zone> in k8s cluster is v1.17 or upper
              topologyKey: topology.kubernetes.io/zone
    # A key/value list of annotations
    annotations:
    #  key: value
    # A key/value list of labels
    labels:
    #  key: value
    resources:
    # The requests and limits set here, allow the filesystem MDS Pod(s) to use half of one CPU core and 1 gigabyte of memory
    #  limits:
    #    cpu: "500m"
    #    memory: "1024Mi"
    #  requests:
    #    cpu: "500m"
    #    memory: "1024Mi"
    # priorityClassName: my-priority-class
  mirroring:
    enabled: false


apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  # clusterID is the namespace where operator is deployed.
  clusterID: rook-ceph

  # CephFS filesystem name into which the volume shall be created
  fsName: myfs

  # Ceph pool into which the volume shall be created
  # Required for provisionVolume: "true"
  pool: myfs-data0

  # The secrets contain Ceph admin credentials. These are generated automatically by the operator
  # in the same namespace as the cluster.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph

reclaimPolicy: Delete
allowVolumeExpansion: true
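Once both manifests are applied, the MDS Pods and the StorageClass can be checked; a small sketch assuming the default Rook labels:

kubectl -n rook-ceph get pod -l app=rook-ceph-mds    # MDS Pods serving the myfs filesystem
kubectl get storageclass rook-cephfs                 # provisioner: rook-ceph.cephfs.csi.ceph.com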

2. Test

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  namespace: default
  labels:
    app: nginx-deploy
spec:
  selector:
    matchLabels:
      app: nginx-deploy
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx-deploy
    spec:
      containers:
      - name: nginx-deploy
        image: nginx
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
        - name: nginx-html-storage
          mountPath: /usr/share/nginx/html
      volumes:
        - name: localtime
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai
        - name: nginx-html-storage
          persistentVolumeClaim:
            claimName: nginx-pv-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pv-claim
  labels:
    app: nginx-deploy
spec:
  storageClassName: rook-cephfs
  accessModes:
    - ReadWriteMany  ## what would happen if this were ReadWriteOnce?
  resources:
    requests:
      storage: 10Mi


Test: create the Deployment, modify the page, delete the Deployment, then create it again. Check whether the new Deployment binds to the PVC successfully and whether the data is still there.
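A sketch of this test, assuming the manifest above has been saved as nginx-deploy.yaml:

kubectl apply -f nginx-deploy.yaml
kubectl exec deploy/nginx-deploy -- sh -c 'echo hello-cephfs > /usr/share/nginx/html/index.html'

kubectl delete deploy nginx-deploy     # the PVC nginx-pv-claim is left in place
kubectl apply -f nginx-deploy.yaml     # new Pods bind to the same PVC
kubectl exec deploy/nginx-deploy -- cat /usr/share/nginx/html/index.html   # still prints hello-cephfs

Because the PVC is ReadWriteMany, all three replicas mount the same CephFS volume, so a change made through one Pod is visible from the others.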

IV. PVC Expansion

Refer to the CSI (Container Storage Interface) documentation:


Volume expansion: Ceph Docs

Dynamic volume expansion

Expansion was already enabled when the StorageClass was created (allowVolumeExpansion: true). Test: in the container's mount directory, run curl -O on a large file; by default the download fails because the volume is too small. Then edit the original PVC to enlarge the volume. Note that volumes can only be expanded, never shrunk.
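A minimal sketch of the expansion step, assuming the nginx-pv-claim PVC from the CephFS test and an illustrative new size of 20Mi:

# Grow the PVC by patching spec.resources.requests.storage (expand only, never shrink).
kubectl patch pvc nginx-pv-claim -n default \
  --type merge -p '{"spec":{"resources":{"requests":{"storage":"20Mi"}}}}'

# Watch the capacity change once the CSI driver finishes the resize.
kubectl get pvc nginx-pv-claim -n default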


A stateful application (3 replicas) uses block storage: each replica works on the PV mounted through its own PVC, and the data is not lost.


A stateless application (3 replicas) uses shared storage: all replicas work on the single PV mounted through one shared PVC, and the data is also not lost.


  • Other Pods can also modify the data

  • A stateful MySQL instance can be made the master node... MySQL - Master ---- PV

  • Stateless read-only MySQL instances mount the master's PVC (see the sketch below).
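To illustrate the last bullet at the volume level only, here is a hedged sketch of a second workload mounting an already-bound RWX PVC read-only. It reuses the nginx-pv-claim PVC from the CephFS test purely as an example and does not set up real MySQL replication; the Pod name is illustrative.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: readonly-reader          # illustrative name
  namespace: default
spec:
  containers:
  - name: reader
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
      readOnly: true             # this container can read but not write the shared data
  volumes:
  - name: shared-data
    persistentVolumeClaim:
      claimName: nginx-pv-claim  # existing PVC provisioned by the rook-cephfs StorageClass
      readOnly: true
EOF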

