
[Kubernetes] k8s Security Management Explained [Detailed Configuration of role and clusterrole Authorization]

  • May 12, 2022

token verification && kubeconfig verification
====================================================================================


This topic is too long for one post, so it is split up. For token verification && kubeconfig verification, go to this post:


[Kubernetes] k8s Security Management Explained [k8s framework, token verification and kubeconfig verification in detail]


Authorization
=================================================================


Understanding the authorization-mode setting




  • Config file: /etc/kubernetes/manifests/kube-apiserver.yaml


The authorization mode is configured in this file, at around line 20; the possible values are listed below.


After changing the mode, restart the service for it to take effect: systemctl restart kubelet (kube-apiserver runs as a static pod, so the kubelet recreates it when this manifest changes).


[root@master sefe]# cat -n /etc/kubernetes/manifests/kube-apiserver.yaml| egrep mode


20 - --authorization-mode=Node,RBAC


[root@master sefe]#


--authorization-mode=Node,RBAC #the default


  • --authorization-mode=AlwaysAllow #allow all requests; everything is accessible whether or not permissions were granted

  • --authorization-mode=AlwaysDeny #deny all requests; nothing is accessible even if permissions were granted [this does not affect the admin kubeconfig /etc/kubernetes/admin.conf]

  • --authorization-mode=ABAC


Attribute-Based Access Control #not flexible enough and effectively abandoned


  • --authorization-mode=RBAC #the most common choice; in practically all cases this is what you use


Role Based Access Control


  • --authorization-mode=Node


The Node authorizer is mainly used by the kubelet on each node when it talks to the apiserver; everything else is normally authorized by the RBAC authorizer.
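For reference (standard Kubernetes behavior, not specific to this cluster): the Node authorizer identifies kubelets by their client credentials, which must carry the user name system:node:&lt;nodeName&gt; and the group system:nodes. A quick way to see which credentials a kubelet presents is to look at its kubeconfig:

# the kubelet's client certificate is referenced here; its subject is
# expected to be CN=system:node:<nodeName>, O=system:nodes
grep 'client-certificate' /etc/kubernetes/kubelet.conf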


AlwaysAllow && AlwaysDeny




  • These two are straightforward: allow everything or deny everything.


I'll test with allow-all here.


There is an existing binding; delete it first.


[root@master sefe]# kubectl get clusterrolebindings.rbac.authorization.k8s.io test1


NAME ROLE AGE


test1 ClusterRole/cluster-admin 28m


[root@master sefe]# kubectl delete clusterrolebindings.rbac.authorization.k8s.io test1


clusterrolebinding.rbac.authorization.k8s.io "test1" deleted


[root@master sefe]#

I'm continuing the tests with a kubeconfig file here; read my kubeconfig verification post above first, or this part won't make sense.

[root@master sefe]# ls


ca.crt ccx.crt ccx.csr ccx.key csr.yaml kc1


[root@master sefe]#


[root@master sefe]# kubectl --kubeconfig=kc1 get pods


Error from server (Forbidden): pods is forbidden: User "ccx" cannot list resource "pods" in API group "" in the namespace "default"


[root@master sefe]#


  • Config file


Change it to AlwaysAllow, then restart the service.


[root@master sefe]# vi /etc/kubernetes/manifests/kube-apiserver.yaml


[root@master sefe]# cat -n /etc/kubernetes/manifests/kube-apiserver.yaml| egrep mode


20 #- --authorization-mode=Node,RBAC


21 --authorization-mode=AlwaysAllow


[root@master sefe]#


[root@master sefe]# !sys


systemctl restart kubelet


[root@master sefe]#


[root@master ~]# systemctl restart kubelet


  • Test

After the restart, this error keeps appearing for quite a while, because the apiserver has not come up.

[root@master ~]# systemctl restart kubelet


[root@master ~]#


[root@master ~]# kubectl get pods


The connection to the server 192.168.59.142:6443 was refused - did you specify the right host or port?


[root@master ~]#

The apiserver stays down for ages; absurd.

[root@master kubernetes]# docker ps -a | grep api


525821586ed5 4d217480042e "kube-apiserver --ad…" 15 hours ago Exited (137) 7 minutes ago k8s_kube-apiserver_kube-apiserver-master_kube-system_654a890f23facb6552042e41f67f4aef_1


6b64a8bfc748 registry.aliyuncs.com/google_containers/pause:3.4.1 "/pause" 15 hours ago Up 15 hours k8s_POD_kube-apiserver-master_kube-system_654a890f23facb6552042e41f67f4aef_0


[root@master kubernetes]#
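Before giving up, a reasonable next step is to read the dead container's logs and double-check the edited manifest line (the container ID below is the one shown by docker ps -a above; yours will differ):

# inspect the exited kube-apiserver container's logs for the startup error
docker logs --tail 20 525821586ed5

# also verify the edited line is still a valid YAML list item, i.e. the
# leading "- " survived the edit:
#     - --authorization-mode=AlwaysAllow
grep -n 'authorization-mode' /etc/kubernetes/manifests/kube-apiserver.yaml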


  • I couldn't finish this test: after the change, the cluster broke. The apiserver never came up, and kubelet kept logging the errors below (the same ones appear in /var/log/messages). I never found the cause, so I gave up. It's enough to know this mode exists; allow-all and deny-all aren't recommended in normal use anyway.


[root@master ~]# systemctl status kubelet


● kubelet.service - kubelet: The Kubernetes Node Agent


Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)


Drop-In: /usr/lib/systemd/system/kubelet.service.d


└─10-kubeadm.conf


Active: active (running) since Thu 2021-11-04 09:55:26 CST; 55s ago


Docs: https://kubernetes.io/docs/


Main PID: 29495 (kubelet)


Tasks: 45


Memory: 64.8M


CGroup: /system.slice/kubelet.service


├─29495 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-infra-container-image=regi...


└─30592 /opt/cni/bin/calico


Nov 04 09:56:19 master kubelet[29495]: I1104 09:56:19.238570 29495 kubelet.go:461] "Kubelet nodes not sync"


Nov 04 09:56:19 master kubelet[29495]: I1104 09:56:19.250440 29495 kubelet.go:461] "Kubelet nodes not sync"


Nov 04 09:56:19 master kubelet[29495]: I1104 09:56:19.394574 29495 kubelet.go:461] "Kubelet nodes not sync"


Nov 04 09:56:19 master kubelet[29495]: I1104 09:56:19.809471 29495 kubelet.go:461] "Kubelet nodes not sync"


Nov 04 09:56:20 master kubelet[29495]: I1104 09:56:20.206978 29495 kubelet.go:461] "Kubelet nodes not sync"


Nov 04 09:56:20 master kubelet[29495]: I1104 09:56:20.237387 29495 kubelet.go:461] "Kubelet nodes not sync"


Nov 04 09:56:20 master kubelet[29495]: I1104 09:56:20.250606 29495 kubelet.go:461] "Kubelet nodes not sync"


Nov 04 09:56:20 master kubelet[29495]: I1104 09:56:20.395295 29495 kubelet.go:461] "Kubelet nodes not sync"


Nov 04 09:56:20 master kubelet[29495]: E1104 09:56:20.501094 29495 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://192.168.59.142:6443/apis/coordination.k8s.io/v1/namespace...onnection refused


Nov 04 09:56:20 master kubelet[29495]: I1104 09:56:20.809833 29495 kubelet.go:461] "Kubelet nodes not sync"


Hint: Some lines were ellipsized, use -l to show in full.


[root@master ~]#

The errors seen here are the same as above.

[root@master ~]# tail -f /var/log/messages


RBAC mode explained [important]




If you're interested, see the official documentation:


RBAC

Viewing the admin permissions

[root@master ~]# kubectl describe clusterrole admin


Name: admin


Labels: kubernetes.io/bootstrapping=rbac-defaults


Annotations: rbac.authorization.kubernetes.io/autoupdate: true


PolicyRule:


Resources Non-Resource URLs Resource Names Verbs




rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch]


roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch]


configmaps [] [] [create delete deletecollection patch update get list watch]


endpoints [] [] [create delete deletecollection patch update get list watch]


persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch]


pods [] [] [create delete deletecollection patch update get list watch]


replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch]


replicationcontrollers [] [] [create delete deletecollection patch update get list watch]


services [] [] [create delete deletecollection patch update get list watch]


daemonsets.apps [] [] [create delete deletecollection patch update get list watch]


deployments.apps/scale [] [] [create delete deletecollection patch update get list watch]


deployments.apps [] [] [create delete deletecollection patch update get list watch]


replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch]


replicasets.apps [] [] [create delete deletecollection patch update get list watch]


statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch]


statefulsets.apps [] [] [create delete deletecollection patch update get list watch]


horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch]


cronjobs.batch [] [] [create delete deletecollection patch update get list watch]


jobs.batch [] [] [create delete deletecollection patch update get list watch]


daemonsets.extensions [] [] [create delete deletecollection patch update get list watch]


deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch]


deployments.extensions [] [] [create delete deletecollection patch update get list watch]


ingresses.extensions [] [] [create delete deletecollection patch update get list watch]


networkpolicies.extensions [] [] [create delete deletecollection patch update get list watch]


replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch]


replicasets.extensions [] [] [create delete deletecollection patch update get list watch]


replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch]


ingresses.networking.k8s.io [] [] [create delete deletecollection patch update get list watch]


networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update get list watch]


poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch]


deployments.apps/rollback [] [] [create delete deletecollection patch update]


deployments.extensions/rollback [] [] [create delete deletecollection patch update]


localsubjectaccessreviews.authorization.k8s.io [] [] [create]


pods/attach [] [] [get list watch create delete deletecollection patch update]


pods/exec [] [] [get list watch create delete deletecollection patch update]


pods/portforward [] [] [get list watch create delete deletecollection patch update]


pods/proxy [] [] [get list watch create delete deletecollection patch update]


secrets [] [] [get list watch create delete deletecollection patch update]


services/proxy [] [] [get list watch create delete deletecollection patch update]


bindings [] [] [get list watch]


events [] [] [get list watch]


limitranges [] [] [get list watch]


namespaces/status [] [] [get list watch]


namespaces [] [] [get list watch]


persistentvolumeclaims/status [] [] [get list watch]


pods/log [] [] [get list watch]


pods/status [] [] [get list watch]


replicationcontrollers/status [] [] [get list watch]


resourcequotas/status [] [] [get list watch]


resourcequotas [] [] [get list watch]


services/status [] [] [get list watch]


controllerrevisions.apps [] [] [get list watch]


daemonsets.apps/status [] [] [get list watch]


deployments.apps/status [] [] [get list watch]


replicasets.apps/status [] [] [get list watch]


statefulsets.apps/status [] [] [get list watch]


horizontalpodautoscalers.autoscaling/status [] [] [get list watch]


cronjobs.batch/status [] [] [get list watch]


jobs.batch/status [] [] [get list watch]


daemonsets.extensions/status [] [] [get list watch]


deployments.extensions/status [] [] [get list watch]


ingresses.extensions/status [] [] [get list watch]


replicasets.extensions/status [] [] [get list watch]


nodes.metrics.k8s.io [] [] [get list watch]


pods.metrics.k8s.io [] [] [get list watch]


ingresses.networking.k8s.io/status [] [] [get list watch]


poddisruptionbudgets.policy/status [] [] [get list watch]


serviceaccounts [] [] [impersonate create delete deletecollection patch update get list watch]


[root@master ~]#

Basic concepts

  • RBAC (Role-Based Access Control) allows authorization policies to be configured dynamically through the Kubernetes API.

  • It was introduced in k8s v1.5, promoted to Beta in v1.6, and became the default for kubeadm installs. Compared with the other access-control modes, RBAC has these advantages:

  • Full coverage of both resource and non-resource permissions in the cluster

  • All of RBAC is implemented by a handful of API objects which, like any other API object, can be manipulated with kubectl or the API

  • Policies can be adjusted at runtime, with no API Server restart needed

  • To use the RBAC mode, add --authorization-mode=RBAC to the API Server's startup arguments (a minimal end-to-end sketch follows this list)
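As a minimal sketch of what these API objects look like (the names pod-reader and alice are made up for illustration), here is a Role plus a RoleBinding granting read access to pods in one namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader           # hypothetical role name
rules:
- apiGroups: [""]            # "" selects the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods            # hypothetical binding name
subjects:
- kind: User
  name: alice                # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io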

How it works

  • The flow chart is as follows [the points below break it down]



  • Roles

A role is a set of permissions. The rules are purely permissive (allow); there is no such thing as a deny rule.

A Role can only grant access to resources inside one namespace.

  • Role: grants access within a specific namespace

  • ClusterRole: grants access across all namespaces [for cluster-scoped authorization you need a ClusterRole]

  • Role bindings

  • RoleBinding: binds a role to a subject [pairs with Role]

  • ClusterRoleBinding: binds a cluster role to a subject [pairs with ClusterRole]



  • Subjects

  • User: a user

  • Group: a user group

  • ServiceAccount: a service account

  • A user, group, or service account is bound to a role that carries certain permissions and thereby inherits them, much like Alibaba Cloud RAM authorization. Note that a Role is only effective in the namespace it was defined in, while a ClusterRole works across all namespaces.

  • RoleBinding pairs with Role; ClusterRoleBinding pairs with ClusterRole. A sketch of the three subject kinds follows.
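A minimal sketch of how the three subject kinds appear in a binding's subjects list (the names below are made up):

subjects:
- kind: User
  name: ccx                        # an ordinary user
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: devs                       # hypothetical group
  apiGroup: rbac.authorization.k8s.io
- kind: ServiceAccount
  name: build-bot                  # hypothetical service account
  namespace: default               # ServiceAccounts are namespaced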


Parameter values for ClusterRole and Role

  • 1. apiGroups: allowed values

This one matters: it is a parent/child relationship [list them with kubectl api-versions] [generally two formats: /xx and xx/yy]

"", "apps", "autoscaling", "batch"


  • 2. resources: allowed values

"services", "endpoints", "pods", "secrets", "configmaps", "crontabs", "deployments", "jobs", "nodes", "rolebindings", "clusterroles", "daemonsets", "replicasets", "statefulsets", "horizontalpodautoscalers", "replicationcontrollers", "cronjobs"


  • 3. verbs: allowed values

"get", "list", "watch", "create", "update", "patch", "delete", "exec"


  • 4. How apiGroups and resources correspond (a discovery tip follows this list)

  • apiGroups: [""] # the empty string "" selects the core API group

resources: ["pods","pods/log","pods/exec", "pods/attach", "pods/status", "events", "replicationcontrollers", "services", "configmaps", "persistentvolumeclaims"]

  • apiGroups: [ "apps"]

resources: ["deployments", "daemonsets", "statefulsets","replicasets"]

role test walkthrough

Creating a role

  • Besides the method I use below, you can also write the role file this way


kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # restrict access to the minio namespace
  namespace: minio
  # role name
  name: role-minio-service-minio
# rules control whether the namespace panels in the dashboard are visible
rules:
- apiGroups: [""] # the empty string "" selects the core API group
  #resources: ["pods","pods/log","pods/exec", "pods/attach", "pods/status", "events", "replicationcontrollers", "services", "configmaps", "persistentvolumeclaims"]
  resources: ["namespaces","pods","pods/log","pods/exec", "pods/attach", "pods/status","services"]
  verbs: ["get", "watch", "list", "create", "update", "patch", "delete"]
- apiGroups: [ "apps"]
  resources: ["deployments", "daemonsets", "statefulsets","replicasets"]
  verbs: ["get", "list", "watch"]


  • Generate the role config file

We can generate the yaml file directly like this; any later changes can then simply be made against the yaml file.

If you need to adjust parameters in the generated yaml, see the Role parameter values above, which list the available options in detail.


[root@master ~]# kubectl create role role1 --verb=get,list --resource=pods --dry-run=client -o yaml > role1.yaml


[root@master ~]# cat role1.yaml


apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: role1
rules:
- apiGroups:
  - ""
  resources:
  - pods
  # to allow several verbs at once, list them like get and list below
  verbs:
  - get
  - list

[root@master ~]#

For example, let me add the create verb and the nodes resource:

[root@master ~]# mv role1.yaml sefe/


[root@master ~]# cd sefe/


[root@master sefe]# vi role1.yaml


[root@master sefe]# cat role1.yaml


apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: role1
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - create

[root@master sefe]#


  • Create the role

After later edits you can simply apply the file again and the previous permissions are overwritten; no need to delete and recreate.


[root@master ~]# kubectl apply -f role1.yaml


role.rbac.authorization.k8s.io/role1 created


[root@master ~]#


[root@master ~]# kubectl get role


NAME CREATED AT


role1 2021-11-05T07:34:45Z


[root@master ~]#


  • View details

You can see the permissions this role now has


[root@master sefe]# kubectl describe role role1


Name: role1


Labels: <none>


Annotations: <none>


PolicyRule:


Resources Non-Resource URLs Resource Names Verbs




jobs [] [] [get list create]


nodes [] [] [get list create]


pods [] [] [get list create]


[root@master sefe]#

Creating a rolebinding [binding a user]

  • Note: a user does not belong to any namespace

  • Before creating a rolebinding you need an existing role and the corresponding user name [a quick permission check follows the transcript below]

#rbind1 below is a custom name
#--role= selects the role
#--user= says which user to authorize


[root@master ~]# kubectl create rolebinding rbind1 --role=role1 --user=ccx


rolebinding.rbac.authorization.k8s.io/rbind1 created


[root@master ~]#


[root@master ~]# kubectl get rolebindings.rbac.authorization.k8s.io


NAME ROLE AGE


rbind1 Role/role1 5s


[root@master ~]#


[root@master sefe]# kubectl describe rolebindings.rbac.authorization.k8s.io rbind1


Name: rbind1


Labels: <none>


Annotations: <none>


Role:


Kind: Role


Name: role1


Subjects:


Kind Name Namespace




User ccx


[root@master sefe]#
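A quick way to confirm what the binding actually grants is kubectl auth can-i with impersonation (a sketch; it assumes admin credentials on the master and that the role and binding live in the safe namespace used by the tests below):

[root@master sefe]# kubectl auth can-i list pods --as ccx -n safe
yes
[root@master sefe]# kubectl auth can-i delete pods --as ccx -n safe
no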


  • Binding a ServiceAccount to a Role

I used the command form above; this is material I found elsewhere online that does the same thing with a file instead. Feel free to try this approach too.


[root@app01 k8s-user]# vim role-bind-minio-service-minio.yaml


kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  #namespace: minio
  name: role-bind-minio-service-monio # custom name
subjects:
- kind: ServiceAccount
  #namespace: minio
  name: username # which user/service account to authorize
roleRef:
  kind: Role
  # role name
  name: rolename # the role's name
  apiGroup: rbac.authorization.k8s.io


[root@app01 k8s-user]# kubectl apply -f role-bind-minio-service-minio.yaml


rolebinding.rbac.authorization.k8s.io/role-bind-minio-service-monio created


[root@app01 k8s-user]# kubectl get rolebinding -n minio -owide


NAME AGE ROLE USERS GROUPS SERVICEACCOUNTS


role-bind-minio-service-monio 29s Role/role-minio-service-minio minio/service-minio


[root@app01 k8s-user]#

Testing

  • Note: I have already authorized the ccx user and prepared its config file [see the steps in the kubeconfig verification post], so here I test directly from a host outside the cluster [that cluster IP was fully set up in the kubeconfig post, so I can use it directly]

  • Query test

We granted ccx the pod permissions above, yet using them directly still fails:

[root@master2 ~]# kubectl --kubeconfig=kc1 get pods


Error from server (Forbidden): pods is forbidden: User "ccx" cannot list resource "pods" in API group "" in the namespace "default"


#This is because, as explained earlier, a role only takes effect within its namespace [ours was created in the safe namespace]

#so we need to specify the namespace


[root@master2 ~]# kubectl --kubeconfig=kc1 get pods -n safe


No resources found in safe namespace.


[root@master2 ~]#

Of course, this works anywhere the kc1 config file is present [here on the cluster master]; a tweak to avoid typing -n safe every time is sketched after this transcript.

[root@master sefe]# ls


ca.crt ccx.crt ccx.csr ccx.key csr.yaml kc1 role1.yaml


[root@master sefe]# kubectl --kubeconfig=kc1 get pods -n safe


No resources found in safe namespace.


[root@master sefe]#
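To avoid typing -n safe every time, you could bake a default namespace into kc1; a sketch, assuming the context inside kc1 is named context1 (check with kubectl config get-contexts --kubeconfig=kc1):

kubectl config set-context context1 --namespace=safe --kubeconfig=kc1
# from now on, "kubectl --kubeconfig=kc1 get pods" targets safe by default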


  • Pod creation test

To make the effect easier to see, I'll keep working on the host outside the cluster.

#copy a pod file over


[root@master sefe]# scp ../pod1.yaml 192.168.59.151:~


root@192.168.59.151's password:


pod1.yaml 100% 431 424.6KB/s 00:00


[root@master sefe]#

Back on the test host: listing nodes still fails (nodes are cluster-scoped, so a namespaced Role cannot grant access to them even though role1 lists nodes), but creating a pod succeeds normally:

[root@master2 ~]# kubectl --kubeconfig=kc1 get nodes -n safe


Error from server (Forbidden): nodes is forbidden: User "ccx" cannot list resource "nodes" in API group "" at the cluster scope


[root@master2 ~]# kubectl --kubeconfig=kc1 get pods -n safe


No resources found in safe namespace.


[root@master2 ~]#


[root@master2 ~]# export KUBECONFIG=kc1


[root@master2 ~]#


[root@master2 ~]# kubectl apply -f pod1.yaml -n safe


pod/pod1 created


[root@master2 ~]#


[root@master2 ~]# kubectl get pods -n safe


NAME READY STATUS RESTARTS AGE


pod1 1/1 Running 0 8s


[root@master2 ~]#

If you don't specify the namespace, the pod is created locally instead, and since there is no image available locally its status stays Pending:

[root@master2 ~]# kubectl apply -f pod1.yaml


pod/pod1 created


[root@master2 ~]# kubectl get pods


NAME READY STATUS RESTARTS AGE


pod1 0/1 Pending 0 3s


[root@master2 ~]#

Now back on the cluster master, you can see that pod was created successfully:

[root@master sefe]# kubectl get pods


NAME READY STATUS RESTARTS AGE


pod1 1/1 Running 0 32s


[root@master sefe]#


  • Pod deletion test

Deletion fails, because the role has not been granted the delete verb.


[root@master2 ~]# kubectl delete -f pod1.yaml -n safe


Error from server (Forbidden): error when deleting "pod1.yaml": pods "pod1" is forbidden: User "ccx" cannot delete resource "pods" in API group "" in the namespace "safe"


[root@master2 ~]#

No problem, let's just grant the delete permission.

#on the cluster master node


[root@master sefe]# cat role1.yaml


apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: role1
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - pods
  - jobs
  verbs:
  - get
  - list
  - create
  - delete


[root@master sefe]# kubectl apply -f role1.yaml


role.rbac.authorization.k8s.io/role1 configured


[root@master sefe]#


#on the test node


[root@master2 ~]# kubectl delete -f pod1.yaml -n safe


pod "pod1" deleted


[root@master2 ~]#


[root@master2 ~]# kubectl get pods -n safe


No resources found in safe namespace.


[root@master2 ~]#
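The same impersonation check as before now reflects the new verb (a sketch, run with admin credentials on the master):

[root@master sefe]# kubectl auth can-i delete pods --as ccx -n safe
yes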

Handling the Error from server (Forbidden) error

  • The error looks like this


[root@master2 ~]# kubectl --kubeconfig=kc1 get pods -n safe


Error from server (Forbidden): pods is forbidden: User "ccx" cannot list resource "pods" in API group "" in the namespace "safe"


[root@master2 ~]#


  • I asked the instructor about this earlier

The instructor was probably stumped too and never gave me an answer; I eventually worked it out myself.




  • This is not a problem in the role config file; the authorization of the ccx user in kc1 went wrong. To troubleshoot, go back to the kubeconfig post and retrace it step by step [focus on the csr and the authorization steps].



  • Summary: the kubeconfig authorization above probably has nothing to do with this. If the user was authorized the earlier way, the role seems to have no effect [role exists precisely for authorization, but the earlier method granted admin outright, overriding the role], so my fix above was probably wrong, and the instructor never corrected me either; the training fee didn't feel well spent. In short, if some permissions work and others don't, dig into the role side rather than copying my workaround [I believe the role configuration steps above are correct; most likely my cluster was just messed up from too much experimenting. When I get around to building a fresh cluster, I'll redo the role experiments].

Granting a role's resources separately

  • An apiGroups entry is really an independent block: each one defines one set of permissions. So to give different components different permissions, just add as many apiGroups entries as needed, written separately.

  • For example, to grant pod and deployment permissions separately, write it as below [two apiGroups entries are enough]

Written this way, the user can create deployments and manage replica counts (a usage sketch follows the file).


[root@master sefe]# cat role2.yaml


apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: role1
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - create
  - delete
- apiGroups:
  - "apps"
  resources:
  - deployments
  - deployments/scale
  verbs:
  - get
  - list
  - create
  - delete
  - patch

[root@master sefe]#
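With role2.yaml applied (it reuses the name role1, so it replaces the earlier rules), a scale operation through the restricted kubeconfig should work; a sketch, assuming a hypothetical deployment named web:

[root@master2 ~]# kubectl --kubeconfig=kc1 create deployment web --image=nginx -n safe
deployment.apps/web created
[root@master2 ~]# kubectl --kubeconfig=kc1 scale deployment web --replicas=2 -n safe
deployment.apps/web scaled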


  • That's it for role usage; experiment more on your own.

Now let's delete these two objects and move on to the ClusterRole tests.


[root@master sefe]# kubectl delete -f role1.yaml


role.rbac.authorization.k8s.io "role1" deleted


[root@master sefe]# kubectl delete rolebindings.rbac.authorization.k8s.io rbind1


rolebinding.rbac.authorization.k8s.io "rbind1" deleted


[root@master sefe]#

clusterrole test walkthrough

Creating a clusterrole

  • Besides the method below, you can also use this approach


kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # ClusterRole is a cluster-scoped object, so there is no "namespace" field here
  # role name
  name: cluster-role-paas-basic-service-minio
# rules control whether the namespace panels in the dashboard are visible
rules:
- apiGroups: ["rbac.authorization.k8s.io",""] # the empty string "" selects the core API group
  #resources: ["pods","pods/log","pods/exec", "pods/attach", "pods/status", "events", "replicationcontrollers", "services", "configmaps", "persistentvolumeclaims"]
  resources: ["pods","pods/log","pods/exec", "pods/attach", "pods/status","services"]
  verbs: ["get", "watch", "list", "create", "update", "patch", "delete"]
- apiGroups: [ "apps"]
  resources: ["namespaces","deployments", "daemonsets", "statefulsets"]
  verbs: ["get", "list", "watch"]


  • Note: this is configured the same way as a role; the only difference is that kind: Role in the yaml becomes kind: ClusterRole

  • Generate the clusterrole config file

We can generate the yaml file directly like this; any later changes can then simply be made against the yaml file.

If you need to adjust parameters in the generated yaml, see the ClusterRole parameter values above, which list the available options in detail.


[root@master sefe]# kubectl create clusterrole crole1 --verb=get,create,delete --resource=deploy,pod,svc --dry-run=client -o yaml > crole1.yaml


[root@master sefe]# cat crole1.yaml


apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: crole1
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - services
  verbs:
  - get
  - create
  - delete
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - get
  - create
  - delete

[root@master sefe]#


  • Create the clusterrole

After later edits you can simply apply the file again and the previous permissions are overwritten; no need to delete and recreate.


[root@master sefe]# kubectl apply -f crole1.yaml


clusterrole.rbac.authorization.k8s.io/crole1 created


[root@master sefe]#


[root@master sefe]# kubectl get clusterrole crole1


NAME CREATED AT


crole1 2021-11-05T10:09:04Z


[root@master sefe]#


  • View details


[root@master sefe]# kubectl describe clusterrole crole1


Name: crole1


Labels: <none>


Annotations: <none>


PolicyRule:


Resources Non-Resource URLs Resource Names Verbs




pods [] [] [get create delete]


services [] [] [get create delete]


deployments.apps [] [] [get create delete]

Creating a clusterrolebinding [binding a user]

  • Note: this takes effect in all namespaces

  • Before creating a clusterrolebinding you need an existing clusterrole and the corresponding user name

#cbind1 below is a custom name
#--clusterrole= selects the clusterrole
#--user= says which user to authorize


[root@master ~]#


[root@master sefe]# kubectl create clusterrolebinding cbind1 --clusterrole=crole1 --user=ccx


clusterrolebinding.rbac.authorization.k8s.io/cbind1 created


[root@master sefe]#


[root@master sefe]# kubectl get clusterrolebindings.rbac.authorization.k8s.io cbind1


NAME ROLE AGE


cbind1 ClusterRole/crole1 16s


[root@master sefe]#

As said before, this takes effect in every namespace, so whichever namespace we query, this binding shows up [clusterrolebindings are cluster-scoped, so the -n flag makes no difference]:

[root@master sefe]# kubectl get clusterrolebindings.rbac.authorization.k8s.io -n default cbind1


NAME ROLE AGE


cbind1 ClusterRole/crole1 28s


[root@master sefe]#


[root@master sefe]# kubectl get clusterrolebindings.rbac.authorization.k8s.io -n ds cbind1


NAME ROLE AGE


cbind1 ClusterRole/crole1 35s


[root@master sefe]#


  • View details


[root@master sefe]# kubectl describe clusterrolebindings.rbac.authorization.k8s.io cbind1


Name: cbind1


Labels: <none>


Annotations: <none>


Role:


Kind: ClusterRole
