
Deploying a JMeter Cluster on Kubernetes

Author: zuozewei
  • January 21, 2022


I. Prerequisites

Kubernetes > 1.16

II. Deployment topology


Tests are launched from the master node: the master sends the test script to the slave nodes, and the slave pods/nodes do the actual load generation.
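
Conceptually, a distributed run is just the master invoking JMeter in non-GUI mode with the slave addresses passed to -R. A minimal sketch (the jmx file name and the IPs are placeholders; the real run appears in section VI):

# Non-GUI (-n) run of test.jmx, remote-starting (-R) two slave engines.
jmeter -n -t test.jmx -R10.100.113.31,10.100.167.173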


Deployment file list:


  • jmeter_cluster_create.sh — asks for a unique namespace, then creates that namespace and all of the components (JMeter master, slaves, InfluxDB, and Grafana).

  • Note: before launching, set the number of slave replicas in the jmeter_slaves_deploy.yaml file; the replica count should normally match the number of worker nodes you have.

  • jmeter_master_configmap.yaml — application configuration for the JMeter master.

  • jmeter_master_deploy.yaml — deployment manifest for the JMeter master.

  • jmeter_slaves_deploy.yaml — deployment manifest for the JMeter slaves.

  • jmeter_slaves_svc.yaml — service manifest for the JMeter slaves. It is a headless service, which lets us fetch the JMeter slave pod IP addresses directly, with no need for DNS round-robin or polling. The file exists so the slave pod IPs are easy to feed straight to the JMeter master (see the sketch after this list).

  • jmeter_influxdb_configmap.yaml — application configuration for the InfluxDB deployment. It configures InfluxDB to expose port 2003, in addition to the default InfluxDB port, in order to support the graphite storage method, so the same InfluxDB deployment can back both JMeter backend-listener approaches (graphite and InfluxDB).

  • jmeter_influxdb_deploy.yaml — deployment manifest for InfluxDB.

  • jmeter_influxdb_svc.yaml — service manifest for InfluxDB.

  • jmeter_grafana_deploy.yaml — deployment manifest for Grafana.

  • jmeter_grafana_svc.yaml — service manifest for the Grafana deployment. It uses NodePort by default; when running in a public cloud you can change it to LoadBalancer (and put a CNAME in front to shorten the name to an FQDN).

  • dashboard.sh — a script that automatically creates the following:

  • (1) an InfluxDB database (jmeter) in the InfluxDB pod

  • (2) a data source (jmeterdb) in Grafana

  • start_test.sh — runs a JMeter test script automatically, without manually logging into the JMeter master shell; it asks for the location of the test script, copies it into the JMeter master pod, and starts the test against the JMeter slaves.

  • jmeter_stop.sh — stops a running test.

  • GrafanaJMeterTemplate.json — a pre-built Grafana dashboard for JMeter.

  • Dockerfile-base — builds the JMeter base image.

  • Dockerfile-master — builds the JMeter master image.

  • Dockerfile-slave — builds the JMeter slave image.

  • Dockerimages.sh — builds all of the Docker images in one batch.
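
Why the headless service matters: with clusterIP set to None, the service's DNS name resolves to the IPs of every slave pod behind it, so the master can collect them in one shot. A minimal sketch of that lookup, run from a pod in the same namespace (this is the same pipeline used in jmeter_master_configmap.yaml below):

# Resolve every slave pod IP behind the headless service, then join them
# into the comma-separated list that jmeter's -R option expects.
getent ahostsv4 jmeter-slaves-svc | cut -d' ' -f1 | sort -u | awk -v ORS=, '{print $1}' | sed 's/,$//'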

III. Docker images

1. Building the Docker images

Run the script to build the images:


./dockerimages.sh


Check the images:


$ docker images


Push the images to the registry:


$ sudo docker login --username=xxxx registry.cn-beijing.aliyuncs.com
$ sudo docker tag [ImageId] registry.cn-beijing.aliyuncs.com/7d/jmeter-base:[image tag]
$ sudo docker push registry.cn-beijing.aliyuncs.com/7d/jmeter-base:[image tag]

2. Deployment manifests

Dockerfile-base (builds the JMeter base image):


FROM alpine:latest
LABEL MAINTAINER 7DGroup

ARG JMETER_VERSION=5.2.1
# Timezone argument
ENV TZ=Asia/Shanghai

RUN apk update && \
    apk upgrade && \
    apk add --update openjdk8-jre wget tar bash unzip && \
    mkdir /jmeter && cd /jmeter/ && \
    wget https://mirrors.tuna.tsinghua.edu.cn/apache/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz && \
    tar -xzf apache-jmeter-$JMETER_VERSION.tgz && rm apache-jmeter-$JMETER_VERSION.tgz && \
    cd /jmeter/apache-jmeter-$JMETER_VERSION/ && \
    wget -q -O /tmp/JMeterPlugins-Standard-1.4.0.zip https://jmeter-plugins.org/downloads/file/JMeterPlugins-Standard-1.4.0.zip && \
    unzip -n /tmp/JMeterPlugins-Standard-1.4.0.zip && rm /tmp/JMeterPlugins-Standard-1.4.0.zip && \
    wget -q -O /jmeter/apache-jmeter-$JMETER_VERSION/lib/ext/pepper-box-1.0.jar https://github.com/raladev/load/blob/master/JARs/pepper-box-1.0.jar?raw=true && \
    cd /jmeter/apache-jmeter-$JMETER_VERSION/ && \
    wget -q -O /tmp/bzm-parallel-0.7.zip https://jmeter-plugins.org/files/packages/bzm-parallel-0.7.zip && \
    unzip -n /tmp/bzm-parallel-0.7.zip && rm /tmp/bzm-parallel-0.7.zip && \
    ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo "$TZ" > /etc/timezone

ENV JMETER_HOME /jmeter/apache-jmeter-$JMETER_VERSION/
ENV PATH $JMETER_HOME/bin:$PATH

# JMeter main configuration file
ADD jmeter.properties $JMETER_HOME/bin/jmeter.properties
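
Before layering the master and slave images on top, it can be worth smoke-testing the base image; a quick check, assuming the tag that Dockerimages.sh assigns:

# PATH already contains $JMETER_HOME/bin, so the jmeter launcher resolves.
docker run --rm registry.cn-beijing.aliyuncs.com/7d/jmeter-base:latest jmeter --version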


Dockerfile-master (builds the JMeter master image):


FROM registry.cn-beijing.aliyuncs.com/7d/jmeter-base:latest
MAINTAINER 7DGroup

EXPOSE 60000


Dockerfile-slave (builds the JMeter slave image):


FROM registry.cn-beijing.aliyuncs.com/7d/jmeter-base:latest
MAINTAINER 7DGroup

EXPOSE 1099 50000

ENTRYPOINT $JMETER_HOME/bin/jmeter-server \
    -Dserver.rmi.localport=50000 \
    -Dserver_port=1099 \
    -Jserver.rmi.ssl.disable=true
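
Pinning server.rmi.localport matters: without it, RMI would pick a random callback port that the Kubernetes Service could not declare. A quick local sanity check that the slave starts and listens on both exposed ports (Ctrl-C to stop):

# Run a throwaway slave container with the RMI registry port (1099) and
# the fixed RMI local port (50000) mapped to the host.
docker run --rm -p 1099:1099 -p 50000:50000 registry.cn-beijing.aliyuncs.com/7d/jmeter-slave:latest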


Dockerimages.sh (builds all of the Docker images in one batch):


#!/bin/bash -e
docker build --tag="registry.cn-beijing.aliyuncs.com/7d/jmeter-base:latest" -f Dockerfile-base .
docker build --tag="registry.cn-beijing.aliyuncs.com/7d/jmeter-master:latest" -f Dockerfile-master .
docker build --tag="registry.cn-beijing.aliyuncs.com/7d/jmeter-slave:latest" -f Dockerfile-slave .

IV. Kubernetes deployment

1. Deploying the components

Run jmeter_cluster_create.sh and enter a unique namespace:


./jmeter_cluster_create.sh


Wait a moment, then check that the pods are up:


$ kubectl get pods -n 7dgroup
NAME                               READY   STATUS    RESTARTS   AGE
influxdb-jmeter-584cf69759-j5m85   1/1     Running   2          5m
jmeter-grafana-6d5b75b7f6-57dxj    1/1     Running   1          5m
jmeter-master-84bfd5d96d-kthzm     1/1     Running   0          5m
jmeter-slaves-b5b75757-dxkxz       1/1     Running   0          5m
jmeter-slaves-b5b75757-n58jw       1/1     Running   0          5m
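
If the number of slave pods does not match your worker nodes, the Deployment can be rescaled in place rather than editing the YAML; a sketch using the example namespace above:

# Scale the slave Deployment to three replicas in the 7dgroup namespace.
kubectl scale deployment jmeter-slaves --replicas=3 -n 7dgroup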

2. Deployment manifests

2.1 The main execution script

jmeter_cluster_create.sh (creates the namespace and all of the components: JMeter master, slaves, InfluxDB, and Grafana):


#!/usr/bin/env bash
#Create multiple Jmeter namespaces on an existing kubernetes cluster
#Started On January 23, 2018

working_dir=`pwd`

echo "checking if kubectl is present"

if ! hash kubectl 2>/dev/null
then
    echo "'kubectl' was not found in PATH"
    echo "Kindly ensure that you can access an existing kubernetes cluster via kubectl"
    exit
fi

kubectl version --short

echo "Current list of namespaces on the kubernetes cluster:"
echo
kubectl get namespaces | grep -v NAME | awk '{print $1}'
echo

echo "Enter the name of the new tenant unique name, this will be used to create the namespace"
read tenant
echo

#Check if the namespace exists
kubectl get namespace $tenant > /dev/null 2>&1

if [ $? -eq 0 ]
then
    echo "Namespace $tenant already exists, please select a unique name"
    echo "Current list of namespaces on the kubernetes cluster"
    sleep 2
    kubectl get namespaces | grep -v NAME | awk '{print $1}'
    exit 1
fi

echo
echo "Creating Namespace: $tenant"
kubectl create namespace $tenant
echo "Namespace $tenant has been created"
echo

echo "Creating Jmeter slave nodes"
nodes=`kubectl get no | egrep -v "master|NAME" | wc -l`
echo
echo "Number of worker nodes on this cluster is " $nodes
echo
#echo "Creating $nodes Jmeter slave replicas and service"
echo
kubectl create -n $tenant -f $working_dir/jmeter_slaves_deploy.yaml
kubectl create -n $tenant -f $working_dir/jmeter_slaves_svc.yaml

echo "Creating Jmeter Master"
kubectl create -n $tenant -f $working_dir/jmeter_master_configmap.yaml
kubectl create -n $tenant -f $working_dir/jmeter_master_deploy.yaml

echo "Creating Influxdb and the service"
kubectl create -n $tenant -f $working_dir/jmeter_influxdb_configmap.yaml
kubectl create -n $tenant -f $working_dir/jmeter_influxdb_deploy.yaml
kubectl create -n $tenant -f $working_dir/jmeter_influxdb_svc.yaml

echo "Creating Grafana Deployment"
kubectl create -n $tenant -f $working_dir/jmeter_grafana_deploy.yaml
kubectl create -n $tenant -f $working_dir/jmeter_grafana_svc.yaml

echo "Printout Of the $tenant Objects"
echo
kubectl get -n $tenant all

echo namespace = $tenant > $working_dir/tenant_export

2.2 jmeter_slaves

jmeter_slaves_deploy.yaml (deployment manifest for the JMeter slaves):


apiVersion: apps/v1
kind: Deployment
metadata:
  name: jmeter-slaves
  labels:
    jmeter_mode: slave
spec:
  replicas: 2
  selector:
    matchLabels:
      jmeter_mode: slave
  template:
    metadata:
      labels:
        jmeter_mode: slave
    spec:
      containers:
      - name: jmslave
        image: registry.cn-beijing.aliyuncs.com/7d/jmeter-slave:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 1099
        - containerPort: 50000
        resources:
          limits:
            cpu: 4000m
            memory: 4Gi
          requests:
            cpu: 500m
            memory: 512Mi


jmeter_slaves_svc.yaml (service manifest for the JMeter slaves):


apiVersion: v1
kind: Service
metadata:
  name: jmeter-slaves-svc
  labels:
    jmeter_mode: slave
spec:
  clusterIP: None
  ports:
    - port: 1099
      name: first
      targetPort: 1099
    - port: 50000
      name: second
      targetPort: 50000

2.3 jmeter_master

jmeter_master_configmap.yaml (application configuration for the JMeter master):


apiVersion: v1
kind: ConfigMap
metadata:
  name: jmeter-load-test
  labels:
    app: influxdb-jmeter
data:
  load_test: |
    #!/bin/bash
    #Script created to invoke jmeter test script with the slave POD IP addresses
    #Script should be run like: ./load_test "path to the test script in jmx format"
    /jmeter/apache-jmeter-*/bin/jmeter -n -t $1 -R`getent ahostsv4 jmeter-slaves-svc | cut -d' ' -f1 | sort -u | awk -v ORS=, '{print $1}' | sed 's/,$//'`


jmeter_master_deploy.yaml (deployment manifest for the JMeter master):


apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: jmeter-master
  labels:
    jmeter_mode: master
spec:
  replicas: 1
  selector:
    matchLabels:
      jmeter_mode: master
  template:
    metadata:
      labels:
        jmeter_mode: master
    spec:
      containers:
      - name: jmmaster
        image: registry.cn-beijing.aliyuncs.com/7d/jmeter-master:latest
        imagePullPolicy: IfNotPresent
        command: [ "/bin/bash", "-c", "--" ]
        args: [ "while true; do sleep 30; done;" ]
        volumeMounts:
          - name: loadtest
            mountPath: /load_test
            subPath: "load_test"
        ports:
        - containerPort: 60000
        resources:
          limits:
            cpu: 4000m
            memory: 4Gi
          requests:
            cpu: 500m
            memory: 512Mi
      volumes:
      - name: loadtest
        configMap:
          name: jmeter-load-test

2.4 influxdb

jmeter_influxdb_configmap.yaml (application configuration for InfluxDB):


apiVersion: v1
kind: ConfigMap
metadata:
  name: influxdb-config
  labels:
    app: influxdb-jmeter
data:
  influxdb.conf: |
    [meta]
      dir = "/var/lib/influxdb/meta"

    [data]
      dir = "/var/lib/influxdb/data"
      engine = "tsm1"
      wal-dir = "/var/lib/influxdb/wal"

    # Configure the graphite api
    [[graphite]]
      enabled = true
      bind-address = ":2003" # If not set, is actually set to bind-address.
      database = "jmeter" # store graphite data in this database
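
With the graphite listener enabled, anything that speaks the plaintext graphite protocol ("path value timestamp") can write into the jmeter database through port 2003. A hedged smoke test, run from any pod in the same namespace that has nc available:

# Push one fake graphite metric at the influxdb service's graphite listener.
echo "jmeter.smoke.test 1 $(date +%s)" | nc jmeter-influxdb 2003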


jmeter_influxdb_deploy.yaml (deployment manifest for InfluxDB):


apiVersion: apps/v1
kind: Deployment
metadata:
  name: influxdb-jmeter
  labels:
    app: influxdb-jmeter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: influxdb-jmeter
  template:
    metadata:
      labels:
        app: influxdb-jmeter
    spec:
      containers:
        - image: influxdb
          imagePullPolicy: IfNotPresent
          name: influxdb
          volumeMounts:
          - name: config-volume
            mountPath: /etc/influxdb
          ports:
            - containerPort: 8083
              name: influx
            - containerPort: 8086
              name: api
            - containerPort: 2003
              name: graphite
      volumes:
      - name: config-volume
        configMap:
          name: influxdb-config


jmeter_influxdb_svc.yaml (service manifest for InfluxDB):


apiVersion: v1
kind: Service
metadata:
  name: jmeter-influxdb
  labels:
    app: influxdb-jmeter
spec:
  ports:
    - port: 8083
      name: http
      targetPort: 8083
    - port: 8086
      name: api
      targetPort: 8086
    - port: 2003
      name: graphite
      targetPort: 2003
  selector:
    app: influxdb-jmeter

2.5 grafana

jmeter_grafana_deploy.yaml (deployment manifest for Grafana):


apiVersion: apps/v1
kind: Deployment
metadata:
  name: jmeter-grafana
  labels:
    app: jmeter-grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jmeter-grafana
  template:
    metadata:
      labels:
        app: jmeter-grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:5.2.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
          protocol: TCP
        env:
        - name: GF_AUTH_BASIC_ENABLED
          value: "true"
        - name: GF_USERS_ALLOW_ORG_CREATE
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /


jmeter_grafana_svc.yaml (service manifest for the Grafana deployment):


apiVersion: v1
kind: Service
metadata:
  name: jmeter-grafana
  labels:
    app: jmeter-grafana
spec:
  ports:
    - port: 3000
      targetPort: 3000
  selector:
    app: jmeter-grafana
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/service-weight: 'jmeter-grafana: 100'
  name: jmeter-grafana-ingress
spec:
  rules:
  # Layer-7 host name
  - host: grafana-jmeter.7d.com
    http:
      paths:
      # Context path
      - path: /
        backend:
          serviceName: jmeter-grafana
          servicePort: 3000

V. Initializing the dashboard

1. Run the dashboard script

$ ./dashboard.sh


Check how the services were deployed:


$ kubectl get svc -n 7dgroup
NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
jmeter-grafana      NodePort    10.96.6.201    <none>        3000:31801/TCP               10m
jmeter-influxdb     ClusterIP   10.96.111.60   <none>        8083/TCP,8086/TCP,2003/TCP   10m
jmeter-slaves-svc   ClusterIP   None           <none>        1099/TCP,50000/TCP           10m


We can now reach Grafana at http://<any node_ip>:31801/.
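
If the NodePort is not reachable (firewalled, for example), kubectl port-forwarding is an alternative; a sketch with the example namespace:

# Forward local port 3000 to the grafana service, then browse http://127.0.0.1:3000/
kubectl port-forward svc/jmeter-grafana 3000:3000 -n 7dgroup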


Finally, import the dashboard template into Grafana:




If you don't like this template, you can also import the popular community template: 5496.



2. Deployment manifests

dashboard.sh (this script automatically creates the following):


  • (1) an InfluxDB database (jmeter) in the InfluxDB pod

  • (2) a data source (jmeterdb) in Grafana


#!/usr/bin/env bash
working_dir=`pwd`

#Get namespace variable
tenant=`awk '{print $NF}' $working_dir/tenant_export`

## Create the jmeter database automatically in Influxdb
echo "Creating Influxdb jmeter Database"

##Wait until the Influxdb Deployment is up and running
##influxdb_status=`kubectl get po -n $tenant | grep influxdb-jmeter | awk '{print $2}' | grep Running

influxdb_pod=`kubectl get po -n $tenant | grep influxdb-jmeter | awk '{print $1}'`
kubectl exec -ti -n $tenant $influxdb_pod -- influx -execute 'CREATE DATABASE jmeter'

## Create the influxdb datasource in Grafana
echo "Creating the Influxdb data source"
grafana_pod=`kubectl get po -n $tenant | grep jmeter-grafana | awk '{print $1}'`

## Make the load test script in the Jmeter master pod executable
#Get master pod details
master_pod=`kubectl get po -n $tenant | grep jmeter-master | awk '{print $1}'`
kubectl exec -ti -n $tenant $master_pod -- cp -r /load_test /jmeter/load_test
kubectl exec -ti -n $tenant $master_pod -- chmod 755 /jmeter/load_test

##kubectl cp $working_dir/influxdb-jmeter-datasource.json -n $tenant $grafana_pod:/influxdb-jmeter-datasource.json
kubectl exec -ti -n $tenant $grafana_pod -- curl 'http://admin:admin@127.0.0.1:3000/api/datasources' -X POST -H 'Content-Type: application/json;charset=UTF-8' --data-binary '{"name":"jmeterdb","type":"influxdb","url":"http://jmeter-influxdb:8086","access":"proxy","isDefault":true,"database":"jmeter","user":"admin","password":"admin"}'
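
To confirm the script did its job, both sides can be checked by hand; a sketch reusing the same pod lookups and the default admin:admin Grafana credentials from above (namespace is the example one):

# The jmeter database should appear in influxdb.
influxdb_pod=$(kubectl get po -n 7dgroup | grep influxdb-jmeter | awk '{print $1}')
kubectl exec -ti -n 7dgroup $influxdb_pod -- influx -execute 'SHOW DATABASES'

# The jmeterdb data source should appear in Grafana's API.
grafana_pod=$(kubectl get po -n 7dgroup | grep jmeter-grafana | awk '{print $1}')
kubectl exec -ti -n 7dgroup $grafana_pod -- curl -s 'http://admin:admin@127.0.0.1:3000/api/datasources'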

VI. Running the test

1. Run the script

$ ./start_test.sh


A test script is needed; this example uses web-test.jmx:


$ ./start_test.sh
Enter path to the jmx file web-test.jmx
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/jmeter/apache-jmeter-5.0/lib/log4j-slf4j-impl-2.11.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/jmeter/apache-jmeter-5.0/lib/ext/pepper-box-1.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Jul 25, 2020 11:30:58 AM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Creating summariser <summary>
Created the tree successfully using web-test.jmx
Configuring remote engine: 10.100.113.31
Configuring remote engine: 10.100.167.173
Starting remote engines
Starting the test @ Sat Jul 25 11:30:59 UTC 2020 (1595676659540)
Remote engines have been started
Waiting for possible Shutdown/StopTestNow/Heapdump message on port 4445
summary +    803 in 00:00:29 =   27.5/s Avg:   350 Min:   172 Max:  1477 Err:     0 (0.00%) Active: 40 Started: 40 Finished: 0
summary +   1300 in 00:00:29 =   45.3/s Avg:   367 Min:   172 Max:  2729 Err:     0 (0.00%) Active: 40 Started: 40 Finished: 0
summary =   2103 in 00:00:58 =   36.4/s Avg:   361 Min:   172 Max:  2729 Err:     0 (0.00%)
summary +   1400 in 00:00:31 =   45.4/s Avg:   342 Min:   160 Max:  2145 Err:     0 (0.00%) Active: 40 Started: 40 Finished: 0
summary =   3503 in 00:01:29 =   39.5/s Avg:   353 Min:   160 Max:  2729 Err:     0 (0.00%)
summary +   1400 in 00:00:31 =   45.2/s Avg:   352 Min:   169 Max:  2398 Err:     0 (0.00%) Active: 40 Started: 40 Finished: 0
summary =   4903 in 00:02:00 =   41.0/s Avg:   353 Min:   160 Max:  2729 Err:     0 (0.00%)
summary +   1400 in 00:00:30 =   46.8/s Avg:   344 Min:   151 Max:  1475 Err:     0 (0.00%) Active: 40 Started: 40 Finished: 0
summary =   6303 in 00:02:30 =   42.1/s Avg:   351 Min:   151 Max:  2729 Err:     0 (0.00%)
summary +   1200 in 00:00:28 =   43.5/s Avg:   354 Min:   163 Max:  2018 Err:     0 (0.00%) Active: 40 Started: 40 Finished: 0
summary =   7503 in 00:02:57 =   42.3/s Avg:   351 Min:   151 Max:  2729 Err:     0 (0.00%)
summary +   1300 in 00:00:30 =   43.7/s Avg:   456 Min:   173 Max:  2401 Err:     0 (0.00%) Active: 40 Started: 40 Finished: 0
summary =   8803 in 00:03:27 =   42.5/s Avg:   367 Min:   151 Max:  2729 Err:     0 (0.00%)
summary +   1400 in 00:00:31 =   44.9/s Avg:   349 Min:   158 Max:  2128 Err:     0 (0.00%) Active: 40 Started: 40 Finished: 0
summary =  10203 in 00:03:58 =   42.8/s Avg:   364 Min:   151 Max:  2729 Err:     0 (0.00%)
summary +   1400 in 00:00:32 =   44.3/s Avg:   351 Min:   166 Max:  1494 Err:     0 (0.00%) Active: 40 Started: 40 Finished: 0
summary =  11603 in 00:04:30 =   43.0/s Avg:   363 Min:   151 Max:  2729 Err:     0 (0.00%)
summary +   1400 in 00:00:30 =   46.9/s Avg:   344 Min:   165 Max:  2075 Err:     0 (0.00%) Active: 40 Started: 40 Finished: 0
summary =  13003 in 00:05:00 =   43.4/s Avg:   361 Min:   151 Max:  2729 Err:     0 (0.00%)
summary +   1300 in 00:00:28 =   46.0/s Avg:   352 Min:   159 Max:  1486 Err:     0 (0.00%) Active: 40 Started: 40 Finished: 0
summary =  14303 in 00:05:28 =   43.6/s Avg:   360 Min:   151 Max:  2729 Err:     0 (0.00%)
summary +   1400 in 00:00:31 =   45.6/s Avg:   339 Min:   163 Max:  2042 Err:     0 (0.00%) Active: 40 Started: 40 Finished: 0
summary =  15703 in 00:05:58 =   43.8/s Avg:   358 Min:   151 Max:  2729 Err:     0 (0.00%)
summary +    494 in 00:00:07 =   69.0/s Avg:   350 Min:   171 Max:  1499 Err:     0 (0.00%) Active: 0 Started: 40 Finished: 40
summary =  16197 in 00:06:06 =   44.3/s Avg:   358 Min:   151 Max:  2729 Err:     0 (0.00%)
Tidying up remote @ Sat Jul 25 11:37:09 UTC 2020 (1595677029361)
... end of run


View the test data:
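
The raw samples can also be inspected straight from InfluxDB, independently of Grafana; a sketch with the example namespace:

# List the measurements JMeter's graphite listener has written so far.
influxdb_pod=$(kubectl get po -n 7dgroup | grep influxdb-jmeter | awk '{print $1}')
kubectl exec -ti -n 7dgroup $influxdb_pod -- influx -database jmeter -execute 'SHOW MEASUREMENTS'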


2. Deployment manifests

start_test.sh (runs a JMeter test script automatically, without manually logging into the JMeter master shell; it asks for the location of the test script, copies it into the JMeter master pod, and starts the test against the JMeter slaves):


#!/usr/bin/env bash
#Script created to launch Jmeter tests directly from the current terminal without accessing the jmeter master pod.
#It requires that you supply the path to the jmx file
#After execution, the test script jmx file may be deleted from the pod itself, but not locally.

working_dir="`pwd`"

#Get namespace variable
tenant=`awk '{print $NF}' "$working_dir/tenant_export"`

jmx="$1"
[ -n "$jmx" ] || read -p 'Enter path to the jmx file ' jmx

if [ ! -f "$jmx" ]
then
    echo "Test script file was not found in PATH"
    echo "Kindly check and input the correct file path"
    exit
fi

test_name="$(basename "$jmx")"

#Get master pod details
master_pod=`kubectl get po -n $tenant | grep jmeter-master | awk '{print $1}'`
kubectl cp "$jmx" -n $tenant "$master_pod:/$test_name"

## Start the JMeter load test
kubectl exec -ti -n $tenant $master_pod -- /bin/bash /load_test "$test_name"


jmeter_stop.sh (stops a running test):


#!/usr/bin/env bash
#Script written to stop a running jmeter master test
#Kindly ensure you have the necessary kubeconfig

working_dir=`pwd`

#Get namespace variable
tenant=`awk '{print $NF}' $working_dir/tenant_export`
master_pod=`kubectl get po -n $tenant | grep jmeter-master | awk '{print $1}'`
kubectl -n $tenant exec -it $master_pod -- bash -c "/jmeter/apache-jmeter-*/bin/stoptest.sh"

VII. Summary

Problems with traditional JMeter:


  • When the required concurrency exceeds what a single node can generate, configuring and maintaining a multi-node environment is complex;

  • With the default configuration, multiple tests cannot run in parallel; extra processes have to be started with modified configurations;

  • Elastic scaling of test resources in a cloud environment is hard to support.


What Kubernetes-JMeter changes:


  • One-command installation of the load-generation nodes;

  • Multiple projects and tests can share the same pool of test resources in parallel (within the maximum concurrency; Kubernetes also provides RBAC, namespaces, and other management capabilities, so multiple users can share one cluster under resource limits), improving resource utilization;

  • Hooking into the Kubernetes HPA, load-generation nodes can be started and released automatically based on concurrency (a minimal sketch follows this list).
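
A minimal sketch of the HPA hookup mentioned in the last point; the CPU threshold and replica bounds are illustrative, and since rescaling mid-test changes the remote-engine list, in practice it is best applied between runs:

# Autoscale the slave Deployment on CPU utilization, between 2 and 10 replicas.
kubectl autoscale deployment jmeter-slaves --cpu-percent=70 --min=2 --max=10 -n 7dgroup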


Source code:


  • https://github.com/zuozewei/blog-example/tree/master/Kubernetes/k8s-jmeter-cluster


References:


  • [1]:https://github.com/kubernauts/jmeter-kubernetes
