k6 + k8s Distributed Load Testing

Technical Background

  • wsl2
  • kind
  • k8s
  • k6 Operator (Option 1)
  • k8s worker Deployment (Option 2)

    Aliyun Container Registry

    Since I frequently run into timeout problems when pulling images, I keep a copy in the Aliyun registry.
    https://cr.console.aliyun.com/cn-guangzhou/instance/repositories

    $ docker login --username=dingtalk_yqsosa crpi-mstj0ugnrdkhzm1r.cn-guangzhou.personal.cr.aliyuncs.com
    $ docker tag [ImageId] crpi-mstj0ugnrdkhzm1r.cn-guangzhou.personal.cr.aliyuncs.com/chensuixaing/k6:[image tag]
    $ docker push crpi-mstj0ugnrdkhzm1r.cn-guangzhou.personal.cr.aliyuncs.com/chensuixaing/k6:[image tag]

Multi-node Cluster Deployment

Trying in WSL2
  • kind create cluster --name k6-kind-cluster --config kind-3nodes.yaml
  • kind delete cluster --name k6-kind-cluster

    # Tried creating a cluster via kind inside WSL2; `kubectl get node` stayed NotReady
    # the whole time, so I eventually gave up and switched to another approach
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
    - role: control-plane
    - role: worker
    - role: worker

Trying in Docker Desktop
  • Kubernetes can be enabled directly in Settings; the logs show it works by pulling this image. A proxy is needed, so it's best to pull it manually: docker pull docker/desktop-cloud-provider-kind:v0.3.0-desktop.3
  • The docker-desktop cluster could not be found when running kubectl config use-context docker-desktop
    # Fix:
    # 1. Confirm all Docker Desktop nodes are Ready
    # 2. In Settings → Resources → WSL integration, Ubuntu must be enabled
    # 3. Temporary test: kubectl --kubeconfig /mnt/c/Users/12986/.kube/config get node
    # 4-5. Make the kubeconfig permanent:
    echo 'export KUBECONFIG="/mnt/c/Users/12986/.kube/config"' >> ~/.bashrc
    source ~/.bashrc

    kubectl config get-contexts
    kubectl config use-context docker-desktop

    Application Deployment


    kubectl apply -f k6-worker-deployment.yaml
    kubectl apply -f k6-worker-service.yaml
    kubectl delete -f k6-worker-deployment.yaml
    kubectl delete -f k6-worker-service.yaml
    kubectl get deployments
    kubectl get services
    # k6-worker-deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: k6-workers
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: k6-worker
      template:
        metadata:
          labels:
            app: k6-worker
        spec:
          containers:
          - name: k6-worker-container
            image: crpi-mstj0ugnrdkhzm1r.cn-guangzhou.personal.cr.aliyuncs.com/chensuixaing/k6:20251206
            command: ["./k6", "run", "dummy.js"]
            imagePullPolicy: Always
            env:
            - name: K6_WORKER
              value: "true"
            ports:
            - containerPort: 6565
              name: k6-dist-port

    # k6-worker-service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: k6-worker-service
    spec:
      selector:
        app: k6-worker
      ports:
      - protocol: TCP
        port: 6565
        targetPort: 6565

    Some Thoughts

  1. I want to trigger the load test manually, rather than having it run as soon as it is applied
  • The k6 Operator can apparently do this, but since I never got the WSL2 environment working, I haven't tried it yet
  • In practice a strictly simultaneous manual trigger may not matter: multiple pods started together differ by only a small amount of time.
  • A k8s Job or CronJob could also do it
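
    The Job idea from the last bullet could be sketched like this: a hypothetical k6-job.yaml (not tested) reusing the image and command from the worker Deployment, so three pods run once when the Job is applied and then exit, instead of starting the moment a Deployment exists.

    ```yaml
    # Hypothetical k6-job.yaml: `kubectl apply -f k6-job.yaml` becomes the manual trigger;
    # delete and re-apply to run the test again
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: k6-run
    spec:
      parallelism: 3      # mirrors replicas: 3 in k6-worker-deployment.yaml
      completions: 3
      template:
        metadata:
          labels:
            app: k6-worker
        spec:
          restartPolicy: Never
          containers:
          - name: k6-worker-container
            image: crpi-mstj0ugnrdkhzm1r.cn-guangzhou.personal.cr.aliyuncs.com/chensuixaing/k6:20251206
            command: ["./k6", "run", "dummy.js"]
    ```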
  2. Aggregating the logs/metrics
  • The official docs cover this: https://grafana.com/docs/k6/latest/set-up/set-up-distributed-k6/usage/extensions/
  • The plan is probably to run a separate InfluxDB and aggregate the results in Grafana. The run command in the Dockerfile should be adjusted to add --out influxdb=http://<host>:<port>/<database>. Still needs to be tried in practice.
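
    The --out adjustment above would land in the container spec roughly like this. A sketch only: "k6-influxdb" is a hypothetical Service name for an InfluxDB that is not deployed yet, and "k6" a database that would have to be created in it first.

    ```yaml
    # Hypothetical change to k6-worker-deployment.yaml: stream every worker's metrics
    # into a shared InfluxDB so Grafana can aggregate them; host/db names are assumptions
          containers:
          - name: k6-worker-container
            image: crpi-mstj0ugnrdkhzm1r.cn-guangzhou.personal.cr.aliyuncs.com/chensuixaing/k6:20251206
            command: ["./k6", "run", "--out", "influxdb=http://k6-influxdb:8086/k6", "dummy.js"]
    ```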

Unless otherwise stated, all articles on this blog are licensed under CC BY-SA 4.0. Please credit the source when reposting!