Let's assume we have a microservice composed of several containers running on a Kubernetes cluster somewhere in the cloud, e.g. Oracle Kubernetes Engine (OKE). At some point we want to quickly stress test a specific microservice component, or the entire microservice, to see how it behaves under load and how it handles many subsequent requests coming from many parallel clients. The good news is that we already have a tool for that, up and running: the Kubernetes cluster itself.
We're going to use a Kubernetes Job for this testing, described in the following manifest file:
apiVersion: batch/v1
kind: Job
metadata:
  name: job-load
spec:
  parallelism: 50
  template:
    spec:
      containers:
      - name: loader
        image: eugeneflexagon/aplpine-with-curl:1.0.0
        command: ["time", "curl", "http://my_service:8080/my_path?[1-100]"]
      restartPolicy: OnFailure

This job is going to spin up 50 pods running in parallel, each sending 100 requests to my_service on port 8080 with the path my_path.
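Before launching 50 parallel pods it may be worth a quick sanity check that the service is reachable from inside the cluster and that curl's [1-N] URL globbing expands the way we expect. One way to do that (reusing the same curl image as in the manifest; my_service and my_path are the placeholders from above) is a throwaway pod:

kubectl run curl-test --rm -it --restart=Never \
  --image=eugeneflexagon/aplpine-with-curl:1.0.0 \
  -- curl "http://my_service:8080/my_path?[1-3]"

If this prints three responses, the job should behave the same way, just at a much larger scale.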
Having created and started the job by invoking

kubectl apply -f loadjob.yaml

we can observe all 50 pods created by the job using
kubectl get pods -l job-name=job-load
NAME             READY   STATUS      RESTARTS   AGE
job-load-4n262   1/2     Completed   1          12m
job-load-dsqtc   1/2     Completed   1          12m
job-load-khdn4   1/2     Completed   1          12m
job-load-kptww   1/2     Completed   1          12m
job-load-wf9pd   1/2     Completed   1          12m
...

If we look at the logs of any of these pods
kubectl logs job-load-4n262

we'll see something like the following:
[1/100]: http://my_service.my_namespace:8080/my_path?1 --> <stdout>
{"id":456,"content":"Hello world!"}
[2/100]: http://my_service.my_namespace:8080/my_path?2 --> <stdout>
{"id":457,"content":"Hello world!"}
[3/100]: http://my_service.my_namespace:8080/my_path?3 --> <stdout>
{"id":458,"content":"Hello world!"}
....

real    0m 10.04s
user    0m 0.00s
sys     0m 0.04s
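The per-pod logs only tell part of the story. To check that the job as a whole has finished, and to pull the timing output from all 50 pods at once, something like the following should work (kubectl fetches logs for at most 5 pods per selector by default, hence the --max-log-requests flag):

kubectl wait --for=condition=complete job/job-load --timeout=10m
kubectl logs -l job-name=job-load --prefix --max-log-requests=50 | grep real

This gives a rough per-pod wall-clock time for the 100 requests. A dedicated load-testing tool would report latency percentiles and error rates, but for a quick check this is often enough.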
That's it!
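Once we're done, the job and all of its pods can be removed in one go:

kubectl delete job job-load

Alternatively, adding a ttlSecondsAfterFinished field to the job spec (an assumption, not part of the manifest above) would let Kubernetes clean it up automatically some time after completion.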