Milestone 4

TASKS:


Developing the JMeter Script

  # Install the Java runtime and the zip utilities that JMeter needs
  sudo apt-get update &&
  sudo apt-get install default-jre &&
  sudo apt-get install zip &&
  sudo apt-get install unzip
  1. Installing JMeter (see the sketch after this list)
  2. Dependencies: the Concurrent Thread Groups plugin
  3. Enabling JMeter cache storage
  4. Configuring the JMeter web-browser certificate and localhost proxy
  5. Storing images in the JMeter bin folder
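
A minimal sketch of steps 1 and 2, assuming a manual download: the JMeter version, download URL, and plugin-installation route below are assumptions and may differ from what we actually used.

  # Download and unpack JMeter (version number is an assumption)
  wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.4.1.zip &&
  unzip apache-jmeter-5.4.1.zip &&
  # The Concurrent Thread Group is part of the Custom Thread Groups plugin,
  # which is easiest to install through the JMeter Plugins Manager
  wget -O apache-jmeter-5.4.1/lib/ext/jmeter-plugins-manager.jar https://jmeter-plugins.org/get/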

We used the JMeter UI to define thread groups, caching, listeners, and other parameters. The tests themselves, however, are run from the shell in non-GUI mode:

  ./jmeter -n -t <JMX-file-path> -l <results.csv-generation-file-path> -e -o <report-generation-file-path>
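
For example (the file names here are placeholders, not the actual artifacts in our repository):

  # -n: non-GUI mode, -t: test plan, -l: results log,
  # -e: generate the HTML report after the run, -o: report output folder (must not already exist)
  ./jmeter -n -t load-test.jmx -l results.csv -e -o report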

We tested with various replica counts and user counts. The test results are grouped hierarchically by replica count and user count, and each result set contains the following data:

  1. JMX file: the JMeter test plan
  2. credentials.csv: the user credentials used by the test threads
  3. index.html: a web page displaying the test statistics and graphs
  4. results.csv: the test summary report

Load Testing

For 3 replicas of our Kubernetes microservices, we observed the following throughput graphs for 100, 500, and 1000 users.

[Throughput graphs: 100 users, 500 users, 1000 users]

Description: With 3 replicas, the system can handle roughly 300 users accessing the application simultaneously over a 100-second period.

Spike Testing

Replica - 3

[Spike-test graphs: 100 users, 500 users, 1000 users]

Replica - 4

[Spike-test graphs: 100 users, 500 users, 1000 users]

Replica - 5

[Spike-test graphs: 100 users, 500 users, 1000 users]

Description: A ramp-up time of 100 seconds was used for all scenarios.

With 3 replicas: the system handled approximately 300 users (with a longer ramp-up period it could reach roughly 1000 users).

With 4 replicas: around 500 users could access the application simultaneously.

With 5 replicas: around 850 users could access the application within the 100-second span.

As expected, increasing the number of replicas of the various microservices lets the system handle more traffic and more users. This can be achieved with the simple scale-up command listed below. The most heavily accessed service is user management, which is responsible for user authentication, so rather than scaling up every microservice we scale user management to 8 replicas and keep the others at whatever count they require.

Scaling-up command:

  kubectl scale deployment <deployment-name> --replicas=<higher-replica-number> -n <namespace>
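
For instance, scaling the user-management service to 8 replicas as described above would look like this (the deployment name is a placeholder; substitute the actual deployment and namespace):

  kubectl scale deployment user-management --replicas=8 -n <namespace>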

Scaling-down command:

  kubectl scale deployment <deployment-name> --replicas=<lower-replica-number> -n <namespace>
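
After scaling in either direction, the new replica count can be verified with a quick status check, for example:

  kubectl get deployment <deployment-name> -n <namespace>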

System capacity limit: roughly 300 users with a 10-second ramp-up time against a single MySQL database instance (the bottleneck).

Improvements:

  1. Scaling up the replicas improves the performance of the system
  2. Increasing the number of Kubernetes worker nodes

Measuring Fault Tolerance

Dependencies:

  1. Helm
  2. kube-monkey (Chaos Engineering)
  3. Kubernetes Replication Controller

To measure the fault tolerance of our system we used kube-monkey (a Chaos Engineering tool). The following commands install Helm and kube-monkey:

  # Install Helm via our automation script, then deploy the kube-monkey chart
  # with our custom values and apply the kube-monkey configuration
  cd PingIntelligence &&
  git checkout automation-script &&
  git pull &&
  bash helm-installation.sh &&
  cd .. &&
  git clone https://github.com/asobti/kube-monkey &&
  mv PingIntelligence/values.yaml kube-monkey/helm/kubemonkey/ &&
  helm install --name my-release kube-monkey/helm/kubemonkey -f kube-monkey/helm/kubemonkey/values.yaml &&
  kubectl apply -f PingIntelligence/kubemonkey-config.yaml

The above commands configure kube-monkey to monitor the default namespace and to kill 3 random service replicas on a daily schedule (the scheduling run happens at 8 am, and pods are terminated between 10 am and 4 pm).
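
To confirm that kube-monkey is running and scheduling terminations, its logs can be inspected; the deployment name below is an assumption derived from the my-release Helm release, so adjust it to whatever name the chart actually produced.

  # Deployment name assumed from the Helm release; verify with "kubectl get deployments"
  kubectl logs -f deployment/my-release-kube-monkey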

We tested with 3 replicas and with 100, 500, and 1000 users. Below are our test results:

[Fault-tolerance test graphs: 100 users, 500 users, 1000 users]

Architectural Changes: We added a ReplicaSet along with a Horizontal Pod Autoscaler (HPA) to auto-scale the replicas of each microservice, which keeps the system fault-tolerant. The command to list the ReplicaSets that are already deployed is:

  kubectl get rs
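
A minimal sketch of the HPA side, assuming CPU-based autoscaling; the deployment name, replica bounds, and CPU threshold below are illustrative rather than our exact configuration.

  # Autoscale a microservice between 3 and 8 replicas based on CPU utilization
  kubectl autoscale deployment user-management --min=3 --max=8 --cpu-percent=70 -n <namespace> &&
  # Inspect the autoscaler that was created
  kubectl get hpa -n <namespace>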