Milestone 4
sudo apt-get update &&
sudo apt-get install default-jre &&
sudo apt-get install zip &&
sudo apt-get install unzip
- Installing JMeter (a download sketch is shown after this list)
- Dependencies: Concurrent Thread Groups
- Enabling JMeter Cache Storage
- Configuring JMeter web-browser certificate and localhost proxy
- Storing images into the JMeter bin folder
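A minimal sketch of the JMeter installation step, assuming the binary zip is fetched from the Apache archive (the version number below is illustrative, not the one pinned by this project):
wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.6.3.zip
unzip apache-jmeter-5.6.3.zip
cd apache-jmeter-5.6.3/bin && ./jmeter --version   # confirm the install before running tests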
We used the JMeter UI to define thread groups, caching, listeners, and other parameters. The tests themselves, however, are run with the following shell command:
./jmeter -n -t <JMX-file-path> -l <results.csv-generation-file-path> -e -o <report-generation-file-path>
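For illustration, a concrete non-GUI run might look like the following; the file and directory names are placeholders rather than paths from this repository, and the report directory should be empty or not yet exist:
./jmeter -n -t milestone4-load-test.jmx -l results.csv -e -o report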
We tested a range of replica counts and user counts. The results are grouped hierarchically, first by replica count and then by user count, and each test run contains the following data (an illustrative folder layout is sketched after this list):
- JMX file: the JMeter test plan
- credentials.csv: credentials for the test users
- index.html: web page displaying test statistics and graphs
- results.csv: test summary report
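A purely illustrative layout, assuming one folder per replica count and one sub-folder per user count (the actual folder and file names in the repository may differ):
3-replicas/
  100-users/
    test-plan.jmx
    credentials.csv
    index.html
    results.csv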
For 3 replicas of our Kubernetes microservices, we observed the following throughput graphs for 100, 500, and 1000 users.
Description: The system can handle about 300 users accessing the application simultaneously within a 100-second period.
Description: All scenarios assume a ramp-up time of 100 seconds.
With 3 replicas: The system was able to handle approximately 300 users. (With a longer ramp-up period, it can go up to 1000 users.)
With 4 replicas: Around 500 users can access the application simultaneously.
With 5 replicas: In a span of 100 seconds, around 850 users were able to access the application.
As expected, we observe that increasing the number of replica instances of the various microservices allows the system to handle more traffic and more users. This can be achieved with the simple scale-up command listed below. The most heavily accessed service is user management, which is responsible for user authentication, so rather than scaling up all microservices we scale user management to 8 replicas and keep the others at the counts they require.
Scaling-up command:
kubectl scale deployment <deployment-name> --replicas=<higher-replica-number> -n <namespace>
Scaling-down command:
kubectl scale deployment <deployment-name> --replicas=<lower-replica-number> -n <namespace>
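For example, scaling the user-management deployment up to 8 replicas and later back down to 3 might look like the following; the deployment and namespace names are illustrative, not taken from this repository:
kubectl scale deployment user-management --replicas=8 -n pingintelligence
kubectl scale deployment user-management --replicas=3 -n pingintelligence
kubectl get deployments -n pingintelligence   # verify the current replica counts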
System capacity limit: 300 users with a 10-second ramp-up time, constrained by the single MySQL database instance (the bottleneck).
Improvements:
- Scaling up the replicas increases the performance of the system
- Increasing the number of Kubernetes worker nodes
Dependencies: Helm, kube-monkey
To measure the fault tolerance of our system, we applied the Chaos Engineering concept using kube-monkey. The following commands install Helm and kube-monkey:
cd PingIntelligence &&
git checkout automation-script &&
git pull &&
bash helm-installation.sh &&
cd .. &&
git clone https://github.com/asobti/kube-monkey &&
mv PingIntelligence/values.yaml kube-monkey/helm/kubemonkey/ &&
helm install --name my-release kube-monkey/helm/kubemonkey -f kube-monkey/helm/kubemonkey/values.yaml &&
kubectl apply -f PingIntelligence/kubemonkey-config.yaml
The above configuration enables kube-monkey to monitor the default namespace and kill 3 random service replicas per run, scheduled daily (runs are scheduled at 8 am and pods are destroyed between 10 am and 4 pm).
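kube-monkey only terminates pods of workloads that opt in through labels. Below is a minimal, hypothetical sketch of a deployment opting in with the behaviour described above (daily runs, a fixed kill count of 3); the deployment name and image are placeholders, and the project's actual label values live in kubemonkey-config.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-management            # hypothetical deployment name
  namespace: default
  labels:
    kube-monkey/enabled: enabled   # opt this workload in to chaos testing
    kube-monkey/identifier: user-management
    kube-monkey/mtbf: "1"          # mean time between failures in days (1 = daily)
    kube-monkey/kill-mode: "fixed"
    kube-monkey/kill-value: "3"    # kill 3 replicas per run
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-management
  template:
    metadata:
      labels:
        app: user-management
        kube-monkey/enabled: enabled
        kube-monkey/identifier: user-management
    spec:
      containers:
        - name: user-management
          image: user-management:latest   # placeholder image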
We have tested with 3 replicas for 100, 500, and 1000 users. Below are our test results:
Architectural Changes: We have added a ReplicaSet along with a Horizontal Pod Autoscaler (HPA) to auto-scale the replicas of each microservice, which keeps the system fault-tolerant. The command to check the already deployed ReplicaSets is:
kubectl get rs
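For reference, an HPA can also be created imperatively as sketched below; the deployment name, namespace, CPU target, and replica bounds are illustrative, not the values used in this project:
kubectl autoscale deployment user-management --cpu-percent=70 --min=3 --max=8 -n pingintelligence
kubectl get hpa -n pingintelligence   # inspect current and target utilisation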