In my last blog post, you read about How to design workload model for load testing. In this blog post, we will see how to design a workload model for stress testing. Everything in this world has a breaking point, and servers are no exception: beyond a particular limit, a server will not behave as expected. It is important to identify this breaking point before the application goes live.
See the definition of stress testing in my earlier post Types of Performance Testing. Stress testing validates the stability of the system and identifies its breaking point by overloading it with concurrent users.
Let us see what an appropriate approach to stress testing looks like. The following are the entry criteria for stress testing:
- Hardware configurations
- Network configurations
- Number of Concurrent Users derived from Load Testing i.e. peak load
- Identified critical scenarios
It is best to refer to the hardware and network configurations, as this helps plan the stress test and avoid any disaster.
Consider a web application that publishes your post across social media; the total number of concurrent users during peak business hours (from 11:00 to 13:00) per day is 1500. The following are the transactions performed by the users:
- Click on Write Post link
- Write Post (140 characters)
- Submit Post
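The three steps above form one virtual user's iteration. As a minimal sketch, the flow can be modeled like this; the transaction names and the 140-character check are taken from the scenario, while everything else (function names, the idea of returning a transaction list) is a hypothetical illustration, since the post names no tool or endpoints:

```python
POST_CHAR_LIMIT = 140  # character limit stated in the scenario

def user_iteration(post_text):
    """Return the ordered transactions one virtual user performs.

    In a real load-testing script, each step would be an HTTP
    request driven by your tool, with think time between steps.
    """
    if len(post_text) > POST_CHAR_LIMIT:
        raise ValueError("post exceeds %d characters" % POST_CHAR_LIMIT)
    return ["click_write_post_link", "write_post", "submit_post"]
```

During the test, each of the 1500 concurrent users repeats this iteration for the duration of the peak window.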
To identify the breaking point of the system, first load the server with 1500 concurrent users, i.e. point A in the figure below.
Ramp the server up to 1500 concurrent users gradually. Once that level is reached, increase the concurrent users to 1600 and observe the script execution. Monitor the graphs of Response Time vs. Number of Virtual Users (VUsers), Throughput vs. Number of VUsers, and Errors vs. Number of VUsers. These graphs signal the breaking point.
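The Errors-vs-VUsers graph is simply the error rate observed at each load level. As a small sketch (the helper name is my own; your tool will report this metric directly), computing it from raw HTTP status codes looks like this:

```python
def error_rate(status_codes):
    """Fraction of responses that are HTTP errors (4xx/5xx)."""
    if not status_codes:
        return 0.0
    errors = sum(1 for code in status_codes if code >= 400)
    return errors / len(status_codes)
```

A sudden jump in this value as VUsers increase is the alert to watch for.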
If everything goes fine at 1600 VUsers, increase to 1700 VUsers, 1800 VUsers, and so on until you observe abnormal behavior from the server. At some point, i.e. point B in our scenario, say at 2000 VUsers, abnormal behavior such as HTTP 404 and 500 errors occurs during the execution.
At that point it is best to stop the execution, either gradually or immediately, because further execution gives no valid results for analysis. In this case, the breaking point of the server is 2000: up to 2000 VUsers can log in concurrently and use the website, but beyond 2000 the server cannot handle the load.
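The stepping procedure described above can be sketched as a simple loop. Note this is only an illustration under stated assumptions: `run_step` is a stand-in for actually driving load with your tool, the simulated server here is hard-coded to degrade at 2000 users, and the 100-user step and 5% error threshold are example values, not rules:

```python
def run_step(vusers):
    """Stand-in for one load-test step; returns the observed error rate.

    A real step would drive `vusers` concurrent users against the
    server and report errors (e.g. HTTP 404/500) from the run.
    Here we simulate a server that degrades at 2000 users.
    """
    simulated_breaking_point = 2000  # unknown in practice
    if vusers < simulated_breaking_point:
        return 0.0
    return 0.25  # abnormal error rate at and past the breaking point

def find_breaking_point(start=1500, step=100,
                        error_threshold=0.05, max_vusers=5000):
    """Increase concurrent users stepwise until errors appear."""
    vusers = start  # begin at the peak load from load testing
    while vusers <= max_vusers:
        if run_step(vusers) > error_threshold:
            return vusers  # first load level with abnormal errors
        vusers += step
    return None  # no breaking point found within the tested range
```

Running `find_breaking_point()` on this simulated server returns 2000, mirroring point B in the scenario.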
To summarize: gather the entry criteria, then load the server and keep incrementing the load until the system breaks.