How to categorize Performance Testing Defects?

In manual and automated testing, testers raise defects in a bug tracking tool; teams following modern methodologies such as Agile use tools like JIRA to create user stories and/or bugs. Each defect has two important attributes: priority and severity. Priority defines how soon the defect needs to be fixed; severity defines how critical the defect is. A famous example for differentiating between severity and priority is an invalid dimension of the company logo in the header. The company logo carries the brand and its value, so if the logo's dimensions are wrong, it is surely a defect. When raising it, the tester will assign the severity as Trivial and the priority as Critical.

The reason for this split is that the logo's dimensions will not impact the actual business functions: users will still be able to perform all operations, so the severity is Trivial. But the defect needs to be fixed ASAP, i.e. in the immediate build, so the priority is Critical. Now let's talk about performance defects.
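
For illustration, here is a minimal sketch of raising that logo defect via JIRA's REST API using the community jira Python package (pip install jira). The server URL, credentials, project key, and the severity custom field ID are all placeholders; severity is typically a custom field in JIRA, so check your instance's configuration before reusing this.

```python
# A minimal sketch using the community "jira" package.
# Server URL, credentials, project key, and the severity custom
# field ID are hypothetical placeholders.
from jira import JIRA

jira = JIRA(
    server="https://your-company.atlassian.net",
    basic_auth=("user@example.com", "api-token"),
)

issue = jira.create_issue(fields={
    "project": {"key": "TAXI"},  # hypothetical project key
    "summary": "Company logo rendered with wrong dimensions in the header",
    "description": "No functional impact, but brand-critical.",
    "issuetype": {"name": "Bug"},
    "priority": {"name": "Critical"},           # fix ASAP
    "customfield_10042": {"value": "Trivial"},  # hypothetical severity field
})
print(issue.key)
```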

Sample Case Study


Assume that you are testing a simple web-based taxi reservation application. The critical business flows will be Registration, Login, Booking a Ride, Account, Payment, Help, Driver's Registration, Driver's History, Reports, Disputes, etc. You will hardly be testing more than 25 scenarios to validate the performance.


The technical solution design states a 3-second SLA objective for all pages. Once the performance testing exercise is completed, you observe the following variances in response time across these scenarios.


Scenario              95th Percentile Response Time (s)
Home Page             3.1
Registration          4.6
Disputes Page Load    3.3
Booking a Ride        3.9
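
To quantify how far each scenario drifts from the SLA, here is a quick sketch in plain Python (no dependencies; the scenario names and numbers are just the case-study values above) that converts each 95th percentile response time into a percentage variance against the 3-second SLA:

```python
# Percentage variance of each 95th percentile response time against the SLA.
SLA = 3.0  # seconds, from the technical solution design

observed = {
    "Home Page": 3.1,
    "Registration": 4.6,
    "Disputes Page Load": 3.3,
    "Booking a Ride": 3.9,
}

for scenario, p95 in observed.items():
    variance = (p95 - SLA) / SLA * 100
    print(f"{scenario}: {p95:.1f}s -> {variance:+.1f}% over SLA")
```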

Now the question is: how do you raise these performance defects in JIRA, QC, Bugzilla, or your favorite bug tracking tool?

What will the severity and priority be for each defect/story? This article addresses exactly that.

How to categorize performance testing defects?

You need to define the performance defect categories well ahead of execution with the help of your architects or business analysts (not developers). From my Agile experience, I would suggest categorizing performance testing defects into four buckets: Blocker, Critical, Major, and Minor. You can safely ignore the Trivial category.

Blocker

Assume that you receive a new code deployment in your test environment. When you launch the page, the first paint takes more than 5 seconds, and rendering the complete page in Internet Explorer takes more than 10 seconds, which completely blocks your tests. Under load, it will definitely bring more issues to the table. Without proceeding further, you can raise this defect as a Blocker.
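
As a rough illustration of catching such a blocker early, here is a hedged sketch using Selenium and the browser's Navigation/Paint Timing APIs. It assumes a Chromium-based browser (the Paint Timing API is not available in Internet Explorer, where you would fall back to Navigation Timing alone) with selenium installed; the URL and the 5 s / 10 s thresholds mirror the example above.

```python
# A sketch: assumes selenium is installed and chromedriver is on the PATH;
# the application URL and thresholds are illustrative.
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://taxi.example.com/")  # hypothetical application URL

# loadEventEnd from the Navigation Timing API, in ms since navigation start
load_ms = driver.execute_script(
    "const n = performance.getEntriesByType('navigation')[0];"
    "return n ? n.loadEventEnd : 0;")

# first-paint from the Paint Timing API (Chromium-based browsers only)
paints = driver.execute_script(
    "return performance.getEntriesByType('paint').map(e => e.toJSON());")
first_paint_ms = next(
    (p["startTime"] for p in paints if p["name"] == "first-paint"), 0)

if first_paint_ms > 5000 or load_ms > 10000:
    print("Blocker: page is too slow to even begin the load test")

driver.quit()
```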

Critical

Assume that you are trying to complete a booking in your web application. After the user clicks the Submit button, a valid booking order ID is expected. Under load, if the booking order ID is not generated, or an invalid ID is generated, the entire transaction has a critical defect. Also, if the response time exceeds the actual SLA by 30%-50%, you can raise the defect as Critical.

The main intention of testing this web application is to verify whether all users are able to book a ride and get a valid order ID. If most of the transactions are failing under load, you should raise a Critical defect, as the sketch below illustrates.
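
A minimal sketch of that check, assuming the requests package is installed; the /api/bookings endpoint, the request payload, and the BK-nnnnnnnn order-ID format are all hypothetical stand-ins for your application's real contract:

```python
# A sketch: the endpoint, payload, and order-id format are hypothetical.
import re
from concurrent.futures import ThreadPoolExecutor

import requests

BOOKING_URL = "https://taxi.example.com/api/bookings"  # hypothetical endpoint
ORDER_ID = re.compile(r"^BK-\d{8}$")                   # hypothetical format

def book_ride(_):
    """Return True only when the booking succeeds with a valid order id."""
    try:
        resp = requests.post(BOOKING_URL, timeout=30,
                             json={"pickup": "A", "dropoff": "B"})
        return resp.ok and bool(ORDER_ID.match(resp.json().get("orderId", "")))
    except (requests.RequestException, ValueError):
        return False

with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(book_ride, range(500)))

failure_rate = results.count(False) / len(results)
if failure_rate > 0:
    print(f"Critical: {failure_rate:.1%} of bookings lacked a valid order id")
```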

Major

If a major functionality is broken but has a workaround, and it still needs to be fixed by the developers, you can categorize the issue as a Major defect. E.g. if a page cannot be reached by clicking its link, but it can be navigated to using a shortcut key or by loading its URL directly, you can raise this as a Major defect.

Minor

UI defects, grammatical errors, spelling mistakes, alignment issues, etc. can be categorized as Minor or Trivial. Likewise, if the response time violation is marginal (a millisecond or two over the SLA), you can raise the concern as Minor or Trivial.

The table below is ONLY for reference; the response-time bands are derived from the case study's 3-second SLA.

Response Time (s)     Variance from SLA    Severity
> 3.0 and <= 3.45     up to 15%            Minor
> 3.45 and <= 4.0     15% to 33%           Major
> 4.0 and <= 4.5      33% to 50%           Critical
> 4.5                 > 50%                Blocker
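
Expressed as a small Python helper, the same mapping looks like this (a sketch; the 3-second SLA and the band boundaries are the reference values above and should be re-agreed per project):

```python
# Maps a 95th percentile response time to a severity, per the reference
# table above (3-second SLA assumed).
def severity(p95: float, sla: float = 3.0) -> str:
    variance = (p95 - sla) / sla * 100  # % over SLA
    if variance > 50:
        return "Blocker"
    if variance > 33:
        return "Critical"
    if variance > 15:
        return "Major"
    if variance > 0:
        return "Minor"
    return "Within SLA"

# The case-study scenarios from earlier:
for p95 in (3.1, 3.3, 3.9, 4.6):
    print(f"{p95}s -> {severity(p95)}")
```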

Conclusion

It is important to categorize performance testing defects well before execution begins. The categorization above varies heavily from project to project; there is no rule of thumb that must be followed. For Google-like websites even milliseconds matter, so think wisely and formulate your defect categories accordingly.
