Once, in a training session, I asked, “When should we stop testing?” One participant promptly replied, “When the manager tells us to stop testing, we should stop.” His answer might sound flippant, but it raises a real question: how do test managers or test leads actually decide when to stop testing?
This question occurs to every tester. At some point, testing has to stop. Defining a stopping point helps measure the progress of the project and feeds the metrics and dashboards that give test managers the insight to make quicker, better-informed decisions.
The following situations typically signal that testing can stop:
- When the acceptance criteria (defined in the test strategy) are met
- When testers find a large number of defects in a very short period of time
- When testers are unable to find new defects despite rigorous effort
- When the deadlines are approaching
- When there is no budget left to continue testing
- When the defect injection rate falls below the threshold level (which differs from project to project)
- When test coverage reaches a comfortable level
- When traceability between requirements and executed test cases looks complete
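As an illustration, the checklist above can be turned into a simple automated gate. This is a minimal sketch with hypothetical function names and thresholds (none of them come from a real tool or standard):

```python
# Hypothetical sketch: evaluating test exit criteria.
# All names and threshold values here are illustrative assumptions.

def should_stop_testing(acceptance_criteria_met,
                        defect_injection_rate, rate_threshold,
                        coverage_pct, coverage_target,
                        budget_remaining):
    """Return True when enough exit criteria are satisfied."""
    if budget_remaining <= 0:
        return True  # no budget left to proceed with testing
    criteria = [
        acceptance_criteria_met,                     # acceptance criteria met
        defect_injection_rate < rate_threshold,      # defect rate below threshold
        coverage_pct >= coverage_target,             # coverage at a comfortable level
    ]
    return all(criteria)

# Example: criteria met, low defect rate, 92% coverage vs. a 90% target
print(should_stop_testing(True, 0.2, 0.5, 92, 90, budget_remaining=10))  # -> True
```

In practice each project would weight these checks differently; the point is that exit criteria should be explicit enough to evaluate mechanically.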
Testing is a never-ending cycle. Testing minimizes the chances of a product failing, so the decision to stop is directly tied to the risks remaining in the project. The risk level can be assessed using the following:
- High-priority defect count
- Test coverage metrics
- Risk-based test case count
- Test cycle count
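One simple way to combine these signals is a weighted score. The formula and weights below are purely hypothetical assumptions for illustration, not a standard risk model:

```python
# Hypothetical weighted risk score built from the four signals above.
# Weights, signs, and the zero floor are illustrative assumptions only.

def risk_score(high_priority_defects, coverage_pct,
               open_risk_based_cases, cycles_completed):
    # Open high-priority defects and open risk-based test cases raise risk;
    # higher coverage and more completed test cycles lower it.
    score = (2.0 * high_priority_defects
             + 1.0 * open_risk_based_cases
             - 0.5 * coverage_pct
             - 1.0 * cycles_completed)
    return max(score, 0.0)  # never report negative risk

# 3 open high-priority defects, 90% coverage, 2 open risk cases, 4 cycles done
print(risk_score(3, 90, 2, 4))  # -> 0.0 (risk is effectively retired)
```

A team would calibrate the weights to its own history; the useful part is tracking the trend of the score across test cycles, not its absolute value.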
Testing is a continuous process, so it is hard to define exactly when you should stop. The decision is ultimately driven by the business and the circumstances.