Once in a training session, I asked, “When should we stop testing?” One participant promptly replied, “When the manager tells us to stop testing, we should stop.” His answer might sound naive, but it raises a real question: how do test managers or test leads decide when to stop testing?
This question occurs to every tester. At some point, testing has to stop. Defining a stopping point helps measure the progress of the project and feeds the metrics and dashboards that give test managers the insight to make quicker and better decisions.
The following are typical situations in which testing can be stopped:
- When the acceptance criteria (defined in the test strategy) are met
- When testers find a large number of defects in a very short period of time
- When testers are unable to find new defects despite rigorous effort
- When the deadlines are about to be reached
- When there is no budget left to continue testing
- When the defect injection rate falls below the threshold level (which differs from project to project)
- When test coverage reaches a comfortable level
- When traceability between requirements and executed test cases looks complete
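The criteria above can be sketched as a simple checklist evaluation. The metric names and threshold values below are hypothetical assumptions for illustration; real projects would define their own in the test strategy:

```python
# Illustrative sketch of evaluating test exit criteria.
# All metric names and thresholds are hypothetical assumptions.

def should_stop_testing(metrics, thresholds):
    """Return (decision, reasons) based on simple exit criteria."""
    reasons = []
    if metrics["coverage_pct"] >= thresholds["min_coverage_pct"]:
        reasons.append("coverage target met")
    if metrics["defect_injection_rate"] <= thresholds["max_defect_rate"]:
        reasons.append("defect injection rate below threshold")
    if metrics["open_high_priority_defects"] == 0:
        reasons.append("no open high-priority defects")
    # Stop only when every criterion is satisfied.
    return len(reasons) == 3, reasons

metrics = {
    "coverage_pct": 92.0,
    "defect_injection_rate": 0.4,   # new defects found per day
    "open_high_priority_defects": 0,
}
thresholds = {"min_coverage_pct": 90.0, "max_defect_rate": 0.5}

stop, why = should_stop_testing(metrics, thresholds)
print(stop, why)  # → True ['coverage target met', ...]
```

In practice the decision is rarely this mechanical, but encoding the criteria makes the exit conditions explicit and auditable on a dashboard.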
Testing is a never-ending cycle. Testing minimizes the chances of failure of any product. The decision to stop testing is directly aligned with the risks associated with the project. The risk level can be identified using the following:
- High-priority defect count
- Test coverage metrics
- Risk-based test case count
- Testing cycle count
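One simple way to combine the indicators above is a weighted risk score. The weights and the 90% coverage target below are hypothetical assumptions chosen for demonstration, not standard values:

```python
# Illustrative sketch: a weighted risk score built from the indicators above.
# Weights and the coverage target are hypothetical assumptions.

def risk_score(high_priority_defects, coverage_pct,
               risk_based_cases_pending, cycles_completed):
    score = 0.0
    score += high_priority_defects * 10       # each open high-priority defect adds risk
    score += max(0.0, 90.0 - coverage_pct)    # penalty for coverage below 90%
    score += risk_based_cases_pending * 5     # unexecuted risk-based test cases
    score -= min(cycles_completed, 5) * 2     # completed cycles reduce residual risk
    return max(score, 0.0)

# 2 open high-priority defects, 85% coverage,
# 3 pending risk-based cases, 4 cycles completed:
print(risk_score(2, 85.0, 3, 4))  # → 32.0
```

A team might agree to stop testing once the score drops below an agreed threshold; the value of the exercise is less the number itself than forcing the risk discussion into explicit terms.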
Testing is a continuous process; it is tough to define exactly when to stop. The decision is purely driven by the business and the circumstances.