Using ML to Optimize Automated Web App Testing with Real-World Data
Though test automation has made testing faster, quality teams still struggle to prioritize end-to-end (E2E) tests in a way that maximizes coverage. It’s difficult to define E2E coverage the way teams define unit test coverage, particularly when only a small set of test needs has been outlined. The problem grows even more complex as new application features ship, since there’s no clear way to determine where additional E2E tests are needed. This talk explains how an ML engineer built and tested a new feature, and how prioritization of E2E testing in Agile environments can be automated. Lauren’s team developed an algorithm that groups URLs to show E2E web app coverage and suggests where to prioritize adding tests. They tested the algorithm’s performance against multiple sets of customer data and demonstrated its ability to recommend pages that needed testing. Testing was integrated throughout the development process, including unit, integration, and manual testing, to ensure the new feature worked within the existing mabl app.
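The abstract doesn’t detail how mabl’s algorithm groups URLs, but a minimal sketch of one plausible approach is to normalize dynamic path segments (numeric IDs, hash-like tokens) into templates, then rank templates that see real traffic but have no tests. The `normalize` and `coverage_gaps` functions below are hypothetical illustrations, not mabl’s implementation.

```python
import re
from collections import defaultdict

def normalize(url_path):
    """Collapse likely dynamic segments (numeric IDs, hex tokens) into a
    placeholder so URLs that share a template group together."""
    segments = url_path.strip("/").split("/")
    out = []
    for seg in segments:
        if re.fullmatch(r"\d+|[0-9a-f]{8,}", seg):
            out.append("{id}")
        else:
            out.append(seg)
    return "/" + "/".join(out)

def coverage_gaps(visited_urls, tested_urls):
    """Group visited URLs by template and return untested templates,
    most-visited first -- the suggested E2E test priorities."""
    visits = defaultdict(int)
    for u in visited_urls:
        visits[normalize(u)] += 1
    tested = {normalize(u) for u in tested_urls}
    gaps = [(tpl, n) for tpl, n in visits.items() if tpl not in tested]
    return sorted(gaps, key=lambda gap: -gap[1])
```

For example, `coverage_gaps(["/orders/123", "/orders/456", "/help"], ["/help"])` groups the two order pages under one template and flags it as the top untested candidate.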
- Prioritization is a persistent challenge for QA teams, particularly when new app features are added
- Data as the foundation for an effective testing strategy that incorporates manual and automated testing
- How manual performance testing can be used to fine-tune automated testing
- How an Agile team uses shift-left testing in feature development