Even seemingly simple software systems can be a dense forest of intersecting logical pathways that may leave you wondering whether your testing was robust enough. Traditional test cases are limited: they execute only the pathways the tester considered when the test case was written, and they execute the same way every time, without variation. Jon Fetrow shows how, using model-based testing, you can create a map of your software forest and answer the question “Did you test enough?” Jon discusses the use of models to catch defects in the requirements and design phase...
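To make the idea concrete, here is a minimal sketch of one common model-based testing approach: describing the system as a finite state machine and generating test paths from it. The login workflow, state names, and transitions below are hypothetical illustrations, not taken from the talk.

```python
# Minimal sketch of model-based test generation, assuming a hypothetical
# login workflow modeled as a finite state machine. The model itself is
# the "map of the forest"; paths through it become test cases.

from collections import deque

# Model: each state maps an action (edge label) to the resulting state.
MODEL = {
    "logged_out": {"login_ok": "logged_in", "login_bad": "lockout_check"},
    "lockout_check": {"retry": "logged_out", "third_failure": "locked"},
    "logged_in": {"logout": "logged_out"},
    "locked": {},  # terminal state: no outgoing actions
}

def generate_paths(model, start, max_depth):
    """Breadth-first enumeration of action sequences through the model.

    Each returned path is a candidate test case -- including sequences
    a tester might never have written by hand.
    """
    paths = []
    queue = deque([(start, [])])
    while queue:
        state, actions = queue.popleft()
        if actions:
            paths.append(actions)
        if len(actions) >= max_depth:
            continue
        for action, next_state in model[state].items():
            queue.append((next_state, actions + [action]))
    return paths

if __name__ == "__main__":
    for path in generate_paths(MODEL, "logged_out", max_depth=3):
        print(" -> ".join(path))
```

In practice, each action would be bound to real driver code (clicking a button, calling an API), and path selection strategies such as all-transitions or random walks replace the exhaustive enumeration shown here, which is only practical for small models.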
Jon Fetrow
Jon Fetrow has fifteen years of experience in healthcare-related software, focused on testing, process, and quality. He has led and mentored team members in test automation, test strategy, and agile processes, applying best practices and methodologies to continually improve testing. In his five years with Olympus Corporation of the Americas, Jon has served as manager of software quality and test engineering. As test architect, he researched and helped design their current automation test framework and performance testing program. Jon has helped transition the test team from classic waterfall, with sole reliance on manual testing, to an agile development team whose test suite is now largely automated.