

Test Management

Tutorials

MH Measurement and Metrics for Test Managers
Michael Sowers, TechWell Corp.
Mon, 11/09/2015 - 8:30am

To be most effective, test managers must develop and use metrics to help direct the testing effort and make informed recommendations about the software’s release readiness and associated risks. Because one important testing activity is to “measure” the quality of the software, test managers must measure the results of both the development and testing processes. Collecting, analyzing, and using metrics are complicated because many developers and testers are concerned that the metrics will be used against them. Join Mike Sowers as he addresses common metrics—measures of product quality, defect removal efficiency, defect density, defect arrival rate, and testing status. Learn the guidelines for developing a test measurement program, rules of thumb for collecting data, and ways to avoid “metrics dysfunction.” Mike identifies several metrics paradigms and discusses the pros and cons of each.
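Two of the metrics named above, defect removal efficiency and defect density, reduce to simple ratios. A minimal sketch with hypothetical numbers (the function names and figures are illustrative, not taken from the tutorial):

```python
def defect_removal_efficiency(found_before_release, found_after_release):
    """DRE: share of all known defects caught before release."""
    total = found_before_release + found_after_release
    return found_before_release / total if total else 0.0

def defect_density(defect_count, ksloc):
    """Defects per thousand lines of code (KSLOC)."""
    return defect_count / ksloc

# Example: 90 defects found in test, 10 escaped to production
print(defect_removal_efficiency(90, 10))  # 0.9
print(defect_density(45, 30))             # 1.5
```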

ML Test Attacks to Break Mobile, IoT, and Embedded Software NEW
Jon Hagar, Independent Consultant
Mon, 11/09/2015 - 1:00pm

In the tradition of James Whittaker’s book series How to Break Software, Jon Hagar applies the testing “attack” concept to the domain of mobile, IoT, and embedded software systems. First, Jon defines the environments of mobile, IoT, and embedded software. He then examines the issues of software product failures caused by defects found in these types of software. Next, Jon shares a set of ten attacks against mobile, IoT, and embedded software, based on common modes of failure, that teams can direct against their software today. Like software design patterns, attacks are test design patterns that must be customized for particular contexts. For specific attacks, Jon explains when and how to conduct the attack, who should conduct it, and why the attack works to find bugs. In addition to learning these testing concepts, attendees will get to practice the attack pattern on devices containing mobile, IoT, and/or embedded software—so bring your smartphones.

MN Planning, Architecting, and Implementing Test Automation within the Lifecycle
Michael Sowers, TechWell Corp.
Mon, 11/09/2015 - 1:00pm

In test automation, we must often use several tools that have been developed or acquired over time with little to no consideration of an overall plan, architecture, or the need for integration. As a result, productivity suffers and frustrations increase. Join Mike Sowers as he shares experiences from multiple organizations in creating an integrated test automation plan and developing a test automation architecture. Mike discusses both the good (engaging the technical architecture team) and bad (too much isolation between test automators and test designers) on his test automation journey in large and small enterprises. Discover approaches to ensure that the test tools you currently have and the new test tools you acquire or develop will work well with other testing and application lifecycle software. Explore approaches to drive test automation adoption across multiple project teams and departments, and communicate the real challenges and potential benefits to your stakeholders.

TH Test Estimation in Practice
Rob Sabourin, AmiBug.com
Tue, 11/10/2015 - 8:30am

Anyone who has ever attempted to estimate software testing effort realizes just how difficult the task can be. The number of factors that can affect the estimate is virtually unlimited. Rob Sabourin says that the key to good estimates is to understand the primary variables, compare them to known standards, and normalize the estimates based on their differences. This is easy to say but difficult to accomplish because estimates are frequently required even when very little is known about the project and what is known is constantly changing. Throw in a healthy dose of politics and a bit of wishful thinking, and estimation can become a nightmare. Rob provides a foundation for anyone who must estimate software testing work effort. Learn about the test team’s and tester’s roles in estimation and measurement, and how to estimate in the face of uncertainty. Analysts, developers, leads, test managers, testers, and QA personnel can all benefit from this tutorial.
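The normalization idea described above can be sketched as simple arithmetic: start from a known baseline and scale by the factors where this project differs from the norm. The baseline and factor values below are hypothetical:

```python
BASELINE_HOURS_PER_TEST = 2.0  # assumed historical average for a team

def estimate_hours(test_count, complexity=1.0, experience=1.0):
    """Scale a baseline estimate by how this project differs from past ones."""
    return test_count * BASELINE_HOURS_PER_TEST * complexity / experience

# 150 tests, 20% more complex than usual, slightly less experienced team
print(round(estimate_hours(150, complexity=1.2, experience=0.9)))  # 400
```

As the abstract notes, the hard part is not the arithmetic but knowing the variables and baselines well enough to trust the factors.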

TJ Quality Assurance: Moving Your Organization Beyond Testing NEW
Jeffery Payne, Coveros, Inc.
Tue, 11/10/2015 - 1:00pm

Many organizations use the terms quality assurance and software testing interchangeably to describe their testing activities. But true quality assurance is much, much more than testing alone. Quality assurance encompasses a planned set of tasks, activities, and actions used to provide management with information about the quality of software so appropriate business decisions can be made. Jeffery Payne discusses the differences between software testing and quality assurance, examining the typical activities performed during a true quality assurance program. Topics discussed include evaluating software processes, validating software artifacts (requirements, designs, etc.), presenting a quality case to management, and how to start implementing a true quality assurance program. Leave with a working knowledge of quality assurance and a framework for incrementally improving your overall software quality assurance program.

TL Acceptance Test-Driven Development: Principles and Practices NEW
Ken Pugh, Net Objectives
Tue, 11/10/2015 - 1:00pm

Defining, understanding, and agreeing on the scope of work to be done is often an area of discomfort for product managers, developers, and quality assurance experts alike. The origin of many items living in our defect tracking systems can be traced to the difficulty of performing these initial activities. Ken Pugh introduces acceptance test-driven development (ATDD), explains why it works, and outlines the different roles team members play in the process. ATDD improves communication among customers, developers, and testers. ATDD has proven to dramatically increase productivity and reduce delays in development by decreasing rework. Through interactive exercises, Ken shows how acceptance tests created during requirement analysis decrease ambiguity, increase scenario coverage, help with effort estimation, and act as a measurement of quality. Join Ken to examine issues with automating acceptance tests, including how to create test doubles and when to insert them into the process. Explore the quality of tests and how they relate to the underlying code.
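As a flavor of what an automated acceptance test with a test double can look like, here is a minimal sketch; the tax-service scenario, names, and numbers are hypothetical, not examples from the tutorial:

```python
class FakeTaxService:
    """Test double: stands in for a real external tax service."""
    def rate_for(self, region):
        return 0.05  # canned answer agreed with the customer

def total_price(subtotal, region, tax_service):
    """Production-style code under test, with the service injected."""
    return round(subtotal * (1 + tax_service.rate_for(region)), 2)

# Acceptance criterion: "a $100 order in region 'QC' totals $105.00"
assert total_price(100.00, "QC", FakeTaxService()) == 105.00
```

Because the double is injected, the acceptance test runs without the real service being available, which is one reason to decide deliberately when to insert doubles into the process.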

TN Advanced Test Automation in Agile Development
Rob Sabourin, AmiBug.com
Tue, 11/10/2015 - 1:00pm

Agile teams are charged with delivering potentially shippable software at the end of each iteration. In fact, some high-performing agile teams with advanced automation can ship working software every day. They achieve regression confidence with extensive automated test suites and other advanced practices. Rob Sabourin shares automation techniques to improve story and feature testing, exploratory testing, and regression testing. Explore ways that test-driven development (TDD) techniques, precise test and tool selection, appropriate automation design, and team collaboration can be combined to fully integrate testing into agile delivery teams. Learn how automation supports and drives agile testing activities, and how test automation is implemented in diverse organizations. Rob illustrates many types of automation with sample test descriptions, source code, and test scripts. See examples of automated tests for TDD, acceptance test-driven development, and behavior-driven development. Leave with a new toolkit of agile automation methods and techniques.
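In the spirit of the sample tests mentioned above, a TDD-style example might look like the following sketch (the slugify function is hypothetical): the test is written first and fails, then just enough code is written to make it pass.

```python
def slugify(title):
    """Implemented after the test below was written and seen to fail."""
    return title.strip().lower().replace(" ", "-")

# The "test first" step: this assertion existed before slugify did.
def test_spaces_become_hyphens():
    assert slugify("Agile Test Automation") == "agile-test-automation"

test_spaces_become_hyphens()  # green: the implementation now passes
```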


Concurrent Sessions

BW12 Exploratory Testing: Make It Part of Your Test Strategy
Kevin Dunne, QA Symphony
Wed, 11/11/2015 - 4:15pm

Developers often have the unfortunate distinction of not thoroughly testing their code. It’s not that developers do not understand how to test well; it’s just that often they have not had an opportunity to understand how the product works. Kevin Dunne maintains that implementing a team-wide exploratory testing initiative can help build the collaboration and knowledge sharing needed to elevate all team members to the level of product master. Exploratory testing can be performed by anyone, but the real challenge is making sure that the process is properly managed, documented, and optimized. Kevin describes the tools necessary to drive a deeper understanding of software quality and to implement an effective and impactful exploratory testing practice. Creating better software is not just about writing code more accurately and efficiently; it is about delivering value to the end user. Well-executed exploratory testing helps unlock this capability across the entire development team.
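One common way to manage, document, and optimize exploratory testing is session-based: each timeboxed session runs against a charter and records notes and bugs. A minimal sketch of such a record (the data model is an assumption, not Kevin's tooling):

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    charter: str                  # mission for the timeboxed session
    tester: str
    timebox_minutes: int = 60
    notes: list = field(default_factory=list)
    bugs: list = field(default_factory=list)

s = Session("Explore checkout with invalid coupon codes", tester="Alex")
s.notes.append("Coupon field accepts 500-character input")
s.bugs.append("Expired coupon still applies a discount")
print(len(s.bugs))  # 1
```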

BT3 Test Data Management: A Healthcare Industry Case Study
Jatinder Singh, Harvard Pilgrim Health Care
Shaheer Mohammed, Harvard Pilgrim Health Care
Thu, 11/12/2015 - 10:00am

As IT systems increase in both scale and complexity, delivering quality applications becomes more challenging. In addition to creating and executing test scenarios, testers need to create and maintain the test data that enables test execution. Test data management (TDM) creates and processes data in test environments using business knowledge and technology. Test data is created based on requirements provided by its consumers. With TDM in your software delivery process, teams dependent on data can focus on creating and executing test scenarios instead of having to provision the data to run these tests. Shaheer Mohammed and Jatinder Singh present a case study that recaps the successful creation of a TDM team. They review what worked well, share lessons learned along the way, touch on the challenges of managing protected data in the healthcare industry, and discuss innovative tools and processes that enabled their success.
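One routine TDM task in this setting is masking protected fields so test environments never hold real protected health information. A minimal sketch (the field names and masking rules are hypothetical, not the case study's actual process):

```python
import random
import string

def mask_record(record):
    """Return a copy safe for test environments: fake name, truncated SSN."""
    masked = dict(record)
    masked["name"] = "TEST-" + "".join(random.choices(string.ascii_uppercase, k=6))
    masked["ssn"] = "XXX-XX-" + record["ssn"][-4:]
    return masked

member = {"name": "Jane Doe", "ssn": "123-45-6789", "plan": "HMO"}
print(mask_record(member)["ssn"])  # XXX-XX-6789
```

Keeping the last four SSN digits preserves realistic lookups in tests while the original record is never copied into the test environment.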

BT6 Detection Theory Applied to Finding and Fixing Defects
Ru Cindrea, Altom Consulting
Thu, 11/12/2015 - 11:30am

Detection theory says: When trying to detect a certain event, a person can correctly report that it happened, miss it, report a false alarm, or correctly report that nothing happened. Under conditions of uncertainty, the decision to report an event is strongly influenced by how likely it is that the event could happen or what the consequences of the event might be. Using real-life examples, Ru Cindrea shows how this theory can be applied not only to finding defects but also to fixing them. The decision to fix a defect is also made under conditions of uncertainty and, although testers are not the ones making such decisions, testers may influence how decisions are made. Ru discusses how we testers, in addition to finding the right balance between misses and false alarms when hunting for defects, must use our credibility to provide the right information to stakeholders making decisions about fixing defects.
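The four outcomes the abstract opens with map directly onto a two-by-two of reality versus the tester's report. A minimal sketch, using the standard detection-theory labels:

```python
def classify(defect_present, tester_reports):
    """Classify one detection decision into the four standard outcomes."""
    if defect_present and tester_reports:
        return "hit"
    if defect_present and not tester_reports:
        return "miss"
    if not defect_present and tester_reports:
        return "false alarm"
    return "correct rejection"

print(classify(defect_present=True, tester_reports=False))  # miss
```

Balancing misses against false alarms then becomes a question of where the tester sets the reporting threshold, given the likelihood and consequences of each outcome.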
