
Test Design

Tutorials

TE Fundamental Test Design Techniques
Lee Copeland, Software Quality Engineering
Tue, 06/23/2015 - 8:30am

As testers, we know that we can define many more test cases than we will ever have time to design, execute, and report. The key problem in testing is choosing, from the almost infinite number of available tests, a small, “smart” subset that will find a large percentage of the defects. Join Lee Copeland to discover how to design test cases using formal black-box techniques, including equivalence class testing, boundary value testing, decision tables, and state-transition diagrams. Explore examples of each of these techniques in action. Don’t just pick test cases at random. Learn to selectively choose a set of test cases that maximizes your effectiveness and efficiency to find more defects in less time. Then, examine how to use the test results to evaluate the quality of both your products and your testing. Discover the test design techniques that will make your testing more productive.
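
To illustrate the flavor of these techniques, here is a minimal sketch of equivalence class and boundary value testing in Python. The function and the 18–65 validity range are hypothetical examples, not material from the session itself; the point is that boundary value testing selects values at and adjacent to each boundary rather than arbitrary members of each class.

```python
def accepts_age(age: int) -> bool:
    """Hypothetical system under test: valid ages are 18-65 inclusive."""
    return 18 <= age <= 65

# Two equivalence classes: valid (18-65) and invalid (everything else).
# Boundary value testing probes the edges of those classes, where
# off-by-one defects cluster.
boundary_cases = {
    17: False,  # just below the lower boundary (invalid class)
    18: True,   # lower boundary
    19: True,   # just above the lower boundary
    64: True,   # just below the upper boundary
    65: True,   # upper boundary
    66: False,  # just above the upper boundary (invalid class)
}

for age, expected in boundary_cases.items():
    assert accepts_age(age) == expected, f"failed at age={age}"
```

Six well-chosen values exercise both classes and all four boundary neighbors, replacing dozens of arbitrary picks.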

TF Tips for Expanding Your Testing Toolbox
Alan Page, Microsoft
Tue, 06/23/2015 - 1:00pm

Regardless of how long you’ve been testing and learning—whether a month or many years—there is always something new to help improve your testing and software development efforts. Although many testers, for better or worse, see test automation as their next—and sometimes only—step to grow their skill set and improve as a tester, there is much more to do. Alan Page discusses, demonstrates, and details concepts and tools that can help everyone test better and provide noticeable technical value to their organization. Alan explores a potpourri of suggestions to help you grow your testing toolbox: techniques for security and performance testing, tools to help you find better bugs, scripting that aids (rather than replaces) your testing, tester tips for code review that can be done with minimal (or zero) knowledge of coding, and more. Finally, you’ll learn simple approaches that will enable you to continue to grow your knowledge and skills throughout your career.

TG Getting Things Done: What Testers Do in Agile Sprints
Rob Sabourin, AmiBug.com
Tue, 06/23/2015 - 1:00pm

Avoiding siloed development and test is a tricky business—even with agile practices in place. It is easy for agile teams to fall into the rut in which testers only do testing and programmers only do coding. Rob Sabourin explores many ways to apply your testing knowledge and experience inside a Scrum sprint or iteration and throughout an agile project. He finds that testers are among the most skilled team members in story grooming, elicitation, and exploration. Rob describes a host of ways testers add value to an agile sprint—using their analysis skills to help clear the way to make tough technical trade-offs; pairing with programmers to help design and review unit tests; studying static analysis reports to find unexpected code complexity or security risks; and much more. Join Rob to see how testers can start working hand-in-hand with developers, business analysts, and product owners to get more things done in agile sprints and projects.


Keynotes

K1 How We NOW Test Software at Microsoft
Alan Page, Microsoft
Wed, 06/24/2015 - 8:30am

In December 2008 when How We Test Software at Microsoft was first published, the software community appreciated the insight into many testing activities and processes popular at Microsoft. Six and a half years later, many companies—including Microsoft—have evolved and changed in a variety of ways, and now much of the book is outdated or obsolete. New products, new ideas, and new strategies for releasing software have emerged. Alan Page explores Microsoft’s current approaches to software testing and quality. He digs into new practices, describes changing roles, rants about long-lived ideas kicked to the curb in the past seven years―and might even share a few tidbits not fit for print and wide-scale distribution. To give organizations food for thought and ideas for growth, Alan reveals what’s new in quality approaches, developer to tester ratios, agile practices, tools, tester responsibilities—and lessons he’s learned along the way.


Concurrent Sessions

W2 Testing the Internet of Things
Regg Struyk, Polarion Software
Wed, 06/24/2015 - 10:15am

Embedded software—now often referred to as the Internet of Things (IoT)—continues to permeate almost every industry, from household appliances to heart monitors. It is estimated that there are at least a million lines of code in the average car. As IoT explodes from millions of devices to tens of billions in the next few years, new challenges will emerge for software testing. Security, privacy, complexity, and competing standards will fuel the need for innovative testing. Customers don't care why your software failed in the connected chain—only that it did fail. Companies that focus on quality will ultimately be the successful brands. Learn what new approaches are required for testing the “zoo” of interconnected devices. As products increasingly connect physical hardware with applications, we must revisit old testing approaches. IoT is about analyzing data in real time, allowing testers to make quicker and more informed decisions. If IoT testing is in your future, this session is for you.

W3 From Formal Test Cases to Session-Based Exploratory Testing
Ron Smith, Intuit
Wed, 06/24/2015 - 10:15am

Agile software development is exciting, but what happens when your team is entrenched in older methodologies? Even with support from the broader organization, leading a team through the transformation is challenging. As you start making smaller, more frequent releases, your manual test cases may not keep up, and your automated tests may not yet be robust enough to fill the gap. Add in the reality of shrinking testing resources, and it is obvious that change is required. But how and what should you change? Learn how Ron Smith and his team tackled these challenges by moving from a test case-driven approach to predominantly session-based exploratory testing, supported by “just enough” documentation. Discover how this resulted in testers who are more engaged, developers who increased their ability and willingness to test, and managers who increased their understanding and insight into the product. Use what you learn from Ron to begin the transformation in your organization.

W6 Virtualize APIs for Better Application Testing
Lorinda Brandon, SmartBear Software
Wed, 06/24/2015 - 11:30am

In today’s interconnected world, APIs are the glue that allows software components, devices, and applications to work together. Unfortunately, many testers don’t have direct access to manipulate the APIs during testing and must rely on either testing the API separately from the application or testing the API passively through functional application testing. Lorinda Brandon maintains that these approaches miss the most important kind of API testing―uncovering how your application deals with API constraints and failures. Lorinda describes common API failures—overloaded APIs, bad requests, service unavailability, and timeouts—that negatively impact applications, and how application testers miss these scenarios, especially in third-party APIs. She explores how and when virtualization can and cannot help, including creating a virtual API that can fail. Lorinda discusses the importance of simulating API failures in web and mobile application testing, and identifies tools and technologies that help virtualize your APIs.
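
The core idea—a virtual API that can be switched into failure modes so the application's error handling gets exercised—can be sketched in a few lines. Everything here (`VirtualAPI`, `fetch_profile`, the failure modes) is an illustrative assumption, not the session's tooling:

```python
class SimulatedTimeout(Exception):
    """Raised by the virtual API to mimic a network timeout."""

class VirtualAPI:
    """Stand-in for a third-party API; `mode` selects its behavior."""
    def __init__(self, mode="ok"):
        self.mode = mode

    def get(self, path):
        if self.mode == "server_error":
            return {"status": 500, "body": None}
        if self.mode == "timeout":
            raise SimulatedTimeout("simulated timeout")
        return {"status": 200, "body": {"name": "Ada"}}

def fetch_profile(api):
    """Application code under test: must degrade gracefully on API failure."""
    try:
        resp = api.get("/profile")
    except SimulatedTimeout:
        return "unavailable"
    if resp["status"] != 200:
        return "unavailable"
    return resp["body"]["name"]

# The same application path is driven through success and both failure modes.
assert fetch_profile(VirtualAPI("ok")) == "Ada"
assert fetch_profile(VirtualAPI("server_error")) == "unavailable"
assert fetch_profile(VirtualAPI("timeout")) == "unavailable"
```

Because the double is under the tester's control, failure scenarios that are nearly impossible to trigger against a live third-party API become routine, repeatable checks.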

W15 Automate REST API Testing
Eric Smith, HomeAdvisor
Wed, 06/24/2015 - 3:00pm

As an organization grows, the body of code that needs to be regression tested constantly increases. However, to maintain high velocity and deliver new features, teams need to minimize the amount of manual regression testing. Eric Smith shares his lessons learned in automating RESTful API tests using JMeter, RSpec, and Spock. Gain insights into the pros and cons of each tool, take back practical knowledge about the tools available, and explore reasons why your shop should require RESTful automation as part of its acceptance test criteria. Many decisions must be made to automate API tests: choosing the platform, integrating with the current build-and-deploy process, and integrating with reporting tools to keep key stakeholders informed. Although the initial transition caused his teams to bend their traditional roles, Eric says that ultimately the team became more cross-functionally aligned and developed a greater sense of ownership for delivering a quality product.
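
The session's tools are JMeter, RSpec, and Spock; purely to show the shape of an automated REST check—send a request, assert on status and payload—here is a self-contained sketch using only Python's standard library against a local stub service (the `/health` endpoint and its payload are invented for the example):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubHandler(BaseHTTPRequestHandler):
    """Tiny stand-in service so the example runs without a real API."""
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Port 0 asks the OS for any free port; serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The actual regression check: status code and payload assertions.
url = f"http://127.0.0.1:{server.server_port}/health"
with urllib.request.urlopen(url) as resp:
    assert resp.status == 200
    payload = json.loads(resp.read())
    assert payload["status"] == "ok"

server.shutdown()
```

In a real suite the stub is replaced by the service under test, and checks like these run in the build pipeline so every deploy gets the same regression pass.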

T6 Write Your Test Cases in a Domain-Specific Language
Beaumont Brush, Dematic, Inc.
Thu, 06/25/2015 - 11:30am

Manual test cases are difficult to write and costly to maintain. Beaumont Brush suggests that one of the more important but infrequently discussed reasons is that manual tests are usually written in natural language, which is ineffective for describing test cases clearly. Employing a domain-specific language (DSL), Beaumont and his team approach their manual test cases exactly like programming code and gain the benefits of good development and design practices. He shares their coding standards, reusability approach, and object models that integrate transparently into the version control and code review workflow. Beaumont demonstrates two DSL approaches―a highly specified DSL written in Python and a more functional DSL that leverages Gherkin syntax and does not require a computer language to implement. By making your test cases easier to write and maintain, your team will improve its test suite and have time for automating more tests.
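
As a rough illustration of the first approach—test cases as Python objects rather than free-form prose—consider this minimal sketch. The class, step names, and rendering format are all hypothetical, but they show how structured test cases pick up chaining, reuse, and version-control-friendly diffs:

```python
class ManualTestCase:
    """A manual test case expressed as code instead of natural language."""
    def __init__(self, title):
        self.title = title
        self.steps = []

    def step(self, action, expected):
        """Record one action/expected-result pair; returns self for chaining."""
        self.steps.append((action, expected))
        return self

    def render(self):
        """Produce the human-readable script a tester follows."""
        lines = [f"Test: {self.title}"]
        for i, (action, expected) in enumerate(self.steps, 1):
            lines.append(f"  {i}. {action} -> expect: {expected}")
        return "\n".join(lines)

login_test = (
    ManualTestCase("Operator can log in")
    .step("Open the login screen", "Username and password fields shown")
    .step("Enter valid credentials and submit", "Dashboard is displayed")
)
print(login_test.render())
```

Because each case is an ordinary object, common step sequences can be factored into shared helpers, and a reviewer sees a precise diff when a step changes—benefits natural-language test documents lack.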

T7 Transform a Manual Testing Process to Incorporate Automation
Jim Trentadue, Ranorex
Thu, 06/25/2015 - 11:30am

Although most testing organizations have automation, it’s usually a subset of their overall efforts. Typically the processes for the department have been previously defined, and the automation team must adapt accordingly. The major issue is that test automation work and deliverables do not always fit into a defined manual testing process. Jim Trentadue explores what test automation professionals must do to be successful. These tasks include understanding development standards for objects, structuring tests for modularity, and eliminating manual efforts. Jim reviews the revisions required to a V-model testing process to integrate the test automation work. This requires changes to the manual testing process, specifically at the test plan and test case level. Learn the differences between automated and manual testing process needs, how to start a test automation process that ties into your overall testing process, and how to do a gap analysis for those actively doing automation, connecting better with the functional testing team.
