Concurrent Sessions

Sessions are offered on Wednesday and Thursday at the conference and do not require pre-selection. Build your own custom learning schedule, or follow one of our tracks by topic area.

W1 Reducing the Cost of Software Testing
Matthew Heusser, Excelon Development
Wednesday, October 2, 2013 - 11:30am - 12:30pm

The demand to deliver more software in less time is increasing. Give in to the pressure without thinking, and you end up facing burnout, stress, business risk, and, most likely, even more demands. Refuse, fight the good fight, and it is likely the business will replace you with someone else. Matt Heusser tackles head-on the problem of pressure, sharing his favorite concepts from the book How to Reduce the Cost of Software Testing. Starting with why outsourcing and automation have not tamed the bugbear of cost, Matt moves on to describe the hidden costs of delays, handoffs, batched work, and documentation. Leave with a number of options that can help get the real project done more quickly, with guidance about when those techniques are appropriate, and what you’ll trade off to get there. 

More Information
Learn more about Matthew Heusser.
W2 Testing Lessons Learned from Monty Python
Rob Sabourin, AmiBug.com
Wednesday, October 2, 2013 - 11:30am - 12:30pm

And now for something completely different ... Monty Python's Flying Circus revolutionized comedy and brought zany British humor to a worldwide audience. However, buried deep in the hilarity and camouflaged in its twisted wit lie many important testing lessons—tips and techniques you can apply to real-world problems to deal with turbulent projects, changing requirements, and stubborn project stakeholders. Rob Sabourin examines some of the most famous Python bits—“The Spanish Inquisition” telling us to expect the unexpected, “The Dead Parrot” asking if we should really deliver this product to the customer, “The Argument” teaching us about bug advocacy, “Self Defense against Fresh Fruit” demonstrating the need to pick the right testing tool, and a host of other goofy gags, each one with a lesson for testers. Learn how to test effectively with persistence, how to make your point with effective communication, and how to clarify project goals and requirements.

More Information
Learn more about Rob Sabourin.
W3 Intelligent Mistakes in Test Automation
Dorothy Graham, Software Test Consultant
Wednesday, October 2, 2013 - 11:30am - 12:30pm

A number of test automation ideas that at first glance seem very sensible actually contain pitfalls and problems that you should avoid. Dot Graham describes five of these “intelligent mistakes”—automated tests will find more bugs more quickly; spending a lot on a tool must guarantee great benefits; it’s necessary to automate all of our manual tests; tools are expensive so we have to show a substantial return on investment; and testing tools must be used by the testers. Dot points out that automation doesn’t find bugs; tests do. Good automation does not come out of the box and is not automatic. Automating everything may not give you better (or faster) testing. Determining the actual rate of return is not only surprisingly difficult but may actually be harmful. Turning testers into test automators may waste their skills and talents. Join Dot for a rousing discussion of intelligent mistakes—so you can be smart enough to avoid them.

More Information
Learn more about Dorothy Graham.
W4 Working Testing Tasks into the Product Backlog
Michael Kelly, DeveloperTown
Wednesday, October 2, 2013 - 11:30am - 12:30pm

If you've worked on an agile project, delivering to production on a regular basis, then you've struggled with the challenge of fitting in all the big tasks—performance, security, usability, and compatibility testing. To make matters worse, over time it becomes more and more challenging just to fit in all the functional testing that needs to take place, and that's even with rigorous unit and acceptance test automation. So how do you fit all that testing into the backlog when it doesn't tie nicely to one specific feature? Michael Kelly explains that by writing specific stories for each testing activity and understanding when to coordinate the timing of those activities with overall project and iteration goals, you can make the testing tasks more visible and acceptable. Backlogs are where teams work out priority and scope, and set expectations for levels of effort. Make sure testing is a part of your project’s backlog.

More Information
Learn more about Michael Kelly.
W5 Rapid Performance Testing: No Load Generation Required
Scott Barber, PerfTestPlus, Inc.
Wednesday, October 2, 2013 - 11:30am - 12:30pm

Load testing is just one—but the most frequently discussed—aspect of performance testing. Luckily, much of performance testing does not demand the same expensive tools, special skills, environments, or time as load testing does. Scott Barber developed the Rapid Performance Testing (RPT) approach to help individuals and teams with the non-load aspects of performance testing. RPT is fast and easy, requires no investment in tools or special skills, is applicable throughout virtually any development cycle by anyone on the team, and most importantly reduces the frequency of those performance issues that threaten, or even negate, the value of load testing. Through examples and case studies, Scott shares the RPT approach and grants you exclusive access to his “Top Secret RPT Tips, Tools & Utilities” webpage. Immediately following this session, join Scott in the TestLab for real-time demonstrations on applications of your choosing and for an opportunity to have Scott coach you while you practice RPT.

More Information
Learn more about Scott Barber.
W6 Creating a Better Testing Future: The World Is Changing and We Must Change With It
Lee Copeland, Software Quality Engineering
Wednesday, October 2, 2013 - 11:30am - 12:30pm

The IEEE 829 Test Documentation standard is thirty years old this year. Boris Beizer’s first book on software testing also turned thirty. Testing Computer Software, the best-selling book on software testing, is twenty-five. During the last three decades, hardware platforms have evolved from mainframes to minis to desktops to laptops to tablets to smartphones. Development paradigms have shifted from waterfall to agile. Consumers expect more functionality, demand higher quality, and are less loyal to brands. The world has changed dramatically, and testing must change to match it. Testing processes that helped us succeed in the past may prevent our success in the future. Lee Copeland shares his insights into the future of testing, including his Do’s and Don’ts in the areas of technology, organization, test processes, test plans, and automation. Join Lee for a thought-provoking look at creating a better testing future.

More Information
Learn more about Lee Copeland.
W7 Key Strategies to Survive the Mega Test Program
Robert Goetz, Kaiser Permanente
Wednesday, October 2, 2013 - 1:45pm - 2:45pm

Sometime in your career as a test manager, you’ll be assigned to lead the effort for a program so large that the CEO and board of directors monitor it. These are programs that bet the organization’s future and come with a high degree of risk, visibility, pressure, and fixed deadlines. Internal audit and external third-party reviews become de rigueur. Your upstream partners—analysis, design, development, and suppliers—all appear (at least to you) to miss their deadlines with no apparent consequences. Everyone looks to you to “make your dates” so the project delivers on time and keeps the organization in business. Bob Goetz shares key strategies, techniques, and processes to help you survive and even thrive in these mega programs. Learn ways to build teams, standardize processes, measure progress, and work transparently so your business will have full confidence in you and your team—and let you sleep at night!

More Information
Learn more about Robert Goetz.
W8 Data Warehouse Testing: It’s All about the Planning
Geoff Horne, NZTester Magazine
Wednesday, October 2, 2013 - 1:45pm - 2:45pm

Today’s data warehouses are complex and contain heterogeneous data from many different sources. Testing these warehouses is complex, requiring exceptional human and technical resources. So how do you achieve the desired testing success? Geoff Horne believes that it is through test planning that includes technical artifacts such as data models, business rules, data mapping documents, and data warehouse loading design logic. Geoff shares planning checklists, a test plan outline, concepts for data profiling, and methods for data verification. He demonstrates how to effectively create a test strategy to discover empty fields, missing records, truncated data, duplicate records, and incorrectly applied business rules—all of which can dramatically impact the usefulness of the data warehouse. Learn the common pitfalls of taking test planning shortcuts, which can cost your business hundreds of thousands of dollars or more. If you work in an environment that often performs data warehouse testing without proper planning and technical skills, this session is for you.

More Information
Learn more about Geoff Horne.
W9 Model-Based Testing with Keywords
Hans Buwalda, LogiGear
Wednesday, October 2, 2013 - 1:45pm - 2:45pm

Model-based testing can be a powerful alternative to just writing test cases. However, modeling tools are specialized and not suitable for everyone. On the other hand, keyword-driven test automation has gained wide acceptance as a powerful way to create maintainable automated tests, and, unlike models, keywords are simple to use. Hans Buwalda demonstrates different ways that keyword testing and models can be combined to make model-based testing more readily accessible. Learn how you can use keywords to create the models directly. The results of this “poor man’s approach” to model-based testing are clean, concise test cases that are interpreted dynamically. In other words, the model executes the tests rather than generating the tests for execution by another tool. This allows the model to actively respond to changing conditions in the application under test. See this demonstrated with a simple state-transition model, written with keywords, that plays a game until all relevant situations have been visited.
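
As a rough illustration of this idea (a hedged sketch, not Hans's actual framework; all state and keyword names are hypothetical), the state-transition model below is expressed directly as keyword transitions and executes the test itself by walking the model until every transition has been visited:

    import random

    # Each state maps keywords (user actions) to the state they lead to.
    TRANSITIONS = {
        "logged_out": {"log in": "main_menu"},
        "main_menu": {"start game": "playing", "log out": "logged_out"},
        "playing": {"win round": "main_menu", "quit game": "main_menu"},
    }

    def execute_keyword(keyword):
        # In a real framework this would drive the application under test.
        print("executing keyword:", keyword)

    def run_model(start="logged_out"):
        unvisited = {(s, kw) for s, kws in TRANSITIONS.items() for kw in kws}
        state = start
        while unvisited:
            keyword, next_state = random.choice(list(TRANSITIONS[state].items()))
            execute_keyword(keyword)
            # A real model would also verify the application reached next_state.
            unvisited.discard((state, keyword))
            state = next_state

    run_model()

Because the model interprets the keywords dynamically, it can react to whatever state the application is actually in, rather than replaying a pre-generated script.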

More Information
Learn more about Hans Buwalda.
W10 Agile Code Reviews for Better Software—Sooner
Mark Hammer, SmartBear Software
Wednesday, October 2, 2013 - 1:45pm - 2:45pm

Code reviews are often thought of as anti-agile, cumbersome, and disruptive. However, done correctly, they enable agile teams to become more collaborative and effective, and ultimately to produce higher quality software faster. Mark Hammer describes how lightweight code review practices succeed where more cumbersome methods fail. Mark offers tips on the mechanics of lightweight code reviews and compares five common styles of review. He looks at real-world examples and reveals impressive results. Gain new insights into how much time to spend in review, how much code to review in one session, and how author preparation practices can increase the efficiency of a review. Learn how peer code review can improve the performance of individual developers, their teams, and the software they produce. Mark shares the specific benefits of peer code review, including ROI and the ultimate goal of producing higher quality software faster.

More Information
Learn more about Mark Hammer.
W11 Automated Performance Profiling with Continuous Integration
Ivan Kreslin, Mitchell International
Wednesday, October 2, 2013 - 1:45pm - 2:45pm

Historically, performance tests have been run long after the code has been checked in, making performance issues time-consuming to resolve and a poor fit for the agile process. Ivan Kreslin presents a solution that he’s implemented to address this problem. Learn how Ivan integrates the functionality of Microsoft Performance Profiling tools into a test automation framework to capture performance-related issues during continuous integration. Learn how to extend any desired tests and enable them to be used simultaneously for both functional and performance testing—detecting any performance regressions that may have been introduced from one build to the next. For any regression found, learn how the automated process generates a report listing the modules and functions that have changed, by how much, who checked the code in, and when. Learn how you can automate performance profiling for your own projects and detect performance problems earlier.
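
A minimal sketch of the underlying idea (not Ivan's integration, and not the real API of the Microsoft profiling tools; the input format and the 10 percent threshold are assumptions): compare per-function timings from two builds and flag regressions for the report.

    # Compare per-function timings (in ms) between a baseline build and
    # the current build; flag anything that slowed down beyond a threshold.
    def find_regressions(baseline, current, threshold=0.10):
        regressions = []
        for func, new_ms in current.items():
            old_ms = baseline.get(func)
            if old_ms and (new_ms - old_ms) / old_ms > threshold:
                regressions.append((func, old_ms, new_ms))
        return regressions

    baseline = {"LoadClaim": 120.0, "SaveClaim": 80.0}
    current = {"LoadClaim": 150.0, "SaveClaim": 81.0}

    for func, old, new in find_regressions(baseline, current):
        # A CI job would attach the commit author and build number here.
        print("%s: %.0fms -> %.0fms (+%.0f%%)" % (func, old, new, 100 * (new - old) / old))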

More Information
Learn more about Ivan Kreslin.
W12 Software Quality Metrics for Testers
Philip Lew, XBOSoft
Wednesday, October 2, 2013 - 1:45pm - 2:45pm

When implementing software quality metrics, we need to first understand the purpose of the metrics and who will be using them. Will the metric be used to measure people or the process, to illustrate the level of quality in software products, or to drive toward a specific objective? QA managers typically want to deliver productivity metrics to management, but management may want to see metrics that describe customer or user satisfaction. Philip Lew believes that software quality metrics without actionable objectives toward increasing customer satisfaction are a waste of time. Learn how to connect each metric with potential actions based on evaluating the metric. Metrics for the sake of information may be helpful but often just end up in spreadsheets of interest to no one. Take home methods to identify metrics that support actionable objectives. Once the metrics and their objectives have been established, learn how to define and use metrics for real improvement.

More Information
Learn more about Philip Lew.
W13 Testing to Detect Problems that Will Hurt the Bottom Line
Pradeep Soundararajan
Wednesday, October 2, 2013 - 3:00pm - 4:00pm

Many of our stakeholders don't understand testing like we do, especially those whose focus is on making sales, growing revenues, and watching the bottom line. As testers, how can we support them in their efforts to be successful? How can we provide useful, timely information that helps them make important decisions? Pradeep Soundararajan shares his experiences with changing perceptions of testing for those in sales and the ripple effect it had on the testers’ freedom and responsibilities. Pradeep describes how pair testing the product with sales and marketing people, understanding what they need, and the product claims they make have led to significant increases in product sales, quality, and reputation. He shows how this simple idea can change the way we test and how we may help our customers to see testing differently. Learn how Pradeep helps his team members to become leaders in software testing.

More Information
Learn more about Pradeep Soundararajan.
W14 Model-Based Testing: Concepts, Tools, and Techniques
Adam Richards, Critical Logic
Wednesday, October 2, 2013 - 3:00pm - 4:00pm

For decades, software development tools and methods have evolved with an emphasis on modeling. Standards like UML and SysML are now used to develop some of the most complex systems in the world. However, test design remains a largely manual, intuitive process. Now, a significant opportunity exists for testing organizations to realize the benefits of modeling. Adam Richards describes how to leverage model-based testing to dramatically improve both test coverage and efficiency—and lower the overall cost of quality. Adam provides an overview of the basic concepts and process implications of model-based testing, including its role in agile. A survey of model types and techniques shows different model-based solutions for different kinds of testing problems. Explore tool integrations and weigh the pros and cons of model-based test development against a variety of system and project-level factors. Gain a working knowledge of the concepts, tools, and techniques needed to introduce model-based testing to your organization.

More Information
Learn more about Adam Richards.
W15 iOS Test Automation: The Trifecta
Elizabeth Taylor
Wednesday, October 2, 2013 - 3:00pm - 4:00pm

In this agile world, as timelines for rapid mobile application development and delivery get shorter every day, users’ patience with a buggy app has become almost nonexistent. Elizabeth Taylor shares how to reduce iOS application testing time and gain confidence in your code: use Xcode Instruments with JavaScript to automate your functional tests; verify easily missed UI elements—copy, labels, and images—with manual testing; and learn how to stress test your app. Elizabeth also covers scripting test functions so they can run on both iPad and iPhone devices, and using accessibility labels so automation can “see” custom controls. With Elizabeth’s Trifecta approach, you will structure your test suites and libraries so they are easy to run and debug, and their results easy to understand. Code snippets, useful JavaScript functions, and a live demo illustrate the Trifecta process in action.

More Information
Learn more about Elizabeth Taylor.
W16 Don’t Go over the Waterfall: Keep Agile Testing Agile
Aaron Barrett, Infusionsoft
Wednesday, October 2, 2013 - 3:00pm - 4:00pm

All too often an agile iteration resembles a mini-waterfall cycle with developers coding for the duration of the iteration and then throwing code “over the wall” to the test team. This results in the all-too-familiar “test squeeze” with testers often testing code after the iteration has already finished. When testing occurs after an iteration’s end, the agile principle of delivering potentially releasable software is violated, which negatively impacts the next iteration. To avoid these problems, we must ensure that all testing is completed before the end of the iteration. But how can we achieve this? Aaron Barrett explains that the solution lies in the planning and processes that govern the agile team. Learn proven strategies that allow your test teams to move testing back inside the iteration and take back a plan to keep you from going over the waterfall.

More Information
Learn more about Aaron Barrett.
W17 Create a One-Page Capacity Model for High-Traffic Web Applications
Dan Bartow, SOASTA, Inc.
Wednesday, October 2, 2013 - 3:00pm - 4:00pm

In the test lab and in production, everything hinges on looking at the right performance metrics. A common problem for engineering teams is that they don’t know what metrics they should be analyzing. It’s easy to get lost in an ocean of data from disparate monitoring tools and end up with no answers to the simplest questions about performance and capacity. The reality is that to build an effective capacity model, engineers only need to track three key metrics from each tier. Using a technique perfected during a decade of performance engineering on some of the world’s highest traffic web applications, Dan Bartow shares techniques for building one of the most useful tools a team can have—the one-page capacity model. Learn the critical metrics to monitor at each tier for the most common application technologies and how to turn this knowledge into a reference page that teams will rely on in every aspect of performance management for years to come.
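
The abstract does not name Dan's three metrics, so as a hedged sketch, assume peak throughput, CPU utilization, and response time per tier; a one-page model can then be little more than a table plus a headroom calculation:

    # Hypothetical one-page capacity model: three assumed metrics per tier.
    TIERS = {
        "web": {"peak_rps": 1200, "cpu_util": 0.55, "resp_ms": 40},
        "app": {"peak_rps": 1100, "cpu_util": 0.70, "resp_ms": 120},
        "db":  {"peak_rps": 900,  "cpu_util": 0.85, "resp_ms": 15},
    }
    SATURATION = 0.90  # assumed safe utilization ceiling

    for tier, m in TIERS.items():
        # Naive estimate: throughput scales linearly with CPU utilization.
        capacity = m["peak_rps"] * SATURATION / m["cpu_util"]
        headroom = capacity / m["peak_rps"] - 1
        print("%4s: ~%.0f req/s capacity (%+.0f%% headroom)" % (tier, capacity, 100 * headroom))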

More Information
Learn more about Dan Bartow.
W18 Courage and Freedom in Exploratory Testing
Griffin Jones, Congruent Compliance
Wednesday, October 2, 2013 - 3:00pm - 4:00pm

Exploratory testing (ET) consists of simultaneous learning, test design, test execution, and optimization. Most people are able to adopt the outward behaviors of ET but struggle to adopt an ET mindset. Griffin Jones explains that this mindset requires reflecting on four basic questions: Am I learning and adapting? Am I working on the correct mission? Should I redesign the task? Should I change how I perform the task? Sharing his experiences across project roles, Griffin explains why courage and freedom are critical ingredients in answering those four questions. He describes the warning signs of a superficial commitment to the values of ET. Learn the power of asking the question: What is the best test I can perform, right now? Move beyond mimicry and the superficial. Leave with a way to align yourself with the deeper values of exploratory testing.

More Information
Learn more about Griffin Jones.
T1 Eliminating Software Defects with Jidoka—The Overlooked Pillar of Lean
Bill Curtis
Thursday, October 3, 2013 - 9:45am - 10:45am

Many development organizations are experimenting—but getting mixed results—with lean development techniques. As a test or development manager, you have the power to help eliminate defects—the largest source of waste in development—and the enormous rework costs they incur. Bill Curtis discusses Jidoka, another pillar of lean, which uses automation to help developers detect and eliminate defects during development. Bill describes a technology framework that uses static analysis, behavioral emulation, and other techniques for analyzing and measuring non-functional quality characteristics such as reliability, security, performance, and maintainability. He presents data showing correlations between improvements in structural quality measures and reductions in operational defects and rework-related costs. In addition, Bill presents the most frequent types of structural defects for each software quality characteristic. Gain a broader vision of how to apply lean principles to your software development and maintenance practices.

More Information
Learn more about Bill Curtis.
T2 Evaluating and Testing Web APIs
Ole Lensmar, SmartBear Software
Thursday, October 3, 2013 - 9:45am - 10:45am

Thanks to the massive adoption of cloud and mobile applications, web APIs are moving to center stage for many business and technology teams. As a direct result, the need to deliver a high-quality API experience is essential. When it comes to the quality aspects of web APIs, there is more to them than first meets the eye. Apart from obvious characteristics related to functionality, performance, and security, several not-so-obvious traits of APIs are crucial for their adoption—many related to the context of the end user and how the API is to be consumed. To give you a thorough understanding of web API quality and to prepare you for testing these APIs, Ole Lensmar dives into both the expected and unexpected quality aspects of web APIs that you as a tester need to be aware of, including the importance of API usability, third-party API handling, and the passionate debate around web API metadata standards.
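
For the functional side of API quality, a test can assert on more than the payload; a minimal sketch using Python's requests library (the endpoint and fields are hypothetical):

    import requests

    def test_get_order():
        resp = requests.get("https://api.example.com/v1/orders/42",
                            headers={"Accept": "application/json"},
                            timeout=5)
        # Status, content type, and payload shape are all part of the contract.
        assert resp.status_code == 200
        assert resp.headers["Content-Type"].startswith("application/json")
        assert resp.json()["id"] == 42
        # A crude latency guard catches gross performance regressions too.
        assert resp.elapsed.total_seconds() < 1.0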

More Information
Learn more about Ole Lensmar.
T3 Refactoring Automated Functional Tests
Zhimin Zhan, AgileWay Pty Ltd
Thursday, October 3, 2013 - 9:45am - 10:45am

Regarded as one of the most important advances in software development, code refactoring is a disciplined technique to improve the design, readability, and maintainability of source code. You can learn to apply the same refactoring concepts to automated functional test scripts. Zhimin Zhan introduces functional test refactoring, a simple and highly effective approach to refine and maintain automated test scripts. Zhimin shares the approaches he uses to refactor existing tests into a set of reusable functions and page objects, and the concepts you need to start developing new automated tests. Learn about the six most common test refactorings, including “extract to page object,” “extract function,” and “rename function.” Learn how you can develop a sustainable rhythm for refactoring your automated tests. Take back immediately applicable ideas to achieve test automation success.
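
As a hedged sketch of the “extract to page object” refactoring (element IDs and pages are hypothetical, and this is not Zhimin's code), repeated raw lookups move into one reusable class:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Before: every test repeated raw element lookups such as
    #   driver.find_element(By.ID, "user").send_keys("bob")

    # After: the page's structure lives in one place.
    class LoginPage:
        def __init__(self, driver):
            self.driver = driver

        def login(self, username, password):
            self.driver.find_element(By.ID, "user").send_keys(username)
            self.driver.find_element(By.ID, "pass").send_keys(password)
            self.driver.find_element(By.ID, "login-btn").click()

    driver = webdriver.Chrome()
    driver.get("https://example.com/login")
    LoginPage(driver).login("bob", "secret")

When the login screen changes, only LoginPage changes; the tests that call it do not.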

More Information
Learn more about Zhimin Zhan.
T4 Mobile Testing Trends and Innovations
Melissa Tondi, ProtoTest
Thursday, October 3, 2013 - 9:45am - 10:45am

As organizations implement their mobile strategy, testing teams must support new technologies while still maintaining existing systems. Melissa Tondi describes the major trends and innovations in mobile technology, usage, and equipment that you should consider when transitioning existing test teams or starting new ones. Based on a year of research with the ProtoTest Mobile team, Melissa focuses on areas that balance efficiency and productivity including using the Device Matrix technique to select devices to test against, and the appropriate use of emulators and simulators rather than physical devices. She offers solutions to ensure you have a comprehensive mobile test strategy and focuses on challenges that have inundated traditional test teams such as understanding mobile-specific integration testing and which automation tools to use. Melissa describes how to build a well-organized device lab and incorporate testing scenarios—such as gesture and interruption testing—unique to mobile.
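
The session does not spell out the Device Matrix technique, so purely as a hypothetical illustration of one common approach, the sketch below picks the highest-usage devices until a coverage target is met (all figures made up):

    DEVICES = [  # (device, usage share, OS, screen size)
        ("iPhone 5",  0.22, "iOS 6",       "4.0in"),
        ("Galaxy S4", 0.18, "Android 4.2", "5.0in"),
        ("iPad 4",    0.12, "iOS 6",       "9.7in"),
        ("Nexus 7",   0.07, "Android 4.2", "7.0in"),
        ("Galaxy S2", 0.05, "Android 2.3", "4.3in"),
    ]

    def pick_matrix(devices, target_share=0.60):
        chosen, covered = [], 0.0
        for name, share, os_ver, screen in sorted(devices, key=lambda d: -d[1]):
            chosen.append(name)
            covered += share
            if covered >= target_share:
                break
        return chosen, covered

    matrix, share = pick_matrix(DEVICES)
    print("test on", matrix, "covering about %.0f%% of users" % (100 * share))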

More Information
Learn more about Melissa Tondi.
T5 Build Your Personal Portfolio of Thinking Skills
Karen N. Johnson, Software Test Management, Inc.
Thursday, October 3, 2013 - 9:45am - 10:45am

How do we improve ourselves as software testers? What are the thinking skills we should develop? How do we refine these skills? Observing is one of the essential skills for software testers. We need to detect changes and differences even when they are subtle. Visual imaging helps us to imagine software that doesn’t exist, to plot testing possibilities. Abstracting helps us to see the outline of a product while not losing focus on small details. Managing distraction and focusing are also vital skills. Recognizing patterns enhances a tester’s ability to detect software defects. Mental modeling helps testers understand information and gives us a method for forming strategies and problem solving. Karen N. Johnson draws immediate connections from theory to practical application of each of these skills. She explores why these skills are necessary and how we can explicitly apply these skills to our craft.

More Information
Learn more about Karen N. Johnson.
T6 Test Automation Challenges in the Gaming Industry
Brett Roark, Blizzard Entertainment
Thursday, October 3, 2013 - 9:45am - 10:45am

Gaming is a multibillion-dollar industry, and good testing is critical to any game’s success. Game testing has traditionally been black-box through the client—a method clearly insufficient for increasingly complex software incorporating 3D physics, thousands of linked and interacting assets, large databases, and client-server architecture. Automation is an obvious answer, but how do you automate when the user interface is an immersive virtual environment, the data is as vital a part of the software as the code (and actually more likely to create bugs), and the games themselves are often built specifically to prevent automation? Brett Roark describes how Blizzard Entertainment is meeting this challenge by using automation to tame complex asset pipelines and building custom tools that make each tester more efficient. Take away a deeper understanding of the unique complexities of modern game testing, see why they require fresh and creative solutions, and consider how these solutions might apply to non-game testing.

More Information
Learn more about Brett Roark.
T7 Test Status Reporting: Focus Your Message for Executives
Stephan Obbeck, KROLL Consulting AG
Thursday, October 3, 2013 - 11:15am - 12:15pm

Test status reporting is a key factor in the success of test projects. Stephan Obbeck shares some ideas on how to communicate more than just a red-yellow-green status report to executive management and discusses how the right information can influence their decisions. Testers often create reports that are too technical, losing crucial information in a mountain of detailed data. Management needs to make decisions—based on data they do understand—that support the test project. Stephan explains how stakeholder and risk analysis helps you identify recipients of a report and what information is of interest to them. Learn different ways of presenting data to support your message and to command attention at the executive level. Discover how to avoid pitfalls when generating reports from test automation. Produce a summary of statistics that provides insight into a test project.

More Information
Learn more about Stephan Obbeck.
T8 Become a Big Data Quality Hero
Jason Rauen, LexisNexis
Thursday, October 3, 2013 - 11:15am - 12:15pm

Many believe that regression testing an application with minimal data is sufficient. With big data applications, the data testing methodology becomes far more complex. Testing can now be done within the data fabrication process as well as in the data delivery process. Today, comprehensive testing is often mandated by regulatory agencies—and more importantly by customers. Finding issues before deployment and saving your company’s reputation—and in some cases preventing litigation—are critical. Jason Rauen presents an overview of the architecture, processes, techniques, and lessons learned by an original big data company. Detecting defects up-front is vital. Learn how to test thousands, millions, and in some cases billions—yes, billions—of records directly, rendering sampling procedures obsolete. Save time and money for your organization with better data test coverage than ever before.
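
One way to compare every record rather than a sample is to reconcile per-record hashes between the source extract and the delivered output; a minimal sketch, assuming pipe-delimited files keyed on the first field (not LexisNexis's actual pipeline):

    import hashlib

    def record_digests(path, key_field=0, sep="|"):
        digests = {}
        with open(path, encoding="utf-8") as f:
            for line in f:
                key = line.rstrip("\n").split(sep)[key_field]
                digests[key] = hashlib.md5(line.encode("utf-8")).hexdigest()
        return digests

    source = record_digests("source_extract.psv")
    target = record_digests("delivered_output.psv")

    missing = source.keys() - target.keys()
    unexpected = target.keys() - source.keys()
    changed = {k for k in source.keys() & target.keys() if source[k] != target[k]}
    print("missing=%d unexpected=%d changed=%d" % (len(missing), len(unexpected), len(changed)))
    # At billions of records, shard by key or sort-merge on disk instead of
    # holding a dict in memory; the reconciliation logic stays the same.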

More Information
Learn more about Jason Rauen.
T9 Automated Testing of a Dynamically Configurable System
Terry Morrish, Synacor
Thursday, October 3, 2013 - 11:15am - 12:15pm

You provide your clients a service and product, designed so that each component is customizable and can be dynamically changed right down to screen layout and field location. This greatly increases the amount of testing you have to perform on a release since there could be more than fifty variations of the component. So how do you ensure high-quality outcomes with so much testing to be performed under tight timeframes? You automate the testing, of course. But how do you efficiently manage and automate the dynamic changes within the automated testing framework when the automated testing has to be continuously changed? Terry Morrish explains how to successfully structure automated testing to minimize the overhead management of the dynamically changing environment using a combination of Selenium, CSS identifiers, JSON files, and a distributed automation farm.
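
As a hedged sketch of that pattern (not Terry's framework; the file contents and selectors are hypothetical), the CSS identifiers live in a per-client JSON file, so one script can drive many layout variations:

    import json
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # locators_clientA.json might contain:
    #   {"search_box": "#search input.query", "submit": "button.go"}
    with open("locators_clientA.json", encoding="utf-8") as f:
        locators = json.load(f)

    driver = webdriver.Chrome()
    driver.get("https://clienta.example.com")
    driver.find_element(By.CSS_SELECTOR, locators["search_box"]).send_keys("TV")
    driver.find_element(By.CSS_SELECTOR, locators["submit"]).click()

Supporting a newly configured variation then means adding a JSON file, not rewriting the test.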

More Information
Learn more about Terry Morrish.
T10 Mobile Testing Success: Real World Strategies and Techniques
Clint Sprauve, Hewlett-Packard
Thursday, October 3, 2013 - 11:15am - 12:15pm

Today, consumers spend more time on mobile apps than on the web. With this increased demand and paradigm shift toward mobile devices, the role of the software tester is evolving and becoming more complex. Since mobile testing is a relatively new domain, software testers face the challenge of understanding not only what to test but how to test. Clint Sprauve focuses on real world strategies and techniques for mobile app testing including device provisioning, mobile network virtualization, multi-OS platform coverage, and hybrid app testing. Learn how companies across various industries—insurance, finance, and entertainment—are implementing successful mobile testing strategies and techniques to meet this growing challenge. In addition, Clint highlights what is most important when creating a mobile testing strategy for your organization—object recognition options (native, text, and image), mobile app performance, and device security.

More Information
Learn more about Clint Sprauve.
T11 It’s All Fun and Games: Using Play to Improve Tester Creativity
Christin Wiedemann, Professional Quality Assurance, Ltd.
Thursday, October 3, 2013 - 11:15am - 12:15pm

The number of software test tools keeps expanding, and individual tools are continuously becoming more advanced. However, there is no doubt that a tester’s most important—yet often neglected and underused—tool is the mind. As testers, we need to employ our intelligence, imagination, and creativity to gain information about the system under test. Humans are biologically designed to learn through play, and even as adults we can exploit this and harness the power of play to encourage and drive our creativity. Christin Wiedemann shows how you and your team can employ games and puzzles to practice and enhance cognitive skills that are especially important to testers including critical thinking, pattern recognition, and the ability to quickly process and understand new information. Not only will play make you a better tester but it will also make testing more fun. Learn to think critically and question your testing assumptions.

More Information
Learn more about Christin Wiedemann.
T12 Tests and Requirements: Like Ham and Eggs, Sugar and Spice, Lucy and Desi
Ken Pugh, Net Objectives
Thursday, October 3, 2013 - 11:15am - 12:15pm

The practice of agile software development requires a clear understanding of business needs. Misunderstanding requirements causes waste, slipped schedules, and mistrust within the organization. Developers implement their perceived interpretation of requirements; testers test against their perceptions. Disagreement can arise about implementation defects when the cause is really a disagreement about the requirement. Ken Pugh shows how acceptance tests decrease requirements misunderstandings by both developers and testers. A testable requirement provides a single source that serves as the analysis document, acceptance criteria, regression test suite, and progress tracker for any given feature. Explore the creation, evaluation, and use of testable requirements by the business and developers. Join Ken to examine how to transform requirements into stories, small units of work that have business value, small implementation effort, and easy-to-understand acceptance tests. Learn how testers and requirement elicitors can work together to create acceptance tests prior to implementation.
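
As a minimal sketch of a testable requirement (the story and numbers are hypothetical, and this stands in for whatever acceptance-test tooling a team actually uses), the requirement "orders over $100 ship free" becomes executable checks:

    # Implementation under test (a stub for illustration).
    def shipping_cost(order_total):
        return 0 if order_total > 100 else 7.95

    def test_free_shipping_over_100():
        # Given a standard order, when the total is $120, then shipping is free.
        assert shipping_cost(120.00) == 0

    def test_flat_rate_at_or_under_100():
        # The boundary case: exactly $100 still pays the flat rate.
        assert shipping_cost(100.00) == 7.95

Written before implementation, these tests serve as the analysis document, acceptance criteria, and regression suite the session describes.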

More Information
Learn more about Ken Pugh.
T13 Swimming with the Salmon: Lessons in Moving Quality Upstream
Colleen Kirtland, The Capital Group
Harish Krishnankutty, Infosys Limited
Thursday, October 3, 2013 - 1:30pm - 2:30pm

Having difficulties getting your organization to recognize the value of QA? Is your “salmon team” losing to currents that impede continuous improvement and strategic planning? Colleen Kirtland and Harish Krishnankutty share their two-year uphill struggle to elevate QA to the position of trusted business partner. Move QA upstream before testing begins by aligning requirements to a business capability model (BCM). Translate the BCM into key implementation assets with story maps. Before delivering test execution, swim like salmon to frame testing services by connecting day-to-day operational metrics to higher-level business value metrics. Partner with your product and/or development teams to inject measurable quality gates upstream in the delivery lifecycle. Learn about the evolution of merging service level management (e.g., ITIL processes) with upstream QA, test, and solution delivery. Create a fun, vibrant team culture of uphill swimmers who advocate formal quality standards. Fight to fund and sustain a multi-year quality strategy while still meeting customer demands.

More Information
Learn more about Colleen Kirtland and Harish Krishnankutty.
T14 User Acceptance Testing: Make the User a Part of the Team
Susan Bradley, Grange Mutual Insurance
Thursday, October 3, 2013 - 1:30pm - 2:30pm

Adding user acceptance testing (UAT) to your testing lifecycle can increase the probability of finding defects before software is released. The challenge is to fully engage users and assist them in becoming effective testers. Help achieve this goal by involving users early and setting realistic expectations. Showing how users add value and taking them through the UAT process strengthens their ability and commitment. Conducting user acceptance testing sessions as software functionality becomes available helps to build confidence and capability—and find defects earlier. Susan Bradley shares a five-step process that you can use in your organization to conduct user acceptance testing. Learn to conduct training, set up daily testing expectations, assign test cases to users, create a shared information site for both test case management and feedback documentation, conduct a review of noted issues with all interested parties, and participate in a retrospective to improve the UAT process for next time.

More Information
Learn more about Susan Bradley.
T15 Confessions of a Test Automation Addict
David Rosskopf, LDS Church
Thursday, October 3, 2013 - 1:30pm - 2:30pm

Feeling fatigued, frustrated, and stressed at work? Wondering how you can stay relevant and highly valued in this fast-changing software development domain? David Rosskopf shares how you can become more productive through a non-traditional approach for automating testing—and much more. David, a self-admitted automation addict, confesses he is easily bored with repetitive tasks and frustrated with inefficiencies. Learn from David how to identify inefficiencies in your workplace and how to develop the right tool to fit each need. He shares his knowledge and experiences using automation to solve day-to-day business problems: building automation frameworks, developing tools that decrease troubleshooting efforts, and creating tools to monitor performance. Get inspired to become the automation addict on your team and start solving problems back at the office.

Warning: Side effects may include increased productivity, more free time, happier management, decreased stress, increased salary, and ecstatic co-workers.

More Information
Learn more about David Rosskopf.
T16 Automate Mobile App Testing—Or Go Crazy
Stewart Stern, Gorilla Logic, Inc.
Thursday, October 3, 2013 - 1:30pm - 2:30pm

During the past decade, test engineers have become experts in browser compatibility testing. Just when we thought everything was under control, along come native mobile applications that need to run across platforms far more diverse than the desktop browser landscape has ever been. The variety of OSs, screen sizes, and hardware technology combine to create hundreds of configurations that need some testing. Manual testing across so many deployment targets will drive anyone crazy. Stu Stern looks at the biggest challenges in mobile testing: functional, platform, display, and device compatibility testing and explores how you can use MonkeyTalk, a free open source tool, to create test suites that can be easily run across today’s menagerie of mobile devices. MonkeyTalk can help you automate functional interactive tests for native, mobile, and hybrid iOS and Android apps—everything from simple "smoke tests" to sophisticated data-driven test suites.

More Information
Learn more about Stewart Stern.
T17 Security Testing Mobile Applications
Jeff Payne, Coveros, Inc.
Thursday, October 3, 2013 - 1:30pm - 2:30pm

Due to the sensitive nature of the personal information often stored on mobile phones, security testing is vital when building mobile applications. Jeff Payne discusses some of the characteristics that make testing mobile applications unique and challenging. These characteristics include how mobile devices store data, fluid trust boundaries due to untrusted applications installed on the device, different and unique aspects of device security models, and differences in the types of threats one must be concerned with. Jeff shares hints and tips for effectively testing mobile applications. Tips include how to test for data privacy, secure session management, the presence of malicious applications, and traditional application security vulnerabilities. Leave with an understanding of what it takes to security test your mobile applications.
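
One of the data-privacy checks mentioned can be approximated by scanning the files an application leaves on the device (pulled to a workstation for analysis) for values stored in cleartext; a hedged sketch with made-up paths and patterns:

    import pathlib
    import re

    SENSITIVE = [re.compile(p) for p in (
        r"password\s*=",  # credentials left in config or preference files
        r"\b\d{16}\b",    # a 16-digit run that may be a stored card number
    )]

    def scan_app_data(folder):
        for path in pathlib.Path(folder).rglob("*"):
            if path.is_file():
                text = path.read_text(errors="ignore")
                for pattern in SENSITIVE:
                    if pattern.search(text):
                        print("possible cleartext secret in", path)

    scan_app_data("./pulled_app_sandbox")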

More Information
Learn more about Jeff Payne.
T18 Get Testing Help from the Crowd
Matt Johnston
Thursday, October 3, 2013 - 1:30pm - 2:30pm

Crowdsourcing has become widely acknowledged as a productivity solution across numerous industries. However, for companies incorporating crowdsourcing into existing business practices, specific issues must be addressed: What problem are we trying to solve? How do we control the process? How do we incentivize people to achieve our goals? Ultimately, the key to successfully employing a crowdsourcing model is to move beyond the realm of the “mob” to create an engaged, interactive community of diverse and skilled professionals. In the world of quality assurance, crowdsourcing has the potential to effectively solve emerging challenges and take your testing to new heights. Using real-world examples, Matt Johnston explains how you can leverage the crowd to complement your internal systems, ensure systems work as intended under real-world conditions, and effectively manage the scalability of testing efforts.

More Information
Learn more about Matt Johnston.
T19 Beyond Continuous Delivery—All the Way to Continuous Deployment
Kris Lankford, Microsoft
Thursday, October 3, 2013 - 3:00pm - 4:00pm

Just as those in the software world are getting their arms around agile practices, leading software organizations are going beyond continuous delivery for acceptance testing and now adopting continuous deployment—the practice of immediately releasing new code from development into production without human intervention. Continuous delivery promises to provide higher business value through faster deployment and leaner, more productive development and operations (DevOps). Many DevOps teams are concerned about what will happen to quality when they move to continuous deployment. Kris Lankford explores the business drivers that make continuous deployment compelling, the emerging deployment technologies and methodologies, and tools that are available to support quality in this fast-paced DevOps practice. Learn about the fundamental agile methodologies and testing techniques—both manual and automated—required to ensure that quality remains high. Find out what you need to do to get ready for continuous deployment and ensure you are ready to go—without compromising quality.

More Information
Learn more about Kris Lankford.
T20 Decoupled System Interface Testing at FedEx
Chris Reites, FedEx Services
Thursday, October 3, 2013 - 3:00pm - 4:00pm

If you work in a large-scale environment, you know how difficult it is to have all the systems “code complete” and ready for testing at the same time. In order to fully test end-to-end scenarios, you must be able to validate results in numerous systems. But what if all those systems are not available for you to begin testing? Chris Reites describes “decoupled testing,” an enterprise-level solution for managing interface data for capture, injection, simulation, and comparison all along your testing paths. Decoupled testing provides the ability to validate and independently test systems without having to rely on end-to-end testing. This is accomplished by capturing intermediate interface transactions at pre-determined, critical points during processing and comparing them against previously captured or generated expected results. Chris shares a case study on how this approach has benefited FedEx on critical customer-facing systems.
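
The capture-and-compare idea can be sketched in a few lines (this illustrates the concept only, not FedEx's implementation; the message format is hypothetical):

    import json

    def capture(messages, path):
        # Record the interface transactions observed at a checkpoint.
        with open(path, "w", encoding="utf-8") as f:
            json.dump(messages, f, indent=2, sort_keys=True)

    def compare(messages, baseline_path):
        # Diff a later run against the captured baseline.
        with open(baseline_path, encoding="utf-8") as f:
            expected = json.load(f)
        return [(i, e, a) for i, (e, a) in enumerate(zip(expected, messages)) if e != a]

    capture([{"shipment": 1, "status": "LABELED"}], "baseline.json")
    # Later, test the upstream system alone: inject inputs, then compare its
    # outputs to the baseline instead of calling the downstream system.
    print(compare([{"shipment": 1, "status": "LABELED"}], "baseline.json"))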

More Information
Learn more about Chris Reites.
T21 End-to-End Automation: Providing Stakeholders Feedback on Quality
Vikas Bhupalam, Intuit, Inc.
Thursday, October 3, 2013 - 3:00pm - 4:00pm

Are you running automated tests during development yet not providing automated feedback to the project stakeholders? Vikas Bhupalam approached this problem by leveraging and integrating monitoring, logging, and defect tracking systems to provide automatic feedback to stakeholders. Tests are executed using a Java-based framework, and the results are sent to a monitoring tool and appear as traffic lights on a dashboard. The dashboard links to logs on the server that provide insights into failing tests and root causes of problems. Alerts can be triggered for specific conditions. Change requests are then automatically filed in the defect tracking system with the appropriate severity and priority set. QA sign-off in all environments is then provided to DevOps and all other stakeholders through this automated process. Learn about the framework and the integration involved in bringing all these pieces together.
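
Vikas's framework is Java-based; as a Python sketch of the same integration pattern (all endpoints here are hypothetical), a test run pushes its result to the monitoring dashboard and files a defect automatically on failure:

    import requests

    def publish_result(test_name, passed, log_url):
        # Light the dashboard "traffic light" for this test.
        requests.post("https://monitor.example.com/api/status", json={
            "test": test_name,
            "state": "green" if passed else "red",
            "log": log_url,
        }, timeout=5)
        if not passed:
            # Automatically file a change request in the defect tracker.
            requests.post("https://tracker.example.com/api/issues", json={
                "summary": "Automated test failed: " + test_name,
                "severity": "major",
                "links": [log_url],
            }, timeout=5)

    publish_result("checkout_smoke", passed=False,
                   log_url="https://logs.example.com/run/123")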

More Information
Learn more about Vikas Bhupalam.
T22 Mobile Test Automation with Big Data Analytics
Tarun Bhatia, Microsoft
Thursday, October 3, 2013 - 3:00pm - 4:00pm

Organizations with a mobile presence today face a major challenge of building robust automated tests around their mobile applications. However, organizations often have limited testing resources for these increasingly complex projects, and stakeholders worry about the quality of the product. So how do you plan a mobile test automation project, recognizing the failure rate of such efforts? Discover how Tarun Bhatia used big data analytics to understand where customers spend most of their time in their apps out in the wild. See how his team analyzed massive amounts of mobile usage data to create an operational model of carriers, devices, networks, countries, and OS versions, and then developed automation strategies resulting in better tests created with the right priorities. Learn how you can apply mobile automation capabilities in areas of continuous integration, performance, benchmark, compatibility, and stress testing based on analytics data.
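
The core of the analytics step can be as simple as counting real-world sessions per configuration; a hedged sketch with fabricated log data (not Tarun's model):

    from collections import Counter

    usage_log = [  # one (device, OS version, carrier) tuple per observed session
        ("Lumia 920", "WP8", "AT&T"),
        ("Lumia 920", "WP8", "AT&T"),
        ("Galaxy S3", "4.1", "Verizon"),
        ("iPhone 5",  "6.1", "AT&T"),
        ("Galaxy S3", "4.1", "Verizon"),
    ]

    by_config = Counter(usage_log)
    total = sum(by_config.values())
    for config, count in by_config.most_common():
        print(config, "%.0f%% of sessions" % (100.0 * count / total))
    # Automate the top configurations first; drop the long tail from the matrix.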

More Information
Learn more about Tarun Bhatia.
T23 The Google Hacking Database: A Key Resource to Exposing Vulnerabilities
Kiran Karnad, MIMOS Berhad
Thursday, October 3, 2013 - 3:00pm - 4:00pm

We all know the power of Google—or do we? Two types of people use Google: normal users like you and me, and the not-so-normal users—the hackers. What types of information can hackers collect from Google? How severe is the damage they can cause? Is there a way to circumvent this hacking? As a security tester, Kiran Karnad uses the Google Hacking Database (GHDB) to ensure his product will not be the next target for hackers. Kiran describes how to effectively use Google the way hackers do, using advanced operators, locating exploits and finding targets, network mapping, finding user names and passwords, and other secret stuff. Kiran provides a recipe of five simple security searches that work. Learn how to automate the Google Hacking Database using Python so security tests can be incorporated as a part of the SDLC for the next product you develop.
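
In the spirit of the session's Python automation (a hedged sketch; a real setup would load the full GHDB rather than the three classic public dorks shown), queries can be generated and scoped to your own domain for review:

    # Classic, publicly known Google "dork" patterns.
    DORKS = [
        'intitle:"index of" "parent directory"',  # open directory listings
        'filetype:log inurl:error',               # exposed log files
        'inurl:admin intitle:login',              # admin login pages
    ]

    def queries_for(domain):
        # Scope each dork to the product's own domain.
        return ["site:%s %s" % (domain, dork) for dork in DORKS]

    for query in queries_for("example.com"):
        print(query)
    # Note: scripted scraping of Google results violates its terms of service;
    # run the generated queries through an official search API or manually.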

More Information
Learn more about Kiran Karnad.
T24 Introducing the New Software Testing Standard
Jon Hagar, Grand Software Testing
Thursday, October 3, 2013 - 3:00pm - 4:00pm

Software testing standards—who cares, anyway? You should! The new ISO/IEC/IEEE 29119 software testing standard, driven by representatives from twenty countries and under development for the past five years, will be released soon. As a professional tester, you need to know about this standard and how it may apply to your environment. Jon Hagar describes the standard, how it was developed, and what types of projects will be impacted by it. This new standard offers a risk-based approach to software testing that can be applied to both traditional and agile projects. It is comprehensive—addressing basic software test concepts, definitions, generic test processes, documentation, and techniques—and will replace numerous IEEE and national standards. Many countries, government agencies, and private companies worldwide will start using ISO 29119 in the coming years to benchmark and improve their test practices. Join Jon to dive into ISO 29119 and see what it is all about.

More Information
Learn more about Jon Hagar.