
Concurrent Sessions

Sessions are offered on Wednesday and Thursday at the conference and do not require pre-selection. Build your own custom learning schedule, or choose to follow one of our track schedules.

W1 Leadership for Test Managers and Testers
Rick Craig, Software Quality Engineering
Wednesday, June 24, 2015 - 10:15am - 11:15am

Many organizations spend a great deal of time and effort acquiring and learning to use the latest techniques and technology, but they make little or no attempt to train or mentor their staff to be better leaders. While it is true that technology is important, test teams without able leaders will struggle to be successful. Rick Craig shares some of the lessons he has learned in his roles as test manager, military leader, and entrepreneur. Initially, Rick discusses some classic leadership topics: leadership traits and styles, the cornerstones of leadership, and principles of leadership. Explore the importance of influence leaders and how to identify and encourage them. Discover the positive and negative indicators of morale and how to maintain high morale within a team. Learn how to give direction without being a micromanager. Discuss what motivates and what demotivates testers. Rick encourages you to bring your leadership challenges to serve as points of discussion.

W2 Testing the Internet of Things
Regg Struyk, Polarion Software
Wednesday, June 24, 2015 - 10:15am - 11:15am

Embedded software—now being referred to as the Internet of Things (IoT)—continues to permeate almost every industry—from household appliances to heart monitors. It is estimated that there are at least a million lines of code in the average car. As IoT explodes from millions of devices to tens of billions in the next few years, new challenges will emerge for software testing. Security, privacy, complexity, and competing standards will fuel the need for innovative testing. Customers don't care why your software failed in the connected chain—only that it did fail. Companies that focus on quality will ultimately be the successful brands. Learn what new approaches are required for testing the “zoo” of interconnected devices. As products increasingly connect physical hardware with applications, we must revisit old testing approaches. IoT is about analyzing data in real time, allowing testers to make quicker and more informed decisions. If IoT testing is in your future, this session is for you.

W3 From Formal Test Cases to Session-Based Exploratory Testing
Ron Smith, Intuit
Wednesday, June 24, 2015 - 10:15am - 11:15am

Agile software development is exciting, but what happens when your team is entrenched in older methodologies? Even with support from the organization, it is challenging to lead an organization through the transformation. As you start making smaller, more frequent releases, your manual test cases may not keep up, and your automated tests may not yet be robust enough to fill the gap. Add in the reality of shrinking testing resources, and it is obvious that change is required. But how and what should you change? Learn how Ron Smith and his team tackled these challenges by moving from a test case-driven approach to predominantly session-based exploratory testing, supported by “just enough” documentation. Discover how this resulted in testers who are more engaged, developers who increased their ability and willingness to test, and managers who increased their understanding and insight into the product. Use what you learn from Ron to begin the transformation in your organization.

W4 Testing Mobile App Performance
Brad Stoner, Neotys
Wednesday, June 24, 2015 - 10:15am - 11:15am

The mix of ever-smarter mobile devices and the constant connectivity of wireless networks has changed the way users access applications—and the way we develop and test them. Deployed applications deliver different content and functionality depending on whether the user is accessing them via a browser, smartphone, or tablet. And applications are accessed over myriad network configurations, including wireless and mobile networks. Brad Stoner presents an in-depth look at performance testing challenges for mobile applications including recording from devices, playing back device-specific requests, and accounting for variances in users’ geographical locations. Discover some of the best mobile performance testing approaches such as emulating mobile networks with varying connection speeds, packet loss, and latency during load tests. Find out when to use real devices vs. emulators to ensure high mobile application performance delivery to all end-users, at all times—on any device or network.

W5 Building a World-Class Quality Team at eBay
Steve Hares, eBay
Wednesday, June 24, 2015 - 11:30am - 12:30pm

Today, many test methodologies can be used to achieve high quality and productivity: Agile/Scrum, TDD, data modeling, risk analysis, and personas, just to name a few. So how do you pick the best approaches and techniques for your team and projects? Learn how Steve Hares helped build a world-class team from the ground up at eBay through iterative best-fit analysis of processes and methods. Discover why and how they adopted agile processes in some areas, waterfall in others, risk-based testing where appropriate, data model-driven testing, ad hoc testing, and workflow testing. At the same time, they incorporated test automation and integrated load/performance testing into the development process to achieve world-class quality. Steve’s team now tests everything from enterprise-wide products to IVRs, from batch files to voice biometrics. If your methodology isn't working just right, chances are you need to find the best-fit methods through a continuous improvement process.

W6 Virtualize APIs for Better Application Testing
Lorinda Brandon, SmartBear Software
Wednesday, June 24, 2015 - 11:30am - 12:30pm

In today’s interconnected world, APIs are the glue that allows software components, devices, and applications to work together. Unfortunately, many testers don’t have direct access to manipulate the APIs during testing and must rely on either testing the API separately from the application or testing the API passively through functional application testing. Lorinda Brandon maintains that these approaches miss the most important kind of API testing: uncovering how your application deals with API constraints and failures. Lorinda describes common API failures—overloaded APIs, bad requests, service unavailability, and API timeouts—that negatively impact applications, and explains how application testers miss these scenarios, especially with third-party APIs. She explores how and when virtualization can and cannot help, including creating a virtual API that can fail. Lorinda discusses the importance of simulating API failures in web and mobile application testing, and identifies tools and technologies that help virtualize your APIs.
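The idea of a virtual API that can fail on demand is easy to sketch. The minimal Python below is illustrative only (the class, its failure modes, and the endpoint are invented, not any specific tool's API): a stand-in service is told to misbehave so the application code under test can be checked for graceful degradation rather than crashes.

```python
# A "virtual API": a stand-in for a real service that can be configured
# to fail, so testers can exercise the application's failure handling.
# All names and failure modes here are illustrative.

class VirtualAPI:
    def __init__(self, mode="ok"):
        self.mode = mode  # "ok", "error", "timeout", or "overloaded"

    def get(self, path):
        if self.mode == "error":
            return 500, "Internal Server Error"
        if self.mode == "timeout":
            raise TimeoutError(f"GET {path} timed out")
        if self.mode == "overloaded":
            return 429, "Too Many Requests"
        return 200, '{"status": "ok"}'

def app_fetch_status(api):
    # Application code under test: it must degrade gracefully, not crash.
    try:
        code, body = api.get("/status")
    except TimeoutError:
        return "retry-later"
    return "healthy" if code == 200 else "degraded"

if __name__ == "__main__":
    for mode in ("ok", "error", "timeout", "overloaded"):
        print(mode, "->", app_fetch_status(VirtualAPI(mode)))
```

Each configured failure mode becomes a repeatable test scenario, which is exactly what passive functional testing against a healthy live API never exercises.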

W7 Test Automation Strategies and Frameworks: What Should Your Team Do?
Max Saperstone, Coveros, Inc.
Wednesday, June 24, 2015 - 11:30am - 12:30pm

Agile practices have done a magnificent job of speeding up the software development process. Unfortunately, simply applying agile practices to testing isn't enough to keep testers at the same pace. Test automation is necessary to support agile delivery. Max Saperstone explores popular test automation frameworks and shares the benefits of applying these frameworks, their implementation strategies, and best usage practices. Focusing on the pros and cons of each framework, Max discusses data-driven, keyword-driven, and action-driven approaches. Find out which framework and automation strategy are most beneficial for specific situations. Although this presentation is tool agnostic, Max demonstrates automation with examples from current tooling options. If you are new to test automation or trying to optimize your current automation strategy, this session is for you.
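To make the data-driven style concrete, the sketch below reduces a test to a table of inputs and expected outputs driven through one generic routine. This is a minimal Python illustration, not from the session; the function under test (`parse_price`) and its cases are invented stand-ins.

```python
# Data-driven testing: one generic routine driven by a table of cases.
# The function under test (parse_price) is a hypothetical stand-in.

def parse_price(text):
    """Parse a price string like '$1,234.50' into integer cents."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    return round(float(cleaned) * 100)

# The "data" in data-driven: each row is (input, expected). Testers add
# coverage by adding rows, without touching the test logic itself.
CASES = [
    ("$0.99", 99),
    ("$1,234.50", 123450),
    ("  $10  ", 1000),
]

def run_data_driven(cases):
    failures = []
    for text, expected in cases:
        actual = parse_price(text)
        if actual != expected:
            failures.append((text, expected, actual))
    return failures

if __name__ == "__main__":
    print(run_data_driven(CASES))  # an empty list means all rows passed
```

A keyword-driven framework pushes the same separation further: the table rows name reusable actions ("login", "add item") rather than raw values.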

W8 Usability Testing Goes Mobile
Susan Brockley, ExxonMobil
Wednesday, June 24, 2015 - 11:30am - 12:30pm

The introduction of mobile devices and applications presents new challenges to traditional usability testing practices. Identifying the differences between usability testing techniques for traditional desktop applications and mobile applications is critical to ensuring their acceptance and use. New equipment requirements for usability testing of mobile applications add to transition issues. Join Susan Brockley to discover ways to transition your traditional usability testing program into the mobile environment. Review usability testing fundamentals and then explore additional dimensions—context, affordance, and accessibility—of mobile usability testing. Learn how user expectations influence and change our approach to usability and how new factors such as power, connectivity, and protective covers impact the overall user experience. Get advice from Susan on how to plan and conduct field tests that are representative of your target audience. Finally, assess your organization’s usability maturity and take back positive steps to make your transition into the mobile usability testing field successful.

W9 The Tester’s Role in Agile Planning
Rob Sabourin, AmiBug.com
Wednesday, June 24, 2015 - 1:30pm - 2:30pm

All too often testers passively participate in agile planning. And the results? Important testing activities are missed, late testing becomes a bottleneck, and the benefits of agile development quickly diminish. However, testers can actively advocate customer concerns while helping to implement robust solutions. Rob Sabourin shows how testers contribute to estimation, task definition, and scoping work required to implement user stories. Testers apply their elicitation skills to understand what users need, exploring typical, alternate, and error scenarios. Testers can anticipate cross-story interference and the impact of new stories on legacy functionality. Rob discusses examples of how to break agile stories into test-related tasks. He shares experiences of transforming agile testers from passive planning participants into dynamic advocates of effective trade-offs, addressing the product owners’ critical business concerns, the teams’ limited resources, and the software projects’ technical risks. Join Rob to explore test infrastructure, test data, non-functional attributes, privacy, security, robustness, exploration, regression, business rules, and more.

W10 Inside the Mind of the 21st Century Customer
Alan Page, Microsoft
Wednesday, June 24, 2015 - 1:30pm - 2:30pm

Testers frequently say that they are the voice of the customer or the customer advocate for their organization’s products. In some situations this can be a helpful mindset, but no matter how hard he tries, a software tester is not the customer. In fact, there is no one better suited to evaluate customer experience than the actual customer of your software. However, getting actionable feedback from customers can be time-consuming, difficult, and often too late to have any meaningful impact on the product. Alan Page shares his thoughts and a number of examples of how to get customer feedback quickly, how to make that feedback actionable, and how to use customer data to drive better software development and testing on any team—and for any product. In this fast-paced session of information and fun, Alan discusses product instrumentation, analysis techniques, reporting, A/B testing, and many other facets of customer feedback.

W12 Techniques, Tools, and Technology for Better Mobile App Testing
Brad Johnson, SOASTA
Wednesday, June 24, 2015 - 1:30pm - 2:30pm

Today, mobile app testing expertise is in high demand and offers an exciting career path in test/QA. However, the recent Future of Testing study, sponsored by TechWell, noted that the biggest challenge in mobile—just behind having enough time to test—is expertise. Brad Johnson shares how companies from banking to retail use data from real production users, continuous integration frameworks, cloud-based testing platforms, and real mobile devices to help ensure every user experiences top-rated performance—all the time. Brad shares insight about what to test for mobile, when to first automate, and a metric that will drive real change. Explore how organizations are communicating across teams and improving developer-to-tester collaboration with new approaches. Testers need to develop new skills ranging from software coding to data science. Take away tips and ideas to impact your company, enhance your skill set, and propel your career with exciting options and new challenges.

W13 Testing for Talent: Leveraging Testing Principles in Building Teams
Joy Toney, ALSAC/St Jude Children's Research Hospital
Wednesday, June 24, 2015 - 3:00pm - 4:00pm

Application development teams today are asked to deliver more with fewer resources. They work together tirelessly under pressure to deliver quality solutions to their stakeholders. Now imagine—just as the delivery team is about to begin its testing cycle, your lead tester suddenly quits. How do you replace a talented contributor within tight time constraints? Just as testing principles enable delivery of better systems, Joy Toney demonstrates how using those same testing principles enables the test manager to select and hire outstanding team members. First, define your testing team’s acceptance criteria for the position. Rethink the application of the validation and verification processes to hiring, while using a combination of static and dynamic testing techniques. Consider using stress, volume, and performance testing to surface your ideal candidate from the pool of possibilities. Discover new ideas and proven techniques for use in your hiring decisions, so you hire the right person for your test team the first time.

W14 Testing Hyper-Complex Systems: What Can We Know? What Can We Claim?
Lee Copeland, Software Quality Engineering
Wednesday, June 24, 2015 - 3:00pm - 4:00pm

Throughout history, people have built systems of dramatically increasing complexity. In simpler systems, defects at the micro level are mitigated by the macro level structure. In complex systems, failures at the micro level cannot be compensated for at a higher level, often with catastrophic results. Lee Copeland says that we are building hyper-complex computer systems—so complex that faults can create totally unpredictable behaviors. For example, systems based on the service-oriented architecture (SOA) model can be dynamically composed of reusable services of unknown quality, created by multiple organizations, and communicating through many technologies across the unpredictable Internet. Lee explains that claims about quality require knowledge of test “coverage,” which is an unknowable quantity in hyper-complex systems. Join Lee for a look at your testing future as he describes new approaches needed to measure test coverage in these hyper-complex systems and lead your organization to better quality—despite the challenges.

W15 Automate REST API Testing
Eric Smith, HomeAdvisor
Wednesday, June 24, 2015 - 3:00pm - 4:00pm

As an organization grows, the body of code that needs to be regression tested constantly increases. However, to maintain high velocity and deliver new features, teams need to minimize the amount of manual regression testing. Eric Smith shares his lessons learned in automating RESTful API tests using JMeter, RSpec, and Spock. Gain insights into the pros and cons of each tool, take back practical knowledge about the tools available, and explore reasons why your shop should require RESTful automation as part of its acceptance test criteria. Many decisions must be made to automate API tests: choosing the platform, integrating with the current build and deploy process, and integrating with reporting tools to keep key stakeholders informed. Although the initial transition caused his teams to bend their traditional roles, Eric says that ultimately the team became more cross-functionally aligned and developed a greater sense of ownership for delivering a quality product.
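Whatever the tool (the session covers JMeter, RSpec, and Spock), an automated REST API test has the same shape: issue a request, then assert on the status code and the JSON body. A minimal Python sketch of that shape, with the endpoint, fields, and canned response all invented for illustration:

```python
import json

# The HTTP client is injected as 'fetch' so the same check can run
# against a real server or, as here, a canned fake response.

def check_get_user(fetch):
    status, body = fetch("/api/users/42")        # hypothetical endpoint
    assert status == 200, f"expected 200, got {status}"
    user = json.loads(body)
    assert user["id"] == 42                      # assert on body shape/values
    assert "email" in user
    return "PASS"

def fake_fetch(path):
    # Stand-in for a real HTTP call (e.g., via urllib.request.urlopen).
    return 200, json.dumps({"id": 42, "email": "test@example.com"})

if __name__ == "__main__":
    print(check_get_user(fake_fetch))
```

Injecting the client is also what makes such tests easy to wire into a build-and-deploy pipeline: CI can point the same checks at whichever environment was just deployed.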

W16 Agile Metrics and the Software Delivery Pipeline
Christopher Davis, Nike, Inc.
Wednesday, June 24, 2015 - 3:00pm - 4:00pm

Today’s build pipelines and agile tracking systems are very advanced and generate lots of data. Christopher Davis has found that many teams face challenges when interpreting that data to show meaningful agile metrics across the entire organization. As a result, measuring agile development ends up being a fuzzy art—when it doesn’t have to be. Using common open source tools, you can automate the collection and aggregation of data from your build pipeline to show the right level of metrics to the right people in your organization, track what means the most to your team, and create actionable metrics you can use to improve your team and process. Join Christopher to learn about open source tools you can use to collect data and create metrics, several key metrics you can use today to help make your team better, and how to implement these tools to automatically collect and distribute metrics in your build pipeline.

T1 Managing Technical Debt
Philippe Kruchten, Kruchten Engineering Services, Ltd.
Thursday, June 25, 2015 - 10:15am - 11:15am

Technical debt is slowing your software development projects. Any developer who has gone beyond version 1 has encountered it. Technical debt takes different forms, has many different origins, and does not always equate to bad code quality. Much of it is incurred due to the passage of time and a rapidly evolving business environment. Some is in the form of hundreds of little cuts; some is massive and overwhelming, the result of a single poor design choice. Philippe Kruchten explains how to distinguish different types of technical debt, identify their root causes, objectively assess their impact, and develop strategies suitable in your context to limit or selectively reduce the technical debt you incur. Discover what debt you can happily live with. See when to declare bankruptcy. And learn that not all technical debt is bad. Just like in the real world, some technical debt can be a valuable investment for the future.

T2 Reduce Test Automation Execution Time by 80%
Tanay Nagjee, Electric Cloud
Thursday, June 25, 2015 - 10:15am - 11:15am

Software testers and quality assurance engineers are often pressured to cut testing time to ensure on-time product releases. Usually this means running fewer test cycles, with the risk of worse software quality. As companies embrace continuous integration (CI) practices that require frequent build and test cycles, the pressure to speed up automated testing is intense. Tanay Nagjee shows how you can cut the time to run an automated test suite by 80%—for example, from two hours to under 25 minutes. Find out how Tanay’s team broke down their test suites into bite-sized tests that could be executed in parallel. By leveraging a cluster of computing horsepower (either on-premises physical machines or machines in the cloud), you can refactor large test suites to execute in a fraction of the time they take now. With real example data and a live demonstration, Tanay outlines a three-step approach to achieve these results within different test frameworks.
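The mechanics behind the speed-up are generic: split the suite into independent, bite-sized tests and fan them out across workers. A minimal Python sketch of that fan-out using only the standard library (Electric Cloud's actual tooling distributes work across machines; this illustrative version only parallelizes on one box, with sleeps standing in for I/O-bound test work):

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in tests: each sleeps to simulate an I/O-bound test.
def make_test(name):
    def test():
        time.sleep(0.2)          # simulated test duration
        return (name, "PASS")
    return test

SUITE = [make_test(f"test_{i}") for i in range(8)]

def run_serial(suite):
    return [t() for t in suite]

def run_parallel(suite, workers=8):
    # Fan the independent tests out across a pool of workers.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(t) for t in suite]
        return [f.result() for f in futures]

if __name__ == "__main__":
    start = time.time(); run_serial(SUITE); serial = time.time() - start
    start = time.time(); run_parallel(SUITE); parallel = time.time() - start
    print(f"serial {serial:.1f}s, parallel {parallel:.1f}s")
```

The catch, and the reason suites must first be "broken down," is that tests sharing state (a database, a global fixture) cannot safely run concurrently until that coupling is removed.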

T3 Create Disposable Test Environments with Vagrant and Puppet
Gene Gotimer, Coveros, Inc.
Thursday, June 25, 2015 - 10:15am - 11:15am

As the pace of development increases, testing has more to do and less time in which to do it. Software testing must evolve to meet delivery goals while continuing to meet quality objectives. Gene Gotimer explores how tools like Vagrant and Puppet work together to provide on-demand, disposable test environments that are delivered quickly, in a known state, with pre-populated test data and automated test fixture provisioning. With a single command, Vagrant provisions one or more virtual machines on a local box or in a private or public cloud. Puppet then takes over to install and configure software, set up test data, and get the system or systems ready for testing. Since the process is automated, anyone on the team can use the same Vagrant and Puppet scripts to get their own virtual environment for testing. When you are finished with an environment, Vagrant tears it back down, and the next run recreates it in the same known state.
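The single-command workflow can be sketched as a Vagrantfile that boots a VM and hands provisioning off to Puppet. This is a minimal illustrative config, not from the session; the base box name and manifest paths are assumptions:

```ruby
# Vagrantfile -- "vagrant up" provisions the environment; "vagrant destroy"
# disposes of it, returning you to a known-clean state.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"            # illustrative base box
  config.vm.network "forwarded_port", guest: 80, host: 8080

  # Hand off to Puppet to install software and load test data.
  config.vm.provision "puppet" do |puppet|
    puppet.manifests_path = "puppet/manifests" # illustrative paths
    puppet.manifest_file  = "test_env.pp"
    puppet.module_path    = "puppet/modules"
  end
end
```

Because the Vagrantfile and Puppet manifests live in version control alongside the code, every tester checks out the same definition and gets an identical environment.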

T4 Root Cause Analysis for Testers
Jan van Moll, Philips Healthcare
Thursday, June 25, 2015 - 10:15am - 11:15am

Bad product quality can haunt companies long after the product’s release. And root cause analysis (RCA) of product failures is an indispensable step in preventing their recurrence. Unfortunately, the testing industry struggles with doing proper RCA. Moreover, companies often fail to unlock the full potential of RCA by not including testers in the process. Failing to recognize the real value testers bring to RCA is a process failure. Another failure is not recognizing how extremely valuable RCA results are for devising enhanced test strategies. Using real-life—and often embarrassing—examples, Jan van Moll illustrates the added value that testers bring and discusses the pitfalls of RCA. Jan challenges testers and managers to analyze and rethink their own RCA practices. Learn how to increase your value as a professional tester to your business by performing powerful RCA—and avoiding its pitfalls.

T5 The Adventures of a First-Time Test Lead: An Unexpected Journey
Ioan Todoran, Expedia Affiliate Network
Thursday, June 25, 2015 - 11:30am - 12:30pm

When moving to a new position in your organization, you might not always feel confident—and that’s fine. If you have ever wondered how to change your mindset from “I need to learn from someone more experienced than I” to “I need to train and lead a team,” Ioan Todoran shares what he learned during his time as a first-time test team lead. Ioan shares lessons about recruitment (where and how to look for people), interviewing (forget the boring, interrogatory-style interviews; move toward a more conversational approach), training (how to prepare the new testers for work on a commercial project), and navigating through the daily management duties while keeping the automation work going on your project (stop micromanaging; help, but don't suffocate; learn to offer quick solutions). Learn how to establish better connections and communication channels with upper management while strengthening the relationships with your clients through an honest and direct approach.

T6 Write Your Test Cases in a Domain-Specific Language
Beaumont Brush, Dematic, Inc.
Thursday, June 25, 2015 - 11:30am - 12:30pm

Manual test cases are difficult to write and costly to maintain. Beaumont Brush suggests that one of the more important but infrequently discussed reasons is that manual tests are usually written in natural language, which is ineffective for describing test cases clearly. Employing a domain-specific language (DSL), Beaumont and his team approach their manual test cases exactly like programming code and gain the benefits of good development and design practices. He shares their coding standards, reusability approach, and object models that integrate transparently into the version control and code review workflow. Beaumont demonstrates two DSL approaches: a highly specified DSL written in Python, and a more functional DSL that leverages Gherkin syntax and does not require a programming language to implement. By making your test cases easier to write and maintain, your team will improve its test suite and have time for automating more tests.
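For the Gherkin-flavored approach, a manual test case stops being free-form prose and becomes structured, reviewable text with a fixed Given/When/Then grammar. The feature below is purely illustrative (the domain and steps are invented, not from Beaumont's suite):

```gherkin
# Illustrative feature -- the domain and steps are invented, not from
# an actual Dematic test suite.
Feature: Conveyor routing
  Scenario: Package is diverted to the overflow lane
    Given the main accumulation lane is full
    When a package arrives at the divert point
    Then the package is routed to the overflow lane
    And an "overflow" event is logged
```

Because each step is a discrete, named line, steps can be reused across scenarios, diffed cleanly in version control, and later bound to automation code without rewriting the test.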

T7 Transform a Manual Testing Process to Incorporate Automation
Jim Trentadue, Ranorex
Thursday, June 25, 2015 - 11:30am - 12:30pm

Although most testing organizations have automation, it’s usually a subset of their overall efforts. Typically the processes for the department have been previously defined, and the automation team must adapt accordingly. The major issue is that test automation work and deliverables do not always fit into a defined manual testing process. Jim Trentadue explores what test automation professionals must do to be successful, including understanding development standards for objects, structuring tests for modularity, and eliminating manual efforts. Jim reviews the revisions required to a V-model testing process to fuse in the test automation work, which requires changes to the manual testing process, specifically at the test plan and test case level. Learn the differences between automated and manual testing process needs, how to start a test automation process that ties into your overall testing process, and, for those actively doing automation, how to do a gap analysis to connect better with the functional testing team.

T8 Become an Influential Tester: Learn How to Be Heard
Jane Fraser, Anki, Inc.
Thursday, June 25, 2015 - 11:30am - 12:30pm

As a tester, are you frustrated that no one listens to you? Are you finding bugs and having them ignored? Are you worried that the development process and product quality aren’t as good as they should be? Jane Fraser shares ways to help you be heard: ways to position yourself as a leader within your organization, ways to increase your influence, and ways to report bugs so they get fixed. In this interactive session, Jane leads you to a better understanding of how to be heard in your organization. Learn how to tailor your defect reports depending on who makes the decisions and their area of focus—customer, budget, or design. These details help you determine how to position your defect for action. Through real-life examples, Jane shows you how to become a more influential tester.

T9 Giving and Receiving Feedback: A New Imperative
Omar Bermudez, agilecafe.org
Thursday, June 25, 2015 - 1:30pm - 2:30pm

Giving and receiving feedback are tough for everyone. Who wants to criticize others or be criticized? Although managers have a duty to give honest feedback to staff and peers, many people resist change or differ on how to change—leading to interpersonal conflicts and impacting deliverables. Omar Bermudez explains several techniques—Giving Positive Feedback, Acid Reflux (when you get that sick feeling), and SARA (Surprise, Anger, Rationalization, Acceptance)—that allow people to give and receive honest feedback to promote incremental improvements. Omar explains how to give accurate feedback to and receive the same from senior team members or direct superiors, a skill critical to career advancement. To increase your self-esteem, happiness, and power to influence, Omar teaches you how to present feedback to your peers, your boss, or other colleagues in a diplomatic and efficient way. Take away key insights into how to create a healthy organizational culture with clear and constructive feedback.

T10 The Power of Pair Testing
Kirk Lee, Infusionsoft
Thursday, June 25, 2015 - 1:30pm - 2:30pm

Perhaps you have heard of pair testing but are unaware of its tremendous benefits. Maybe you have tried pair testing in the past but were dissatisfied with the result. When done correctly, pair testing significantly increases quality, decreases overhead, and improves the relationship between testers and developers. Join Kirk Lee as he shares the essential points of this powerful technique that moves testing upstream and prevents defects from being committed to the codebase. Kirk explores how pair testing facilitates discussion, increases test effectiveness, promotes partnership, and provides cross training. Learn why testers and developers say they love pair testing. Kirk describes key tips to ensure success, including the amount of time required for the pair-testing session, the best way to run the session, and how to know when the session is complete. He provides specific steps to take before, during, and after the pair-testing session to make it even more effective.

T11 Test Automation: Investment Today Pays Back Tomorrow
Al Wagner, IBM
Thursday, June 25, 2015 - 1:30pm - 2:30pm

The results of a recent survey, authored by IBM and TechWell, showed that testers want to spend more time automating, more time planning, and more time designing tests—and less time setting up test environments and creating test data. So, where should testers and their organizations invest their time and money to achieve the desired results? What is the right level of technical ability for today’s testers to be successful? As this ongoing debate continues, the simple answer remains: It depends. Join Al Wagner as he explores the many opportunities in the world of testing and test automation. Consider the many approaches for building your automated testing skills and the solutions you create, weighing the pros and cons of each. Explore the options for test and dev organizations to consider to speed up releases and deliver more value to their companies. Leave with the ability to determine which approaches make sense for you and your employer.

T12 If You Could Turn Back Time: Coaching New Testers
Christin Wiedemann, Professional Quality Assurance, Ltd.
Richard Lu, Professional Quality Assurance, Ltd.
Thursday, June 25, 2015 - 1:30pm - 2:30pm

If you could turn back time, what do you wish you had known when you started working as a tester? When you are new to testing, you are faced with daunting challenges. Recent college graduates may find it difficult to apply academic knowledge in practice. It is easy to get discouraged and start questioning whether testing is really for you. Richard Lu and Christin Wiedemann relate their experiences of starting careers as software testers—with no prior testing experience. They share ideas for how senior testers can keep junior testers engaged, and encourage them to learn and step up in their roles. Easy-to-implement suggestions include explaining the company culture, encouraging relationship building, emphasizing communication, discussing the objective and value of testing, and talking about the different meanings of quality. Instead of leaving your team’s new hires to struggle, join this session and learn how to coach new testers to become their best.
