Concurrent Sessions

Sessions are offered on Wednesday and Thursday at the conference and do not require pre-selection. Build your own custom learning schedule, or choose to follow one of our tracks by topic area.

W1 Exposing Test Management Myths
Johanna Rothman, Rothman Consulting Group, Inc.
Wednesday, May 7, 2014 - 11:30am - 12:30pm

We’ve all heard test management myths: “Utilize everyone, all the time,” “Don’t let people come to you without solutions to problems,” “Training time is useless,” and my all-time favorite, “Work smarter.” Are you supposed to believe them? Much of what you may have heard about management is myth—based not on evidence but on practices from the Industrial Revolution, or on something someone read in a book that does not fit your context. And it may be wrong—dead wrong. As with many myths, these do contain a tiny nugget of truth. If you would like to learn what to do instead—how to avoid the gridlock of 100 percent utilization, help people develop solutions when they are stuck, use training effectively (and know which kinds to use), and practice the kinds of management that work for knowledge workers—join Johanna in exposing test management myths.

More Information
Learn more about Johanna Rothman.
W2 Testing Lessons Learned from Monty Python
Rob Sabourin, AmiBug.com
Wednesday, May 7, 2014 - 11:30am - 12:30pm

And now for something completely different. Monty Python's Flying Circus revolutionized comedy and brought zany British humor to a worldwide audience. However, buried deep in the hilarity and camouflaged in its twisted wit lie many important testing lessons—tips and techniques you can apply to real-world problems to deal with turbulent projects, changing requirements, and stubborn project stakeholders. Rob Sabourin examines some of the most famous Python bits—“The Spanish Inquisition” telling us to expect the unexpected, “The Dead Parrot” asking if we should really deliver this product to the customer, “The Argument” teaching us about bug advocacy, “Self Defense against Fresh Fruit” demonstrating the need to pick the right testing tool, and a host of other goofy gags, each one with a lesson for testers. Learn how to test effectively with persistence, how to make your point with effective communication, and how to clarify project goals and requirements.

More Information
Learn more about Rob Sabourin.
W3 Automation through the Back Door
Seretta Gamba, Steria Mummert ISS GmbH
Wednesday, May 7, 2014 - 11:30am - 12:30pm

When working on test automation, it seems that even though you have done everything right—good architecture, efficient framework, and good tools—you still don’t make progress. The product Seretta Gamba’s team was to automate had become so successful that anyone with even a little domain knowledge was sent to the field while those left on the automation team didn’t really know the full application. In typical Catch-22 fashion, the regression testing workload prevented testers from supporting the automation team who therefore could not automate what would have effectively reduced the regression test load. Seretta reasoned that since testers used exactly the same information needed to execute manual tests, the most efficient way to “harvest” tests would be to extend the test automation framework to also support manual testing. Learn how Seretta succeeded in giving better support to manual testing while simultaneously collecting the data needed to finally move on with test automation.

More Information
Learn more about Seretta Gamba.
W4 The Three Pillars Approach to Your Agile Test Strategy
Bob Galen, Velocity Partners
Wednesday, May 7, 2014 - 11:30am - 12:30pm

Far too often, agile transformations focus just on development teams, agile frameworks, or technical practices as adoption strategies unfold. Often the testing activity and the testing teams are left behind in agile strategy development or, worse yet, are only along for the ride. That’s simply not an effective transformation strategy. Join experienced agile coach Bob Galen as he shares the Three Pillars Framework for establishing a balanced strategic plan to effectively implement agile quality and testing. The pillars focus on development and test automation, testing practices, and whole-team collaboration activities that will ensure you have a balanced approach to agile testing and quality. Specifically, the framework focuses on effective tactics for risk-based testing, exploratory testing, paired collaboration around agile requirements, agile test design, and TDD-BDD-functional testing automation. Leave with the tools to immediately initiate or rebalance a much more effective agile testing strategy.

More Information
Learn more about Bob Galen.
W5 Twelve Tips for Becoming a More Professional Tester
Joel Montvelisky, PractiTest
Wednesday, May 7, 2014 - 11:30am - 12:30pm

Many testers feel that their organizations do not treat them with the same level of professionalism and respect that their development peers receive. Testers attribute this to the fact that testing is a relatively “new” profession, that few universities grant a formal degree in software testing, and to all sorts of other external factors—things beyond their control. But, to be perceived as professionals, we need to start by becoming more professional. Joel Montvelisky shares a number of things we can do to improve our professionalism while increasing the value of our work in our organizations. These simple yet important things include expanding our technical involvement in projects, increasing the interaction with our users and their representatives, leading the risk assessment process in our projects, and more. Joel reviews twelve specific points—practical tips you can start using right away to become a more professional tester.

More Information
Learn more about Joel Montvelisky.
W6 Test Process Improvement in Agile
Jeroen Mengerink
Wednesday, May 7, 2014 - 11:30am - 12:30pm

Current Test Process Improvement (TPI) models have proven to be a mismatch when used to assess testing in an agile context, since it is significantly more difficult to describe how to become more flexible than it is to describe how to become more structured. So what’s missing in the current models, and how can we help organizations improve their testing in an agile environment? Jeroen Mengerink introduces a systematic model to improve testing in agile software development. The Agile Manifesto, the Scrum process, and the field experience of numerous testers form the basis for this new model. Jeroen describes new key areas to consider and shares good practices that help mature the agile process. Learn how to improve your testing by using a combination of existing and new key process areas. Additionally, discover a set of practices that can add value to your testing immediately.

More Information
Learn more about Jeroen Mengerink.
W7 The Golden Rules for Managing Large Testing Initiatives
Krishna Murthy, Tata Consultancy Services
Wednesday, May 7, 2014 - 1:45pm - 2:45pm

Large technology transformations and undertakings are challenging because they cut across multiple systems and domains of technology and solutions. They involve multiple organizations—from corporate to operations—making communication and collaboration challenging. This complication is amplified when the IT organization in these large enterprises engages multiple vendors. Krishna Murthy shares his experience on how to tackle such situations with customized amalgamations of the best traditional and agile program management practices―Golden Rules of Engagement. Krishna introduces a framework based on these rules to manage a large program running at four key levels―commercial, financial, technical, and personal. Two of the rules are (1) The entire project team—across multiple organizations—should experience the output as a single entity, and (2) Create self-forming, dynamic testing teams within temporary static team structures that can collaborate across boundaries. Join Krishna to learn about these and other Golden Rules that you should incorporate in your test initiative.

More Information
Learn more about Krishna Murthy.
W8 Continuous Testing through Service Virtualization
Allan Wagner
Wednesday, May 7, 2014 - 1:45pm - 2:45pm

The demand for teams to accelerate software delivery, test continuously, and release high-quality software sooner has never been greater. However, whether your release strategy is based on schedule or quality, the entire delivery process hits the wall when agility stops at testing. When the software, services, or environments required for testing are unavailable, the entire team suffers. Al Wagner explains how to remove these testing interruptions, decrease project risk, and release higher-quality software sooner. Using a real-life example, Al shows you how service virtualization can be applied across the lifecycle to shift integration, functional, and performance testing to the left. Gain an understanding of how service virtualization can be incorporated into your automated build and deployment process, making continuous testing a reality for your organization. Learn what service virtualization can do for you and your stakeholders. The ROI is worth it!
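
The session does not prescribe a particular tool, but the core idea of service virtualization can be sketched with the open source WireMock library: stand up a stub that impersonates an unavailable dependency so testing can continue. The endpoint, payload, and latency below are hypothetical.

import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class PriceServiceStub {
    public static void main(String[] args) {
        // Start a local stand-in for the real pricing service.
        WireMockServer server = new WireMockServer(8089);
        server.start();

        // Define the virtual service's behavior: a canned response,
        // delayed 200 ms to mimic production latency.
        server.stubFor(get(urlEqualTo("/api/price/42"))
            .willReturn(aResponse()
                .withStatus(200)
                .withHeader("Content-Type", "application/json")
                .withBody("{\"sku\": 42, \"price\": 19.99}")
                .withFixedDelay(200)));

        // Tests now point at http://localhost:8089 instead of the
        // unavailable downstream system.
    }
}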

More Information
Learn more about Allan Wagner.
W9 Leveraging Open Source Automation: A Selenium WebDriver Example
David Dang, Zenergy Technologies
Wednesday, May 7, 2014 - 1:45pm - 2:45pm

As online activities create more revenue than ever, organizations are turning to Selenium both to test their web applications and to reduce costs. Since Selenium is open source, there is no licensing fee. However, as with purchased tools, the same automation challenges remain, and users do not have formal support and maintenance. Proper strategic planning and the use of advanced automation concepts are a must to ensure successful Selenium automation efforts. Sharing his experience designing and implementing advanced automation frameworks using Selenium WebDriver, David Dang describes the factors necessary to ensure open source automation is right for your project. David helps you understand the real effort required to implement WebDriver in a way that will scale and minimize script development. Additionally, he dives into must-haves in your Selenium framework design, the resource and timeline considerations necessary to implement WebDriver, and the long-term, continual improvement enhancements all automation engineers should consider in their Selenium automation implementations.
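
As a point of reference for newcomers, here is a minimal WebDriver script in Java, roughly the raw starting point that the kind of scalable framework David describes would abstract away. The URL and locators are hypothetical, and the explicit-wait constructor matches the Selenium versions of that era.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class LoginSmokeTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/login"); // hypothetical URL
            // Explicit waits reduce the timing flakiness that plagues naive scripts.
            new WebDriverWait(driver, 10)
                .until(ExpectedConditions.visibilityOfElementLocated(By.id("username")));
            driver.findElement(By.id("username")).sendKeys("demo");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.cssSelector("button[type='submit']")).click();
        } finally {
            driver.quit(); // always release the browser, even on failure
        }
    }
}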

More Information
Learn more about David Dang.
W10 Risk-Based Testing for Agile Projects
Erik van Veenendaal, Improve Quality IT Services BV
Wednesday, May 7, 2014 - 1:45pm - 2:45pm

Many projects implicitly use some kind of risk-based approach for prioritizing testing activities. However, critical testing decisions should be based on a product risk assessment process using key business drivers as its foundation. For agile projects, this assessment should be both thorough and lightweight. PRISMA (PRoduct RISk MAnagement) is a highly practical method for performing systematic product risk assessments. Learn how to employ PRISMA techniques in agile projects using risk poker. Carry out risk identification and analysis, see how to use the outcome to select the best test approach, and learn how to transform the result into an agile, one-page sprint test plan. Erik shares practical experiences and the results achieved employing product risk assessments. Learn how to optimize your test effort by including product risk assessment in your agile testing practices.
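
The PRISMA method itself is not detailed in this abstract; as a rough illustration of risk-based prioritization, the sketch below scores each product risk as likelihood times impact and sorts the backlog accordingly. The scales, scores, and items are illustrative, not the official PRISMA technique.

import java.util.Comparator;
import java.util.List;

public class RiskPoker {
    // One product risk item, scored by the team during risk poker.
    // The 1..5 scales and the simple weighting are illustrative only.
    record Risk(String feature, int likelihood, int impact) {
        int exposure() { return likelihood * impact; } // simple priority score
    }

    public static void main(String[] args) {
        List<Risk> risks = List.of(
            new Risk("Checkout payment", 4, 5),
            new Risk("Profile avatar upload", 3, 2),
            new Risk("Search autocomplete", 5, 3));

        // Highest exposure first: these items get the most thorough test approach.
        risks.stream()
             .sorted(Comparator.comparingInt(Risk::exposure).reversed())
             .forEach(r -> System.out.printf("%-22s exposure=%d%n",
                                             r.feature(), r.exposure()));
    }
}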

More Information
Learn more about Erik van Veenendaal.
W11 Succeeding as an Ethnic or Minority Tester
Yousef Harfi, Medavie Blue Cross
Wednesday, May 7, 2014 - 1:45pm - 2:45pm

No one wishes to be seen as different or to treat other people differently because of their uniqueness. Unfortunately, we are frequently judged, and our skills presumed, based on our ethnicity, beliefs, politics, appearance, lifestyle, gender, or sexual orientation. Our professional success and our projects’ success can be derailed because of lack of understanding, stereotyping, or fear. Our professional environment includes us all―brown, black, white, tall, short, male, female, straight, gay, extroverts, and introverts. A team’s strength is built on diversity of knowledge and skills, and our differences should be celebrated while managing the realities of human relations. Yousef Harfi shares his unique experiences, tips, and stories―some hilarious―that are certain to entertain, tickle your thoughts, challenge your knowledge, and provoke some self-examination. Let’s share, laugh, learn, and build.

More Information
Learn more about Yousef Harfi.
W12 Improving the Mobile Application User Experience (UX)
Philip Lew, XBOSoft
Wednesday, May 7, 2014 - 1:45pm - 2:45pm

If users can’t figure out how to use your mobile application and what’s in it for them, they’re gone. Usability and UX are key factors in keeping users satisfied, so understanding, measuring, testing, and improving these factors is critical to the success of today’s mobile applications. However, sometimes these concepts can be confusing—not only differentiating them but also defining and understanding them. Philip Lew explores the meanings of usability and UX, discusses how they are related, and then examines their importance for today’s mobile applications. After a brief discussion of how the meanings of usability and user experience depend on the context of your product, Phil defines measurements of usability and user experience that you can use right away to quantify these subjective attributes. He crystallizes abstract definitions into concepts that can be measured, offers metrics to evaluate and improve your product, and provides numerous examples that demonstrate how to improve your mobile app.

More Information
Learn more about Philip Lew.
W13 An Ounce of Prevention...
Kirk Lee, Infusionsoft
Wednesday, May 7, 2014 - 3:00pm - 4:00pm

We QA professionals know that the ideal is to build quality into a product rather than to test defects out of it. We know about the overhead associated with defects and how costs grow over time the later in the development process we find defects. If prevention is better than cure, shouldn’t we invest more time and effort in preventing defects? Kirk Lee shares the things we testers can do before coding begins to keep defects from being created in the first place. Kirk explains how to involve QA at the very beginning of the development process where prevention is most valuable. Learn the early indicators that identify and precede potential defects. Kirk shows how timely conversations can locate lapses in thinking. He demonstrates how reviews, thought experiments, and early-design testing can find flaws in advance. Learn strategies, techniques, and processes you can use to prevent defects at every stage of your development process.

More Information
Learn more about Kirk Lee.
W14 Testing in the Wild: Practices for Testing Beyond the Lab
Matt Johnston, Applause
Wednesday, May 7, 2014 - 3:00pm - 4:00pm

The stakes in the mobile app marketplace are very high, with thousands of apps vying for the limited space on users’ mobile devices. Organizations must ensure that their apps work as intended from day one and, to do that, must implement a successful mobile testing strategy leveraging in-the-wild testing. Matt Johnston describes how to create and implement a tailored in-the-wild testing strategy to boost app success and improve user experience. Matt provides strategies, tips, real-world examples, and advice on topics ranging from fragmentation issues, to the different problems inherent in web and mobile apps, to deciding which devices you must test versus those you should test. After hearing real-world examples of how testing in the wild affects app quality, leave with an understanding of and actionable information about how to launch apps that perform as intended in the hands of end users—from day one.

More Information
Learn more about Matt Johnston.
W15 Implementing Testing for Behavior-Driven Development Using Cucumber
Max Saperstone, Coveros
Wednesday, May 7, 2014 - 3:00pm - 4:00pm

With the behavior-driven development (BDD) methodology, development teams write high-level, plain natural language tests to describe and exercise a system. Unfortunately, it is difficult to develop BDD tests that encompass all interfaces and to write tests that can be reused in multiple scenarios. Specifying BDD tests to run as part of different test scenarios without duplicating work frequently requires substantial effort and rework. But Cucumber provides a robust framework for writing BDD tests. Max Saperstone shows how to use Cucumber’s flexible structure, in combination with the Java language, to write single tests that run over multiple testing interfaces. Building on Cucumber basics, this framework provides a generic model for testing any application. Additionally, Max shares some valuable suggestions for building on standard Cucumber reports, gives additional information for debugging and traceability, and describes test runners and their inputs to help you create more dynamic testing scenarios.
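
For readers new to Cucumber, the split between plain-language scenarios and step definitions might look like the following minimal sketch (assuming the cucumber-java and JUnit libraries; the scenario and bindings are hypothetical). The reuse Max describes comes from binding the same steps to different interfaces, such as a UI driver or an API client.

// transfer.feature (Gherkin, readable by the whole team):
//   Scenario: Transfer between accounts
//     Given a checking balance of 100 dollars
//     When I transfer 40 dollars to savings
//     Then my checking balance is 60 dollars

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.Assert.assertEquals;

public class TransferSteps {
    private int checking;

    @Given("a checking balance of {int} dollars")
    public void aCheckingBalance(int amount) { checking = amount; }

    @When("I transfer {int} dollars to savings")
    public void transfer(int amount) {
        checking -= amount; // could instead drive a UI, API, or service layer
    }

    @Then("my checking balance is {int} dollars")
    public void assertBalance(int expected) { assertEquals(expected, checking); }
}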

More Information
Learn more about Max Saperstone.
W16 Meet Big Agile: Testing on Large-Scale Projects
Geoff Meyer, Dell, Inc.
Wednesday, May 7, 2014 - 3:00pm - 4:00pm

Are you embarking on a large-scale, globally distributed, multi-team scrum project? Have you already identified the potential testing challenges that lie ahead? Or have you belatedly encountered them and are now working on them in real time? Five years and more than 200 projects into its agile journey, Dell Enterprise Solutions (ESG) has empirically determined that once a project extends beyond three scrum teams, interesting testing challenges arise—inconsistent “done” criteria, integration testing underscored by epic/story interdependencies across teams, test automation inconsistency, and uncoordinated regression testing. Worse yet, the more teams involved, the less likely it is that a single scrum team has the visibility to validate the overall product from a customer usage perspective as the product evolves through sprints. Geoff Meyer serves up some lessons learned from within the Dell ESG Validation organization as it evolved its agile testing and automation strategies from a waterfall-based environment to one that fully embraced agile Scrum across its entire software product portfolio.

More Information
Learn more about Geoff Meyer.
W17 The Impact of Cognitive Biases on Test and Project Teams
Thomas Cagley, The David Consulting Group
Wednesday, May 7, 2014 - 3:00pm - 4:00pm

Teams are a fundamental part of the way we all work. Understanding the ins and outs of team decision making makes us better employees, better co-workers, and even better people. As developers and testers, we continuously make decisions. Most decisions are based on how the decision maker perceives the information at hand. That perception is driven by many factors including cognitive biases—the mental shortcuts we use that lead us to simplify, make quick decisions, and ultimately mess up when we’re trying to attack new problems. Biases can affect how information is perceived, and how teams and individuals behave. Biases are like optical illusions—we see what we want to see, even when we know better. Learn the cognitive biases that affect team thinking and take away strategies for how your team can work with and around these biases. Finally, discover how psychology can make your team more efficient and effective together.

More Information
Learn more about Thomas Cagley.
W18 Making Numbers Count: Metrics That Matter
Mike Trites, Professional Quality Assurance, Ltd.
Wednesday, May 7, 2014 - 3:00pm - 4:00pm

As testers and test managers, we are frequently asked to report on the progress and results of our testing. The question “How is testing going?” may seem simple enough, but our answer is ultimately based on our ability to extract useful metrics from our work and present them in a meaningful way. This is particularly important in agile environments, where clear, concise, and up-to-date metrics are potentially needed multiple times per day. Mike Trites identifies a number of ways metrics can be used to measure progress during a test cycle and, ultimately, to determine when to consider testing complete. Learn the common pitfalls that metrics misuse can lead to and how you can avoid them by giving proper context when communicating metrics to your stakeholders. Take back key metrics for measuring the effectiveness of your testing and discover how to use what is learned on one project to improve your testing process on future projects.
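
The abstract does not specify which metrics Mike covers; one commonly cited example of a metric that needs careful context is defect detection percentage (DDP), sketched below with made-up numbers.

public class DefectDetectionPercentage {
    public static void main(String[] args) {
        // Illustrative numbers, not from the session.
        int foundInTest = 180;      // defects found before release
        int foundInProduction = 20; // defects reported after release

        // DDP: the share of total known defects that testing caught.
        double ddp = 100.0 * foundInTest / (foundInTest + foundInProduction);
        System.out.printf("Defect Detection Percentage: %.1f%%%n", ddp); // 90.0%
    }
}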

More Information
Learn more about Mike Trites.
T1 A Funny Thing Happened on the Way to User Acceptance Testing
Randy Rice, Rice Consulting Services, Inc.
Thursday, May 8, 2014 - 9:45am - 10:45am

On large enterprise projects, the user acceptance test (UAT) is often envisioned as a grand event where the users accept the software, money is paid, and the congratulations and champagne flow freely. UAT is expected to go well, even though some minor defects may be found. In reality, acceptance testing can be a very political and stressful activity that unfolds very differently than planned. Randy Rice shares case studies of UAT variances on projects he has facilitated and what can be done in advance to prepare for an acceptance test that is a beauty pageant rather than a monster's ball. Learn how UAT can go in a totally unexpected direction and what you can do to prepare for that situation. Understand the project risks when UAT is performed only as an end-game activity. Learn how to be flexible in staying in sync with stakeholders and user expectations—even when test coverage is reduced to its bare minimum.

More Information
Learn more about Randy Rice.
T2 Leaping over the Boundaries of Boundary Value Analysis
Michael Bolton, DevelopSense
Thursday, May 8, 2014 - 9:45am - 10:45am

Many books, articles, classes, and conference presentations tout equivalence class partitioning and boundary value analysis as core testing techniques. Yet many discussions of these techniques are shallow and oversimplified. Testers learn to identify classes based on little more than hopes, rumors, and unwarranted assumptions, while the "analysis" consists of little more than adding one to or subtracting one from a given number. Do you want to limit yourself to checking the product's behavior at boundaries? Or would you rather test the product to discover that the boundaries aren't where you thought they were, and that the equivalence classes aren't as equivalent as you've been told? Join Michael Bolton as he jumps over the partitions and leaps across the boundaries to reveal a topic far richer than you might have anticipated and far more complex than the simplifications that appear in traditional testing literature and folklore.

Delegates should bring a laptop computer to this session to experience hands-on activities.
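
As a baseline for what the session challenges, a textbook boundary-value check might look like the following sketch (the shipping rule and its values are hypothetical). The final comment hints at the deeper point: the interesting bugs live where the real boundaries diverge from the documented ones.

import static org.junit.Assert.*;
import org.junit.Test;

public class ShippingBoundaryTest {
    // Hypothetical rule under test: orders of $100.00 or more ship free.
    static boolean shipsFree(int cents) { return cents >= 100_00; }

    @Test
    public void probesAroundTheDocumentedBoundary() {
        assertFalse(shipsFree(99_99));  // just below the boundary
        assertTrue(shipsFree(100_00));  // on the boundary
        assertTrue(shipsFree(100_01));  // just above the boundary
        // The deeper question: does the rule hold at 0, at negative totals,
        // at Integer.MAX_VALUE? The "real" boundaries may not be where the
        // specification says they are.
        assertFalse(shipsFree(0));
    }
}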

More Information
Learn more about Michael Bolton.
T3 Patterns of Automation: Simplify Your Test Code
Jeff "Cheezy" Morgan
Thursday, May 8, 2014 - 9:45am - 10:45am

Many organizations are introducing test automation only to discover it is more difficult than they anticipated. The fact is that good test automation requires good coding practices. Good test automation requires good design. To do anything else will lead to spaghetti code that is hard to maintain or update. If you’re new to coding or new to automation, it is difficult to know where to begin. Join Cheezy as he describes and demonstrates lessons he has learned while helping numerous organizations adopt test automation. Cheezy shows the patterns he uses to keep automation code simple and clean, and demonstrates techniques you can use to make your automation code more maintainable. Finally, Cheezy writes code (without a net) to implement these patterns, taking them from theory to implementation.
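
The abstract does not name the specific patterns Cheezy demonstrates; the page object pattern is one widely used way to keep automation code clean and maintainable, sketched here in Java with hypothetical URLs and locators.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// A page object keeps locators in one place; tests call intent-level
// methods, so a UI change touches this class rather than every test.
public class LoginPage {
    private final WebDriver driver;

    public LoginPage(WebDriver driver) { this.driver = driver; }

    public LoginPage open() {
        driver.get("https://example.com/login"); // hypothetical URL
        return this;
    }

    // Navigation methods return the next page object, so tests read as:
    // new LoginPage(driver).open().loginAs("demo", "secret");
    public HomePage loginAs(String user, String password) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login")).click();
        return new HomePage(driver);
    }
}

class HomePage {
    HomePage(WebDriver driver) { /* locators for the landing page live here */ }
}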

More Information
Learn more about Jeff "Cheezy" Morgan.
T4 Mobile App Testing Secrets
Jason Arbon, Applause
Thursday, May 8, 2014 - 9:45am - 10:45am

Most app teams aim for 4 stars. Why not 5? Because delivering and maintaining a high-quality app becomes more challenging every day. The requirements of agile and continuous integration put more pressure than ever on testers and quality-focused developers. Add to that the raw complexity of device and platform fragmentation, new sensors, app store processes, star ratings and reviews, increased app competition, mobile automation frameworks that only half work, and users who expect the app to not just work flawlessly but also be intuitive and even beautiful and fun. Jason Arbon shares app testing secrets gleaned from thousands of testers on hundreds of the world’s most popular apps, and data analytics on millions of apps and hundreds of millions of reviews. Learn how the best teams manage their star ratings and app store reviews. Learn what has been tried and failed. Join Jason to get a glimpse of the future of app testing.

More Information
Learn more about Jason Arbon.
T5 Next-Generation Performance Testing with Lifecycle Monitoring
Scott Barber, SmartBear
Thursday, May 8, 2014 - 9:45am - 10:45am

With the increasing market demand for “always on” high performance applications, many organizations find that their traditional load testing programs have failed to keep pace with expectations and competitive pressures. Agile development practices and DevOps concepts of continuous delivery cause old load testing approaches to become unacceptable bottlenecks in the delivery process. Although it remains true that the only way to know for certain how a system will respond to load is to subject it to load, much of what load testing has traditionally accomplished is rooting out performance issues that are detectable and resolvable without actually applying load. The trick is knowing when and how to look for these issues. With specific examples from recent client implementations, Scott Barber shares the T4APM™ approach, a simple and universal process to detect and manage performance issues—with or without applying load—throughout the lifecycle.
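
T4APM™ itself is not spelled out in this abstract, so the sketch below illustrates only the general premise that some performance issues are detectable without generating load: a single timed request with a response-time budget that could run on every build. The URL and budget are hypothetical.

import java.net.HttpURLConnection;
import java.net.URL;

public class ResponseTimeProbe {
    public static void main(String[] args) throws Exception {
        // Time a single request; no load generation involved.
        URL url = new URL("https://example.com/api/health"); // hypothetical
        long start = System.nanoTime();
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.getResponseCode(); // forces the request to complete
        long millis = (System.nanoTime() - start) / 1_000_000;
        conn.disconnect();

        // A budget check like this can run in CI on every build,
        // catching regressions long before a formal load test.
        if (millis > 800) {
            throw new AssertionError("Response took " + millis + " ms (budget 800 ms)");
        }
        System.out.println("OK: " + millis + " ms");
    }
}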

More Information
Learn more about Scott Barber.
T6 Using the Cloud to Load Test and Monitor Your Applications
Charles Sterling, Microsoft
Thursday, May 8, 2014 - 9:45am - 10:45am

Load testing is often one of the most difficult testing efforts to set up—in both the time required for deployment and the cost of the additional hardware needed. Using cloud-based software, you can transform this most difficult task into one of the easiest. Charles Sterling explains how load testing fits into the relatively new practice of DevOps. Then, by reusing the tests created in the load testing effort to monitor applications, the test team can help solve the challenges in measuring, monitoring, and diagnosing applications―not just in development and test but also in production. Chuck demonstrates web performance test creation, locally run load test creation, cloud-executed load testing, application performance monitoring (APM), global system monitoring (GSM), and usage monitoring (UM) for near real-time customer input for your application.

More Information
Learn more about Charles Sterling.
T7 Bugfest!
Shaun Bradshaw, Zenergy Technologies, Inc.
Thursday, May 8, 2014 - 11:15am - 12:15pm

Know any testers with bugs that were opened more than a year ago and are still sitting in their defect queue? More than two years ago? Three? The fact is that many software development efforts focus on delivering new features and functionality, leaving workarounds in place for bugs released in prior versions of applications. Often these defects seem relatively minor—we all have some workarounds for customers—but they are still bugs and ultimately should be dealt with. If you are seeking effective methods to close out those bugs once and for all, Shaun Bradshaw shares his experience eradicating aging bugs—in a Bugfest! Shaun shows how to effectively use kanban techniques to bring visibility to a myriad of outstanding problems left over from previous releases as well as to order and prioritize the work to clear out the nastiest, most offensive defects—and ultimately exterminate those pesky bugs!

More Information
Learn more about Shaun Bradshaw.
T8 Designing for Testability: Differentiator in a Competitive Market
David Campbell, MITRE Corporation
Thursday, May 8, 2014 - 11:15am - 12:15pm

In today’s cost-conscious marketplace, solution providers gain advantage over competitors when they deliver measurable benefits to customers and partners. Systems of even small scope often involve distributed hardware/software elements with varying execution parameters. Testing organizations often deal with a complex set of testing scenarios, increased risk of regression defects, and competing demands on limited system resources for a continuous, comprehensive test program. Learn how designing a testable system architecture addresses these challenges. David Campbell offers practical guidance on how to make testability a key discriminator from the earliest phases of product definition and design. Learn approaches that consistently deliver for high-achieving organizations and how these approaches impact schedule and architecture performance. Gain insight into how to select and customize techniques that are appropriate for your organization’s size, culture, and market.

More Information
Learn more about David Campbell.
T9 Accelerate Testing in Agile through a Shared Business Domain Language
Laurent Py, Smartesting
Thursday, May 8, 2014 - 11:15am - 12:15pm

In agile projects, when the cycle from ideas to production shortens from months to hours, each software development activity—including testing—is impacted. Reaching this level of agility in testing requires massive automation. But test execution is only one side of the coin. How do we design and maintain tests at the required speed and scale? Testing should start very early in the development process and be used as acceptance criteria by the project stakeholders. Early test design, using a business domain language to write tests, is an efficient solution to support rapid iterations and helps align the team on the definition of done. These principles are the basis of acceptance test-driven development practices. Laurent Py shows you how the use of business domain languages enables test refactoring and accelerates automation. Come and learn how you can leverage acceptance tests and key test refactoring techniques.

More Information
Learn more about Laurent Py.
T10 Adopting and Implementing the Right Mobile Testing Strategy
PRabhu Meruga, CSS Corp
Thursday, May 8, 2014 - 11:15am - 12:15pm

With the expansion of mobile platforms, software development and testing services have expanded as well. A wide variety of applications are entering the consumer world as native, mobile web, and hybrid applications. Adding to this complexity are the multiple operating systems, browsers, networks, and BYOD (bring your own device) policies in use. Successful deployment and adoption of these applications in the consumer world requires a robust, flexible, and scalable testing strategy. PRabhu Meruga shares his thoughts on approaching the complex and dynamic environment of mobile applications testing with a focus on optimizing testing effort, enhancing coverage, and delivering quality with confidence. Join PRabhu to explore the world of mobile applications testing with emphasis on the Start, Grow, and Transform model, an end-to-end testing strategy for mobile applications covering testing tool selection, cross-browser and operating system compatibility, device testing, network testing, multipoint function failure, cloud-based testing solutions, and data security testing.

More Information
Learn more about PRabhu Meruga.
T11 Performance Testing in Agile: The Path to 5 Star App Reviews
Shane Evans, Hewlett-Packard
Thursday, May 8, 2014 - 11:15am - 12:15pm

Application performance is the first aspect of quality that every customer experiences. It can mean the difference between winning and losing a customer—between a 5-star app and a 2. No matter how sexy your application is, if it doesn’t load quickly, customers will turn to your competitor. Quality is core to agile, but agile doesn’t mention performance testing specifically. The challenge is that generally user stories don’t include the phrase “…in 3 seconds or less” and developers just focus on developing. Shane Evans shows you how to build performance testing into every user story in every sprint to ensure application performance is maintained in every release. He discusses the need to work with developers to build an automated performance testing framework. With that framework in place, the difficult task of optimizing performance as a competitive differentiator begins. This is where putting the right data in your developers’ hands is critical—and timing is key.

More Information
Learn more about Shane Evans.
T12 DevOps: Where in the World Is Test?
Erik Stensland, Pearson Learning Technology
Thursday, May 8, 2014 - 11:15am - 12:15pm

As the world of software development changes, software testing organizations are challenged to be more innovative to match the speed at which software releases are being deployed. The new software industry buzzword is DevOps, so you might wonder whether your software testing organization is still important and how it fits into this new industry trend. Erik Stensland shares his research into what the DevOps model is, the three ways of implementing DevOps, testing solutions for DevOps, and the benefits of DevOps. Erik discusses the major challenges facing a DevOps test team and offers solutions to elevate your own test automation to become part of the daily automated deployment process. With a real-world example, see how Erik helped Pearson’s engineering team transform itself through technology and new ideas to successfully build a DevOps team that focuses on reliability, repeatability, and quality of features released to market.

More Information
Learn more about Erik Stensland.
T13 Top Challenges in Testing Requirements
Lloyd Roden, Lloyd Roden Consultancy
Thursday, May 8, 2014 - 1:30pm - 2:30pm

Studies show that at least half of all software defects are rooted in poor, ambiguous, or incomplete requirements. For decades, testers have complained about the lack of solid, concrete requirements, claiming that this makes our task more difficult and, in some instances, impossible. Lloyd Roden challenges these beliefs and explains why having detailed requirements can be damaging to both testing and the business. Rather than constantly complaining, Lloyd shows how testers and test managers can rise to the challenges of testing without requirements, testing with evolving requirements, testing with vague requirements, and testing with wrong requirements. To help make your testing more effective, Lloyd provides practical tips and techniques for each of these testing challenges.

More Information
Learn more about Lloyd Roden.
T14 Build the Right Regression Suite with Behavior-Driven Testing
Anand Bagmar, ThoughtWorks
Thursday, May 8, 2014 - 1:30pm - 2:30pm

Manual functional testing is a slow, tedious, and error-prone process. As we continue to incrementally build software, the corresponding regression test suite continues to grow. Rarely is time allotted to consolidate and keep these test cases in sync with the product under development. If these test cases are used as the basis for automation, the resulting suite is composed of very granular tests that are often quite brittle. Using a case study, Anand Bagmar describes how behavior-driven testing (BDT) can be applied to identify the right type of test cases for manual and automated regression testing. Learn how the BDT technique can be applied in your context and domain, regardless of the tools and technologies used in your project and organization.

More Information
Learn more about Anand Bagmar.

T15
Mike Sowers
Thursday, May 8, 2014 - 1:30pm - 2:30pm

Planning, designing, implementing, and tracking results for QA and test automation can be challenging. It is vital to ensure that any selected tools work well with other application lifecycle tools, to drive the adoption of automation across multiple project teams or departments, and to communicate the quantitative and qualitative benefits to key stakeholders. Mike Sowers discusses his experiences creating an automation architecture, establishing tool deployment plans, and selecting and reporting tool metrics at a large financial institution. Mike shares things that went right, such as including the corporate architectural review board, and things that went wrong, such as allowing too much organizational separation to occur between testers and automation engineers. Discover how you can improve the implementation of test automation at your organization.

More Information
Learn more about Mike Sowers.
T16 Mobile Testing in the Cloud
Rachel Obstler, Keynote DeviceAnywhere
Thursday, May 8, 2014 - 1:30pm - 2:30pm

Today, organizations are rapidly deploying mobile versions of their customer-facing and internal applications. With the prevalence of more agile-based approaches and the challenge of an ever-increasing diversity of devices and OS versions, testers are being asked to accomplish more testing in less time. Rachel Obstler shares how leading enterprises are improving the efficiency of their mobile testing using automation and how they identify the right processes and tools for the job. Sharing some fascinating statistics from Keynote’s recent mobile quality survey of more than 69,000 mobile app developers and QA organizations in the top US enterprises, Rachel dives into the challenges identified in the survey and shares five clear ways to improve your testing process: implementing a collaborative agile process, optimizing with a development tool that naturally facilitates testing, using a combination of real and emulated devices (and knowing when to use each), and more.

More Information
Learn more about Rachel Obstler.
T17 Continuous Test Automation
Jared Richardson, Agile Artisans
Thursday, May 8, 2014 - 1:30pm - 2:30pm

Today’s test organizations often have sizable investments in test automation. Unfortunately, running and maintaining these test suites represents another sizable investment. All too often this hard work is abandoned and teams revert to a more costly, but familiar, manual approach. Jared Richardson says a more practical solution is to integrate test automation suites with continuous integration (CI). A CI system monitors your source code and compiles the system after every change. Once the build is complete, test suites are automatically run. This approach of ongoing test execution provides your developers rapid feedback and keeps your tests in constant use. It also frees up your testers for more involved exploratory testing. Jared shows how to set up an open source continuous integration tool and explains the best way to introduce this technique to your developers and testers. The concepts are simple when presented properly and provide solid benefits to all areas of an organization.

More Information
Learn more about Jared Richardson.
T18 Automated Analytics Testing with Open Source Tools
Marcus Merrell, RetailMeNot, Inc.
Thursday, May 8, 2014 - 1:30pm - 2:30pm

Analytics are an increasingly important capability of any large web site or application. When a user selects an option or clicks a button, dozens—if not hundreds—of behavior-defining “beacons” fire off into a black box of “big data” to be correlated with the usage patterns of thousands of other users. In the end, all these little data points form a constellation of information your organization will use to determine its course. But what if it doesn’t work? A misconfigured site option or an errant variable might seem insignificant, but if 10,000 users are firing 10,000 incorrect values concerning their click patterns, it suddenly becomes a problem for the QA department―a department which is often left out of conversations involving analytics. Join Marcus Merrell to learn how analytics work, how to get involved early, and how to integrate analytics testing into the normal QA process, using Selenium and other open source tools, to prevent those misfires from slipping through.
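
One simple way to bring analytics under test with Selenium, assuming the site buffers events in a window.dataLayer array as many tag managers do, is to read that buffer from the test and assert on it. The page URL is hypothetical; verifying the beacons themselves, as the session discusses, would also require inspecting network traffic.

import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import java.util.List;

public class AnalyticsBeaconCheck {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com"); // hypothetical page under test
            // Many tag managers buffer analytics events in window.dataLayer.
            @SuppressWarnings("unchecked")
            List<Object> events = (List<Object>) ((JavascriptExecutor) driver)
                .executeScript("return window.dataLayer || [];");
            // Assert the page fired at least one event before exercising clicks.
            if (events.isEmpty()) {
                throw new AssertionError("No analytics events captured on page load");
            }
            System.out.println("Captured events: " + events);
        } finally {
            driver.quit();
        }
    }
}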

More Information
Learn more about Marcus Merrell.
T19 Ambiguity Reviews: Building Quality Requirements
Susan Schanta, Cognizant Technology Solutions
Thursday, May 8, 2014 - 3:00pm - 4:00pm

Are you frustrated by the false expectation that we can test quality into a product? By the time an application is delivered to testing, our ability to introduce quality principles is generally limited to defect detection. So how do you begin to shift your team’s perceptions into a true quality assurance organization? Susan Schanta shares her approach to Shift Quality Left by performing ambiguity reviews against requirements documents to reduce requirement defects at the beginning of the project. By helping the business analyst identify gaps in requirements, you can help build quality in and improve the team’s ability to write testable requirements. Learn how to review requirements to identify ambiguities and document the open questions that need to be addressed to make requirements clear, concise, and testable. Susan demonstrates her approach to ambiguity reviews and how she turned lessons learned into a Business Analyst Style Guide to drive quality into the requirements gathering process.

More Information
Learn more about Susan Schanta.
T20 Making Testing at eBay More Realistic
Kamini Dandapani, eBay, Inc.
Thursday, May 8, 2014 - 3:00pm - 4:00pm

Have you had customers report issues that cannot be reproduced in the test environment? Have you had defects leak into production because your test environment is not equivalent to production? In the past, the eBay test environment didn’t mirror production data and had security, feature, and service fidelity issues. Kamini Dandapani shares how eBay solved these problems. eBay now dedicates a portion of its production environment to enable eBay engineers to do more realistic testing. This includes a highly controlled, SOX-compliant environment that shares the same design and architecture as production. The environment has automated build and deployment to achieve eBay’s idea of “operational excellence” and provides the ability to leverage historical data sets from production. Learn from Kamini what goes into building a meaningful “dogfooding” environment, the security and infrastructure obstacles eBay had to overcome, and how to convince stakeholders of the return on investment of such a scheme.

More Information
Learn more about Kamini Dandapani.
T21 Using DevOps to Improve Software Quality in the Cloud
Glenn Buckholz
Thursday, May 8, 2014 - 3:00pm - 4:00pm

DevOps is gaining popularity as a way to quickly and successfully deploy new software. With all the emphasis on deployment, software quality can sometimes be overlooked. In order to understand how DevOps and software testing mesh, Glenn Buckholz demonstrates a fully implemented continuous integration/continuous delivery (CI/CD) stack. After describing the internals of how CI/CD works, Glenn identifies the touch points in the stack that are important for testing organizations. With the now accelerated ability to deliver software, the testing groups need to know how this technology works and what to do with it because swarms of manual testers will not be able to keep up. Glenn demonstrates where and how to use automated testing, how to collect and make sense of the massive amount of test results that can be generated from CI/CD, and how to usefully apply manual testing.

More Information
Learn more about Glenn Buckholz.
T22 Top Practices for Successful Mobile Test Automation
Fred Beringer
Thursday, May 8, 2014 - 3:00pm - 4:00pm

Mobile apps bring a new set of challenges to testing—fast-paced development cycles with multiple releases per week, multiple app technologies and development platforms to support, dozens of devices and form factors, and additional pressure from enterprises and consumers who are less than patient with low-quality apps. And with these new challenges comes a new set of mistakes testers can make! Fred Beringer works with dozens of mobile test teams to help them avoid common traps when building test automation for mobile apps. Fred shares some useful best practices for mobile test automation. He explains what and where to automate, how to build testability into a mobile app, how to handle unreliable back-end calls and differing device performance, and how to automate the automation. Fred shares real customer stories and shows how small changes in process can make mobile apps ten times more reliable.

More Information
Learn more about Fred Beringer.
T23 Ensuring Security through Continuous Testing
Jeremy Faircloth, UnitedHealth Group
Thursday, May 8, 2014 - 3:00pm - 4:00pm

Many companies develop strong software development practices that include ongoing testing throughout the development lifecycle but fail to account for testing security-related use cases. This leads to security controls being tacked onto an application just before it goes to production. With security controls implemented in this manner, more security vulnerabilities will be found with less time to correct them. As more applications move to cloud-based architectures, this becomes an even greater problem because some of the protection enjoyed by applications hosted on premises no longer exists. Jeremy Faircloth discusses a better approach—ensuring that testing throughout the development lifecycle includes the appropriate focus on security controls. Jeremy illustrates this through the establishment of security-related use cases, static code analysis, dynamic analysis, fuzzing, availability testing, and other techniques. Save yourself from last-minute security issues by proactively testing the security of your application!
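
As one small example of a security-related use case that can run continuously with the rest of the suite, the sketch below fuzzes a hypothetical input validator with classic hostile payloads; the validator and payload list are illustrative only.

import static org.junit.Assert.*;
import org.junit.Test;

public class SecurityUseCaseTest {
    // Hypothetical sanitizer under test: usernames are 3-20 word characters.
    static boolean isValidUsername(String s) {
        return s != null && s.matches("[A-Za-z0-9_]{3,20}");
    }

    @Test
    public void rejectsClassicInjectionPayloads() {
        // Running on every build, this catches regressions long before a
        // pre-release security audit would.
        String[] payloads = {
            "' OR '1'='1",                  // SQL injection
            "admin'--",                     // SQL comment truncation
            "<script>alert(1)</script>",    // cross-site scripting
            "../../etc/passwd"              // path traversal
        };
        for (String p : payloads) {
            assertFalse("Accepted hostile input: " + p, isValidUsername(p));
        }
        assertTrue(isValidUsername("alice_01")); // legitimate input still passes
    }
}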

More Information
Learn more about Jeremy Faircloth.
T24 Game On: Automating Sports Video Game Testing
Fazeel Gareeboo, EA Sports
Thursday, May 8, 2014 - 3:00pm - 4:00pm

Sports video games generally run on a short cycle time—tied to the start of a particular sport’s season. Like all video games, the pressure is always on to add more features to sell more games, and the list of “cool” features is endless. Getting buy-in to implement automated testing in this environment can be a challenge. And once you get that buy-in, your next challenge is to ensure it provides significant value to the game team. Fazeel Gareeboo shares the lessons his team learned at EA Sports—lessons you can take back to your project. Fazeel describes the sports video game development environment and discusses how you can overcome the reluctance of product owners to spend precious resources on implementing automated testing. Fazeel addresses how to get buy-in from the team and, finally, how to make your product—game or not—more successful using automated testing.

More Information
Learn more about Fazeel Gareeboo.