
Conference Schedule

Explore all that STARCANADA has to offer.

Conference Schedule PDF

Sunday, June 21

Software Tester Certification—Foundation Level (3-Day)
Conrad Fujimoto, Intellectica Systems Inc.
8:30 AM - 4:30 PM
Agile Tester Certification (2-Day)
Matta Saikali, Testrics Inc.
8:30 AM - 4:30 PM
Mobile Application Testing (2-Day)
Gene Gotimer, Coveros, Inc.
8:30 AM - 4:30 PM
Lunch
12:00 PM - 1:00 PM

Training Classes Resume
1:00 PM - 4:30 PM


Tuesday, June 23

Tutorials TB: Testers in Value-Driven Product Development
J.B. Rainsberger, JBRAINS.CA
8:30 AM - 12:00 PM

Even in many agile projects, testers stand aside while others set product and project goals and requirements (stories). These other people aren’t doing poor work but rather are often developing artifacts that are too easily misinterpreted. J.B. Rainsberger presents two value-driven development techniques that testers—who by their very nature are critical thinkers—can use to help the team figure out what to build, which parts to build first, and most importantly, what not to build at all. Learn a powerful modeling technique to reduce a long laundry list of stories down to a clear, high-level path toward a great product. Join J.B. to practice the art of “talking in examples,” which will help you work with product owners, analysts, and programmers to develop a clear picture of what to build. Don’t remain relegated to after-the-fact acceptance testing. Learn how to play a vital role in building the right thing the first time.

Tutorials TA: Test Estimation in Practice
Rob Sabourin, AmiBug.com
8:30 AM - 12:00 PM

Anyone who has ever attempted to estimate software testing effort realizes just how difficult the task is. The number of factors that can affect the estimate is virtually unlimited. The key to good estimates is to understand the primary variables, compare them to known standards, and normalize the estimates based on their differences. This is easy to say but difficult to accomplish because estimates are frequently required even when very little is known about the project—and what is known is constantly changing. Throw in a healthy dose of politics and a bit of wishful thinking, and estimation can become a nightmare. Rob Sabourin provides a foundation for anyone who must estimate software testing work effort. Learn about the test team’s and tester’s roles in estimation and measurement, and how to estimate in the face of uncertainty. Analysts, developers, leads, test managers, testers, and QA personnel can all benefit from this tutorial.

Tutorials TC: Essential Test Management and Planning
Rick Craig, Software Quality Engineering
8:30 AM - 12:00 PM

The key to successful testing is effective and timely planning. Rick Craig introduces proven test planning methods and techniques, including the Master Test Plan and level-specific test plans for acceptance, system, integration, and unit testing. Rick explains how to customize an IEEE-829-style test plan and test summary report to fit your organization’s needs. Learn how to manage test activities, estimate test efforts, and achieve buy-in. Discover a practical risk analysis technique to prioritize your testing and become more effective with limited resources. Rick offers test measurement and reporting recommendations for monitoring the testing process. Discover new methods and develop renewed energy for taking your organization’s test management to the next level.

Tutorials TD: Get Full Value from Your Automated Tests
Gerard Meszaros, FeedXL.com
8:30 AM - 12:00 PM

Due to the demands for reduced cycle times, automated tests are considered “table stakes” these days—whether you’re working on agile or waterfall projects. How do you minimize the cost of creating and maintaining—and maximize the value you get from—these automated tests? What kinds of tools should you use to avoid getting mired in test automation rework hell? Gerard Meszaros shares a toolkit of techniques for preparing robust, easily understood automated tests that also serve as a specification of what should be built. Gerard lays out the key success factors—using different levels of tests, consciously managing the scope of each test and its level of detail, writing tests using business (not technical) terminology, and selecting tools that support this strategy. Exercises give you hands-on experience refactoring tests to make them more readable and maintainable. Gain a valuable appreciation for the kinds of tools you’ll need to prepare tests anyone can read and understand.

Tutorials TE: Fundamental Test Design Techniques
Lee Copeland, Software Quality Engineering
8:30 AM - 12:00 PM

As testers, we know that we can define many more test cases than we will ever have time to design, execute, and report. The key problem in testing is choosing from the almost infinite number of tests available a small, “smart” subset that will find a large percentage of the defects. Join Lee Copeland to discover how to design test cases using formal black-box techniques, including equivalence class testing, boundary value testing, decision tables, and state-transition diagrams. Explore examples of each of these techniques in action. Don’t just pick test cases at random. Learn to selectively choose a set of test cases that maximizes your effectiveness and efficiency to find more defects in less time. Then, examine how to use the test results to evaluate the quality of both your products and your testing. Discover the test design techniques that will make your testing more productive.

Lunch
12:00 PM - 1:00 PM

Tutorials TF: Tips for Expanding Your Testing Toolbox
Alan Page, Microsoft
1:00 PM - 4:30 PM

Regardless of how long you’ve been testing and learning—whether a month or many years—there is always something new to help improve your testing and software development efforts. Although many testers, for better or worse, see test automation as their next—and sometimes only—step to grow their skill set and improve as a tester, there is much more to do. Alan Page discusses, demonstrates, and details concepts and tools that can help everyone test better and provide noticeable technical value to their organization. Alan explores a potpourri of suggestions to help you grow your testing toolbox: techniques for security and performance testing, tools to help you find better bugs, scripting that aids (rather than replaces) your testing, tester tips for code review that can be done with minimal (or zero) knowledge of coding, and more. Finally, you’ll learn simple approaches that will enable you to continue to grow your knowledge and skills throughout your career.

Tutorials TH: Security Testing for Testing Professionals
Gene Gotimer, Coveros, Inc.
1:00 PM - 4:30 PM

Today’s software applications are often security critical, making security testing an essential part of a software quality program. Unfortunately, most testers have not been taught how to effectively test the security of the software applications they validate. Join Gene Gotimer as he shares what you need to know to integrate effective security testing into your everyday software testing activities. Learn how software vulnerabilities are introduced into code and exploited by hackers. Discover how to define and validate security requirements. Explore effective test techniques for assuring that common security features are tested. Learn about the most common security vulnerabilities in applications, how to identify key security risks, and how to mitigate them with testing. Understand how to security test applications—both web- and GUI-based—during the software development process. Review examples of how common security testing tools work and assist the security testing process. Take home valuable tools and techniques for effectively testing the security of your applications going forward.

Tutorials TG: Getting Things Done: What Testers Do in Agile Sprints
Rob Sabourin, AmiBug.com
1:00 PM - 4:30 PM

Avoiding siloed development and test is a tricky business—even with agile practices in place. It is easy for agile teams to fall into the rut in which testers only do testing and programmers only do coding. Rob Sabourin explores many ways to apply your testing knowledge and experience inside a Scrum sprint or iteration and throughout an agile project. He finds that testers are among the most skilled team members in story grooming, elicitation, and exploration. Rob describes a host of ways testers add value to an agile sprint—using their analysis skills to help clear the way to make tough technical trade-offs; pairing with programmers to help design and review unit tests; studying static analysis reports to find unexpected code complexity or security issues; and much more. Join Rob to see how testers can start working hand-in-hand with developers, business analysts, and product owners to get more things done in agile sprints and projects.

Tutorials TI: Measurement and Metrics for Test Managers
Rick Craig, Software Quality Engineering
1:00 PM - 4:30 PM

To be most effective, test managers must develop and use metrics to help direct the testing effort and make informed recommendations about the software’s release readiness and associated risks. Because one important testing activity is to “measure” the quality of the software, test managers must measure the results of both the development and testing processes. Collecting, analyzing, and using metrics is complicated because many developers and testers are concerned that the metrics will be used against them. Join Rick Craig as he addresses common metrics—measures of product quality, defect removal efficiency, defect density, defect arrival rate, and testing status. Learn the guidelines for developing a test measurement program, rules of thumb for collecting data, and ways to avoid “metrics dysfunction.” Rick identifies several metrics paradigms and discusses the pros and cons of each. Delegates are urged to bring their metrics problems and issues for use as discussion points.

Welcome Reception
4:30 PM - 5:30 PM

Bonus Sessions B1: Jump Start to Mobile Testing (Part of the yvrTesting June Meetup)
Melissa Tondi, Denver Automation and Quality Engineering
5:30 PM - 6:30 PM

In this talk, you will learn how to jump-start your team to be able to support the ever-growing mobile arena. We will focus on high-level key questions and points to consider around your company and organization’s mobile strategy and more in-depth points within your testing team.

About yvrTesting: yvrTesting is a Vancouver-based user group focused on sharing experiences and ideas around software quality assurance and testing.


Wednesday, June 24

Bonus Sessions B2: Win the Race with a Dark Horse—Drive Enterprise Agility through QA and Testing
Prasad MK, TCS Assurance Services
7:15 AM - 8:15 AM

Agility has never been as imperative to business success as it is in today’s hyper-competitive digital world. Organizations need to launch their products and services to market faster, maintain scalable and flexible business models to address ever-changing customer expectations, and offer a superior customer experience every time, all while mitigating business risks. In this scenario, the quality assurance (QA) and testing function assumes a pivotal role.

In this session, insights will be shared on:

  • How QA and Testing can act as a catalyst across the entire lifecycle
  • How best-in-class tools & techniques and co-existence with Dev and Ops can drive enterprise agility
  • Metrics that proactively measure customer experience while mitigating organizational risk
  • Frameworks for a robust engagement and governance model to help drive efficiencies across the board

Keynotes K1: How We NOW Test Software at Microsoft
Alan Page, Microsoft
8:30 AM - 9:45 AM

In December 2008 when How We Test Software at Microsoft was first published, the software community appreciated the insight into many testing activities and processes popular at Microsoft. Six and a half years later, many companies—including Microsoft—have evolved and changed in a variety of ways, and now much of the book is outdated or obsolete. New products, new ideas, and new strategies for releasing software have emerged. Alan Page explores Microsoft’s current approaches to software testing and quality. He digs into new practices, describes changing roles, rants about long-lived ideas kicked to the curb in the past seven years—and might even share a few tidbits not fit for print and wide-scale distribution. To give organizations food for thought and ideas for growth, Alan reveals what’s new in quality approaches, developer-to-tester ratios, agile practices, tools, tester responsibilities—and lessons he’s learned along the way.

Concurrent Sessions W2: Testing the Internet of Things
Regg Struyk, Polarion Software
10:15 AM - 11:15 AM

Embedded software—now being referred to as the Internet of Things (IoT)—continues to permeate almost every industry—from household appliances to heart monitors. It is estimated that there are at least a million lines of code in the average car. As IoT explodes from millions of devices to tens of billions in the next few years, new challenges will emerge for software testing. Security, privacy, complexity, and competing standards will fuel the need for innovative testing. Customers don't care why your software failed in the connected chain—only that it did fail. Companies that focus on quality will ultimately be the successful brands. Learn what new approaches are required for testing the “zoo” of interconnected devices. As products increasingly connect physical hardware with applications, we must revisit old testing approaches. IoT is about analyzing data in real time, allowing testers to make quicker and more informed decisions. If IoT testing is in your future, this session is for you.

Concurrent Sessions W1: Leadership for Test Managers and Testers
Rick Craig, Software Quality Engineering
10:15 AM - 11:15 AM

Many organizations spend a great deal of time and effort acquiring and learning to use the latest techniques and technology, but they make little or no attempt to train or mentor their staff to be better leaders. While it is true that technology is important, test teams without able leaders will struggle to be successful. Rick Craig shares some of the lessons he has learned in his roles as test manager, military leader, and entrepreneur. Initially, Rick discusses some classic leadership topics—leadership traits and styles, the cornerstones of leadership, and principles of leadership. Explore the importance of influence leaders and how to identify and encourage them. Discover the positive and negative indicators of morale and how to maintain high morale within a team. Learn how to give direction without being a micromanager. Discuss what motivates and what demotivates testers. Rick encourages you to bring your leadership challenges to serve as points of discussion.

Concurrent Sessions W4: Testing Mobile App Performance
Brad Stoner, Neotys
10:15 AM - 11:15 AM

The mix of ever-smarter mobile devices and the constant connectivity of wireless networks have changed the way users access applications—and the way we develop and test them. Deployed applications deliver different content and functionality depending on whether the user is accessing them via a browser, smartphone, or tablet. And applications are accessed over myriad network configurations, including wireless and mobile networks. Brad Stoner presents an in-depth look at performance testing challenges for mobile applications including recording from devices, playing back device-specific requests, and accounting for variances in users’ geographical locations. Discover some of the best mobile performance testing approaches such as emulating mobile networks with varying connection speeds, packet loss, and latency during load tests. Find out when to use real devices vs. emulators to ensure high mobile application performance delivery to all end-users, at all times—on any device or network.

Concurrent Sessions W3: From Formal Test Cases to Session-Based Exploratory Testing
Ron Smith, Intuit
10:15 AM - 11:15 AM

Agile software development is exciting, but what happens when your team is entrenched in older methodologies? Even with support from the organization, it is challenging to lead an organization through the transformation. As you start making smaller, more frequent releases, your manual test cases may not keep up, and your automated tests may not yet be robust enough to fill the gap. Add in the reality of shrinking testing resources, and it is obvious that change is required. But how and what should you change? Learn how Ron Smith and his team tackled these challenges by moving from a test case-driven approach to predominantly session-based exploratory testing, supported by “just enough” documentation. Discover how this resulted in testers who are more engaged, developers who increased their ability and willingness to test, and managers who increased their understanding and insight into the product. Use what you learn from Ron to begin the transformation in your organization.

Industry Technical Presentations ITP1: “Works on our PCs”: Improving Collaboration between Test and Dev
Dan Levy, Telerik
10:15 AM - 11:15 AM
  • Testers must have a strong voice—get involved from day one
  • Work with devs early on—explain issues that you anticipate and what you need from them
  • Use tooling to streamline your process

Industry Technical Presentations ITP2: Painless Mobile Application Quality Testing
Christopher De Kok, IBM
10:15 AM - 11:15 AM
  • Submit defects in seconds while using the app from your mobile device
  • Rapid insight into why an app fails
  • Mine app store ratings and reviews to extract actionable feedback before negative reviews go viral
  • Get feedback about your app straight from your customers
  • Get the latest app version in the hands of testers as soon as it is available

Concurrent Sessions W5: Building a World-Class Quality Team at eBay
Steve Hares, eBay
11:30 AM - 12:30 PM

Today, many test methodologies can be used to achieve high quality and productivity—Agile/Scrum, TDD, data modeling, risk analysis, and personas, just to name a few. So how do you pick the best approaches and techniques for your team and projects? Learn how Steve Hares helped build a world-class team from the ground up at eBay through iterative best-fit analysis of processes and methods. Discover why and how they adopted agile processes in some areas, waterfall in others, risk-based testing where appropriate, data model-driven testing, ad-hoc testing, and work-flow testing. At the same time, they incorporated test automation and integrated load/performance testing into the development process to achieve world-class quality. Steve’s team now tests everything from enterprise-wide products to IVRs, from batch files to voice biometrics. If your methodology isn't working just right, chances are you need to find the best-fit methods through a continuous improvement process.

Concurrent Sessions W6: Virtualize APIs for Better Application Testing
Lorinda Brandon, SmartBear Software
11:30 AM - 12:30 PM

In today’s interconnected world, APIs are the glue that allows software components, devices, and applications to work together. Unfortunately, many testers don’t have direct access to manipulate the APIs during testing and must rely on either testing the API separately from the application or testing the API passively through functional application testing. Lorinda Brandon maintains that these approaches miss the most important kind of API testing—uncovering how your application deals with API constraints and failures. Lorinda describes common API failures—overloaded APIs, bad requests, service unavailability, and API timeouts—that negatively impact applications, and how application testers miss these scenarios, especially in third-party APIs. She explores how and when virtualization can and cannot help, including creating a virtual API that can fail. Lorinda discusses the importance of simulating API failures in web and mobile application testing, and identifies tools and technologies that help virtualize your APIs.

Concurrent Sessions W8: Usability Testing Goes Mobile
Susan Brockley, ExxonMobil
11:30 AM - 12:30 PM

The introduction of mobile devices and applications presents new challenges to traditional usability testing practices. Identifying the differences between usability testing techniques for traditional desktop applications and mobile applications is critical to ensuring their acceptance and use. New equipment requirements for usability testing of mobile applications add to transition issues. Join Susan Brockley to discover ways to transition your traditional usability testing program into the mobile environment. Review usability testing fundamentals and then explore additional dimensions—context, affordance, and accessibility—of mobile usability testing. Learn how user expectations influence and change our approach to usability and how new factors such as power, connectivity, and protective covers impact the overall user experience. Get advice from Susan on how to plan and conduct field tests that are representative of your target audience. Finally, assess your organization’s usability maturity and take back positive steps to make your transition into the mobile usability testing field successful.

Industry Technical Presentations ITP3: Time to Cut the Cord: Bringing DevOps Back to Mobile
Dan McFall, Mobile Labs
11:30 AM - 12:30 PM
  • USB-tethering of devices for use by developers, QA, and support professionals diminishes team agility
  • Without devices under management as part of a highly available infrastructure, DevOps efficiencies can evaporate in mobility
  • Considerations for types of mobile testing and the wisest places to spend your time

Concurrent Sessions W7: Test Automation Strategies and Frameworks: What Should Your Team Do?
Max Saperstone, Coveros, Inc.
11:30 AM - 12:30 PM

Agile practices have done a magnificent job of speeding up the software development process. Unfortunately, simply applying agile practices to testing isn't enough to keep testers at the same pace. Test automation is necessary to support agile delivery. Max Saperstone explores popular test automation frameworks and shares the benefits of applying these frameworks, their implementation strategies, and best usage practices. Focusing on the pros and cons of each framework, Max discusses data-driven, keyword-driven, and action-driven approaches. Find out which framework and automation strategy are most beneficial for specific situations. Although this presentation is tool agnostic, Max demonstrates automation with examples from current tooling options. If you are new to test automation or trying to optimize your current automation strategy, this session is for you.

Lunch - Meet the Speakers
12:30 PM - 1:30 PM

Concurrent Sessions W12: Techniques, Tools, and Technology for Better Mobile App Testing
Brad Johnson, SOASTA
1:30 PM - 2:30 PM

Today, mobile app testing expertise is in high demand and offers an exciting career path in test/QA. However, the recent Future of Testing study, sponsored by TechWell, noted that the biggest challenge in mobile—just behind having enough time to test—is expertise. Brad Johnson shares how companies from banking to retail use data from real production users, continuous integration frameworks, cloud-based testing platforms, and real mobile devices to help ensure every user experiences top-rated performance—all the time. Brad shares insight about what to test for mobile, when to first automate, and a metric that will drive real change. Explore how organizations are communicating across teams and improving developer-to-tester collaboration with new approaches. Testers need to develop new skills ranging from software coding requirements to data science. Take away tips and ideas to impact your company, enhance your skill set, and propel your career with exciting options and new challenges.

Concurrent Sessions W10: Inside the Mind of the 21st Century Customer
Alan Page, Microsoft
1:30 PM - 2:30 PM

Testers frequently say that they are the voice of the customer or the customer advocate for their organization’s products. In some situations this can be a helpful mindset, but no matter how hard he tries, a software tester is not the customer. In fact, there is no one better suited to evaluate customer experience than the actual customer of your software. However, getting actionable feedback from customers can be time-consuming, difficult, and often too late to have any meaningful impact on the product. Alan Page shares his thoughts and a number of examples of how to get customer feedback quickly, how to make that feedback actionable, and how to use customer data to drive better software development and testing on any team—and for any product. In this fast-paced session of information and fun, Alan discusses product instrumentation, analysis techniques, reporting, A/B testing, and many other facets of customer feedback.

Concurrent Sessions W11: Establish an Integrated Test Automation Architecture
Mike Sowers, Software Quality Engineering
1:30 PM - 2:30 PM

Implementing test automation requires more than selecting the best tool and starting to write automated tests. Because test automation tools must integrate with your development lifecycle and its various project management, build, integration, and release tools, you need a comprehensive test automation architecture and implementation plan. Drawing on his experiences at a large financial institution, Mike Sowers discusses the steps for establishing a test automation architecture, identifying tool dependencies, establishing deployment plans, and selecting and reporting metrics. Challenges Mike’s organization faced included ensuring that the selected tools worked well with other application lifecycle tools, driving the adoption of automation across multiple project teams or departments, and communicating the quantitative and qualitative benefits to key stakeholders in management. Mike discusses things that went right (such as including the corporate architectural review board) and things that went wrong (allowing too much organizational separation between testers and automation engineers). Take back a to-do list of opportunities and issues to improve your test automation implementation or start a new one.

Concurrent Sessions W9: The Tester’s Role in Agile Planning
Rob Sabourin, AmiBug.com
1:30 PM - 2:30 PM

All too often testers passively participate in agile planning. And the results? Important testing activities are missed, late testing becomes a bottleneck, and the benefits of agile development quickly diminish. However, testers can actively advocate customer concerns while helping to implement robust solutions. Rob Sabourin shows how testers contribute to estimation, task definition, and scoping work required to implement user stories. Testers apply their elicitation skills to understand what users need, exploring typical, alternate, and error scenarios. Testers can anticipate cross-story interference and the impact of new stories on legacy functionality. Rob discusses examples of how to break agile stories into test-related tasks. He shares experiences of transforming agile testers from passive planning participants into dynamic advocates of effective trade-offs, addressing the product owners’ critical business concerns, the teams’ limited resources, and the software projects’ technical risks. Join Rob to explore test infrastructure, test data, non-functional attributes, privacy, security, robustness, exploration, regression, business rules, and more.

Industry Technical Presentations ITP4: Reimagining Assurance and Testing in the Digital Era
Tom Edwards, Nielsen
Kirpal Shahani, TCS
1:30 PM - 2:30 PM
  • The impact of the Digital Five Forces on QA and Testing
  • The Nielsen Reimagination experience in business and QA in the context of opportunity creation and go-to-market
  • Enhanced role of QA and Testing functions in the “Default is digital” world
  • Key elements (next-gen skills, a change in mindset, tools and techniques) of QA reimagination

Concurrent Sessions W15: Automate REST API Testing
Eric Smith, HomeAdvisor
3:00 PM - 4:00 PM

As an organization grows, the body of code that needs to be regression tested constantly increases. However, to maintain high velocity and deliver new features, teams need to minimize the amount of manual regression testing. Eric Smith shares his lessons learned in automating RESTful API tests using JMeter, RSpec, and Spock. Gain insights into the pros and cons of each tool, take back practical knowledge about the tools available, and explore reasons why your shop should require RESTful automation as part of its acceptance test criteria. Many decisions must be made to automate API tests: choosing the platform; how to integrate with the current build and deploy process; and how to integrate with reporting tools to keep key stakeholders informed. Although the initial transition caused his teams to bend their traditional roles, Eric says that ultimately the team became more cross-functionally aligned and developed a greater sense of ownership for delivering a quality product.

Concurrent Sessions W13: Testing for Talent: Leveraging Testing Principles in Building Teams
Joy Toney, ALSAC/St Jude Children's Research Hospital
3:00 PM - 4:00 PM

Application development teams today are asked to deliver more with fewer resources. They work together tirelessly under pressure to deliver quality solutions to their stakeholders. Now imagine—just as the delivery team is about to begin its testing cycle, your lead tester suddenly quits. How do you replace a talented contributor within tight time constraints? Just as testing principles enable delivery of better systems, Joy Toney demonstrates how using those same testing principles enables the test manager to select and hire outstanding team members. First, define your testing team’s acceptance criteria for the position. Rethink the application of the validation and verification processes to hiring, while using a combination of static and dynamic testing techniques. Consider using stress, volume, and performance testing to surface your ideal candidate from the pool of possibilities. Discover new ideas and proven techniques for use in your hiring decisions, so you hire the right person for your test team the first time.

Concurrent Sessions W14: Testing Hyper-Complex Systems: What Can We Know? What Can We Claim?
Lee Copeland, Software Quality Engineering
3:00 PM - 4:00 PM

Throughout history, people have built systems of dramatically increasing complexity. In simpler systems, defects at the micro level are mitigated by the macro level structure. In complex systems, failures at the micro level cannot be compensated for at a higher level, often with catastrophic results. Lee Copeland says that we are building hyper-complex computer systems—so complex that faults can create totally unpredictable behaviors. For example, systems based on the service-oriented architecture (SOA) model can be dynamically composed of reusable services of unknown quality, created by multiple organizations, and communicating through many technologies across the unpredictable Internet. Lee explains that claims about quality require knowledge of test “coverage,” which is an unknowable quantity in hyper-complex systems. Join Lee for a look at your testing future as he describes new approaches needed to measure test coverage in these hyper-complex systems and lead your organization to better quality—despite the challenges.

Industry Technical Presentations ITP5: 7 Steps to Pragmatic Mobile Testing
Tom Chavez, SOASTA
3:00 PM - 4:00 PM
  • Feeling pain from complex test plans, shorter product cycles, and more device platforms?
  • Trying to move from functional and validation testing to analysis?
  • Come learn the 7 Steps to Pragmatic Mobile Testing and tools to accomplish them

Concurrent Sessions W16: Agile Metrics and the Software Delivery Pipeline
Christopher Davis, Nike, Inc.
3:00 PM - 4:00 PM

Today’s build pipelines and agile tracking systems are very advanced and generate lots of data. Christopher Davis has found that many teams face challenges when interpreting that data to show meaningful agile metrics across the entire organization. As a result, measuring agile development ends up being a fuzzy art—when it doesn’t have to be. Using common open source tools, you can automate the collection and aggregation of data from your build pipeline to show the right level of metrics to the right people in your organization, track what means the most to your team, and create actionable metrics you can use to improve your team and process. Join Christopher to learn about open source tools you can use to collect data and create metrics, several key metrics you can use today to help make your team better, and how to implement these tools to automatically collect and distribute them in your build pipeline.
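As a sketch of the kind of aggregation Christopher describes (the field names and data below are illustrative, not taken from any particular tracking tool), a metric such as average cycle time can be computed directly from exported tracking-system timestamps:

```python
# Hypothetical sketch: aggregating cycle time from tracking-system exports.
from datetime import datetime

def cycle_time_days(started, finished, fmt="%Y-%m-%d"):
    """Days elapsed between when work started and when it finished."""
    return (datetime.strptime(finished, fmt) - datetime.strptime(started, fmt)).days

# Illustrative records, as they might be exported from a tracking system.
stories = [
    {"id": "ST-1", "started": "2015-06-01", "finished": "2015-06-04"},
    {"id": "ST-2", "started": "2015-06-02", "finished": "2015-06-09"},
]

times = [cycle_time_days(s["started"], s["finished"]) for s in stories]
average_cycle_time = sum(times) / len(times)
print(average_cycle_time)  # 5.0
```

The same pattern applies to other pipeline metrics (build duration, failure rate): pull raw timestamps or statuses from the tool, reduce them to one number per team, and publish that number automatically.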

Keynotes K2: Lightning Strikes the Keynotes
Lee Copeland, Software Quality Engineering
4:15 PM - 5:15 PM

Throughout the years, Lightning Talks have been a popular part of the STAR conferences. If you’re not familiar with the concept, Lightning Talks consist of a series of five-minute talks by different speakers within one presentation period. Lightning Talks are the opportunity for speakers to deliver their single biggest bang-for-the-buck idea in a rapid-fire presentation. And now, lightning has struck the STAR keynotes. Some of the best-known experts in testing will step up to the podium and give you their best shot of lightning. Get multiple keynote presentations for the price of one—and have some fun at the same time.


Thursday, June 25

Keynotes K3: Build the Right Product Right: Transitioning Test from Critiquing to Defining
Gerard Meszaros, FeedXL.com
8:30 AM - 9:45 AM

Do you find yourself with limited influence over what gets shipped on products you test? Is your report card on product quality often ignored? Do you think you can contribute more? Join Gerard Meszaros as he describes ways to transition from approaching quality with brute force testing to an enlightened and strategic perspective that will have real impact on product quality. Instead of criticizing the product, become an integral part of the development process and learn how you can help define what should be built. Gerard explores design for testability concepts and describes key testability requirements that will afford better, more efficient testing. He explains test design techniques that describe software functionality in layers of plain language tests. Gerard shows how a collaborative approach for building the right product results in much better outcomes from both quality and schedule perspectives. Stop rushing through multiple test-and-fix cycles that result in a lower-quality product. Be part of the solution!


Networking break.

Concurrent Sessions T1: Managing Technical Debt
Philippe Kruchten, Kruchten Engineering Services, Ltd.
10:15 AM - 11:15 AM

Technical debt is slowing your software development projects. Any developer who has gone beyond version 1 has encountered it. Technical debt takes different forms, has many different origins, and does not always equate to bad code quality. Much of it is incurred due to the passage of time and a rapidly evolving business environment. Some is in the form of hundreds of little cuts; some is massive and overwhelming, the result of a single poor design choice. Philippe Kruchten explains how to distinguish different types of technical debt, identify their root causes, objectively assess their impact, and develop strategies suitable in your context to limit or selectively reduce the technical debt you incur. Discover what debt you can happily live with. See when to declare bankruptcy. And learn that not all technical debt is bad. Just like in the real world, some technical debt can be a valuable investment for the future.

Concurrent Sessions T2: Reduce Test Automation Execution Time by 80%
Tanay Nagjee, Electric Cloud
10:15 AM - 11:15 AM

Software testers and quality assurance engineers are often pressured to cut testing time to ensure on-time product releases. Usually this means running fewer test cycles with the risk of worse software quality. As companies embrace continuous integration (CI) practices that require frequent build and test cycles, the pressure to speed up automated testing is intense. Tanay Nagjee shows how you can cut the time to run an automated test suite by 80%—for example, from two hours to under 25 minutes. Find out how Tanay’s team broke down their test suites into bite-sized tests that could be executed in parallel. Leveraging a cluster of computing horsepower (either on-premises physical machines or in the cloud), you can refactor large test suites to execute in a fraction of the time they take now. With real example data and a live demonstration, Tanay outlines a three-step approach to achieve these results within different test frameworks.

Concurrent Sessions T4: Root Cause Analysis for Testers
Jan van Moll, Philips Healthcare
10:15 AM - 11:15 AM

Bad product quality can haunt companies long after a product’s release, and root cause analysis (RCA) of product failures is an indispensable step in preventing their recurrence. Unfortunately, the testing industry struggles with doing proper RCA. Moreover, companies often fail to unlock the full potential of RCA by not including testers in the process. Failing to recognize the real value testers bring to RCA is a process failure. Another failure is not recognizing how valuable RCA results are for devising enhanced test strategies. Using real-life—and often embarrassing—examples, Jan van Moll illustrates the added value that testers bring and discusses the pitfalls of RCA. Jan challenges testers and managers to analyze and rethink their own RCA practices. Learn how to increase your value as a professional tester to your business by performing powerful RCA—and avoiding its pitfalls.

Industry Technical Presentations ITP6: Driving Test Innovation through Automation
John Hensel, Jazz Aviation
10:15 AM

Join John Hensel as he:

  • Shares Jazz Aviation’s QA journey, experiences, and lessons learned
  • Explains their selection process and implementation of the Tricentis Tosca TestSuite
  • Discusses their enterprise testing strategy and future plan

Concurrent Sessions T3: Create Disposable Test Environments with Vagrant and Puppet
Gene Gotimer, Coveros, Inc.
10:15 AM - 11:15 AM

As the pace of development increases, testing has more to do and less time in which to do it. Software testing must evolve to meet delivery goals while continuing to meet quality objectives. Gene Gotimer explores how tools like Vagrant and Puppet work together to provide on-demand, disposable test environments that are delivered quickly, in a known state, with pre-populated test data and automated test fixture provisioning. With a single command, Vagrant provisions one or more virtual machines on a local box or in a private or public cloud. Puppet then takes over to install and configure software, set up test data, and get the system or systems ready for testing. Since the process is automated, anyone on the team can use the same Vagrant and Puppet scripts to get his own virtual environment for testing. When you are finished with an environment, Vagrant tears it back down, ready to recreate it later in the same original state.

Concurrent Sessions T6: Write Your Test Cases in a Domain-Specific Language
Beaumont Brush, Dematic, Inc.
11:30 AM - 12:30 PM

Manual test cases are difficult to write and costly to maintain. Beaumont Brush suggests that one of the more important but infrequently discussed reasons is that manual tests are usually written in natural language, which is ineffective for describing test cases clearly. Employing a domain-specific language (DSL), Beaumont and his team approach their manual test cases exactly like programming code and gain the benefits of good development and design practices. He shares their coding standards, reusability approach, and object models that integrate transparently into the version control and code review workflow. Beaumont demonstrates two DSL approaches―a highly specified DSL written in Python and a more functional DSL that leverages Gherkin syntax and does not require a computer language to implement. By making your test cases easier to write and maintain, your team will improve its test suite and have time for automating more tests.
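A minimal sketch of the first approach (illustrative names only, not Beaumont's actual DSL): a fluent Python API that records Given/When/Then steps as structured data, so test cases can be diffed, reviewed, and version-controlled like code while still rendering to readable text for manual execution.

```python
# Hypothetical sketch of a manual-test-case DSL in Python.
class TestCase:
    def __init__(self, title):
        self.title = title
        self.steps = []  # list of (keyword, text) pairs

    def given(self, text):
        self.steps.append(("Given", text))
        return self

    def when(self, text):
        self.steps.append(("When", text))
        return self

    def then(self, text):
        self.steps.append(("Then", text))
        return self

    def render(self):
        """Render the structured steps as a readable manual test script."""
        lines = [f"Test: {self.title}"]
        lines += [f"  {keyword} {text}" for keyword, text in self.steps]
        return "\n".join(lines)

tc = (TestCase("Login rejects bad password")
      .given("a registered user")
      .when("they submit an incorrect password")
      .then("an error message is shown"))
print(tc.render())
```

Because the steps are data rather than free-form prose, shared steps can be factored into reusable helpers, and a linter or code review can catch malformed or duplicated test cases before they reach the test lab.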

Concurrent Sessions T7: Transform a Manual Testing Process to Incorporate Automation
Jim Trentadue, Ranorex
11:30 AM - 12:30 PM

Although most testing organizations have automation, it’s usually a subset of their overall efforts. Typically the processes for the department have been previously defined, and the automation team must adapt accordingly. The major issue is that test automation work and deliverables do not always fit into a defined manual testing process. Jim Trentadue explores what test automation professionals must do to be successful. These success factors include understanding development standards for objects, structuring tests for modularity, and eliminating manual efforts. Jim reviews the revisions required to a V-model testing process to integrate the test automation work. This requires changes to the manual testing process, specifically at the test plan and test case level. Learn the differences between automated and manual testing process needs, how to start a test automation process that ties into your overall testing process, and how to do a gap analysis for those actively doing automation, connecting better with the functional testing team.

Concurrent Sessions T8: Become an Influential Tester: Learn How to Be Heard
Jane Fraser, Anki, Inc.
11:30 AM - 12:30 PM

As a tester, are you frustrated that no one listens to you? Are you finding bugs and having them ignored? Are you worried that the development process and product quality aren’t as good as they should be? Jane Fraser shares ways to help you be heard―ways to position yourself as a leader within your organization, ways to increase your influence, and ways to report bugs to get them fixed. In this interactive session, Jane leads you to a better understanding of how to be heard in your organization. Learn how to tailor your defect reports depending on who makes the decisions and their area of focus—customer, budget, or design. These details help you determine how to position your defect for action. Through real life examples, Jane shows you how to become a more influential tester.

Industry Technical Presentations ITP7: Tableau Public: Most Beautiful Data Visualization Site in the World or Stress Test for Tableau Server?
Chris Macey-Cushman, Tableau
11:30 AM - 12:30 PM
  • Architecture review of how we scale Tableau Server to serve millions of happy users
  • Stability metrics of how Tableau Public behaves over the course of beta releases of Tableau Server
  • Interesting issues found leading into our most recent releases

Concurrent Sessions T5: The Adventures of a First-Time Test Lead: An Unexpected Journey
Ioan Todoran, Expedia Affiliate Network
11:30 AM - 12:30 PM

When moving to a new position in your organization, you might not always feel confident—and that’s fine. If you have ever wondered how to change your mindset from “I need to learn from someone more experienced than I” to “I need to train and lead a team,” Ioan Todoran shares what he learned during his time as a first-time test team lead. Ioan shares lessons about recruitment (where and how to look for people), interviewing (forget the boring, interrogatory-style interviews; move toward a more conversational approach), training (how to prepare the new testers for work on a commercial project), and navigating through the daily management duties while keeping the automation work going on your project (stop micromanaging; help, but don't suffocate; learn to offer quick solutions). Learn how to establish better connections and communication channels with upper management while strengthening the relationships with your clients through an honest and direct approach.

Lunch - Meet the Speakers
12:30 PM - 1:30 PM

Concurrent Sessions T10: The Power of Pair Testing
Kirk Lee, Infusionsoft
1:30 PM - 2:30 PM

Perhaps you have heard of pair testing but are unaware of its tremendous benefits. Maybe you have tried pair testing in the past but were dissatisfied with the result. When done correctly, pair testing significantly increases quality, decreases overhead, and improves the relationship between testers and developers. Join Kirk Lee as he shares the essential points of this powerful technique that moves testing upstream and prevents defects from being committed to the codebase. Kirk explores how pair testing facilitates discussion, increases test effectiveness, promotes partnership, and provides cross training. Learn why testers and developers say they love pair testing. Kirk describes key tips to ensure success, including the amount of time required for the pair-testing session, the best way to run the session, and how to know when the session is complete. He provides specific steps to take before, during, and after the pair-testing session to make it even more effective.

Concurrent Sessions T9: Giving and Receiving Feedback: A New Imperative
Omar Bermudez, agilecafe.org
1:30 PM - 2:30 PM

Giving and receiving feedback are tough for everyone. Who wants to criticize others or be criticized? Although managers have a duty to give honest feedback to staff and peers, many people resist change or differ on how to change—leading to interpersonal conflicts and impacting deliverables. Omar Bermudez explains several techniques—Giving Positive Feedback, Acid Reflux (when you get that sick feeling), and SARA (Surprise, Anger, Rationalization, Acceptance)—that allow people to give and receive honest feedback to promote incremental improvements. Omar explains how to give accurate feedback to senior team members or direct superiors and receive the same from them, a skill critical to career advancement. To increase your self-esteem, happiness, and power to influence, Omar teaches you how to present feedback to your peers, your boss, or other colleagues in a diplomatic and efficient way. Take away key insights into how to create a healthy organizational culture with clear and constructive feedback.

Concurrent Sessions T12: If You Could Turn Back Time: Coaching New Testers
Christin Wiedemann, Professional Quality Assurance, Ltd.
Richard Lu, Professional Quality Assurance, Ltd.
1:30 PM - 2:30 PM

If you could turn back time, what do you wish you had known when you started working as a tester? When you are new to testing, you are faced with daunting challenges. Recent college graduates may find it difficult to apply academic knowledge in practice. It is easy to get discouraged and start questioning whether testing is really for you. Richard Lu and Christin Wiedemann relate their experiences of starting careers as software testers—with no prior testing experience. They share ideas for how senior testers can keep junior testers engaged, and encourage them to learn and step up in their roles. Easy-to-implement suggestions include explaining the company culture, encouraging relationship building, emphasizing communication, discussing the objective and value of testing, and talking about the different meanings of quality. Instead of leaving your team’s new hires to struggle, join this session and learn how to coach new testers to become their best.

Concurrent Sessions T11: Test Automation: Investment Today Pays Back Tomorrow
Al Wagner, IBM
1:30 PM - 2:30 PM

The results of a recent survey, authored by IBM and TechWell, showed that testers want to spend more time automating, more time planning, and more time designing tests—and less time setting up test environments and creating test data. So, where should testers and their organizations invest their time and money to achieve the desired results? What is the right level of technical ability for today’s testers to be successful? As this ongoing debate continues, the simple answer remains: It depends. Join Al Wagner as he explores the many opportunities in the world of testing and test automation. Consider the many approaches for building your automated testing skills and the solutions you create, weighing the pros and cons of each. Explore the options for test and dev organizations to consider to speed up releases and deliver more value to their companies. Leave with the ability to determine which approaches make sense for you and your employer.

Keynotes K4: The Next Decade of Agile Software Development and Test
J.B. Rainsberger, JBRAINS.CA
2:45 PM - 3:45 PM

After almost fifteen years of history with agile practices, J.B. Rainsberger sees some alarming trends in our attitudes, practices, and even what we teach about agile. At the same time, he sees some progress in approaches and technologies—e.g., behavior-driven development, naked planning, and continuous delivery. Sadly, we still have maturity models, complicated process checklists, and unnecessary certification schemes. In the coming decade, unless we begin to focus on fundamental ingredients absent from many agile teams, J.B. fears we are doomed to miss many opportunities for getting better. It's not good enough anymore just to be a great agile tester. J.B. says testers, programmers, product analysts, and managers must encourage workplace transformations so we can take full advantage of new tools and techniques. He shares a vision of these transformations and calls on testers and test managers, who work with all stakeholder groups, to stand up and lead us into the next decade of agile.
