Concurrent Sessions

Sessions are offered on Wednesday and Thursday at the conference and do not require pre-selection. Build your own custom learning schedule, or choose to follow one of our track schedules.

W1 The Seven Deadly Sins of Software Testing
Rex Black, RBCS, Inc.
Wednesday, April 10, 2013 - 10:30am - 11:30am

Many smart, otherwise-capable testers sabotage their own careers by committing one or more of the deadly sins of testing: irrelevance/redundancy, ignorance of relevant skills or facts, obstructionism, adversarialism, nit-picking, blindness to project/organizational priorities, and last-moment-ism. Are you your own worst enemy? Join Rex Black to discuss these seven deadly sins. You might recognize your own behaviors—or behaviors of others on your test team. Using case studies that illustrate these behaviors, Rex tells you how to stop them and solve the problems they have created. For sinners and non-sinners alike, Rex offers ideas on how to become a testing saint.

Learn more about Rex Black.
W2 The Role of Emotion in Testing
Michael Bolton, DevelopSense
Wednesday, April 10, 2013 - 10:30am - 11:30am

Software testing is a highly technical, logical, rational effort. There's no place for squishy emotional stuff here. Not among professional testers. Or is there? Because of commitment, risk, schedule, and money, emotions can run high in software development and testing. It is easy to become frustrated, confused, or bored; angry, impatient, and overwhelmed. However, Michael Bolton says that, if we choose to be aware of our emotions and are open to them, feelings can be a powerful source of information for testers, alerting us to problems in the product and in our approaches to our work. People don't decide things based on the numbers; they decide based on how they feel about the numbers. Our ideas about quality and bugs are rooted in our desires, which in turn are rooted in our feelings. You'll laugh, you'll cry...and you may be surprised as Michael discusses the important role that emotions play in excellent testing.

Learn more about Michael Bolton.
W3 The Tester's Role in Agile Planning
Rob Sabourin, AmiBug.com
Wednesday, April 10, 2013 - 10:30am - 11:30am

If testers sit passively through agile planning, important testing activities will be missed or glossed over. Testing late in the sprint becomes a bottleneck, quickly diminishing the advantages of agile development. However, testers can actively advocate for customers’ concerns while helping the team implement robust solutions. Rob Sabourin shows how testers contribute to the estimation, task definition, clarification, and scoping work required to implement user stories. Testers apply their elicitation skills to understand what users need, collecting great examples that explore typical, alternate, and error scenarios. Rob shares many examples of how agile stories can be broken into a variety of test-related tasks for implementing infrastructure, data, non-functional attributes, privacy, security, robustness, exploration, regression, and business rules. He also shares his experiences helping transform agile testers from passive planning participants into dynamic advocates who address the product owner’s critical business concerns, the team’s limited resources, and the project’s technical risks.

Learn more about Rob Sabourin.
W4 Usability Testing: Personas, Scenarios, Use Cases, and Test Cases
Koray Yitmen, UXservices
Wednesday, April 10, 2013 - 10:30am - 11:30am

To create better test cases, Koray Yitmen says you must know your users. And the path to better test case creation in usability testing starts with the segmentation and definition of users, a concept known as personas. Contrary to common marketing-oriented segmentation that focuses on users' demographic information, personas focus on users’ behavioral characteristics, animating them in the minds of designers, developers, and testers. Put these personas “on stage” and let them play their roles in user scenarios. Then, turn these scenarios into use cases and turn use cases into test cases—and you have created better test cases. Koray shares stories from his usability testing projects for multinational clients. Learn how to define personas and scenarios, and convert them into use cases and test cases. With concepts and skills drawn from engineering, psychology, sociology, and art, this is no ordinary test case creation session.
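For readers who want a concrete picture of the chain described above, here is a minimal sketch (not from Koray's materials; the persona, scenario, and test case are invented) showing how a persona's behavioral traits can flow through a scenario and use case into a concrete test case:

    # Hypothetical persona, scenario, and derived test case, illustrating the
    # persona -> scenario -> use case -> test case chain.
    persona = {
        "name": "Elif, 67, retired teacher",
        "traits": ["low vision", "prefers large text", "pays by bank transfer"],
    }

    scenario = ("Elif enlarges the text, searches for her usual grocery list, "
                "and checks out with a bank transfer.")

    use_case_steps = [
        "Increase text size to 150%",
        "Open the saved shopping list",
        "Add the list to the basket",
        "Pay by bank transfer",
    ]

    # One concrete test case derived from the use case.
    test_case = {
        "id": "TC-checkout-bank-transfer-large-text",
        "preconditions": ["a saved shopping list exists", "text size set to 150%"],
        "steps": use_case_steps,
        "expected": "Order is confirmed and the confirmation page stays readable at 150% zoom",
    }

    print(scenario)
    for step in test_case["steps"]:
        print("-", step)

The point of the sketch is traceability: each test case leads back to a behavior of a named persona rather than to an anonymous requirement.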

Learn more about Koray Yitmen.
W5 Creating Dissonance: Overcoming Organizational Bias toward Software Testing
Keith Klain, Barclays Capital
Wednesday, April 10, 2013 - 12:45pm - 1:45pm

Overcoming organizational bias toward software testing can be a key factor in the success of your testing effort. Negative bias toward testing can impact its perceived value—just as inaccurate positive bias can set your team up for failure through mismanaged expectations. A structured approach to identifying, understanding, and overcoming bias is an integral part of any successful enterprise testing strategy. Keith Klain describes the origins of organizational bias and what it means for your testing effort, including what the test team and the industry do—and don’t do—to support those perceptions. He explores what you can do to identify your particular organization’s bias toward testing, how it evolved, evidence of those attitudes, and what you can do to change perceptions. Through case studies, Keith shares his successes and failures in navigating and running change programs focused on software testing and discusses the obstacles he’s encountered—and overcome.

Learn more about Keith Klain.
W6 Concurrent Testing Games: Developers and Testers Working Together
Nate Oster, CodeSquads, LLC
Wednesday, April 10, 2013 - 12:45pm - 1:45pm

The best software development teams find ways for programmers and testers to work closely together. These teams recognize that programmers and testers each bring their own unique strengths and perspectives to the project. However, working in agile teams requires us to unlearn many of the patterns that traditional development taught us. In this interactive session with Nate Oster, learn how to use the agile practice of concurrent testing to overcome common testing dysfunctions by having programmers and testers work together—rather than against each other—to deliver quality results throughout an iteration. Join Nate and practice concurrent testing with games that demonstrate just how powerfully the wrong approaches can act against your best efforts and how agile techniques can help you escape the cycle of poor quality and late delivery. Bring your other team members to this session and get the full effect of these revealing and inspiring games!

Learn more about Nate Oster.
W7 Testing Challenges within Agile Teams
Janet Gregory, DragonFire, Inc.
Wednesday, April 10, 2013 - 12:45pm - 1:45pm

In her book Agile Testing: A Practical Guide for Testers and Agile Teams, Janet Gregory recommends using the automation pyramid as a model for test coverage. In the pyramid model, most automated tests are unit tests written and maintained by the programmers, plus tests that execute below the user interface—API-level tests that can be developed and maintained collaboratively by programmers and testers. However, as agile becomes mainstream, some circumstances may challenge this model. Many applications are moving logic back to the client side using languages such as JavaScript. Legacy systems written in languages such as COBOL don’t have access to unit testing frameworks. Janet shows how to adapt the model to your needs and addresses some of these automation issues. During the session, delegates are encouraged to share their challenges and success stories.
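For readers unfamiliar with these layers, the sketch below (illustrative only, not from Janet's book or session; the classes and names are hypothetical) contrasts a unit test of a pure function with an API-level test that exercises behavior just below the user interface:

    import unittest

    # --- production code (hypothetical) ----------------------------------
    def normalize_email(raw):
        """Pure function: the kind of code covered by unit tests at the base of the pyramid."""
        return raw.strip().lower()

    class AccountService:
        """A service-layer API sitting below the user interface."""
        def __init__(self):
            self._accounts = {}

        def register(self, email):
            email = normalize_email(email)
            if email in self._accounts:
                raise ValueError("duplicate account")
            self._accounts[email] = {"email": email}
            return self._accounts[email]

    # --- tests ------------------------------------------------------------
    class UnitLevel(unittest.TestCase):
        def test_normalize_email(self):
            self.assertEqual(normalize_email("  Ada@Example.COM "), "ada@example.com")

    class ApiLevel(unittest.TestCase):
        """Exercises behavior through the service API, with no UI involved."""
        def test_duplicate_registration_rejected(self):
            service = AccountService()
            service.register("ada@example.com")
            with self.assertRaises(ValueError):
                service.register("ADA@example.com")

    if __name__ == "__main__":
        unittest.main()

Both kinds of tests run without a user interface, which is what lets them form the broad, fast base of the pyramid.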

Learn more about Janet Gregory.
W8 Build Your Own Performance Test Lab in the Cloud
Leslie Segal, Testware Associates, Inc.
Wednesday, April 10, 2013 - 12:45pm - 1:45pm

Many cloud-based performance and load testing tools claim to offer “cost-effective, flexible, pay-as-you-go pricing.” However, the reality is often neither cost-effective nor flexible. With many vendors, you will be charged whether or not you use the time (not cost-effective), and you must pre-schedule test time (not always when you want and not always flexible). In addition, many roadblocks are thrown up—from locked-down environments that make it impossible to load test anything other than straightforward applications, to firewall, security, and IP spoofing issues. Join Leslie Segal to discover when it makes sense to set up your own cloud-based performance test lab, either as a stand-alone or as a supplement to your current lab. Learn about the differences in licensing tools, running load generators on virtual machines, the real costs, and data about various cloud providers. Take home a road map for setting up your own performance test lab—in less than twenty-four hours.

Learn more about Leslie Segal.
W9 Collaboration without Chaos
Griffin Jones, Congruent Compliance
Wednesday, April 10, 2013 - 2:00pm - 3:00pm

Sometimes software testers overvalue adherence to the collective wisdom embodied in organizational processes and the mechanical execution of tasks. Overly directive procedures work—to a point—projecting an impression of firm, clear control. But do they generate test results that are valuable to our stakeholders? Is there a way to orchestrate everyone’s creative contributions without inviting disorganized confusion? Is there a model that leverages the knowledge and creativity of the people doing the work, yet exerts reliable control in a non-directive way? Griffin Jones shares just such a model, describing its prescriptive versus discretionary parts and its dynamic and adaptive nature. Task activities are classified into types and control preferences. Griffin explores archetypes of control and their associated underlying values. Leave with an understanding of how you can leverage the wisdom and creativity of your people to make your testing more valuable and actionable.

Learn more about Griffin Jones.
W10 Cause-Effect Graphing: Rigorous Test Case Design
Gary Mogyorodi, Software Testing Services
Wednesday, April 10, 2013 - 2:00pm - 3:00pm

A tester’s toolbox today contains a number of test case design techniques—classification trees, pairwise testing, design of experiments-based methods, and combinatorial testing. Each of these methods is supported by automated tools. Tools provide consistency in test case design, which can increase the all-important test coverage in software testing. Cause-effect graphing, another test design technique, is superior from a test coverage perspective, reducing the number of test cases needed to provide excellent coverage. Gary Mogyorodi describes these black box test case design techniques, summarizes the advantages and disadvantages of each technique, and provides a comparison of the features of the tools that support them. Using an example problem, he compares the number of test cases derived and the test coverage obtained using each technique, highlighting the advantages of cause-effect graphing. Join Gary to see what new techniques you might want to add to your toolbox.
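As a rough illustration of the idea behind cause-effect graphing (a sketch, not Gary's example; the business rule is invented), causes and effects are modeled as Boolean logic, and a small set of test cases is chosen to show each cause independently influencing the effect rather than enumerating every combination:

    from itertools import product

    # Hypothetical rule: a discount applies if the customer is a member
    # AND the order exceeds $100, OR a promo code is supplied.
    def discount_applies(is_member, over_100, has_promo):
        return (is_member and over_100) or has_promo

    causes = ["is_member", "over_100", "has_promo"]

    # Exhaustive combinations: 2^3 = 8 test cases.
    exhaustive = list(product([True, False], repeat=3))

    # A cause-effect style selection: each case is chosen to show that
    # changing one cause alone changes the effect.
    selected = [
        (True,  True,  False),   # member + large order -> discount
        (False, True,  False),   # remove membership    -> no discount
        (True,  False, False),   # shrink the order     -> no discount
        (False, False, True),    # promo code alone     -> discount
        (False, False, False),   # nothing              -> no discount
    ]

    print(f"exhaustive: {len(exhaustive)} cases, selected: {len(selected)} cases")
    for case in selected:
        print(dict(zip(causes, case)), "->", discount_applies(*case))

Even in this toy case the selected set is smaller than the exhaustive one; with dozens of causes on a real rule set, that reduction is what makes the technique attractive.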

Learn more about Gary Mogyorodi.
W11 An Agile Test Automation Strategy for Everyone
Gerard Meszaros, Independent Consultant
Wednesday, April 10, 2013 - 2:00pm - 3:00pm

Most systems are not designed to make test automation easy! Fortunately, the whole-team approach, prescribed by most agile methodologies, gives us an opportunity to break out of this rut. Gerard Meszaros describes the essential elements of a practical and proven agile test automation strategy. He describes the different kinds of tests we need to have in place and which team members should prepare and automate each kind of test. All project roles—software developers, testers, BAs, product owner or product manager, and even the software architect—have a part to play in this strategy, and all will benefit from a deeper understanding. Join Gerard and learn how to avoid the doomed classical approach of automating your manual tests and hoping that they find new bugs.

Learn more about Gerard Meszaros.
W12 How Spotify Tests World Class Apps
Alexander Andelkovic, Spotify
Wednesday, April 10, 2013 - 2:00pm - 3:00pm

In today’s competitive world, more and more HTML5 applications are being developed for mobile and desktop platforms. Spotify has partnered with world-renowned organizations to create high quality apps to enrich the user experience. Testing a single application within a few months can be a challenge. But it's a totally different beast to test multiple world-class music discovery apps every week. Alexander Andelkovic shares insights into the challenges Spotify faces coordinating all aspects of app testing to meet its stringent testing requirements. Alexander describes how an agile Kanban process helps manage this work. He shares lessons learned, including the need to manage acceptable quality levels, support, smoke tests, and development guidelines. If you are thinking of starting agile app development or want to streamline your current app development process, Alexander’s experience gives you an excellent starting point.

Learn more about Alexander Andelkovic.
T1 Maybe We Don’t Have to Test It
Eric Jacobson, Turner Broadcasting, Inc.
Thursday, April 11, 2013 - 10:30am - 11:30am

Testers have been taught they are responsible for all testing. Some even say “It’s not tested until I run the product myself.” Eric Jacobson thinks this old-school way of thinking can hurt a tester’s reputation and—even worse—may threaten team success. Learning to recognize opportunities where you may NOT have to test can eliminate bottlenecks and make you everyone’s favorite tester. Eric shares eight patterns from his personal experiences where not testing was the best approach. Examples include patches for critical production problems that can’t get worse, features that are too technical for the tester, cosmetic bug fixes with substantial test setup, and more. Challenge your natural testing assumptions. Become more comfortable with approaches that don’t require testing. Eliminate waste in your testing process by asking, “Does this need to be tested? By me?” Take back ideas for managing decisions not to test, including using lightweight documentation to justify them. Not testing may actually be a means to better testing.

Learn more about Eric Jacobson.
T2 Whiteboarding—for Testers, Developers, and Customers, Too
Rob Sabourin, AmiBug.com
Thursday, April 11, 2013 - 10:30am - 11:30am

How can testers spend more time doing productive testing and waste less time preparing "useless" project documentation? Rob Sabourin employs whiteboarding techniques to enable faster, easier, and more powerful communication and collaboration—without all the paperwork. Rob uses whiteboarding to help identify technical risks, understand user needs, and focus testing on what really matters to business stakeholders. Whiteboard block diagrams visualize technical risk to stakeholders. Whiteboard fault models highlight failure modes to developers and testers. Testers can elicit usage scenarios directly from customers using storyboard diagrams. Rob shows how simple whiteboarding strategies help testers learn new concepts, design better tests, and estimate their activities. Rob shares his experiences whiteboarding all kinds of visual models: time sequences, block diagrams, storyboards, state models, control flows, data flows, and mind maps. Save time and avoid the painful back-and-forth of written document reviews that testers, developers, customers, and users often come to despise.

Learn more about Rob Sabourin.
T3 It Seemed a Good Idea at the Time: Intelligent Mistakes in Test Automation
Dorothy Graham, Software Test Consultant
Thursday, April 11, 2013 - 10:30am - 11:30am

Some test automation ideas seem very sensible at first glance but contain pitfalls and problems that can and should be avoided. Dot Graham describes five of these “intelligent mistakes”:
1. Automated tests will find more bugs quicker. (Automation doesn’t find bugs, tests do.)
2. Spending a lot on a tool must guarantee great benefits. (Good automation does not come “out of the box” and is not automatic.)
3. Let’s automate all of our manual tests. (This may not give you better or faster testing, and you will miss out on some benefits.)
4. Tools are expensive so we have to show a return on investment. (This is not only surprisingly difficult but may actually be harmful.)
5. Because they are called “testing tools,” they must be tools for testers to use. (Making testers become test automators may be damaging to both testing and automation.)
Join Dot for a rousing discussion of “intelligent mistakes”—so you can be smart enough to avoid them.

Learn more about Dorothy Graham.
T4 Bad Testing Metrics—and What To Do About Them
Paul Holland, Testing Thoughts
Thursday, April 11, 2013 - 10:30am - 11:30am

Many organizations use software testing metrics extensively to determine the status of their projects and whether or not their products are ready to ship. Unfortunately, most, if not all, of the metrics in use are so flawed that they are not only useless but possibly dangerous—misleading decision makers, inadvertently encouraging unwanted behavior, or providing overly simplistic summaries out of context. Paul Holland reviews Goodhart’s Law and its applicability to software testing metrics. Paul identifies four characteristics that will enable you to recognize the bad metrics in your organization. Although he shows that the majority of metrics used today are bad, all is not lost: Paul shares the more effective collection of information he has developed. Learn how to create status reports that provide the details sought by upper management—and avoid the problems that bad metrics cause.

Learn more about Paul Holland.
T5 Snappy Visualizations for Test Communications
Thomas Vaniotis, Liquidnet
Thursday, April 11, 2013 - 12:45pm - 1:45pm

Do you struggle to find the best way to explain your testing status and coverage to your stakeholders? Do numbers and metrics make your stakeholders’ eyes glaze over, or, even worse, do you feel dirty giving metrics that you know are going to be abused? Do you have challenges explaining your strategy to fellow testers and developers? Visualizations are a great way to turn raw data into powerful communications. Thomas Vaniotis presents eleven powerful visual tools that can be created easily with simple materials around the office—sticky notes, graph paper, markers, and whiteboards. Thomas shows you how to use these tools to facilitate conversations about testing status, product quality, test planning, and risk—and how to combine them to provide a holistic view of the project status. Join Thomas in a hands-on, interactive session to construct examples of some of these visuals. Return to your office with a toolkit of powerful, snappy images to take your test communication to the next level.

Learn more about Thomas Vaniotis.
T6 Using Mindmaps to Develop a Test Strategy
Fiona Charles, Quality Intelligence
Thursday, April 11, 2013 - 12:45pm - 1:45pm

Your test strategy is the design behind your plan—the set of big-picture ideas that embodies the overarching direction of your test effort. It captures the stakeholders’ values that will inspire, influence, and ultimately drive your testing. It guides your overall decisions about the ways and means of delivering on those values. The weighty test strategy template mandated in many organizations is not conducive to thinking through the important elements of a test strategy and then communicating its essentials to your stakeholders. A lightweight medium like a mindmap is far more flexible and direct. In this interactive session Fiona Charles works with you to develop your own strategic ideas in a mindmap, exploring along the way what really matters in a test strategy and how best to capture it using a mindmap. Fiona shares tips on how to use your mindmap to engage your stakeholders’ interest, understanding, and buy-in to your strategy.

Learn more about Fiona Charles.
T7 Designing Self-maintaining UI Tests for Web Applications
Marcus Merrell, WhaleShark Media, Inc.
Thursday, April 11, 2013 - 12:45pm - 1:45pm

Test automation scripts are in a constant state of obsolescence. New features are added, changes are made, and testers learn about these changes long after they've been implemented. Marcus Merrell helped design a system in which a "model" is created each time a developer changes code that affects the UI. That model is checked against the suite of automated tests for validity. Changes that break the tests are apparent to the developer before his code is even checked in. Then, when features are added, the model is regenerated and automation can immediately address brand-new areas of the UI. Marcus describes fundamental test design and architecture best practices, applicable to any project. Then he demonstrates this new approach: parsing an application's presentation layer to generate an addressable model for testing. Marcus shows several case studies and successful implementations, as well as an open-source project that can have you prototyping your own model before you leave for home.
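The sketch below is not Marcus's system; it is a hypothetical, much-simplified illustration of the underlying idea: each build regenerates a model of the UI, and every automated test is validated against that model, so a renamed or removed element is flagged before the change is checked in:

    # Hypothetical UI model, regenerated on every build (in a real system it
    # would be derived from the application's presentation layer).
    ui_model = {
        "login_page": {"username", "password", "submit"},
        "search_page": {"query", "search_button", "results_table"},
    }

    # Elements each automated test refers to (also hypothetical).
    test_suite = {
        "test_valid_login":  [("login_page", "username"),
                              ("login_page", "password"),
                              ("login_page", "submit")],
        "test_basic_search": [("search_page", "query"),
                              ("search_page", "go_button")],  # renamed in this build
    }

    def validate(model, suite):
        """Report tests that reference elements missing from the current UI model."""
        broken = {}
        for test, refs in suite.items():
            missing = [f"{page}.{element}" for page, element in refs
                       if element not in model.get(page, set())]
            if missing:
                broken[test] = missing
        return broken

    for test, missing in validate(ui_model, test_suite).items():
        print(f"{test} would break: missing {missing}")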

Learn more about Marcus Merrell.
T8 Testing After You’ve Finished Testing
Jon Bach, eBay, Inc.
Thursday, April 11, 2013 - 12:45pm - 1:45pm

Stakeholders always want to release when they think we’ve “finished testing.” They believe we have revealed “all of the important problems” and “verified all of the fixes,” and now it’s time to reap the rewards. However, as testers we can still assist in improving the software by learning about problems after code has rolled “live-to-site”—especially if it’s a website. At eBay we have a post-ship “site quality” mindset in which testers continue to learn from A/B testing, operational issues, customer sentiment analysis, discussion forums, and customer call patterns—to name just a few sources. Jon Bach explains how and what eBay’s Live Site Quality team learns every day about what they just released to production. Take away some ideas on what you can do to test and improve value—even after you’ve shipped.

Learn more about Jon Bach.
T9 Risk-based Testing: Not for the Fainthearted
George Wilkinson, Grove Consultants
Thursday, April 11, 2013 - 2:00pm - 3:00pm

If you’ve tried to make testing really count, you know that “risk” plays a fundamental part in deciding where to direct your testing efforts and how much testing is enough. Unfortunately, project managers often do not understand or fully appreciate the test team’s view of risk—until it is too late. Is it their problem or is it ours? After spending a year on a challenging project that was set up as purely a risk mitigation exercise, George Wilkinson saw first-hand how risk management can play a vital role in providing focus for our testing activities, and how sometimes we as testers need to improve our communication of those risks to the project stakeholders. George provides a foundation for anyone who is serious about understanding risk and employing risk-based testing on projects. He describes actions and behaviors we should demonstrate to ensure the risks are understood, thus allowing us to be more effective during testing.

Learn more about George Wilkinson.
T10 Quantifying the Value of Static Analysis
William Oliver, Lawrence Livermore National Laboratory
Thursday, April 11, 2013 - 2:00pm - 3:00pm

During the past ten years, static analysis tools have become a vital part of software development for many organizations. However, the question arises, “Can we quantify the benefits of static analysis?” William Oliver presents the results of a Lawrence Livermore National Laboratory study that first measured the cost of finding software defects using formal testing on a system without static analysis; then, the team integrated a static analysis tool into the process and, over a period of time, recalculated the cost of finding software defects. Join William as he shares the results of the study and discusses the value and benefits of static analysis. Learn how commercial and open source analysis tools can perform sophisticated source code analysis over large code bases. Take back proof that employing static analysis can not only reduce the time and cost of finding defects and their subsequent debugging but can ultimately reduce the number of defects making their way into your releases.
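As a small illustration of the kind of defect static analysis finds without executing any code (an invented snippet, unrelated to the study), a checker such as pyflakes or pylint would typically flag the undefined name below, whereas dynamic testing only catches it if some test drives execution through that line:

    # example.py -- run, for instance:  pyflakes example.py
    def summarize(orders):
        total = 0
        for order in orders:
            total += order["amount"]
        # Intentional defect: 'totl' is undefined. A static analyzer reports
        # this immediately, without any test data or test execution.
        return "Total: " + str(totl)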

Learn more about William Oliver.
T11 Mobile App Testing: Moving Outside the Lab
Chris Munroe
Thursday, April 11, 2013 - 2:00pm - 3:00pm

No matter how thorough the test team or how expansive the test lab, Chris Munroe knows that defects still abound in mobile apps after launch. With more “non-software” companies launching mobile apps every day, testers have increased pressure to ensure apps are secure and function as intended. In retail and media especially, audiences are incredibly diverse and expect apps to work every time, everywhere, and on every device. These expectations make it imperative for companies to take every possible step to make their mobile apps defect free. This is increasingly difficult to do when all your testing occurs within the confines of the lab—and your users live in the wild. Using real-world examples from USA Today, Chris identifies why you need to test your mobile apps both inside and outside the lab—and do so in a way that is secure, effective, and timely.

Learn more about Chris Munroe.
T12 Driving Down Requirements Defects: A Tester’s Dream Come True
Richard Bender, BenderRBT
Thursday, April 11, 2013 - 2:00pm - 3:00pm

The software industry knows that the majority of software defects have their root cause in poor requirements. So how can testers help improve requirements? Richard Bender asserts that requirements quality significantly improves when testers systematically validate the requirements as they are developed. Applying scenario-driven reviews ensures that the requirements have the proper focus and scope. Ambiguity reviews quantitatively identify unclear areas of the specification leading to early defect detection and defect avoidance. Modeling the requirements via cause-effect graphing helps find missing requirements and identifies logical inconsistencies. Going further with this approach, domain experts and developers should review the tests derived from the requirements models to find additional defects. Join Richard to learn how testers—applying these processes in partnership with analysts—can reduce the percentage of defects caused by poor requirements from the usual 55–60 percent down to low single digits.

Learn more about Richard Bender.
T13 Changing the Testing Conversation from Cost to Value
Iain McCowatt
Thursday, April 11, 2013 - 3:15pm - 4:15pm

The software testing business is in the grip of a commoditization trend in which enterprises routinely flip-flop between vendors—vendors who are engaged in a race to the bottom on price. This trend introduces perverse incentives for service providers, undervalues skill, and places excessive emphasis on processes, tools, and methods. The result is a dumbing down of testing and the creation of testing services that are little more than placebos. Using examples drawn from three recent projects in the banking industry, Iain McCowatt explores the dynamics of commoditization and introduces a quality model that can be used for framing the value of testing services. As a testing vendor, learn how to pursue a differentiation strategy, shifting the emphasis of the testing conversation from cost to value; as a customer of testing, learn how to make informed decisions about the value of what you are buying; as a tester, learn how to buck the trend and find professional growth.

Learn more about Iain McCowatt.
T14 Structural Testing: When Quality Really Matters
Jamie Mitchell, Jamie Mitchell Consulting, Inc.
Thursday, April 11, 2013 - 3:15pm - 4:15pm

Jamie Mitchell explores an underused and often forgotten test technique—white-box testing. Also known as structural testing, this technique requires some programming expertise and access to the code. Using only black-box testing, you could easily ship a system having tested only 50 percent or less of the code base. Are you comfortable with that? For mission-critical systems, such low test code coverage is clearly insufficient. Although you might believe that the developers have performed sufficient unit and integration testing, how do you know that they have achieved the level of coverage that your project requires? Jamie describes the levels of code coverage that the business and your customers may need—from statement coverage to modified condition/decision coverage. He explains when you should strive to achieve different code coverage target levels and leads you through examples of pseudocode. Even if you have no personal programming experience, understanding structural testing will make you a better tester. So, join Jamie in this code-diving session.
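As a quick illustration of why these coverage levels differ (a sketch with an invented rule, not material from the session):

    def ticket_price(age, is_student):
        price = 20
        if age < 18 or is_student:
            price = 10            # discounted ticket
        return price

    # A single test reaches every statement (100 percent statement coverage):
    assert ticket_price(15, False) == 10

    # ...but the "no discount" branch was never taken. Branch/decision
    # coverage also needs a case where the whole condition is false:
    assert ticket_price(30, False) == 20

    # Modified condition/decision coverage (MC/DC) goes further: each atomic
    # condition must be shown to flip the decision on its own:
    assert ticket_price(30, True) == 10   # is_student alone triggers the discount
    assert ticket_price(15, False) == 10  # age alone triggers the discount

Statement coverage is satisfied by the first test alone; the later tests are what branch and MC/DC coverage demand, which is why the higher levels matter for mission-critical code.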

Learn more about Jamie Mitchell.
T15 Android Mobile Development: A Test-driven Approach
Jeff "Cheezy" Morgan
Levi Wilson
Thursday, April 11, 2013 - 3:15pm - 4:15pm

Few topics are hotter these days than mobile software development. It seems that every company is rushing to release its own mobile application. However, when it comes time to build that software, companies quickly discover that things are different now. Many developers claim that it is very difficult, if not impossible, to test-drive an application. Traditional testing tools are unable to automate the application in the emulator or on the device, so testers are usually left with a manual testing approach. Join Cheezy Morgan and Levi Wilson as they reveal the secret of delivering a fully-tested, high-quality Android application. Using an acceptance test-driven approach, Cheezy will write automated tests prior to development. While Levi is test-driving the Android code to make Cheezy’s tests pass, Cheezy will perform exploratory testing on Levi’s unfinished work. This fast-paced, hands-on session will demonstrate how close collaboration and test automation can be used to successfully deliver high-quality mobile applications.

Learn more about Jeff "Cheezy" Morgan.
T16 Integrating Canadian Accessibility Requirements into Your Projects
Dan Shire, IBM Canada
David Best, IBM Canada
Thursday, April 11, 2013 - 3:15pm - 4:15pm

In 2014, most Canadian businesses will face significant challenges as government regulations go into effect, requiring websites to be accessible to users with disabilities. Are your project teams knowledgeable about the technical accessibility standards? Is your business ready to comply with the regulations? Dan Shire and David Best review the key principles of web accessibility (WCAG 2.0) and the government regulations (including Ontario’s AODA) that your organization must meet. Dan provides specific guidance on planning and executing effective accessibility testing and on building your test team’s skills. David demonstrates testing tools and techniques, including the use of assistive technology such as the JAWS screen reader. Together, they will review IBM’s practical experiences: focusing your testing efforts on the most critical standards, selecting your testing tools, building and training your test teams, and prioritizing the results of your accessibility testing to achieve the maximum benefits for the business while minimizing cost and schedule impacts to the project.
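As one small illustration of the kind of automated check that can complement assistive-technology testing (a sketch, not the speakers' or IBM's tooling; the HTML is invented), WCAG 2.0 success criterion 1.1.1 requires text alternatives for images, and missing alt attributes can be screened for mechanically:

    from bs4 import BeautifulSoup  # pip install beautifulsoup4

    html = """
    <img src="logo.png" alt="Company logo">
    <img src="chart.png">
    <img src="spacer.gif" alt="">
    """

    soup = BeautifulSoup(html, "html.parser")
    for img in soup.find_all("img"):
        if img.get("alt") is None:
            # Missing alt attribute: a likely WCAG 2.0 SC 1.1.1 failure.
            print("No text alternative:", img.get("src"))

Checks like this catch only the mechanical failures; judging whether an alt text is actually meaningful still requires a human, which is why the session covers both tools and assistive technology such as JAWS.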

Learn more about Dan Shire.