STARWEST Software Testing Analysis & Review
 
STARWEST 2010 Pre-conference Tutorials


Tutorials for Tuesday, September 28, 2010  8:30 a.m. — 4:30 p.m.
TA  
Becoming an Influential Test Team Leader              
Randy Rice, Rice Consulting Services, Inc.
 
Have you been thrust into the role of test team leader? Are you in this role now and want to hone your leadership skills? Test team leadership has many unique challenges, and many test team leaders—especially new ones—find themselves ill-equipped to deal with the problems they face. The test team leader must motivate and support the team while keeping testing on track, within time and budget constraints. Randy Rice focuses on how you can grow as a leader, influence your team and those around you, and positively impact those outside your team. Learn how to become a person of influence, deal with interpersonal issues, and help your team members build their skills and their value to the team and the organization. Discover how to communicate your team’s value to management, how to stand firm when asked to compromise principles, and how to learn from your successes and failures. Develop your own action plan to become an influential test team leader.
Learn more about Randy Rice  
 
TB  
Successful Test Automation
Dorothy Graham, Software Testing Consultant
 

Many organizations never achieve the significant benefits that are promised from automated test execution tools. What are the secrets to test automation success? There are no secrets, but the paths to success are not commonly understood. Dorothy Graham describes the most important automation issues that you must address, both management and technical, and helps you understand and choose the best approaches for your organization—no matter which automation tools you use. If you don’t begin with good objectives for your automation, you will set yourself up for failure later. If you don’t show a return on investment (ROI) from automation, your automation efforts may be doomed, no matter how good they are technically. Join Dorothy to learn how to identify achievable and realistic objectives for automation, show ROI from automation, understand how testware architecture affects scalability, pick up practical tips for technical issues, learn what works in practice, and devise an effective automation strategy.

  
Learn more about Dorothy Graham  
 
TC  
Test Process Improvement with TPI® Next      
Martin Pol & Ruud Teunissen, POLTEQ IT Services BV
 
Looking for a systematic approach to improve the maturity of your organization’s test process? Want to know how your test practices compare to other organizations and to industry standards? Join Martin Pol and Ruud Teunissen for an introduction to TPI® Next, the updated version of the TPI® (Test Process Improvement) model—the most popular industry standard for test process maturity assessment. Improving your testing requires three things: understanding key test process areas, knowing your current position in each of these areas, and having the tools and skills to implement needed improvements. Rather than guessing what you should do, begin with the TPI® Next model as your guide. Using real-world assessments they have performed as examples, Martin and Ruud describe a practical assessment approach that is suitable for both small, informal organizations and larger companies with more formal processes. Participate in exercises to practice applying the model to your situation. Take back valuable references, templates, examples, Web links, and the results of a quick assessment of your own organization.
 
TPI® Next is a registered trademark of Sogeti USA LLC.
   
Learn more about Martin Pol & Ruud Teunissen  
 
TD  
Finding Ambiguities in Requirements
Richard Bender, Bender RBT, Inc.
 
Through the years, studies have shown that poor requirements are one of the most significant contributors to project failure—and that half of all defects have their origin in bad requirements. We know that the earlier a defect is found, the cheaper it is to fix. Our experience tells us that if specifications are ambiguous, there is nearly a 100% chance that there will be one or more defects in the corresponding code. Richard Bender explains how to review specifications quickly and quantitatively to identify what is unclear about them. Learn how your feedback can lead to early defect detection and future defect avoidance. Discover how applying these review techniques can reduce the ambiguity rate by 95% on subsequent specifications and how that translates into a significant reduction in the number of defects in the code even before testing begins. Join Richard to learn how this process can also be applied to design specifications, user manuals, training materials, and online help, as well as agreements and contracts, ensuring clarity of communication.
Learn more about Richard Bender  
 
TE  
Testing Enterprise Applications and ERP Systems    
Linda Hayes, Worksoft, Inc.
 
Are you facing a major upgrade or rollout of an enterprise application or ERP system and wondering how you can possibly test it with the time and team you have? If so, Linda Hayes has the answers. Classic testing approaches are designed for internally developed systems and generally aren’t applicable to packaged applications. Testing package and ERP deployments is all about validating end-to-end business processes and the rules that govern them in a way that removes or reduces the greatest operational risks—all within tight resource and schedule constraints. Learn how to apply a top-down risk assessment model and a bottom-up business process validation approach that can be executed manually or automated. Discover the roles, skills, and tools you need to succeed and how to persuade your management to provide them. Take away a practical, systematic project plan that delivers measurable results for testing enterprise applications and ERP implementations while creating a reusable foundation for future changes that are certain to come.
 Learn more about Linda Hayes  
 
TF  
Software Performance Testing—Preparing for a Successful Test  
Dale Perry, Software Quality Engineering
What does it take to properly plan, implement, and report the results of a performance test? What factors need to be considered? What is your performance test tool telling you? Do you really need a performance test? Is it worth the cost? These questions plague all performance testers. In addition, many performance tests do not appear to be worth the time it takes to run them, and the results never seem to resemble—let alone predict—production system behavior. Performance tests are some of the most difficult tests to create and run, and most organizations don’t fully appreciate the time and effort required to properly perform them. Dale Perry discusses the key issues and realities of performance testing—what can and cannot be done with a performance test, what is required to do a performance test, and how to present what the test “really” tells you.
Learn more about Dale Perry  
 Tutorials for Tuesday, September 28, 2010  8:30 a.m. — 12:00 p.m.
TG  
A Test Leader's Guide to Going Agile    
Bob Galen, iContact
 
Much of the work of moving traditional test teams toward agile methods focuses on the individual tester. Often, the roles of test director, test manager, test team leader, and test-centric project manager are marginalized—but not in this workshop, where we’ll focus on agile testing from the test leader’s perspective. Join experienced agile test leader and long-time coach Bob Galen to explore the central leadership challenges associated with agile adoption: how to transform your team’s skills toward agile practices, how to hire agile testers, how to create a “whole-team” view toward quality by focusing on executable requirements, and how to create powerful done-ness criteria. Beyond the tactical leadership issues, Bob explores strategies for becoming a partner in agile adoption pilot projects, changing your test automation strategies, and reinventing your traditional planning and metrics with more agile-centric approaches that engage stakeholders.
Learn more about Bob Galen  
 
TH  
Exploratory Software Testing Interactive 
Jonathan Kohl, Kohl Concepts, Inc.
 

Exploratory testing is an approach to testing that emphasizes the freedom and responsibility of the tester to continually optimize the value of his work. It is the process of three mutually supportive activities performed in parallel—learning, test design, and test execution. With skill and practice, exploratory testers typically uncover considerably more problems than when the same amount of effort is spent on scripted testing. All testers conduct exploratory testing in one way or another, but few know how to do it systematically to obtain the greatest benefits. Even fewer testers can articulate the process. Jonathan Kohl describes specific heuristics and techniques of exploratory testing to help you get the most from this highly productive approach. Jonathan focuses on the skills and dynamics of exploratory testing itself and how it can be combined with scripted approaches.

   Laptop required. This is a hands-on course. A laptop—preferably with Microsoft Windows capability—is required for some of the exercises.

 
Learn more about Jonathan Kohl  
 
TI  
James Bach on Testing     
James Bach, Satisfice, Inc.
 
In the late eighties, James Bach moved from programming to begin his career as a tester. Although he read books on testing, most of them seemed firmly—even proudly—out of touch with the gritty and colorful reality of software innovation. Most test managers seemed happy to go along with documentation templates that no one understood and terminology that was vague or contradictory. Encouraged by his management at Apple Computer, James began to rethink and reinvent the way he tested. Through his long and cluttered testing career he has brought ideas like exploratory testing and generative heuristics from obscurity into common use within the testing community. In this enlightening session, James summarizes the way he sees the state of the testing art today and discusses the core of testing excellence as he understands it—it's all about people using their brains, so training methods are of paramount importance. Join James Bach to explore a wide range of fascinating testing topics from his experiences applying exploratory testing to regulated software and his recent coaching of more than one hundred testers around the world via Skype.
Learn more about James Bach  
 
TJ  
Essential Test Management and Planning 
Rick Craig, Software Quality Engineering
 
The key to successful testing is effective and timely planning. Rick Craig introduces proven test planning methods and techniques, including the Master Test Plan and level-specific test plans for acceptance, system, integration, and unit testing. Rick explains how to customize an IEEE-829-style test plan and test summary report to fit your organization’s needs. Learn how to manage test activities, estimate test efforts, and achieve buy-in. Discover a practical risk analysis technique to prioritize your testing and become more effective with limited resources. Rick offers test measurement and reporting recommendations for monitoring the testing process. Discover new methods and develop renewed energy for taking test management to the next level in your organization.  
Learn more about Rick Craig  
 Tutorials for Tuesday, September 28, 2010 1:00 p.m. — 4:30 p.m.
TK  
How to Break Embedded Software      
Jon Hagar, Consultant  
 
In the tradition of James Whittaker’s book series How to Break … Software, Jon Hagar teaches you how to apply the “attack” concept for testing embedded software systems. Jon defines the sub-domains of embedded software and examines the issues of product failure and recall caused by defects in each. Jon shares and demonstrates a set of software attacks based on common failure modes in embedded software. He targets operating systems, computation and control structures, clock-time factors, interrupts, data, hardware-software interfaces, user interfaces, and communications. For each specific attack, Jon explains when and how to conduct the attack, who should conduct the attack, where it can be executed, why the attack works, and what to look for during the attack. To apply their new embedded testing skills, participants will have the opportunity to practice attacks on a robot device that Jon will bring to class.
Learn more about Jon Hagar  
 
TL  
Exploratory Testing: Now in Session    
Jon Bach, Quardev, Inc.
 
The nature of exploration, coupled with the ability of testers to rapidly apply their skills and experience, makes exploratory testing a widely used test approach—especially when time is short. Unfortunately, exploratory testing is often dismissed by project managers who assume that it is not reproducible, measurable, or accountable. If you have these concerns, you may find a solution in a technique called Session-Based Test Management (SBTM), developed by Jon Bach and his brother James specifically to address these issues. In SBTM, testers are assigned areas of a product to explore, and testing is time-boxed in “sessions” that have mission statements called “charters” to create a meaningful and countable unit of work. Jon Bach discusses—and you practice—the skills of exploration using the SBTM approach. He demonstrates a freely available, open source tool to help manage your exploration and prepares you to implement SBTM in your test organization.

 Laptop required.
 
Learn more about Jon Bach  
 
TM  
State Model Testing         
Rob Sabourin, AmiBug.com
 
State models are tools for describing complex system behaviors in a precise way. You’ll find stateful systems when transactions flow from inception to completion, when embedded devices respond in real time to external triggers, or when a system must respond in a specific way depending on its current input and what has transpired before. State diagrams provide a valuable, visual model to help testers focus their test designs and discover important bugs buried deep below the realm of traditional testing. Rob Sabourin demystifies state models and shows you how to test a state model by examining issues such as: Can we visit every state? Are there redundant states? Can a system be in multiple states concurrently? Can a transaction become stuck in a state? Rob describes how you can employ state models to design robust test cases and how exploratory testers can observe states from visual, data-centric, and historic perspectives. Examples from embedded systems, financial services, intellectual property management, systems security, medical software, and interactive gaming take you through state model-based test design from problem to solution.  
Learn more about Rob Sabourin  
 
TN  
Planning Your Agile Testing: A Practical Guide         SOLD OUT
Janet Gregory, DragonFire, Inc.
 
Traditional test plans are incompatible with agile software development because we don't know all the details about all the requirements up front. However, in an agile software release, you still must decide what types of testing activities will be required—and when you need to schedule them. Janet Gregory explains how to use the Agile Testing Quadrants, a model identifying the different purposes of testing, to help your team understand your testing needs as you plan the next release. Janet introduces you to alternative, lightweight test planning tools that allow you to plan and communicate your big picture testing needs and risks. Learn how to decide who does what testing—and when. Determine what types of testing to consider when planning an agile release, the infrastructure and environments needed for testing, what goes into an agile “test plan,” how to plan for acquiring test data, and lightweight approaches for documenting your tests and recording test results.  
Learn more about Janet Gregory  
 
TO  
Measurement and Metrics for Test Managers 
Rick Craig, Software Quality Engineering
 

To be most effective, test managers must develop and use metrics to help direct the testing effort and make informed recommendations about the software’s release readiness and associated risks. Because one important testing activity is to “measure” the quality of the software, test managers must measure the results of both the development and testing processes. Collecting, analyzing, and using metrics is complicated because many developers and testers are concerned that the metrics will be used against them. Join Rick Craig as he addresses common metrics—measures of product quality, defect removal efficiency, defect density, defect arrival rate, and testing status. Learn the guidelines for developing a test measurement program, rules of thumb for collecting data, and ways to avoid “metrics dysfunction.” Rick identifies several metrics paradigms, including Goal-Question-Metric, and discusses the pros and cons of each.

Participants are urged to bring their metrics problems and issues for use as discussion points.

 
Learn more about Rick Craig  
 
