STAREAST 2009 Preconference Tutorials


Tutorials for Tuesday, May 5, 2009 — 8:30 a.m. to 4:30 p.m.
TA  
Adapting to Agile 
Dale Emery, Software Testing Consultant

When a development team adopts an agile process, such as Scrum or XP, testers find that their traditional practices no longer fit. The extensive up-front test planning and heavyweight test documentation used in traditional development environments just get in the way in an agile world. In this experiential workshop, you experience the transition to agile through a paper-based simulation (no programming required). In a series of iterations, the team attempts to deliver a product that the customer is willing to buy, thus generating revenue for the company. As with real projects, producing a working product on a tight schedule can be challenging. After each iteration, your team reflects on key events and adjusts to increase productivity for the next iteration. Learn to apply the principles of visibility, feedback, communication, and collaboration to increase the team’s rate of delivery. By the end of the workshop, you will have an intuitive understanding of agile and, in particular, the shifting role of Test/QA in agile development.

 
Learn more about Dale Emery  

TB  
How to Build, Support, and Add Value to Your Test Team 
Lloyd Roden and Julie Gardiner, Grove Consultants
As a new or current test manager, you may have many questions—How do I create a new team? How can I help my current team become more efficient and effective? How can I build my organization’s confidence in our work? How can I find needed resources? Based on a people-oriented—rather than task-oriented—approach to software testing, Lloyd Roden and Julie Gardiner describe how to build and retain successful test teams. Discover the characteristics of successful testers and test managers. Identify the qualities you should look for to recruit the right people. Learn what you must do for your team and what they should do for themselves. Discuss how to promote the value of testing within the organization while building good working relationships with developers and other organizations. Discuss these relevant issues with others facing the same challenges. Lloyd and Julie provide utilities, spreadsheets, and templates to help you become a successful test manager.  
Learn more about Lloyd Roden
Learn more about Julie Gardiner
 

TC  
Essential Test Management and Planning 
Rick Craig, Software Quality Engineering

The key to successful testing is effective and timely planning. Rick Craig introduces proven test planning methods and techniques, including the Master Test Plan and level-specific test plans for acceptance, system, integration, and unit testing. Rick explains how to customize an IEEE-829-style test plan and test summary report to fit your organization’s needs. Learn how to manage test activities, estimate test efforts, and achieve buy-in. Discover a practical risk analysis technique to prioritize your testing and become more effective with limited resources. Rick offers test measurement and reporting recommendations for monitoring the testing process. Discover new methods and develop renewed energy for taking test management to the next level in your organization.

 
Learn more about Rick Craig  

TE  
Just-In-Time Testing  
Rob Sabourin, AmiBug.com, Inc.

NEW
Turbulent Web development and other market-driven projects experience almost daily requirements modifications, changes to user interfaces, and the continual integration of new functions, features, and technologies. Rob Sabourin shares proven, practical techniques to keep your testing efforts on track while reacting to fast-paced projects with changing priorities, technologies, and user needs. Rob covers test planning techniques and organization strategies, scheduling and tracking, blending scripted and exploratory testing, identifying key project workflows, and using testing and test management tools. Learn how to create key decision-making workflows for test prioritization and bug triage, adapt testing focus as priorities change, identify technical risks, and respect business priorities. Take away a new perspective on your testing challenges and discover ways to take control of the situation—rather than to be controlled by it.  
Learn more about Rob Sabourin  

TF  
SQL Database Fundamentals: A Practicum for Testers 
Karen N. Johnson, Software Test Management, Inc.
Database and SQL skills are invaluable tools in your testing toolkit. Learn the skills necessary to create and interact with your applications’ data. Knowing where your data is stored and how to access that data for testing gives you powerful new testing capabilities. Join Karen Johnson for an introduction to databases and SQL designed specifically for testers. Learn how to write SQL queries and gain practice working with joins. Karen discusses several of the most common databases, including MySQL, Oracle, Sybase, and Microsoft SQL Server. You will learn about the different data types supported by major relational databases and the defects that can be found by understanding those data types, learn to read data models to identify the tables and data most likely to grow rapidly, and use this knowledge to plan performance testing.  
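
For a taste of the kind of query covered, here is a minimal sketch using Python's built-in sqlite3 module. SQLite is not one of the databases discussed in the session, but the join reads the same way; the customers and orders tables are invented for the example.

import sqlite3

# Minimal sketch: an in-memory SQLite database with two hypothetical tables,
# used to practice a simple inner join of the kind the tutorial covers.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (10, 1, 99.50), (11, 1, 12.00), (12, 2, 45.25);
""")

# Join orders to their customers and total the spend per customer.
rows = conn.execute("""
    SELECT c.name, COUNT(o.id) AS order_count, SUM(o.total) AS spend
    FROM customers AS c
    JOIN orders AS o ON o.customer_id = c.id
    GROUP BY c.name
""").fetchall()

for name, order_count, spend in rows:
    print(f"{name}: {order_count} orders, {spend:.2f} total")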

Laptop Required

Participants should bring a laptop computer. Because software will be installed, admin privileges are required, and participants should know how to open a firewall to allow a connection to a database.  

 
Learn more about Karen N. Johnson  

TG  
Test Automation: The Smart Way
Dorothy Graham, Software Testing Consultant

NEW

Many organizations never achieve the significant benefits that are promised from automated test execution tools. What are the secrets to test automation success? There are no secrets, but the paths to success are not commonly understood. Dorothy Graham describes the most important automation issues that you must address, both management and technical, and helps you understand and choose the smartest approaches for your organization—no matter which automation tools you use. If you don’t begin with legitimate objectives for your automation, you will set yourself up for failure later. For example, if “find more bugs” is your goal, automating regression tests will not achieve it. Even objectives that seem sensible, such as “run tests overnight” or “automate x% of tests,” can be counterproductive. Join Dorothy to learn how to assess your current automation maturity, identify achievable and realistic objectives for automation, build testware architecture for future scalability, and devise an effective automation strategy.

 
   
Learn more about Dorothy Graham  

TH  
Test Process Improvement 
Martin Pol and Ruud Teunissen, POLTEQ IT Services BV

What is the maturity of your testing process? How do you compare to other organizations and to industry standards? To find out, join Martin Pol and Ruud Teunissen for an introduction to the Test Process Improvement (TPI®) model, an industry standard for testing maturity assessments. Although many organizations want to improve testing, they lack the foundation required for success. Improving your testing requires three things: (1) understanding key test process areas, (2) knowing your current position in each of these areas, and (3) having the tools and skills to implement needed improvements. Rather than guessing what to do, begin with the TPI® model as your guide. Using examples of real world TPI® assessments that they have performed, Martin and Ruud describe a practical assessment approach that is suitable for both smaller, informal organizations and larger, formal companies. Take back valuable references, templates, examples, and links to start your improvement program.

TPI® is a registered trademark of Sogeti USA LLC.

  
Learn more about Martin Pol
Learn more about Ruud Teunissen
 

TI  
Fundamentals of Keyword-Driven Test Automation 
Hans Buwalda, LogiGear Corporation
 

Keyword-driven test automation has entered the software testing mainstream. It has proven to be a powerful approach to reach a high level of automation with the lowest possible effort. It brings the flexibility, manageability, and maintainability that cost-effective software test automation demands. Hans Buwalda introduces keyword-driven test automation and shows you how to make it successful. The basis for the tutorial is his deliberately challenging "5% Rules of Test Automation"—no more than 5% of your test cases should be executed manually, and no more than 5% of your total testing effort should be used to achieve this level of automation. Although Hans describes the technical aspects of keyword automation, a significant part of the session addresses the many non-technical ingredients required for success—having an automation-friendly test design, managing processes, and working in teams. In addition, Hans addresses the specific issues of offshoring, agile test development, and non-UI automation, enabling you to make keyword-driven testing work for you in a variety of situations and environments.
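
As a rough, hypothetical illustration of the keyword-driven idea (not Hans's framework or toolset), test steps are plain data and a small driver maps each keyword to an action:

# Rough illustration of keyword-driven testing: test cases are rows of
# (keyword, arguments) data, and a small driver maps keywords to actions.
# The keywords and the login scenario below are hypothetical.

def enter_text(field, value):
    print(f"typing '{value}' into {field}")

def click(button):
    print(f"clicking {button}")

def check_message(expected):
    actual = "Welcome"  # in a real suite this would be read from the application
    assert actual == expected, f"expected '{expected}', got '{actual}'"

KEYWORDS = {"enter": enter_text, "click": click, "check": check_message}

# The test itself is plain data, so non-programmers can write and maintain it.
login_test = [
    ("enter", ("username", "alice")),
    ("enter", ("password", "s3cret")),
    ("click", ("Log in",)),
    ("check", ("Welcome",)),
]

for keyword, args in login_test:
    KEYWORDS[keyword](*args)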

 
Learn more about Hans Buwalda  

Tutorials for Tuesday, May 5, 2009 — 8:30 a.m. to 12:00 p.m.
TJ  
James Whittaker: On Testing 
James Whittaker, Microsoft

NEW

Here’s your chance to spend an educational and entertaining half day with expert tester, teacher, and author James Whittaker as he discusses the testing topics that are challenging test managers and testers everywhere. James is a master of hot topics and reads the industry tea leaves for new trends and technologies that will stand the test of time. Topics will include discussions on insourcing vs. outsourcing vs. crowdsourcing, manual vs. automated testing, and developer vs. tester as the owners of quality. James also expounds on technical topics, including exploratory testing techniques and test automation, and less technical topics like managing your testing career, becoming a better tester, and making your organization more quality focused. Although James has prepared material based on these topics of the day, the real value of this session is engaging in a conversation that will take your understanding to the next level. This could be a unique opportunity to impact your career and help your organization achieve its goals.

 
Learn more about James Whittaker  

TK  
Finding Ambiguities in Requirements 
Richard Bender, BenderRBT, Inc.

NEW

Over the years, studies have shown that poor requirements are one of the most significant contributors to project failure, and that half of all defects have their origin in bad requirements.  We know that the earlier you find a defect, the cheaper it is to fix.  Our experience tells us that if specifications are ambiguous, there is nearly a 100% chance that there will be one or more defects in the corresponding code.  Richard Bender explains how to review specifications quickly and quantitatively to identify what is unclear about them.  Learn how your feedback can then lead to early defect detection and future defect avoidance.  Discover how applying these review techniques can reduce the ambiguity rate by 95% on subsequent specifications and how that translates into a significant reduction in the number of defects in the code even before testing begins. Learn how this process can also be applied to design specifications, user manuals, training materials, and online help, as well as agreements and contracts ensuring clarity of communications. 

 
Learn more about Richard Bender  

TL  

Tester's Clinic: Dealing with Tough Questions and Testing Myths
Michael Bolton, DevelopSense

When are you going to be finished testing? Why didn't you find that bug? Why does testing cost so much? You’ve probably heard these questions before.  How do you answer them?  Do you just guess or can you respond confidently about the problem that underlies the question? Michael Bolton presents strategies and skills, including critical thinking, context-driven thinking, and general systems thinking, that can help you respond confidently and thoughtfully in difficult testing situations. In this interactive workshop, largely guided by the delegates themselves, we examine some myths about software testing, common cognitive biases, and critical thinking tools. Learn general systems approaches to manage observational challenges and complexity, work through exercises that model difficult testing problems, and discover approaches to solving them. Excellent testing is less about confirming, verifying, and validating, and more about questioning, exploring, discovering, and learning. Come and learn the skills that will help you succeed.  

Laptop Optional


Participants are encouraged to bring a Windows-based laptop computer to this session.
  

 
Learn more about Michael Bolton  

TM  
AJAX Testing: Inside and Out
Paco Hope, Cigital

AJAX—Asynchronous JavaScript and XML—is a modern application development technique that allows a Web-based application to look and feel just like a full-fledged desktop or client/server program. AJAX applications pose unique testing challenges because so much of the application's logic runs inside the Web browser. To thoroughly test applications employing AJAX techniques, you need to understand the technology and adopt specific testing approaches and special testing tools. Paco Hope presents a short introduction to dynamic HTML, JSON, and the core technologies that make AJAX possible. Then, he explores the approaches required to adequately test AJAX applications—from the outside-in and the inside-out. Paco demonstrates an AJAX application and shows you his strategies to test it. Finally, he discusses trade-offs of different testing tools—some open source and some commercial—that enable you to interactively and automatically test AJAX.
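
For flavor, here is a minimal outside-in sketch in Python: it calls the JSON endpoint that the page's JavaScript would call and checks the response directly, bypassing the browser. The URL and the expected fields are made up for the example and are not part of the tutorial.

import json
import urllib.request

# Illustration of an "outside-in" AJAX check: exercise the JSON endpoint the
# in-browser JavaScript depends on and verify its contract directly.
# The URL and expected fields below are hypothetical.
url = "http://localhost:8080/api/cart/items?user=42"
with urllib.request.urlopen(url) as response:
    assert response.status == 200
    payload = json.loads(response.read().decode("utf-8"))

# Verify the structure the page's JavaScript relies on.
assert isinstance(payload["items"], list)
assert all("price" in item for item in payload["items"])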

Learn more about Paco Hope  

Tutorials for Tuesday, May 5, 2009 — 1:00 p.m. to 4:30 p.m.

TN  
Cause-Effect Graphing 
Richard Bender, BenderRBT, Inc.

NEW

Cause-Effect Graphing is the most rigorous of all software testing approaches. It identifies missing requirements and logical inconsistencies in the specifications, and it is the only software testing technique that addresses the problem of defect observability—multiple defects can cancel each other out in a test execution, or something going right on one part of the path can hide something going wrong elsewhere. Richard Bender demonstrates how test cases generated from the graphs are not only highly optimized but also guarantee that if there is a defect anywhere in the logic, it will show up at an observable point. All other test design techniques focus only on reducing the test set to a manageable number without a claim of completeness. The graphing technique allows you to design 90% of the functional tests needed for the project (there are still design-dependent and coding-dependent issues to address). Join Richard to learn how this process moves most of the test design effort early in the project, where it is most effective and efficient.
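
As a toy sketch of the underlying idea (not the BenderRBT method itself), causes can be modeled as boolean inputs and effects as logical functions of those causes; each combination then predicts an observable outcome. The causes and effects below are invented for the example.

from itertools import product

# Toy illustration of the cause-effect idea: causes are boolean inputs,
# effects are logical functions of those causes, and each input combination
# is a candidate test with a predicted, observable outcome.
causes = ["valid_account", "sufficient_funds", "card_not_expired"]

def effect_approve(c):
    return c["valid_account"] and c["sufficient_funds"] and c["card_not_expired"]

for values in product([True, False], repeat=len(causes)):
    c = dict(zip(causes, values))
    print(c, "->", "approve" if effect_approve(c) else "decline")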

 
Learn more about Richard Bender  

TO  
Managing Exploratory Testing  
Jonathan Kohl, Kohl Concepts, Inc.

NEW

Exploratory testing has become a popular approach to software testing, and managers like the results that they see in their own teams and what they hear from others. Although many managers would like to embrace exploratory testing, they worry that they might lose track of what is going on with their testing efforts. Jonathan Kohl addresses these concerns and shows how to effectively manage exploratory testing teams. Jonathan shares methods to help you track testing progress, determine test coverage, and use information discovered through exploratory testing to help stakeholders make better project decisions. Examine lightweight practices to help understand what people are testing and enable managers to have confidence in the testing work that is being done. Take back an approach to add exploratory testing to your team without disrupting the practices and procedures that are already in place. 

 
Learn more about Jonathan Kohl  

TP  
Spend Wisely, Test Well: Making a Financial Case for Testing 
Susan Herrick, EDS-Global Testing Practice

Organizations that develop software always profess absolute commitment to product quality and customer satisfaction. At the same time, they often believe that “all that testing isn’t really necessary.” Test managers must be able to quantify the financial value of testing and substantiate their claims with empirical data. Susan Herrick provides experienced test managers with quantitative approaches to dispel the prevailing myths about the negative bottom-line impact of testing, make a compelling business case for testing throughout the project lifecycle, and provide decision-makers with information that allows them to make fiscally responsible choices about test efforts. During a hands-on activity, you will calculate, analyze, and substantiate answers to questions such as “What will it cost if we don’t test at all?” “Should we rely on the system and acceptance testers to find all the defects?” “Can our experienced developers test their own code?” and “Should experienced users perform the acceptance testing?” Answer these and more questions, with the numbers at hand to back up your claims.
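
To give a feel for the arithmetic, here is a back-of-the-envelope sketch with entirely invented figures; it is not the calculation tool distributed in the session.

# Back-of-the-envelope flavor of the cost arithmetic (all figures are made up,
# not taken from the tutorial's calculation tool).
defects_expected = 400          # defects expected in the release
found_in_test_ratio = 0.85      # share a test effort would catch
cost_fix_in_test = 500          # cost (USD) to fix a defect found in testing
cost_fix_in_production = 5000   # cost (USD) to fix a defect found by customers
cost_of_testing = 150_000       # cost (USD) of the test effort itself

found_in_test = defects_expected * found_in_test_ratio
escaped = defects_expected - found_in_test

cost_with_testing = (cost_of_testing
                     + found_in_test * cost_fix_in_test
                     + escaped * cost_fix_in_production)
cost_without_testing = defects_expected * cost_fix_in_production

print(f"with testing:    ${cost_with_testing:,.0f}")
print(f"without testing: ${cost_without_testing:,.0f}")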

   

Laptop Required

To benefit fully from the hands-on activity, each participant should bring a PC laptop running Windows XP or higher and equipped with a CD drive. All participants will receive a takeaway CD containing a calculation tool with full instructions. 

   
Learn more about Susan Herrick  

TQ  
Making Test Automation Work in Agile Projects  
Lisa Crispin, ePlan Services, Inc.

NEW

Agile teams must deliver production-ready software every four-, two- or one-week iteration—or possibly every day! This goal can't be achieved without automated tests. Without automated tests, no team can finish the regression testing and exploratory testing needed to deliver high quality software in such short intervals. However, many teams just can't seem to get traction on test automation.  The challenge of automating all regression tests strikes fear into the hearts of many testers.  How do we succeed when we have to release so often?  By combining a collaborative team approach with an appropriate mix of tools designed for agile teams, you can, over time, automate your regression tests, leverage automation to enhance your exploratory testing, and continue to automate new tests during each programming iteration. Lisa Crispin describes what tests should be automated, some common barriers to test automation, and ways to overcome those barriers. Learn how to create data for tests, evaluate automated test tools, implement test automation, and assess your automation efforts.  An agile approach to test automation helps even if you are a tester on a more traditional project without the support of programmers on your team. 

 
Learn more about Lisa Crispin  





 
Software Quality Engineering  •  330 Corporate Way, Suite 300  •  Orange Park, FL 32073
Phone: 904.278.0524 or 888.268.8770  •  Fax: 904.278.4380  •  Email: [email protected]
© 2009 Software Quality Engineering, All rights reserved.