STARWEST 2008 Concurrent Sessions


Concurrent Sessions for Wednesday, October 1 — 11:30 a.m.
W1
The Myth of Risk Management
Pete McBreen, Software Craftsmanship, Inc.
 
Although test managers are tasked with helping manage project risks, risk management practices used on most software projects produce only an illusion of safety. Many software development risks cannot be managed because they are unknown, unquantifiable, uncontrollable, or unmentionable. Rather than planning only for risks that have previously occurred, project and test managers must begin with the assumption that something new will impact their project. The secret to effective risk management is to create mechanisms that provide early detection of and quick response to such events—not simply to create checklists of problems you’ve previously seen. Pete McBreen presents risk “insurance” as a better alternative to classic risk management. He offers a risk insurance model, which helps insure projects against incomplete information, minor slippages that add up to major delays, late-breaking bad news, and failure to learn from the past. Join Pete to learn how your testing projects can be flexible, responsive, and better able to deal with your project’s risks—both known and unknown.

Learn more about Pete McBreen

W2
Using Failure Modes to Power Up Your Testing
Dawn Haynes, PerfTestPlus, Inc.
 
When a tester uncovers a defect, it usually gets fixed. The tester validates the fix and may add the test to a regression test suite. Often, both the test and defect are then forgotten. Not so fast—defects hold clues about where other defects may be hiding and often can help the team learn to not make the same mistake again. Dawn Haynes explores methods you can use to generate new test ideas and improve software reliability at the same time. Learn to use powerful analysis tools, including FMEA—failure modes and effects analysis—and cause/effect graphing. Go further with these techniques by employing fault injections and forensically analyzing bugs that customers find. Discover ways to correct the cause of a problem rather than submitting a “single instance defect” that will result in a “single instance patch” that fixes one problem and does nothing to prevent new ones. Learn how to power up your testing to reveal defect patterns and root causes for recurring defects.
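As a sketch of how FMEA-style analysis can prioritize testing, the classic Risk Priority Number (RPN) multiplies severity, occurrence, and detection ratings to rank failure modes; the failure modes and ratings below are purely hypothetical illustrations, not examples from the session.

```python
# Rank hypothetical failure modes by FMEA Risk Priority Number (RPN):
# RPN = severity x occurrence x detection, each rated 1 (low) to 10 (high).
failure_modes = [
    ("Login times out under load",        8, 6, 4),
    ("Report totals rounded incorrectly", 9, 3, 7),
    ("Tooltip text truncated",            2, 5, 2),
]

def rpn(severity, occurrence, detection):
    """Risk Priority Number: higher means more test attention warranted."""
    return severity * occurrence * detection

# Highest RPN first: these failure modes deserve the most test effort.
ranked = sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True)
for name, s, o, d in ranked:
    print(f"{rpn(s, o, d):4d}  {name}")
```

Ranking by RPN rather than severity alone surfaces defects that occur often and escape detection, which is where new test ideas tend to pay off.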

Learn more about Dawn Haynes

W3
Adventures with Test Monkeys
John Fodeh, Hewlett-Packard
 
Most test automation focuses on regression testing—repeating the same sequence of tests to reveal unexpected behavior. Despite its many advantages, this traditional test automation approach has limitations and often misses serious defects in the software. John Fodeh describes “test monkeys,” automated testing that employs random inputs to exercise the software under test. Unlike regression test suites, test monkeys explore the software in a new way each time a test case executes and offer the promise of finding new and different types of defects. The good news is that test monkey automation is easy to develop and maintain and can be used early in development before the software is stable. Join John to discover different approaches you can take to implement test monkeys, depending on the desired “intelligence” level. Learn to use weighted probability tables to direct your test monkeys into specific areas of interest, and find out how monkeys can work with model-based testing to make your testing even more powerful.
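A minimal test monkey driven by a weighted probability table might look like the following Python sketch; the action names and weights are hypothetical, and a real monkey would invoke the application under test rather than merely recording a trace.

```python
import random

# Hypothetical weighted probability table: actions the monkey may take,
# weighted to steer it toward areas of interest (here, editing and saving).
ACTION_WEIGHTS = {
    "click_menu":   1,
    "type_text":    2,
    "edit_record":  4,
    "save_file":    4,
    "close_dialog": 1,
}

def monkey_run(steps, seed=None):
    """Pick `steps` random actions according to the weight table."""
    rng = random.Random(seed)
    actions = list(ACTION_WEIGHTS)
    weights = list(ACTION_WEIGHTS.values())
    return [rng.choices(actions, weights=weights)[0] for _ in range(steps)]

# Each run explores a different path; a fixed seed reproduces a failing run.
trace = monkey_run(10, seed=42)
print(trace)
```

Logging the seed alongside each run is what makes random exploration debuggable: when the monkey trips over a crash, replaying with the same seed reproduces the exact action sequence.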

Learn more about John Fodeh

W4
Five Things Every Tester Must Do
Julie Gardiner, Grove Consultants
 
Are you a frustrated tester or test manager? Are you questioning whether or not a career in testing is for you? Do you wonder why others in your organization seem unenthusiastic about quality? If the answer is yes to any of these questions, this session is for you. Julie Gardiner explores five directives to help testers make a positive impact within their organization and increase professionalism in testing. Remember quality—it’s not just time, it’s time and quality; it’s date and quality; it’s functionality and quality. Learn to enjoy testing and have fun—the closest job to yours is blowing up things for the movies. Relish the testing challenge—it’s you against the software and sometimes, it seems, the world. Choose your battles—take a stand on issues that are vital and let the small things go. And most importantly, remember that the only real power we have springs from our integrity—don’t sell that at any price. Join Julie for this important and inspirational session. You’ll be glad you did.

Learn more about Julie Gardiner

W5
Fun with Regulated Testing
John McConda, Moser Consulting
 
Does your test process need to pass regulatory audits (FDA, SOX, ISO, etc.)? Do you find that an endless queue of documentation and maintenance is choking your ability to do actual testing? Is your team losing good testers due to boredom? With the right methods and attitude, you can do interesting and valuable testing while passing a process audit with flying colors. It may be easier than you think to incorporate exploratory techniques, test automation, test management tools, and iterative test design into your regulated process. You’ll be able to find better bugs more quickly and keep those pesky auditors happy at the same time. John McConda shares how he uses exploratory testing with screen recording tools to produce the objective evidence auditors crave. He explains how to optimize your test management tools to preserve and confidently present accountability and traceability data. Learn to negotiate which test activities are auditable and create tests with an iterative test design approach that quickly adapts to change and makes auditors smile.

Learn more about John McConda

Concurrent Sessions for Wednesday, October 1 — 1:45 p.m.
W6
Great Test Teams Don’t Just Happen
Jane Fraser, Electronic Arts
 
Test teams are just groups of people who work on projects together. But how do great test teams become great? More importantly, how can you lead your team to greatness? Jane Fraser describes the changes she made after several people on her testing staff asked to move out of testing and into other groups—production and engineering—and how helping them has improved the whole team and made Jane a much better leader. Join Jane as she shares her team’s journey toward greatness. She started by getting to really know the people on the team—what makes them tick, how they react to situations, what excites them, what makes them feel good and bad. She discovered the questions to ask and the behaviors to observe that will give you the insight you need to lead. Join Jane to learn how to empower your team members with the responsibility, authority, and accountability to get the job done while you concentrate on removing roadblocks to their success. And most importantly, remember it’s all about them—it’s not about you.

Learn more about Jane Fraser

W7
Understanding Test Coverage
Michael Bolton, DevelopSense
 
Test coverage of application functionality is often poorly understood and always hard to measure. If they do it at all, many testers express coverage in terms of numbers, as a percentage or proportion—but a percentage of what? When we test, we develop two parallel stories. The “product story” is what we know and can infer about the software product—important information about how it works and how it might fail. The “testing story” is how we modeled the testing space, the oracles that we used, and the extent to which we configured, operated, observed, and evaluated the product. To understand test coverage, we must know what we did not test and that what we did test was good enough. Michael Bolton proposes alternatives for obtaining and describing test coverage—diagrams, strategy models, checklists, spreadsheets and matrices, and dashboards—and suggests how we can use these tools to build a clearer understanding of coverage to illuminate both the product story and the testing story.

Learn more about Michael Bolton

W8
Automate API Tests with Windows PowerShell
Nikhil Bhandari, Intuit
 
Although a myriad of testing tools have emerged over the years, only a few focus on API testing for Windows-based applications. Nikhil Bhandari describes how to automate these types of software tests with Windows PowerShell, the free command line shell and scripting language. Unlike other scripting shells, PowerShell works with WMI, XML, ADO, COM, and .NET objects as well as data stores such as the file system, registry, and certificates. With PowerShell, you can easily develop frameworks for testing—unit, functional, regression, performance, deployment, etc.—and integrate them into a single, consistent overall automation environment. You can develop scripts to check logs, events, and process status; inspect the registry; manage the file system; and more. Use it to parse XML statements and other test files. Reduce your testing cycle times to better support iterative development and, at the same time, have more fun testing your Windows applications.

Learn more about Nikhil Bhandari

W9
What Price Truth? When a Tester is Asked to Lie
Fiona Charles, Quality Intelligence, Inc.
 
As testers and test managers, our job is to tell the truth about the current state of the software on our projects. Unfortunately, in the high-stakes business of software development, often there is pressure—subtle or overt—to distort our messages. When projects are late or product reliability is poor, managers’ and developers’ reputations—and perhaps even their jobs—may be on the line. Fiona Charles discusses the importance to testers of refusing to compromise the truth, recognizing a potential cover-up before it occurs, knowing the legal position around securing project information, and developing a strategy to maintain integrity and still get out alive. She examines the early warning signs and discusses the practical tactics available to testers, including signaling your unwillingness to lie, getting accurate and detailed reports of project progress and status on the record, keeping notes of disturbing conversations and events, and choosing whether to “blow the whistle” (and if so, to whom), leave the organization—or both.

Learn more about Fiona Charles

W10
The Case Against Test Cases
James Bach, Satisfice, Inc.
 
A test case is a kind of container. You already know that counting the containers in a supermarket would tell you little about the value of the food they contain. So, why do we count test cases executed as a measure of testing’s value? The impact and value a test case actually has varies greatly from one to the next. In many cases, the percentage of test cases passing or failing reveals nothing about the reliability or quality of the software under test. Managers and other non-testers love test cases because they provide the illusion of both control and value for money spent. However, that doesn’t mean testers have to go along with the deceit. James Bach stopped managing testing using test cases long ago and switched to test activities, test sessions, risk areas, and coverage areas to measure the value of his testing. Join James as he explains how you can make the switch—and why you should.

Learn more about James Bach

Concurrent Sessions for Wednesday, October 1 — 3:00 p.m.
W11
Test Estimation: Painful or Painless?
Lloyd Roden, Grove Consultants
 
As an experienced test manager, Lloyd Roden believes that test estimation is one of the most difficult aspects of test management. You must deal with many unknowns, including dependencies on development activities and the variable quality of the software you test. Lloyd presents seven proven ways he has used to estimate test effort. Some are easy and quick but prone to abuse; others are more detailed and complex but may be more accurate. Lloyd discusses FIA (finger in the air), formula/percentage, historical reference, Parkinson’s Law vs. pricing, work breakdown structures, estimation models, and assessment estimation. He shares spreadsheet templates and utilities, given out during the session, that you can take back to help improve your estimates. By the end of this session, you might just be thinking that the once painful experience of test estimation can, in fact, be painless.
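Two of the approaches Lloyd names can be sketched in a few lines: formula/percentage estimation takes test effort as a fraction of development effort, and historical reference calibrates that fraction from past projects. The 30% default ratio and the project figures below are illustrative assumptions, not recommendations from the session.

```python
# Formula/percentage estimation: test effort as a fraction of dev effort.
# The 0.30 default is an assumed starting ratio, not a universal rule.
def estimate_test_effort(dev_effort_days, test_ratio=0.30):
    """Return estimated test effort in person-days."""
    return dev_effort_days * test_ratio

# Historical-reference variant: average the actual test/dev ratio from
# past projects, then apply it to the new project's development estimate.
def historical_ratio(past_projects):
    """past_projects: list of (dev_days, test_days) actuals."""
    ratios = [test / dev for dev, test in past_projects]
    return sum(ratios) / len(ratios)

past = [(100, 28), (60, 21), (80, 26)]   # hypothetical project actuals
ratio = historical_ratio(past)
print(round(estimate_test_effort(120, ratio), 1))
```

Grounding the ratio in your own historical data is what turns this from "finger in the air" into a defensible estimate.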

Learn more about Lloyd Roden

W12
Exploratory Testing: The Next Generation
David Gorena Elizondo, Microsoft
 
Exploratory testing is sometimes associated with “ad hoc” testing, randomly navigating through an application. However, emerging exploratory techniques are anything but ad hoc. David Gorena Elizondo describes new approaches to exploratory testing that are highly effective, very efficient, and supported by automation. David describes the information testers need for exploration, explains how to gather that information, and shows you how to use it to find more bugs and find them faster. He demonstrates a faster and directed (not accidental) exploratory bug finding methodology and compares it to more commonly used approaches. Learn how test history and prior test cases guide exploratory testers; how to use data types, value ranges, and other code summary information to populate test cases; how to optimize record and playback tools during exploratory testing; and how exploratory testing can impact churn, coverage, and other metrics.

Learn more about David Gorena Elizondo

W13
Test Automation Techniques for Dynamic and Data Intensive Systems
Chris Condron, The Hanover Group
 
If you think you’re doing everything right with test automation but it just won’t scale, join the crowd. If the amount of data you’re managing and the dynamic changes in applications and workflows keep you in constant maintenance mode, this is the session for you. Encountering these problems, Chris Condron’s group reviewed their existing automation successes and pain points. Based on this analysis, they created a tool-agnostic architecture and automation process that allowed them to scale up their automation to include many more tests. By aligning their test scripts with the business processes, his team developed a single test case model they use for both manual and automated tests. They developed a test data management system incorporating storage of and a review process for three types of test data: scenarios, screen mappings, and references. Their new test scripts contain only the application flow information and reference the test data system for the input values. Join Chris and find out how you can enjoy the same success.

Learn more about Chris Condron

W14
Top Ten Non-Technical Skills for Better Testing
Krishna Iyer and Mukesh Mulchandani, ZenTEST Labs
 
In the era of SOA and Web 2.0, as it becomes more and more difficult to accomplish comprehensive testing, Krishna Iyer and Mukesh Mulchandani describe ten non-technical skills that will make you a better tester. The first five are qualities we often look for in testers yet seldom practice scientifically and diligently—collaboration, creativity, experimentation, passion, and alertness. The second five are abilities that are seldom mentioned, yet equally important for testers—connect the dots, challenge the orthodox, picture and predict, prioritize, and leave work at work. Drawing from their experiences of building a testing team for their organization and consulting with global firms in building “testing capability,” Krishna and Mukesh show how you and your test team can improve each of these ten non-technical skills. Practice these skills during the session and take back techniques you can use to hone your skills at work.

Learn more about Krishna Iyer
Learn more about Mukesh Mulchandani

W15
The Power of Specially Gifted Software Testers
Thorkil Sonne, Specialisterne
 
Specialisterne (“The Specialists”) is a Danish company that employs people with very special capabilities to perform complex and difficult tasks, including software testing, quality control, and data conversion. Their customers are companies such as Computer Sciences Corporation (CSC), Microsoft, and leading Danish IT organizations. Their founder and our presenter, Thorkil Sonne, received the IT Award 2008 from the Danish IT Industry Association for the company’s ability to find and employ especially talented people in IT. Seventy-five percent of the employees of Specialisterne have autism—Autistic Spectrum Disorder (ASD)—typically Asperger’s Syndrome. Traditionally, society has viewed people with ASD as handicapped. Yet, their abilities to concentrate, stick to tasks, and quickly absorb highly complex technical information are exactly the characteristics of the best software testers. Hear Thorkil’s vision and strategy on how he believes the software testing industry worldwide can derive considerable benefit from employing special people with autism.

Learn more about Thorkil Sonne


Software Quality Engineering  •  330 Corporate Way, Suite 300  •  Orange Park, FL 32073
Phone: 904.278.0524  •  Toll-free: 888.268.8770  •  Fax: 904.278.4380  •  Email: [email protected]
© 2008 Software Quality Engineering, All rights reserved.