Rob Sabourin, AmiBug.com, Inc.
Over the years, Rob Sabourin has discovered testing lessons in
the Looney Tunes gang, the Great Detectives, and Dr. Seuss. Now he turns his attention
to the Simpsons, a primetime cartoon television show entertaining audiences since
1989. Rob believes that Matt Groening’s popular characters can teach us important
lessons about software testing. Homer’s twisted ideas tell us about test automation—why
it works and why it fails. Could your software stand up to Bart’s abuse? Lisa
Simpson, the brilliant but neglected middle child, provides a calming influence
on projects. Apu, the Kwik-E-Mart operator, works 100 hours a week—should
you? When is Montgomery Burns’s authoritarian management style effective? And
can we bribe stakeholders as easily as Police Chief Wiggum takes a donut? Inside
this simple cartoon are lessons on personas, context, organization, ethics, situational
leadership, and motivation. Just like you, the people of Springfield commit to absurdly
complex projects, such as the Monorail, all of which ultimately fail miserably.
Join Rob in a revealing “Simpsons retrospective” loaded with tons of
testing lessons from Springfield.
Jon Bach, Quardev, Inc.
What do you say when your manager asks, “How did it go
today?” As a test manager, you might say, “I’ll check to see how
many test cases the team executed today.” As a tester with a pile of test
cases on your desk, you could say, “I ran 40% of these tests today,”
or “At the rate I’m going I’ll be finished with these test cases
in 40 days.” However, if you’re using exploration as part of your testing
approach, it might be terrifying to try to give a status report—especially
if some project stakeholders think exploratory testing is irresponsible and reckless
compared to test cases. So how can you retain the power and freedom of exploration
and still give a report that earns your team credibility, respect, and perhaps more
autonomy? Jon Bach offers ways for you to explain the critical and creative thinking
that makes exploratory testing so powerful. Learn how to report your exploration
so stakeholders have a better understanding and appreciation of the value of exploratory
testing to your project.
Julian Harty, Google
Our testing is only as good as our thinking—and all too
often we are hampered by limiting ideas, poor communication, and pre-set roles and
responsibilities. Based on the work of Edward de Bono, the Six Thinking Hats for
software testers have helped Julian, and numerous others, work more effectively
as testers and managers. The concepts are simple and easy to learn. For instance,
we can use them as individuals while performing reviews and testing, and in
groups during team meetings. Each of the six hats has a color representing
a direction of thinking: the blue hat provides the overview and helps to keep
us productive; the white hat helps us to collect facts; the red is a way to express
intuition and feelings without having to justify them; the yellow hat seeks the
best possible outcome; the black hat helps us to discover what might go wrong—not
only with the software but also with our tests and our assumptions! Finally, the
green hat enables us to find creative solutions to ideas and issues discovered with
the other five hats. Come and learn how to apply the six testing hats and other
“thinking skills” on your test projects.
Julie Gardiner, Grove Consultants
Classification trees are a structured, visual approach to identifying
and categorizing equivalence class partitions for test objects. They enable testers
to create better test cases faster. Classification trees visually document test
requirements to make them easy to create and comprehend. Julie Gardiner explains
this powerful technique and how it helps all stakeholders understand exactly what
is involved in testing and offers an easier way to validate test designs. Using
examples, Julie shows you how to create classification trees, how to construct test
cases from them, and how they complement other testing techniques in every stage
of testing. Julie demonstrates a free classification tree editing tool that helps
you build, maintain, display, and use classification trees. Using the classification
tree technique and tool, you keep test documentation to a minimum, more easily create
and maintain regression tests, and drastically reduce test case bloat to make your
test suites more usable.
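As a rough illustration of the technique described above, a classification tree splits each aspect of the test object into disjoint equivalence classes, and a test case picks one class from each branch. The sketch below is not from Julie’s session or her tool; the payment-form example and all partition names are hypothetical.

```python
# Minimal classification-tree sketch (hypothetical example, not from the session).
# Each "classification" of the test object is partitioned into equivalence
# classes; a candidate test case selects one class per classification.
from itertools import product

# Hypothetical test object: an online payment form
classifications = {
    "payment_method": ["credit_card", "paypal", "invoice"],
    "amount": ["zero", "typical", "above_limit"],
    "currency": ["domestic", "foreign"],
}

# Full combination: one equivalence class from each branch of the tree.
# In practice a tool would let you prune this to a smaller covering set.
test_cases = [dict(zip(classifications, combo))
              for combo in product(*classifications.values())]

print(len(test_cases))   # 3 * 3 * 2 = 18 candidate test cases
print(test_cases[0])
```

Generating from the tree rather than hand-writing cases is what keeps documentation minimal: when a partition changes, the suite is regenerated instead of edited case by case.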
Gerard Meszaros, Independent Consultant
The concept of an independent test organization is considered
a “best practice” by many experts in the industry. Is this degree of
autonomy actually a good thing in the real world today? In such a structure, some
testers can only play “Battleship” with the delivered software, shouting
gleefully when they find a defect. On their first tours of Toyota’s factories,
American automakers were astonished to find no “rework area.” Toyota
engineers didn’t subscribe to the approach of inserting defects on the production
line only to remove them later in the quality control and rework area. Yet this
is exactly what the independent test group excels at! Is it time to discard this
organizational model and focus on working together with developers to prevent defects
in the first place? Gerard Meszaros examines the sacred concept of independent test
teams based on experiences from the agile software movement and Lean production
systems. Both have shown that it is possible to replace the often dysfunctional,
blaming relationship between the builders and the customers with one of mutual respect
and cooperation. By applying the same “whole team” model within the
technology organization, Gerard proposes to build quality in from the beginning
rather than trying to test it in after the fact.
Tara Roth, Microsoft
Have you experienced those weeks when the new features being
added to builds just flat out don’t work? Do you strive to have a testable
build throughout the full product development cycle? Are you tired of the mountain
of bugs crushing you just before time to ship? Experienced test manager Tara Roth
discusses how the Microsoft Office team is working to drive the level of test coverage
up during the earlier phases of product development to improve build quality later
in development. Tara describes two approaches, adopted by Microsoft Office, that
improved efficiency and quality—Feature Crews and Big Button. A Feature Crew
is a tight-knit partnership of the developer, tester, and program manager, who
work together on a private release of new code prior to checking it in to the main
build. Big Button is an approach in which the team kicks off an automated suite
of tests prior to checking in to the main build. Tara explains their successes and
describes how you can apply these concepts in your organization. In addition to
her Microsoft Office experience, Tara shares how other Microsoft projects
apply these same techniques.
Software Quality Engineering • 330 Corporate Way, Suite 300 • Orange Park, FL 32073
Phone: 904.278.0524 • Toll-free: 888.268.8770 • Fax: 904.278.4380 • Email: [email protected]
© 2008 Software Quality Engineering, All rights reserved.