STARWEST 2008 Preconference Tutorials

Randall Rice, Rice Consulting Services, Inc.

Have you been thrust into the role of test team leader or are
you in this role now and want to hone your leadership skills? Test team leadership
has many unique challenges, and many test team leaders—especially new ones—find
themselves ill-equipped to deal with the problems they face. The test team leader
must motivate and support her people while keeping the testing on track within time
and budget constraints. Randy Rice focuses on how you can grow as a leader, influence
your team and those around you, and impact those outside your team. Learn how to
become a person of influence, deal with interpersonal issues, and help your team
members build their skills and their value to the organization. Discover how to communicate
your team’s value to management, how to stand firm when asked to compromise
principles, and how to learn from your successes and failures. Develop your own
action plan to become an influential test team leader.

Learn more about Randall Rice

Dale Perry, Software Quality Engineering

All too often testers are thrown into the quality assurance/testing
process without the knowledge and skills essential to perform the required tasks.
To be truly effective, you first must understand what testing is supposed to accomplish
and then understand how it relates to the bigger project management and application
development picture. After that, you can ask the right questions: What should be
tested? How can I design effective and efficient test cases? How much testing is
enough? How do I know when I’m finished? How much documentation do I need?
Dale Perry explores a testing lifecycle that parallels software development and
focuses on defect prevention and early error detection. As Dale shares the basics
for implementing a systematic, integrated approach to testing software, learn when,
what, and how to test—plus ways to improve the testability of your system.

Learn more about Dale Perry

Rizwan Mallal, Mamoon Yunus, and Jason Macy, Crosscheck Networks

Based on today’s Web services standards, SOA (Service
Oriented Architecture) has ushered in a new era of how applications are designed,
developed, tested, and deployed. The promise of SOA to increase development productivity
and application flexibility poses new challenges for testers—multiple Web
services standards and implementations, legacy applications (of questionable quality)
now exposed as Web services, weak or non-existent security controls, and services
of possibly diverse origins chained together to create applications. Join Mamoon
Yunus, Jason Macy, and Rizwan Mallal as they lead you through an intensive tutorial
that includes hands-on lab work. Roll up your sleeves and dive into the process
of testing SOA Web services. Beginning with the Four Pillars of SOA testing, you
will learn new concepts to master SOA testing challenges through techniques such
as WSDL chaining, schema mutation, and automated filtration. Learn how traditional
techniques such as black-, gray-, and white-box testing are applied to SOA testing
to maximize test coverage, minimize effort, and release better products.
Laptop Required
Administrator privileges are also required, as software will be installed on your machine.

Learn more about Rizwan Mallal
Learn more about Mamoon Yunus
Learn more about Jason Macy

Jonathan Kohl, Kohl Concepts, Inc.

Exploratory testing is an approach to testing that emphasizes
the freedom and responsibility of the tester to continually optimize the value of
his work. It is the process of three mutually supportive activities performed in
parallel: learning, test design, and test execution. With skill and practice, exploratory
testers typically uncover an order of magnitude more problems than the same amount
of effort spent on procedurally scripted testing would. All testers conduct exploratory
testing in one way or another, but few know how to do it systematically to obtain
the greatest benefits. Even fewer testers can articulate the process. Jonathan Kohl
describes specific heuristics and techniques of exploratory testing to help you
get the most from this highly productive approach. Jonathan focuses on the skills
and dynamics of exploratory testing itself and how it can be combined with scripted
approaches. (For insight into how to manage and measure ET, attend Jonathan Bach's
tutorial on Session-Based Exploratory Testing.)

Laptop Required
This is a hands-on course. A laptop, preferably running Microsoft Windows, is required for some of the exercises.

Learn more about Jonathan Kohl

James Bach, Satisfice, Inc.

Are you in control of your testing education? Do you have a
systematic approach to learning the skills a great tester needs? James Bach shares
his personal system of testing self-education. It's a system based on analyzing
personal experiences and questioning conventional wisdom. He explains and demonstrates
the methods that he has used to develop context-driven testing ideas since 1987.
You can use similar methods to draw out and codify the lessons of your own experiences.
James discusses how to sort through the differing schools of testing; the entry
points for personal testing education; a syllabus of software testing concepts;
how to identify, articulate, and test your own heuristics; and how to assess your
progress. Whether you are new to testing, working to be a great test lead, or want
to become a better testing consultant, this tutorial will take you down the road
to more effective learning.

Learn more about James Bach

Bob Hartman

Adopting an agile development methodology
changes many familiar practices for both developers and testers. Join Bob Hartman
to examine the challenges many testers face as agile development practices move
into the mainstream and into their organizations. Teams new to agile or exploring
agile practices have discovered that the transition from traditional testing practices
to the lean-agile “test first” approach is a significant challenge for
the development team and, in particular, for test engineers. Learn how requirements
practices and documents differ when the team is using agile development practices.
Find out about new workflows needed for test development and execution, and process
changes for tracking and repairing defects. Discover how faster release schedules
can affect testing and the entire team. Using case studies—both successes
and failures—Bob discusses transition strategies and solutions for test and
development teams. Learn from these experiences and apply their lessons to the challenges
you may face as you enter the land of agile development.

Learn more about Bob Hartman

Dion Johnson, DiJohn Innovative Consulting, Inc.

Automating functional tests for highly dynamic applications
is a daunting task. Unfortunately, most testers rely on automation tools that produce
static test suites that are difficult and expensive to change. With complex automation
frameworks and expensive testing tools, it is no wonder that automated testing often
fails to live up to its promise. But there is another way that is simple and almost
free! By learning basic scripting language skills, you can begin immediately to
automate time-consuming, everyday testing tasks. Scripting saves valuable time doing
repetitive tasks so that you can focus on more important work. Using the Ruby scripting
language and Internet Explorer, you will practice scripted automation techniques
on an HTML application. These techniques address many of your test automation needs,
including dynamic data creation, automated input entry, and exception handling—all
of which can increase the coverage, maintainability, scalability, and robustness
of your tests. Participants should have scripting experience or knowledge of basic
programming control-flow statements and logic—if-then-else, for-next, etc.
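The tutorial itself uses Ruby driving Internet Explorer; as a rough sketch of the same ideas in Python (dynamic data creation, automated input entry, and exception handling), with a hypothetical stand-in function in place of the real application under test:

```python
import random
import string

def submit_form(username, age):
    """Stand-in for the system under test; a real script would drive
    the application (e.g. a browser) instead. Hypothetical example."""
    if not username:
        raise ValueError("username must not be empty")
    if not 0 <= age <= 130:
        raise ValueError("age out of range")
    return "ok"

def random_username(max_len=12):
    """Dynamic data creation: a fresh, random username on every run."""
    length = random.randint(0, max_len)
    return "".join(random.choices(string.ascii_lowercase, k=length))

def run_suite(iterations=20):
    """Automated input entry with exception handling: rejected inputs
    are recorded instead of stopping the whole run."""
    failures = []
    for _ in range(iterations):
        name = random_username()
        age = random.randint(-5, 140)   # deliberately includes invalid ages
        try:
            submit_form(name, age)
        except ValueError as exc:
            failures.append((name, age, str(exc)))
    return failures

if __name__ == "__main__":
    for name, age, msg in run_suite():
        print(f"input ({name!r}, {age}) rejected: {msg}")
```

A script this small already exercises the control-flow constructs the prerequisite mentions, and it can be rerun with fresh data every time.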

Laptop Required
Be sure to bring your Windows laptop with Internet Explorer and Excel. Because working in pairs is encouraged, feel free to bring a friend to share your PC.

Learn more about Dion Johnson

Lee Copeland, Software Quality Engineering

All testers know that we can create many more test cases than
we will ever have time to create and execute. The major problem in testing is choosing
a small, “smart” subset from the almost infinite number of possibilities
available. Join Lee Copeland to discover how to design test cases using formal black-box
techniques, including equivalence class and boundary value testing, decision tables,
state-transition diagrams, and all-pairs testing. Also, explore white-box techniques
and their associated coverage metrics. Evaluate more informal approaches, such as
random and hunch-based testing, and learn about the importance of exploratory testing
to enhance your testing ability. Choose the right test case documentation format
for your organization. Use the test execution results to continually improve your
test designs.
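As a minimal illustration of two of the black-box techniques named above, the sketch below derives boundary values and equivalence-class representatives for a field that accepts integers from 1 to 100 (the field and its range are hypothetical):

```python
def boundary_values(low, high):
    """Classic boundary value analysis: test just below, on, and just
    above each boundary of a valid [low, high] range."""
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

def equivalence_classes(low, high):
    """One representative per equivalence class: below the range
    (invalid), inside it (valid), and above it (invalid)."""
    return {
        "invalid_low": low - 10,
        "valid": (low + high) // 2,
        "invalid_high": high + 10,
    }

# A hypothetical input field that accepts whole numbers from 1 to 100.
print(boundary_values(1, 100))       # [0, 1, 2, 99, 100, 101]
print(equivalence_classes(1, 100))
```

Six boundary tests plus three class representatives replace a hundred exhaustive inputs, which is exactly the "small, smart subset" idea.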

Learn more about Lee Copeland

Elisabeth Hendrickson, Quality Tree Software, Inc.

When a development team adopts an agile process such as Scrum
or XP, testers find that their traditional practices no longer fit. The extensive
up-front test planning and heavyweight test documentation used in traditional development
environments just get in the way in an agile world. In this experiential workshop,
you experience the transition to agile through a paper-based simulation (no programming
required). In a series of iterations, the team attempts to deliver a product that
the customer is willing to buy, thus generating revenue for the company. As with
real projects, producing a working product on a tight schedule can be challenging.
After each iteration, your team reflects on key events and adjusts to increase productivity
for the next iteration. Learn to apply the principles of visibility, feedback, communication,
and collaboration to increase the team’s rate of delivery. By the end of the
workshop, you will have an intuitive understanding of agile and, in particular,
the shifting role of Test/QA in agile development.

Learn more about Elisabeth Hendrickson

Lloyd Roden, Grove Consultants

How can test managers present information about test results
so that the decision-makers receive the correct message? Testing generates a huge
amount of raw data, which must be analyzed, processed, summarized, and presented
to management so the best decisions can be made quickly. Using his experiences as
a test manager and consultant, Lloyd Roden shares ways to communicate with and disseminate
information to management. Develop your skills so you become a “trusted advisor”
to senior management rather than the “bearer of bad news.” Discover
innovative ways to keep the information flowing to and from management and avoid
losing control of the test process, particularly near the delivery date. Learn how
to deal effectively with various controversies that prevent senior managers from
taking us seriously.

Learn more about Lloyd Roden

Ruud Teunissen, POLTEQ IT Services BV

How do you estimate your test effort? And how reliable is that
estimate? Ruud Teunissen presents a practical and useful test estimation technique
directly related to the maturity of your test and development process. A reliable
effort estimation approach requires five basic elements: (1) Strategy – Determine
what to test (performance, functionality, etc.) and how thoroughly it must be tested.
(2) Size – Yes, it does matter—not only the size of the system but also
the scope of your tests. (3) Expected Quality – What factors have been established
to define quality? (4) Infrastructure and Tools – Define how fast you can
test. Without the proper organizational support and the necessary tools, you’ll
need more time. (5) Productivity – How experienced and efficient is your team?
While it’s fun to learn new techniques, time spent learning them is time not
spent finding defects.

Learn more about Ruud Teunissen

Rob Sabourin, AmiBug.com, Inc.

Designing test cases is a fundamental skill that all testers
should master. Rob Sabourin shares a graphical technique he has employed to design
powerful test cases that will surface important bugs quickly. These skills can be
used in exploratory, agile, or engineered contexts—anytime you are having
problems designing a test. Rob illustrates how you can use Mindmaps to visualize
test designs and better understand variables being tested, one-at-a-time and in
complex combinations with other variables. He presents the Application-Input-Memory
(AIM) heuristic through a series of interactive exercises. We’ll use a widely
available free, open-source tool called FreeMind to help implement great test cases
and focus our testing on what matters to quickly isolate critical bugs. If you are
new to testing, these techniques will remove some of the mystery of good test case
design. If you’re a veteran tester, these techniques will sharpen your skills
and give you some new test design approaches.

Laptop Optional
Participants are encouraged to bring a laptop computer to this session.

Learn more about Rob Sabourin

Rick Craig, Software Quality Engineering

To be most effective, test managers must develop and use metrics
to help direct the testing effort and make informed recommendations about the software’s
release readiness and associated risks. Because one important testing activity is
to “measure” the quality of the software, test managers must measure
the results of both the development and testing processes. Collecting, analyzing,
and using metrics is complicated because many developers and testers feel that the
metrics will be used “against them.” Rick Craig addresses common metrics:
measures of product quality, defect removal efficiency, defect density, defect arrival
rate, and testing status. Rick offers guidelines for developing a test measurement
program, rules of thumb for collecting data, and ways to avoid “metrics dysfunction.”
Various metrics paradigms, including Goal-Question-Metric, are addressed with a
discussion of the pros and cons of each. Participants are urged to bring their metrics
problems and issues for use as discussion points.
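Two of the metrics named above have standard textbook formulas; a small sketch with made-up numbers:

```python
def defect_removal_efficiency(found_in_test, found_after_release):
    """DRE: fraction of all known defects caught before release."""
    total = found_in_test + found_after_release
    return found_in_test / total if total else 1.0

def defect_density(defects, size_kloc):
    """Defects per thousand lines of code (KLOC)."""
    return defects / size_kloc

# Hypothetical release: 180 defects found in testing, 20 found after
# release, in a 50 KLOC system.
print(defect_removal_efficiency(180, 20))   # 0.9
print(defect_density(180 + 20, 50))         # 4.0
```

Note that DRE can only be computed in hindsight, once field defects have had time to surface, which is one reason such metrics need careful interpretation.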

Learn more about Rick Craig

Michael Bolton, DevelopSense

Testers live in a world of great complexity, scarce information,
and extraordinary time pressures. With all these challenges, really good testing
is less about confirming, verifying, and validating, and more about thinking, questioning,
exploring, investigating, and discovering. While technical testing skills are important,
you need better thinking skills to solve your biggest testing questions. Michael
Bolton teaches you the skills—questioning, critical thinking, context-driven
analysis, and general systems thinking—that can help you deal confidently
and thoughtfully with your testing challenges. In this interactive workshop, you
will examine common cognitive biases within testing and practice the thinking tools
you need to overcome them. You’ll learn to use modeling and general systems
approaches to manage complexity and see more clearly. Work with Michael and others
to explore your most difficult testing questions—and find innovative approaches
to answer them.

Laptop Encouraged
Participants are encouraged to bring a laptop computer to this session.

Learn more about Michael Bolton

Julie Gardiner, Grove Consultants

Risks are endemic in every phase of every project. One key to
project success is to identify, understand, and manage these risks effectively.
However, risk management is not the sole domain of the project manager, particularly
with regard to product quality. It is here that the effective tester can significantly
influence the project outcome. Julie Gardiner explains how risk-based testing can
shape the quality of the delivered product in spite of tight time constraints. Join
Julie as she reveals how you can apply product risk management to a variety of organizational,
technology, project, and skills challenges. Through interactive exercises, receive
practical advice on how to apply risk management techniques throughout the testing
lifecycle—from planning through execution and reporting. Take back a practical
process and the tools you need to apply risk analysis to testing in your organization.

Learn more about Julie Gardiner

Susan Herrick, EDS-Global Testing Practice

Organizations that develop software always profess absolute
commitment to product quality and customer satisfaction. At the same time, they
often believe that “all that testing isn’t really necessary.”
Test managers must be able to quantify the financial value of testing and substantiate
their claims with empirical data. Susan Herrick provides experienced test managers
with quantitative approaches to dispel the prevailing myths about the negative bottom-line
impact of testing, make a compelling business case for testing throughout the project
lifecycle, and provide decision-makers with information that allows them to make
fiscally responsible choices about test efforts. During a hands-on activity, you
will calculate, analyze, and substantiate answers to such questions as, “What
will it cost if we don’t test at all?” “Should we rely on the
system and acceptance testers to find all the defects?” “Can our experienced
developers test their own code?” and “Should experienced users perform
the acceptance testing?” Answer these and more questions with the numbers
at hand to back up your claims.
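One way to frame the "what will it cost if we don't test at all?" question is a cost-of-quality comparison; the sketch below is an illustrative assumption, not the tutorial's calculation tool, and every number in it is invented:

```python
def cost_of_skipping_testing(defects_expected, leak_rate_without_testing,
                             cost_to_fix_in_test, cost_to_fix_in_production):
    """Illustrative comparison: defects that leak to production cost far
    more to fix than defects caught in test. All inputs are assumptions
    a team would have to calibrate from its own data."""
    with_testing = defects_expected * cost_to_fix_in_test
    without_testing = (defects_expected * leak_rate_without_testing
                       * cost_to_fix_in_production)
    return with_testing, without_testing

# Hypothetical project: 500 expected defects, 80% reach production if
# untested; $100 to fix a defect in test vs. $1,000 in production.
in_test, in_prod = cost_of_skipping_testing(500, 0.8, 100, 1000)
print(in_test, in_prod)   # 50000 400000.0
```

Even this crude model makes the business case concrete: the argument shifts from "testing costs money" to comparing two costs.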

Laptop Required
To benefit fully from the hands-on activity, each participant should bring a laptop. All participants will receive a takeaway CD containing a calculation tool (with full instructions).

Learn more about Susan Herrick

Rob Sabourin, AmiBug.com, Inc.

If you think you have already explored all of the important
boundaries as part of your testing, this dynamic, interactive presentation will
open your eyes to some often-missed edges and offer you great techniques to expose
and explore them. You’ll dive into the rich universe of boundaries related
to systems behavior, environments, system limits, design limitations, and even eccentric
user behaviors. Rob Sabourin helps you learn to see and explore the final frontiers
of your software and look beyond the confines of common knowledge to see the aliens
and strange monsters lurking. In this hands-on workshop, you’ll participate
in a series of fun, interactive exercises and experience rich boundary examples
from Rob’s recent projects. Practice identifying and exercising the data conditions
that influence a system’s behavior and understand how critical values lead
to emergent behaviors, which can make or break software projects. In addition to
practicing traditional boundary value analysis and equivalence partitioning techniques,
you will learn about exploratory testing, failure mode analysis, and several stress
testing experiments you can perform.

Learn more about Rob Sabourin

Jon Bach, Quardev, Inc.

At testing conferences, many presentations mention techniques
and processes meant to help you find bugs, but few talk about what to do when you
actually find one. If it’s as simple as filing a report about what you saw,
how do you know that’s the real problem? What do you do when you file a bug,
but the developer wants you to give more information? How do you reproduce pesky,
intermittent bugs that come in from customers? Join Jon Bach in this hands-on tutorial
to help you practice investigation and analysis skills such as questioning, conjecturing,
branching, and backtracking that might help you unearth more context about the problems.
If you’re telling stories about the bug that got away, this tutorial gives
you the opportunity to try some techniques that may trap it so you can earn more
credibility, respect, and autonomy from your stakeholders.
Collaboration is encouraged during the session, so bring your tool suggestions,
tester’s notebook, and scientific mindset.

Learn more about Jon Bach

Martin Pol, POLTEQ IT Services BV

When outsourcing all or part of your testing efforts to a third-party
vendor, you need a special approach to make testing effective and controlled. Martin
Pol explains his roadmap to successful outsourcing and offers ways to define the
objectives, the strategy, and the scope (what tasks should be outsourced and what
tasks should not—at least not yet). He describes how to select your supplier
and how to migrate, implement, and cope with organizational issues. Martin discusses
contracts, service levels, and ways to monitor and control tasks. He focuses on
a technique for scoping the project, defining service levels, and establishing a
specific set of metrics. The good news for testers is that outsourcing requires
more testing—not less—and that new testing jobs are coming into existence.
Testing the outsourced work is becoming a very important control mechanism for outsourcing
in general.

Learn more about Martin Pol

Software Quality Engineering • 330 Corporate Way, Suite 300 • Orange Park, FL 32073
Phone: 904.278.0524 • Toll-free: 888.268.8770 • Fax: 904.278.4380 • Email: [email protected]
© 2008 Software Quality Engineering. All rights reserved.