STARWEST 2008 Concurrent Sessions
Rob Sabourin, AmiBug.com, Inc.
Traditional testing teams often agonize over exploratory testing.
How can they plan and design tests without detailed up-front documentation? Stubborn
testers may want to quit because they are being asked to move out of their comfort
zone. Can a team’s testing culture be changed? Rob Sabourin describes how
several teams have undergone dramatic shifts to embrace exploratory testing. Learn
how to blend cognitive thinking skills, subject matter expertise, and “hard-earned”
experience to help refocus your team and improve your outcomes. Learn
to separate bureaucracy from thinking and paperwork from value. Explore motivations
for change and resistance to it in different project contexts. Leverage Parkinson’s
Law—work expands to fill the time available—and Dijkstra’s Principle—testing
can show the presence of bugs, but not their absence—to inspire and motivate
you and your team to get comfortable in the world of exploratory testing.
Learn more about Rob Sabourin
Paul Anderson, GrammaTech
Static analysis tools for identifying defects have advanced
significantly in the last few years. Yet many misconceptions persist about
the capabilities and limits of these innovative tools—and sales propaganda
such as “100% path coverage” has not helped at all. Paul Anderson debunks
common myths and clarifies the strengths and limitations of static-analysis technology.
You’ll learn about the types of defects that these tools can catch and the
types they miss. Paul demystifies static analysis jargon, explaining terms such
as "object-sensitive" and "context-sensitive". Find out how
the FDA uses static analysis today to evaluate medical device software. Paul jump-starts
your understanding of static analysis so you can decide where to apply this technology
and have more knowledge and confidence in your interactions with tool vendors.
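To make the jargon concrete, here is a minimal, hypothetical Python sketch (not taken from Paul’s session) of why context sensitivity matters. A context-insensitive analysis merges all call sites and may flag a defect on a path that can never execute:

    # Hypothetical illustration of context sensitivity in static analysis.
    # Whether item.name can fault on None depends on the calling context.

    class Item:
        def __init__(self, name):
            self.name = name

    def describe(item, verbose):
        # A context-INSENSITIVE analysis merges all callers and may warn
        # that item.name dereferences None for every call site.
        if verbose:
            return item.name  # potential None dereference
        return "<item>"

    def caller_safe():
        # Passes a real object, so the dereference is safe here.
        return describe(Item("widget"), verbose=True)

    def caller_also_safe():
        # Passes None, but verbose=False means the field is never read.
        # A context-SENSITIVE analysis proves this call site clean.
        return describe(None, verbose=False)

    print(caller_safe())
    print(caller_also_safe())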
Learn more about Paul Anderson
Antony Marcano, testingReflections.com
Acceptance Test-Driven Development (ATDD), an application of
the test-first practice of XP and agile development, can add enormous value to agile
teams that are proficient in these practices. Moving from awareness of ATDD to
proficiency in its practice comes only after learning some important lessons.
First, no one group can “own” the process. Second, ATDD is first about
helping the customer and the team understand the problem; then it is about testing.
Third, writing automated acceptance tests in ATDD is not the same as writing automated
tests with typical automation tools. Antony Marcano shares his experiences with
ATDD—the good, the bad, and the ugly—and the many other lessons he’s
learned in the process. Discover the benefits and pitfalls of ATDD and take advantage
of Antony’s experiences to avoid the common mistakes teams make
on the journey to becoming proficient practitioners.
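As one concrete illustration (a minimal, hypothetical sketch, not an example from Antony’s talk), an ATDD acceptance test is written first, in the customer’s vocabulary, and the code is then written to make it pass:

    # Hypothetical business rule and test names; the acceptance tests
    # below are written first and drive the implementation.
    import unittest

    def loyalty_discount(order_total, is_member):
        """Members get 10% off orders of 100.00 or more."""
        if is_member and order_total >= 100.00:
            return round(order_total * 0.10, 2)
        return 0.00

    class LoyaltyDiscountAcceptance(unittest.TestCase):
        # Each test states an example the customer agreed to.
        def test_member_with_large_order_gets_ten_percent(self):
            self.assertEqual(loyalty_discount(120.00, is_member=True), 12.00)

        def test_member_with_small_order_gets_nothing(self):
            self.assertEqual(loyalty_discount(99.99, is_member=True), 0.00)

        def test_non_member_gets_nothing(self):
            self.assertEqual(loyalty_discount(500.00, is_member=False), 0.00)

    if __name__ == "__main__":
        unittest.main()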
Learn more about Antony Marcano
Paco Hope, Cigital
Although we all want to test our applications for security,
our plates are already full with functional tests. What if we could automate those
security tests? Fortunately, most Web-based and desktop applications submit readily
to automated testing. Paco Hope explores two flexible, powerful, and totally free
tools that can help automate security tests. cUrl is a free program that issues
Web requests from scripts and the command line; Perl is a well-known programming language ideally
suited for writing test scripts. Paco demonstrates the basics of automating tests
using both tools and then explores some of the more complicated concerns that arise
during automation—authentication, session state, and parsing responses. He
then illustrates simulated malicious inputs and the resulting outputs that show
whether the software has embedded security problems. The techniques demonstrated
in this session apply equally well to all Web platforms and all desktop operating
systems. You’ll leave with an understanding of the basics and a long list of resources
you can reference to learn more about Web security test automation.
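The session demonstrates cUrl and Perl; the sketch below shows the same idea using only Python’s standard library, with a placeholder URL, parameter name, and error signatures standing in for a real application:

    # Hypothetical target and oracles; point these at your own test system.
    import urllib.error
    import urllib.parse
    import urllib.request

    BASE_URL = "http://testserver.example.com/search"  # placeholder

    # Classic malicious inputs: SQL injection and cross-site scripting.
    ATTACKS = ["' OR '1'='1", "<script>alert(1)</script>"]

    def probe(payload):
        # Send the payload as a query parameter and capture the response,
        # including error pages, which often reveal the most.
        url = BASE_URL + "?" + urllib.parse.urlencode({"q": payload})
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                body = resp.read().decode("utf-8", errors="replace")
        except urllib.error.HTTPError as err:
            body = err.read().decode("utf-8", errors="replace")
        # Crude oracles: an echoed script tag suggests XSS; a database
        # error message suggests the input reached a SQL statement.
        if "<script>alert(1)</script>" in body:
            print("possible XSS with payload:", payload)
        if "SQL syntax" in body or "ODBC" in body:
            print("possible SQL injection with payload:", payload)

    for attack in ATTACKS:
        probe(attack)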
Learn more about Paco Hope
Justin Callison, Luxoft Canada
Database locking is a complicated technical issue for some testers.
Although we often think that this issue belongs in the realm of the developer and
the DBA—“It’s not my problem”—database locking is
the enemy of functional and performance testers. As Justin Callison can personally
attest, locking defects have led to many disasters in production systems. However,
there is hope! Justin sheds light on the problem of database locking, how it varies
among different platforms, and the application issues that can arise. Armed with
a new understanding of database locking, you can develop effective testing strategies.
Join in and learn about these strategies: designing explicit locking tests, ensuring
appropriate test data, implementing sufficient monitoring, and combining manual
with automated testing to avoid disaster.
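To give a flavor of the first strategy, here is a minimal sketch of an explicit locking test using SQLite from the Python standard library; locking specifics vary considerably across database platforms:

    # Two connections to the same on-disk SQLite database contend for a
    # write lock; the test asserts the conflict surfaces as an error
    # rather than a hang.
    import os
    import sqlite3
    import tempfile

    path = os.path.join(tempfile.mkdtemp(), "locktest.db")

    setup = sqlite3.connect(path)
    setup.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
    setup.execute("INSERT INTO accounts VALUES (1, 100)")
    setup.commit()
    setup.close()

    # Writer 1 opens a transaction and holds its write lock open.
    writer1 = sqlite3.connect(path, timeout=1, isolation_level=None)
    writer1.execute("BEGIN IMMEDIATE")
    writer1.execute("UPDATE accounts SET balance = balance - 10 WHERE id = 1")

    # Writer 2 attempts a conflicting write and should hit the lock.
    writer2 = sqlite3.connect(path, timeout=1, isolation_level=None)
    try:
        writer2.execute("UPDATE accounts SET balance = balance + 10 WHERE id = 1")
        print("unexpected: no lock contention observed")
    except sqlite3.OperationalError as err:
        print("expected lock contention:", err)  # "database is locked"
    finally:
        writer1.execute("ROLLBACK")
        writer1.close()
        writer2.close()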
Learn more about Justin Callison
Lee Copeland, Software Quality Engineering
As testers, we focus our efforts on measuring the quality of
our organization’s products. We count defects and list them by severity; we
compute defect density; we examine the changes in those metrics over time for trends;
and we chart customer satisfaction. While these are important, Lee Copeland suggests
that to reach a higher level of testing maturity, we must apply similar measurements
to ourselves. He suggests you count the number of defects in your own test cases
and the length of time needed to find and fix them; compute test coverage—the
measure of how much of the software you have actually exercised under test conditions—and
determine Defect Removal Effectiveness—the ratio of the number of defects
you actually found to the total number you should have found. These and
other metrics will help you evaluate and then improve the effectiveness and efficiency
of your testing process.
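For example, with hypothetical defect counts, Defect Removal Effectiveness works out as follows:

    # Hypothetical counts: 90 defects found in test, 10 escapes
    # reported after release.
    def defect_removal_effectiveness(found_in_test, found_after_release):
        # "Should have found" = defects caught in test plus escapes.
        total = found_in_test + found_after_release
        return found_in_test / total if total else 0.0

    dre = defect_removal_effectiveness(90, 10)
    print(f"DRE = 90 / (90 + 10) = {dre:.0%}")  # DRE = 90%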
Learn more about Lee Copeland
Fiona Charles, Quality Intelligence, Inc.
Scenario-based testing is a powerful method for finding problems
that really matter to users and other stakeholders. By including scenario tests
representing actual sequences of transactions and events, you can uncover the hidden
bugs often missed by other functional testing. Designing scenarios requires you
to use your imagination to create narratives that play out through systems from
various points of view. Basing scenarios on a structured analysis of the data provides
a solid foundation for a scenario model. Good scenario design demands that you combine
details of business process, data flows—including their frequency and variations—and
clear data entry and verification points. Fiona Charles describes a framework for
modeling scenario-based tests and designing structured scenarios according to these
principles. Fiona works through a real-life project example, showing how she applied
this framework to design tests that found hundreds of bugs in a system—after
the company had already completed its testing and delivered the system into
acceptance.
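Fiona’s framework is richer than any snippet can show, but as a hypothetical sketch in Python, a scenario can be modeled as an ordered sequence of transactions, each pairing a data entry action with a verification point:

    # Invented class names; a scenario is an ordered list of steps,
    # each with an action (data entry) and a verification point.
    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Step:
        description: str
        action: Callable[[dict], None]
        verify: Callable[[dict], bool]

    @dataclass
    class Scenario:
        name: str
        steps: list = field(default_factory=list)

        def run(self):
            state = {}  # stands in for the system under test
            for step in self.steps:
                step.action(state)
                status = "ok" if step.verify(state) else "FAIL"
                print(f"[{status}] {self.name}: {step.description}")

    # A toy narrative: a customer opens an account, then deposits funds.
    scenario = Scenario("new customer lifecycle", [
        Step("open account with zero balance",
             action=lambda s: s.update(balance=0),
             verify=lambda s: s["balance"] == 0),
        Step("deposit 250",
             action=lambda s: s.update(balance=s["balance"] + 250),
             verify=lambda s: s["balance"] == 250),
    ])

    scenario.run()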
Learn more about Fiona Charles
Gojko Adzic
FitNesse is an open-source test automation tool that enables
business users, developers, and testers to cooperate on agile acceptance testing.
FitNesse allows them to build a shared understanding of system requirements that
ultimately produces software that is genuinely fit for its purpose. Gojko Adzic
presents an introduction to agile acceptance testing. He discusses when to use FitNesse,
when not to use it, and how to start writing acceptance tests with this free tool.
Gojko explains how to make the most of automated acceptance tests by focusing on
business rules, how to overcome workflow constraints, and how to avoid common testing
pitfalls. He describes features specific to the .NET FitNesse test runner, including
cell handlers and embedded symbols, that allow you to save time and effort in writing
and maintaining tests. Join in to see if FitNesse fits into your .NET testing world.
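In FitNesse, expectations live in wiki tables that fixture classes bind to the system under test; for the .NET runner those fixtures are typically written in C#. The hypothetical Python sketch below mimics how a column-style fixture evaluates such a table:

    # Hypothetical rule and table; real .NET fixtures are C# classes
    # that the FitNesse test runner binds to wiki tables like this one:
    #   |weight kg|shipping cost?|
    #   |0        |5.00          |
    #   |1        |7.00          |
    #   |2.5      |10.00         |
    table = [(0, 5.00), (1, 7.00), (2.5, 10.00)]

    def shipping_cost(weight_kg):
        # System-under-test logic the table exercises: a 5.00 base
        # charge plus 2.00 per kilogram.
        return round(5.00 + 2.00 * weight_kg, 2)

    for weight, expected in table:
        actual = shipping_cost(weight)
        verdict = "right" if actual == expected else f"wrong, got {actual}"
        print(f"{weight} kg -> expected {expected}: {verdict}")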
Learn more about Gojko Adzic
Danny Allan, IBM Rational
Software quality is a priority for most organizations, yet many
are still struggling to handle the volume of testing. Unfortunately, applications
are frequently released with significant security risks. Many organizations rely
on an overburdened security team to test applications late in development when fixes
are the most costly, while others are throwing complex tools at test teams expecting
the testers to master security testing with no formal processes or training. Danny
Allan describes five steps to integrate security testing into the software development
lifecycle. Danny shows how highly secure and compliant software applications begin
with security requirements and include design, development, build, quality assurance,
and transitional practices. He describes some of the most common application security
vulnerabilities, techniques to address these issues, and methods to safeguard sensitive
online information from the bad guys.
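As a concrete taste of one of the most common vulnerabilities, here is a hypothetical before-and-after in Python, using SQLite only to keep the example self-contained:

    # Hypothetical schema and input; the attack string is a classic
    # SQL injection probe.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    attacker_input = "nobody' OR '1'='1"

    # VULNERABLE: string concatenation lets input rewrite the query.
    unsafe = "SELECT secret FROM users WHERE name = '" + attacker_input + "'"
    print("unsafe query leaks:", conn.execute(unsafe).fetchall())

    # SAFE: a bound parameter treats the input strictly as data.
    safe = "SELECT secret FROM users WHERE name = ?"
    print("parameterized query returns:",
          conn.execute(safe, (attacker_input,)).fetchall())

    conn.close()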
Learn more about Danny Allan
Wayne Hom, Augmentum
Mobile device manufacturers face many challenges bringing quality
products to market. Most testing methodologies were created for data processing,
client/server, and Web products. As a result, they often fail to address key areas of
concern for mobile applications—usability, security, and stability. Wayne
Hom discusses approaches you can use to transform requirements into usability guides
and use cases into test cases to ensure maximum test coverage. He discusses automation
frameworks that support multiple platforms to reduce test cycle times and increase
test coverage, while measuring and reporting at the different phases of the software
lifecycle. Wayne presents case studies to illustrate how to reduce test cycles by
up to 75 percent. He demonstrates solutions that have helped providers of third-party
applications and services manage testing cycles for multiple mobile device releases.
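One common way such multi-platform frameworks are structured (sketched hypothetically below, with invented class and driver names) is a single test script running against a per-platform driver interface:

    # Invented class and method names throughout; real frameworks talk
    # to devices or emulators instead of printing.
    from abc import ABC, abstractmethod

    class DeviceDriver(ABC):
        """What every platform-specific driver must provide."""
        @abstractmethod
        def launch(self, app): ...
        @abstractmethod
        def tap(self, control): ...
        @abstractmethod
        def read_text(self, control): ...

    class FakeSymbianDriver(DeviceDriver):
        def launch(self, app): print(f"[symbian] launch {app}")
        def tap(self, control): print(f"[symbian] tap {control}")
        def read_text(self, control): return "Welcome"

    class FakeWindowsMobileDriver(DeviceDriver):
        def launch(self, app): print(f"[winmo] launch {app}")
        def tap(self, control): print(f"[winmo] tap {control}")
        def read_text(self, control): return "Welcome"

    def smoke_test(driver):
        # The same test case runs unchanged on every platform, which is
        # how a shared script shortens multi-device test cycles.
        driver.launch("mailclient")
        driver.tap("inbox")
        assert driver.read_text("banner") == "Welcome"

    for drv in (FakeSymbianDriver(), FakeWindowsMobileDriver()):
        smoke_test(drv)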
Learn more about Wayne Hom