STARWEST Software Testing Analysis & Review
 
STARWEST 2010 Concurrent Sessions


Concurrent Sessions for Wednesday, September 29, 2010 11:30 a.m. 
W1  
Take a Chance—Testing Lessons Learned from the Game of MONOPOLY®
Rob Sabourin, AmiBug.com
 
For years, MONOPOLY® has entertained countless people with the fictional thrill of what it might be like to make a killing in real estate—or to lose your shirt. As Rob Sabourin explains, the board game is similar to the real-world experience of running a software test project. Rob guides you through some of MONOPOLY’s powerful lessons and strategies relating to test planning, risk management, technical debt, context-driven test strategies, contingencies, and decision making. In MONOPOLY, winning players consistently select, adapt, and apply strategies. Skilled testers adapt on the fly to their discoveries, applying heuristics and risk models to consistently deliver value. Winning at MONOPOLY, just like successful testing, is all about people: relationships, negotiation, and communication. To succeed in testing or MONOPOLY, you’ve got to be ready for whatever drawbacks or opportunities Chance happens to throw your way. Before you roll those real-world dice on your next testing effort, let Rob teach you the strategies for winning—with wit, wisdom, and entertainment drawn from that wonderful childhood game! [Since 1972, Rob has collected more than one hundred different sets, including more international versions of Monopoly than either Hasbro (USA) or Waddington (UK) acknowledges exist.]
Learn more about Rob Sabourin  
 
W2  
Alternative Testing: Do We Have to Test Like We Always Have?
Julian Harty
 
Are the “old ways” always the “best ways” to test? Julian Harty shares his thought-provoking ideas on when traditional testing is—and is not—appropriate and poses alternatives for us to consider. For example, what might happen if we choose not to test a product at all? Perhaps the benefits of earlier delivery would outweigh the cost and delay that testing imposes. If a key goal of testing is to provide answers to quality-related questions about a product, are there alternative information sources for answers—say, from live experiments in production? How do you know whether your testing approach is really efficient and effective, especially if you already consider yourself a testing expert? Can your testing knowledge and experience blind you to alternative strategies? One option is to put yourself to the test. For instance, you could more objectively evaluate your skills by working on a crowd-sourced test project. Come, listen, join in, and leave invigorated with a fresh perspective on how you can become a better, more aware, and more astute tester.  
Learn more about Julian Harty  
 
W3  
Building a Successful Test Automation Strategy
Karen Rosengren, IBM
 
You have been told repeatedly that test automation is a good thing and that you need to automate your testing. So, how do you know where to start? If you have started and your efforts don’t seem to be paying back your investment, what should you do? Although you believe automation is a good thing, how can you convince your management? Karen Rosengren takes you through a set of practical and proven steps to build a customized test automation strategy based on your organization’s needs. She focuses on the real problem you are trying to solve—repetitive manual test effort that can be significantly reduced through automation. Using concrete examples, Karen shows you how to develop a strategy for automation that addresses real—not theoretical—savings. She shares how she has demonstrated the business value of automation to executives and gained both buy-in and the necessary budget to be successful. If you need to get your test automation started the right way or back on track, this session is perfect for you.   
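To make the payback argument concrete, consider a minimal break-even sketch; the model and all figures below are illustrative assumptions, not material from the session:

    # Break-even model for automation payback; all figures are
    # illustrative assumptions, not numbers from the session.
    def break_even_runs(build_hrs, upkeep_hrs_per_run,
                        manual_hrs_per_run, automated_hrs_per_run):
        """Test runs needed before automation pays for itself."""
        savings_per_run = manual_hrs_per_run - (automated_hrs_per_run +
                                                upkeep_hrs_per_run)
        if savings_per_run <= 0:
            return None  # automation never pays back on effort alone
        return build_hrs / savings_per_run

    # Example: 40 hours to automate a suite that takes 8 hours manually,
    # 0.5 hours automated plus 1 hour of upkeep per run.
    print(break_even_runs(40, 1.0, 8.0, 0.5))  # ~6.2 runs to break even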
Learn more about Karen Rosengren  
 
W4  
Transform Your Lifecycle—Virtualize the Test Lab
Theresa Lanowitz, voke
 
Every tester has heard “it works on my machine” from a developer, referring to a defect deemed to be non-reproducible. We all know the back-and-forth conversations and have yearned for ways to easily replicate test environment failures in the development environment. Test organizations often struggle with access to test environments that closely match production, while the operations department struggles to keep up with the demand for provisioned environments. Virtual lab technology can solve these frequent, tedious, and expensive problems, delivering immediate productivity and return on investment. By shattering barriers between development, testing, and operations, virtual lab technology is transformational and promises to be the hub of the modern application lifecycle. Theresa Lanowitz shares the results of the “voke Market Snapshot” report on virtual lab management. This groundbreaking research is relevant, current, and something all testers and test managers need to know. Learn how to leverage virtual labs in your test organization while eliminating the age-old developer-tester contention that “it works on my machine.”
Learn more about Theresa Lanowitz  
 

W5
Exploratory Testing of Mobile Applications
Jonathan Kohl, Kohl Concepts, Inc.
 
Exploratory testing—the process of simultaneous test design, execution, and learning—is a popular approach to testing traditional application software. Can you apply this approach to testing mobile applications? At first, it is tempting to merely employ the same methods and techniques that you would use with other software applications. Although some concepts transfer directly, testing mobile applications presents special challenges you must consider and address. Jonathan Kohl shares his experiences with testing mobile apps, including how smaller screens and unique input methods can cause physical strain on testers and slow down the testing effort. Limited memory and processing power on the device mean that tests often interfere with the application’s normal operation. Network and connectivity issues can cause unexpected errors that crash mobile apps and leave testers scratching their heads. Join Jonathan and learn his tricks of the trade for testing mobile applications, so you can start your mobile testing project with confidence.
 Learn more about Jonathan Kohl  
 
W6  
Tour-based Testing: The Hacker's Landmark Tour
Rafal Los, Hewlett-Packard
 
When visiting a new city, people often take an organized tour, going from landmark to landmark to get an overview of the town. Taking a “tour” of an application, going from function to function, is a good way to break down the testing effort into manageable chunks. Not only is this approach useful in functional testing, it’s also effective for security testing. Rafal Los takes you inside the hacker’s world, identifying the landmarks hackers target within applications and showing you how to identify the defects they seek out. Learn what “landmarks” are, how to identify them from functional specifications, and how to tailor negative testing strategies to different landmark categories. Test teams, already choked for time and resources and now saddled with security testing, will learn how to pinpoint the defect—from the mountains of vulnerabilities often uncovered in security testing—that could compromise the entire application.   
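As a rough illustration of the tour idea, a landmark tour might pair each stop with candidate negative tests; the categories and probes below are invented for this program, not Rafal's actual taxonomy:

    # Hypothetical landmark categories paired with candidate negative
    # tests; illustrative only, not the taxonomy from the session.
    landmark_tour = {
        "login form":  ["' OR '1'='1 in the username field",
                        "10,000-character password",
                        "rapid repeated failures to probe lockout"],
        "search box":  ["<script>alert(1)</script> for reflected XSS",
                        "wildcard-heavy queries to stress the backend"],
        "file upload": ["executable renamed to .jpg",
                        "zero-byte and oversized files"],
        "error pages": ["forced errors to check for leaked stack traces"],
    }

    for landmark, probes in landmark_tour.items():
        print(landmark)
        for probe in probes:
            print("  -", probe)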
Learn more about Rafal Los  

Concurrent Sessions for Wednesday, September 29, 2010 1:45 p.m.
W7  
Focusing with Clear Test Objectives
Sharon Robson, Software Education
 
Frustrated with your team’s testing results—sometimes great, sometimes lacking? Do you consistently overpromise and underdeliver? If these situations sound familiar, you may be suffering from the ills of UCT (Unclear Test Objectives). Clearly defining test objectives is vital to your project’s success; it’s also seriously hard to get right. Test objectives are often driven by habit—“Let’s copy and paste the last set of objectives”; by lack of understanding—“Let’s use whatever the requirements say”; or by outside forces—“Let’s just do what the user wants.” Sharon Robson shares the structured approach she uses to define test objectives, including key test drivers, approaches, processes, test levels, test types, focus, techniques, teams, environments, and tools. Sharon illustrates how to measure, evaluate, compare, and balance these often conflicting factors to ensure that you have the right objectives for your test project.
Learn more about Sharon Robson  
 
W8  
Reducing the Testing Cycle Time through Process Analysis
John Ruberto, Intuit, Inc.
 

Because system testing is usually what lies between development and release to the customer—and hopefully more business value or profit—every test team is asked to test faster and finish sooner. Reducing test duration can be especially difficult because many of the factors that drive the test schedule are beyond our control. John Ruberto tells the story of how his team cut the system test cycle time from twelve weeks down to four. John shares how they first analyzed the overall test process to create a model of the test duration. This model decomposed the test schedule into six factors: test cycles, number of tests, defects, the rates at which tests were executed and defects handled, tester skills, and the number of testers. By decomposing the test cycle into these variables, the team identified six smaller—and thus easier—problems to solve. Join John as he describes the dozen new techniques and approaches his team ultimately implemented to dramatically shorten their test cycle.
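A back-of-the-envelope version of such a duration model, a generic sketch with invented numbers rather than John's actual model, might look like this:

    # Generic sketch of a test-cycle duration model; the variables mirror
    # the factors above, but the formula and figures are assumptions.
    def cycle_weeks(num_tests, tests_per_tester_week, testers,
                    expected_defects, defects_handled_per_week, cycles=1):
        execution = num_tests / (tests_per_tester_week * testers)
        defect_work = expected_defects / defects_handled_per_week
        return cycles * (execution + defect_work)

    # Numbers chosen to echo the twelve-weeks-to-four story:
    print(cycle_weeks(2000, 50, 5, 300, 75))    # 12.0 weeks
    print(cycle_weeks(1200, 80, 5, 300, 300))   # 4.0 weeks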

 
Learn more about John Ruberto  
 
W9  
Automating Test Design in Agile Development Environments
Antti Huima, Conformiq
 
How does model-based automated test design (ATD) fit with agile methods and developer test-driven development (TDD)? The answer is “Superbly!”, and Antti Huima explains why and how. Because ATD and TDD both focus on responsive processes, handling evolving requirements, emergent designs, and higher product quality, their goals are strongly aligned. Whereas TDD tests focus on the unit level, ATD works at higher test levels, supporting and enhancing product quality and speeding up development. ATD dramatically reduces the time it takes to design tests within an iterative agile process and makes tests available faster, especially as development proceeds through multiple iterations. Antti shatters the common misconception that model-based methods are rigid and formal and cannot be employed in the rapid, fluid setting of agile environments. Drawing on theoretical knowledge, research, and practical project experience within his company, Antti presents a compelling case for ATD in agile and explains the main guideposts for a successful implementation.  
Learn more about Antti Huima  
 

W10
Using the Amazon Cloud to Accelerate Testing
Randy Hayes, Capacity Calibration, Inc.
 
Virtualization technologies have been a great boon to test labs everywhere. With the Amazon Elastic Compute Cloud (EC2), these same benefits are available to everyone—without the need to purchase and maintain your own hardware. Once you master the tricks and tools of this new technology, you too can instantly have limitless capacity at your disposal. Randy Hayes demonstrates how to use the AWS Management Console to create virtual test machines from Amazon Machine Images (AMIs), use S3 storage services, handle Elastic IP addresses, and leverage these services for functional testing, load testing, defect tracking, and other common testing functions. Randy explains the Amazon Virtual Private Cloud, which allows EC2 cloud instances to be configured to run inside your firewall to test inward-facing applications. Gain access to a pre-configured AMI with open source testing tools and other utilities for quickly migrating your test lab to EC2.
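For a sense of the mechanics, here is a minimal sketch using boto3, an AWS SDK for Python that postdates this session's tooling; the AMI ID, key pair, and instance type are placeholders, and AWS credentials are assumed to be configured:

    # Spin up a disposable EC2 test machine. ImageId, KeyName, and
    # InstanceType below are placeholders, not values from the session.
    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")
    instance = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical test-lab image
        InstanceType="t3.medium",
        KeyName="test-lab-key",           # hypothetical key pair
        MinCount=1, MaxCount=1,
    )[0]
    instance.wait_until_running()
    instance.reload()
    print("Test box ready at", instance.public_ip_address)

    # ...run the test suite against it, then release the capacity:
    instance.terminate()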
Learn more about Randy Hayes  
 

W11
Automating Embedded System Testing
William Coleman, LogiGear
 
Many testers believe the challenges of automating embedded and mobile phone-based systems testing are prohibitively difficult. By approaching the problem from a test design perspective and using that design to drive the automation initiative, William Coleman demystifies automated testing of embedded systems. He draws on experiences gained on a large-scale testing project for a leading smartphone platform and a Windows CE embedded automotive testing platform. William describes the technical side of the solution—how to set up a tethered automation agent to expose the GUI and drive tests at the device layer. Learn how to couple this technology solution with a test design methodology that helps even non-technical testers participate in the automation development and execution. Take back a new approach to achieve large-scale automation coverage that is easily maintainable over the long term.
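The tethered-agent arrangement can be pictured as a host-side driver exchanging commands with an agent running on the device; the wire protocol, address, and commands in this sketch are invented for illustration and are not the session's implementation:

    # Host-side driver for a hypothetical on-device automation agent.
    # The protocol (JSON over TCP), port, and commands are invented.
    import json
    import socket

    class DeviceAgent:
        def __init__(self, host="192.168.0.50", port=5555):
            self.sock = socket.create_connection((host, port), timeout=10)
            self.reader = self.sock.makefile("r")

        def send(self, command, **params):
            msg = json.dumps({"cmd": command, "params": params}) + "\n"
            self.sock.sendall(msg.encode("utf-8"))
            return json.loads(self.reader.readline())

    # Keyword-style steps a non-technical tester might author:
    # agent = DeviceAgent()
    # agent.send("tap", element="Settings")
    # assert agent.send("get_text", element="Title")["text"] == "Settings"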
Learn more about William Coleman  
 

W12
Model-based Testing: The Key to Testing Industrialization
Bruno Legeard, Smartesting
 
Customers who want “more, faster, cheaper” put pressure on the development schedule, usually leaving less time for testing. The solution is to parallelize testing and development so that they proceed together. But how, especially when the requirements and software are constantly changing? Model-based testing (MBT) distills the testing effort down to the essential business processes and requirements, capitalizing on abstractions to reduce the costs of change and improve test data management. MBT facilitates a continuous and systematic transformation from business requirements to an automated or manual test repository. MBT permits re-use of the same test design for both integration testing—end-to-end and system-to-system—and functional testing—system, acceptance, and regression. Bruno Legeard discusses how to achieve key test goals by employing MBT: automating test generation, accelerating test automation, improving manual test effectiveness, and maintaining an organized test repository. The bottom line is that model-based testing is a sure way to align your testing goals and your testware repository with business needs.  
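In miniature, the transformation from model to tests looks something like the sketch below; the toy login model is invented for illustration, and real MBT tools work from far richer models:

    # Toy model-based testing sketch: behavior as a state machine, tests
    # generated for all-transitions coverage. The model is invented.
    transitions = {
        ("logged_out", "enter_valid_credentials"): "logged_in",
        ("logged_out", "enter_bad_credentials"):   "logged_out",
        ("logged_in",  "open_account_page"):       "account",
        ("logged_in",  "log_out"):                 "logged_out",
        ("account",    "log_out"):                 "logged_out",
    }

    def generate_tests():
        """One check per transition (reachability of the source state
        is assumed; real generators compute full paths)."""
        return [f"in state {s}: do '{a}', expect {transitions[(s, a)]}"
                for (s, a) in transitions]

    for test in generate_tests():
        print(test)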
Learn more about Bruno Legeard  

Concurrent Sessions for Wednesday, September 29, 2010 3:00 p.m.

W13
Quality Metrics for Testers: Evaluating Our Products, Evaluating Ourselves
Lee Copeland, Software Quality Engineering
 
Most businesses now realize that a final system testing “phase” in the project cannot be used as the catch-all for software quality problems. Many organizations are changing development methodologies or creating organization-wide initiatives that drive quality techniques into all aspects of development. So, how do you know that a quality initiative is working or where the most improvement effort is needed? Lee Copeland shares examples of quality improvement programs he has observed and illustrates how they use defect data from various test phases to guide their efforts. See how measurements of defect leakage help these organizations gauge the efficiency and effectiveness of all development activities. Lee identifies key “quick hit” recommendations for defect containment, including the use of static testing, traceability, and more. Take back new tools for understanding where your organization’s software quality is today and where it needs to be in the future.
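Defect leakage itself is simple arithmetic; the sketch below shows the common textbook form of the containment measure, with invented phase counts:

    # Defect containment/leakage in its common textbook form; the phase
    # counts are invented for illustration.
    found_in_phase = {"review": 40, "unit test": 55, "system test": 70}
    escaped_to_field = 15  # defects reported after release

    total = sum(found_in_phase.values()) + escaped_to_field
    containment = (total - escaped_to_field) / total
    print(f"Overall defect containment: {containment:.0%}")  # 92%

    # Leakage per phase: how much each phase lets through to later ones.
    remaining = total
    for phase, found in found_in_phase.items():
        leaked = (remaining - found) / remaining
        print(f"{phase}: caught {found}, leaked {leaked:.0%} onward")
        remaining -= found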
Learn more about Lee Copeland  
 

W14
Streamlining the Developer-Tester Workflow
Chris Menegay, Notion Solutions, Inc.
 
The islands that many development and test teams live on seem far apart at times. Developers become frustrated by defect reports with insufficient data to reproduce a problem; testers are constantly retesting the same application and having to reopen "fixed" defects. Chris Menegay focuses on two areas for improvement that will help both teams: a better build process to deliver a more testable application to the test team and a defect reporting process that delivers better defect data back to the developers. From the build perspective, he explores ways for the development team to identify which requirements are completed, which defects were fixed, and how to guide testers on which test cases to execute. Chris details the components of a good defect report, illustrating ways for testers to provide accurate reproduction steps, demonstrating video capture tools, examining valuable log files, and discussing test environment issues. Join this session to help bring together development and test efforts with new practices that facilitate a natural and productive sharing of information.  
Learn more about Chris Menegay  
 

W15
State-driven Testing: An Innovation in UI Test Automation
Dietmar Strasser, Borland (a Micro Focus company)
 
Keyword-driven testing is an accepted UI test automation technique used by mature organizations to overcome the disadvantages of record/playback test automation. Unfortunately, keyword-driven testing has drawbacks in terms of maintenance and complexity because applications can easily require thousands of automation keywords. Navigating and constructing test cases based on so many keywords is extremely cumbersome and can be impractical. Join Dietmar Strasser to learn how state-driven testing employs an application state-transition model as its basis for UI testing to address the disadvantages of keyword-driven testing. By defining the states and transitions of UI objects in a model, you can minimize the set of allowed UI actions at a specific point in a test script, requiring fewer keywords. With a test case editor built on this concept, technical and non-technical testers—and anyone else on the team—can develop maintainable and stable test cases in minutes rather than hours or days.
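The core idea can be sketched in a few lines: a model maps each UI state to its legal actions, so an editor can offer only those actions and a validator can reject impossible scripts. The states and actions below are invented for illustration:

    # Invented UI state-transition model; each state lists its legal
    # actions and the state each action leads to.
    ui_model = {
        "LoginScreen": {"submit_credentials": "Dashboard",
                        "forgot_password": "ResetScreen"},
        "Dashboard":   {"open_settings": "Settings",
                        "log_out": "LoginScreen"},
        "Settings":    {"save": "Dashboard", "cancel": "Dashboard"},
        "ResetScreen": {"send_reset_email": "LoginScreen"},
    }

    def allowed_actions(state):
        """What a test-case editor would offer at this point."""
        return sorted(ui_model[state])

    def validate_script(state, actions):
        """Reject a script that uses an action illegal in its state."""
        for action in actions:
            if action not in ui_model[state]:
                raise ValueError(f"'{action}' is illegal in {state}")
            state = ui_model[state][action]
        return state

    print(allowed_actions("Dashboard"))  # ['log_out', 'open_settings']
    print(validate_script("LoginScreen",
                          ["submit_credentials", "open_settings", "save"]))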
Learn more about Dietmar Strasser  
 

W16
Testing with Virtual Machines: Past, Present, and Future
Roussi Roussev, VMware
 
In the past several years, virtualization has dramatically improved tester productivity. A virtual machine is a useful abstraction for encapsulating the entire software stack. Roussi Roussev presents proven techniques that no modern test environment is complete without. Running multiple virtual machines on a single host maximizes hardware resource utilization and reduces operating costs. Strong isolation facilitates building security testing and multi-tenant environments. With the help of snapshots, virtual machines can quickly travel in time and space. Virtual hardware makes simulating machine, cluster, or entire datacenter failure scenarios much easier. Deterministic record/replay helps track down hard-to-reproduce bugs, and comparing outputs makes it possible to measure the impact of small configuration and binary changes. Discover how managers should rethink testing in virtualized environments and address the challenges that come with them. Further, Roussi discusses recent developments that have the potential to change the way we test software.
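One everyday instance of the snapshot technique is reverting a VM to a known-clean state before each test run. The sketch below wraps VMware's vmrun command-line tool; the VM path and snapshot name are placeholders, and vmrun's exact flags vary by VMware product:

    # Revert to a clean snapshot before every test run. VMX path and
    # snapshot name are placeholders; check vmrun's flags for your
    # VMware product.
    import subprocess

    VMX = "/vms/test-env/test-env.vmx"
    SNAPSHOT = "clean-install"

    def vmrun(*args):
        subprocess.run(["vmrun", *args], check=True)

    def fresh_environment():
        vmrun("revertToSnapshot", VMX, SNAPSHOT)
        vmrun("start", VMX)

    # fresh_environment()  # every run starts from an identical state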
Learn more about Roussi Roussev  
 

W17
Test as a Service: A New Architecture for Embedded Systems
Raniero Virgilio, Intel
 
The classic models adopted in test automation today—guaranteeing ease of test implementation rather than extensibility of the test architecture—are inadequate for the unprecedented complexity of today’s embedded software market. Because many embedded software solutions must be designed and developed for multiple deployments on different and rapidly changing hardware platforms, testers need something new. Raniero Virgilio describes a novel approach he calls Test as a Service (TaaS), in which test logic is implemented in self-consistent components on a shared test automation infrastructure. These test components are deployed at runtime to make the test process completely dynamic. The TaaS architecture provides specific high-level test services to testers as they need them. Through a case study, Raniero shows how to make the most of the TaaS model by employing open source solutions, exploiting the architecture’s major benefits, and mitigating the most common risks.
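TaaS is Raniero's own architecture, so the sketch below is only a generic flavor of the underlying idea of test components registered on shared infrastructure and resolved by name at runtime; every name in it is invented:

    # Generic illustration of runtime-resolved test components; not the
    # TaaS architecture from the session. All names are invented.
    class TestServiceRegistry:
        """Shared infrastructure: components register under a name and
        tests look them up at runtime."""
        def __init__(self):
            self._services = {}

        def register(self, name, component):
            self._services[name] = component

        def resolve(self, name):
            return self._services[name]

    registry = TestServiceRegistry()
    registry.register("flash-firmware", lambda board: f"flashed {board}")
    registry.register("capture-serial", lambda board: f"log from {board}")

    # A test asks for services by name, without knowing how or where
    # they are deployed:
    print(registry.resolve("flash-firmware")("board-A"))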
Learn more about Raniero Virgilio  
 

W18
Stick a Fork in It: Defining Done
Tracy Beeson, Menlo Innovations
 
It seems that developers have as many definitions of “done” as Eskimos have words for “snow.” But without a clear definition of done, it is difficult to gauge progress on a project. Menlo Innovations has a simple solution. Instead of declaring a story card or feature done on its own, developers collaborate with the business analyst and testing teammates to determine when the application meets their requirements. Tracy Beeson highlights the power of asking one deceptively simple question that has multiple, complex answers that can, if not implemented with care, upset the status quo and lead to new problems. Using loosely orchestrated role playing, you’ll practice this approach for discovering when done is really done. During an exploratory testing session, you’ll gain an understanding of the perspective each role on the project brings to the definition of done and discover the benefits this process can have on overall product quality.  
Learn more about Tracy Beeson  
