Software Testing Analysis & Review
STAREAST 2007 Wednesday Concurrent Sessions

  Go To:   Wednesday  |  Thursday  |  Friday  

 Wednesday, May 16, 2007 11:30 a.m.
W1
Test Management

Communicating the Value of Testing
Theresa Lanowitz, voke, Inc.
Dan Koloski, Empirix

 
Test managers constantly lament that few outside their group understand or care much about the value the team delivers. Unfortunately, they are often correct. The lack of visibility and understanding of the test team’s contribution can lead to restricted budgets, fewer resources, tighter timelines, and ultimately lower group productivity. Join Theresa Lanowitz and Dan Koloski as they highlight ways to move from simply being a tester of software to being an advocate for your organization’s customers. Learn how to communicate effectively and concisely with key stakeholders so that they understand the value and role of the testing group. When stakeholders understand that role, they will perceive the testing group as strategically important and integral to the success of every project.

• Strategies for communicating complex data
• Ensure your communications give you the visibility you need
• How to create testing evangelists within your organization

W2
Test Techniques
Top Ten Tendencies that Trap Testers
Jon Bach, Quardev Laboratories
 
A trap is an unidentified problem that limits or obstructs us in some way. We don’t intentionally fall into traps, but our behavioral tendencies aim us toward them. For example, have you ever found a great bug and celebrated, only to have one of your fellow testers find a bigger bug just one more keystroke away? A tendency to celebrate too soon can make you nearsighted. Have you ever been confused about a behavior you saw during a test and shrugged it off? The tendency to dismiss your confusion as unimportant or irrelevant may make you farsighted, limiting your ability to see a bug right in front of you. Jon Bach demonstrates other limiting tendencies such as Stakeholder Trust, Compartmental Thinking, Definition Faith, and more. Testers can't find every bug or run every possible test, but identifying these tendencies can help us avoid traps that might compromise our effectiveness and credibility.

• How you might be susceptible to traps
• Ways through (or around) the ten most common traps
• Participate in exercises that test your situational awareness
W3
Test Automation
Behavior Patterns for Designing Automated Tests
Jamie Mitchell, Jamie Mitchell Consulting, Inc.
 
Automated GUI tests often fail to find important bugs because testers do not understand or model intricate user behaviors. Real users are not just monkeys banging on keyboards. As they use a system, they may make dozens of instantaneous decisions, all of which result in complex paths through the software code. To create successful automated test cases, testers must learn how to model users’ real behaviors. This means test cases cannot be simple, recorded, one-size-fits-all scripts. Jamie Mitchell describes several user behavior patterns that can be adopted to create robust and successful automated tests. One pattern is the 4-step dance, which describes every user GUI interaction: (1) ensure you’re at the right place in the screen hierarchy; (2) provide data to the application; (3) trigger the system; and (4) wait for the system to complete its actions. Join Jamie to learn how this pattern and others can guide your implementation of each automated GUI test.

• Why simplistic automated scripts are worse than useless
• Faulty assumptions we make when automating test cases
• Patterns to help your GUI test automation designs
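The 4-step dance enumerated above lends itself to a generic wrapper around each GUI interaction. A minimal sketch follows; the driver object and its methods (`current_screen`, `navigate_to`, `set_field`, `click`, `wait_until_idle`) are hypothetical stand-ins for whatever your automation tool actually provides, not part of the session material:

```python
class FakeApp:
    """Stand-in for a real GUI automation driver; records each call."""
    def __init__(self):
        self.log = []
        self.screen = "Home"
    def current_screen(self):
        return self.screen
    def navigate_to(self, screen):
        self.screen = screen
        self.log.append(("navigate", screen))
    def set_field(self, field, value):
        self.log.append(("set", field, value))
    def click(self, trigger):
        self.log.append(("click", trigger))
    def wait_until_idle(self):
        self.log.append(("wait",))

def four_step_interaction(app, screen, data, trigger):
    """One GUI interaction expressed as the '4-step dance'."""
    # 1. Ensure you're at the right place in the screen hierarchy.
    if app.current_screen() != screen:
        app.navigate_to(screen)
    # 2. Provide data to the application.
    for field, value in data.items():
        app.set_field(field, value)
    # 3. Trigger the system.
    app.click(trigger)
    # 4. Wait for the system to complete its actions.
    app.wait_until_idle()

app = FakeApp()
four_step_interaction(app, "Login", {"user": "alice"}, "OK")
```

Because every interaction funnels through the same four steps, a timing change or a new synchronization requirement is fixed in one place rather than in every recorded script.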
W4
Metrics
Measuring the Effectiveness of Testing Using DDP
Dorothy Graham, Grove Consultants
 
Does your testing provide value to your organization? Are you asked questions like “How good is the testing anyway?” and “Is our testing any better this year?” How can you demonstrate the quality of the testing you perform, both to show when things are getting better and to show the effect of excessive deadline pressure? Defect Detection Percentage (DDP) is a simple measure that organizations have found very useful in answering these questions. It is easy to start—all you need is a record of defects found during testing and defects found afterwards (which you probably already have available). Join Dorothy Graham as she shows you what DDP is, how to calculate it, and how to use it to communicate the effectiveness of your testing. Dorothy addresses the most common stumbling blocks and answers the questions most frequently asked about this very useful metric.

• Calculate defect detection percentage (DDP) for your projects
• How other organizations have used DDP successfully
• Deal with issues, questions, and problems in using this metric
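The DDP calculation itself is simple arithmetic; a minimal illustrative sketch (the function name and the figures are invented for the example, not taken from the session):

```python
def defect_detection_percentage(found_in_testing, found_after):
    """DDP: the share of all known defects that testing caught,
    expressed as a percentage."""
    total = found_in_testing + found_after
    if total == 0:
        raise ValueError("no defects recorded")
    return 100.0 * found_in_testing / total

# Example: testing found 90 defects; 10 more surfaced after release.
ddp = defect_detection_percentage(90, 10)  # 90.0
```

Note that DDP for a release can only be finalized after the field has had time to report escapes, which is one of the stumbling blocks the session addresses.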
W5
Special Topics
The NEW IEEE 829 Testing Standard: What You Need to Know
Claire Lohr, Lohr Systems
 
You know about it. You’ve used it. Maybe you’ve even loved it. But now, after all these years, the IEEE 829 standard, the only international standard for test documentation, has been radically revised. As a leader on the IEEE committee responsible for this update, Claire Lohr has detailed insight into what the changes mean to you. You’ll discover that all of the old documents, with one exception, are still included. But now the 829 standard describes documentation for each level of testing, provides a three-step process for choosing test documents and their contents, introduces several new documents, and uses the ISO 12207 life-cycle standard as its basis. In addition, the new standard can be tailored for agile methods if the stakeholders agree on the modifications.

• The one-size-fits-all IEEE 829 standard of the past is gone
• How to tailor the new documents to match your needs
• Consider whether your organization should adopt the revised standard
 Wednesday, May 16, 2007 1:45 p.m.
W6
Test Management
You’re the New Test Manager—Now What?
Brett Masek, American HealthTech
 
You’ve wanted this promotion to QA/Test manager for so long and now, finally, it’s yours. But, you have a terrible sinking feeling . . . "What have I gotten myself into?" "How will I do this?" You have read about Six Sigma and developer-to-tester ratios, but what do they mean for you? Should you use black-box or white-box testing? Is there such a thing as gray-box testing? Your manager is mumbling about offshore outsourcing. Join Brett Masek as he explains what you need to know to become the best possible test manager. Brett discusses the seven key areas—test process definition, test planning, defect management, choosing test case approaches, detailed test case design, efficient test automation, and effective reporting—you need to understand to lead your test team. Learn the basics of creating a test department and how to achieve continuous improvement. And, learn how to avoid the biggest mistake most new test managers make—failing to say "No," even when it is necessary.

• What is important to establish in your testing process
• Test planning essentials
• Types of metrics that show your team’s value
W7
Test Techniques
Modular Test Case Design: The Building Blocks of Reusable Tests
Shaun Bradshaw, Questcon Technologies
 
The use of modular design in programming has been a common technique in software development for years. However, the same principles that make modular designs useful for programming—increased reusability and reduced maintenance time—are equally applicable to test case development. Shaun Bradshaw describes the key differences between procedural and modular test case development and explains the benefits of the modular approach. He demonstrates how to analyze requirements, designs, and the application under test to generate modular and reusable test cases. Join Shaun as he constructs and executes test scenarios using skeleton scripts that invoke the modular tests. Learn how you can design and create a few self-contained scripts (building blocks) that then can be assembled to create many different test scenarios. Ensure adequate coverage of an application’s functionality without having to write the same test scenarios over and over.

• Differences between procedural and modular test cases
• Develop modular test cases and construct test scenarios
• Design and create reusable automated tests
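The building-block idea above can be illustrated with self-contained test modules assembled by a skeleton script. This is a sketch only; the module names, the shared `state` dictionary, and the scenario format are invented for illustration:

```python
# Modular test steps: each one is small, self-contained, and reusable.
def login(state):
    state["user"] = "tester"
    return True

def add_item(state):
    state.setdefault("cart", []).append("widget")
    return True

def checkout(state):
    # Checkout succeeds only if a user is logged in with a non-empty cart.
    return bool(state.get("user")) and bool(state.get("cart"))

MODULES = {"login": login, "add_item": add_item, "checkout": checkout}

def run_scenario(steps):
    """Skeleton script: assembles named modules into one test scenario."""
    state = {}
    return all(MODULES[name](state) for name in steps)

# Two different scenarios built from the same building blocks.
ok = run_scenario(["login", "add_item", "checkout"])
bad = run_scenario(["add_item", "checkout"])  # no login, so checkout fails
```

Adding a new scenario is then a matter of listing existing blocks in a new order, rather than writing (and later maintaining) another monolithic procedural script.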
W8
Test Automation
Test Automation Centers of Excellence
Jennifer Seale, Nationwide Insurance
 
Many organizations want to automate their testing efforts, but they aren’t sure how to begin. Successful test automation requires dedicated resources and automation tool expertise—two things that overworked test teams do not have. Nationwide Insurance’s solution was to create a Test Automation Center of Excellence, a group of experts in automation solution design. Members of this team partner with various project test teams to determine what to automate, develop a cost-benefit analysis, and architect a solution. Their automation experts stay with the test team throughout the automation project, assisting, mentoring, and cheering. Join Jennifer Seale to learn what it takes to put together a Test Automation Center of Excellence and examine test automation from a project management point of view. Jennifer describes the processes and artifacts her centralized test automation team develops to ensure a consistent, high-quality test automation partnership. Take back a template for planning a test automation project and scoping the effort required for success.

• A process that ensures good test automation planning
• Produce an accurate cost-benefit analysis
• How a centralized automation team can help your entire organization
 W9 is a Double-Track Session!
W9
Metrics
Managing by the Numbers
John Fodeh, HP - Mercury
 
Metrics can play a vital role in software development and testing. We use metrics to track progress, assess situations, predict events, and more. However, measuring often creates “people issues,” which, when ignored, become obstacles to success or may even result in the death of a metrics program. People often feel threatened by the metrics gathered. Distortion factors may be added by the people performing and communicating the measurements. When being measured, people can react with creative, sophisticated, and unexpected behaviors. Thus our well-intentioned efforts may have a counter-productive effect on individuals and the organization as a whole. John Fodeh addresses some of the typical people issues and shows how cognitive science and social psychology can play important roles in the proper use of metrics. John demonstrates different presentation and communication techniques and raises an important question: By recognizing that metrics can influence people to alter their behavior, is it possible—and ethical—to use “motivational” metrics to improve team behavior?

• Sociological and psychological factors that emerge when using metrics
• Coping with “people issues” when implementing a metrics program
• Communicate your metrics to avoid “metrics malpractice”
W10
Special Topics
Testing Web Applications for Security Defects
Michael Sutton, SPI Dynamics
 
Approximately three-fourths of today’s successful system security breaches are perpetrated not through network or operating system security flaws but through customer-facing Web applications. How can you ensure that your organization is protected from holes that let hackers invade your systems? Only by thoroughly testing your Web applications for security defects and vulnerabilities. Michael Sutton describes the three basic security testing approaches available to testers—source code analysis, manual penetration testing, and automated penetration testing. Michael explains the key differences in these methods, the types of defects and vulnerabilities that each detects, and the advantages and disadvantages of each method. Learn how to get started in security testing and how to choose the best strategy for your organization.

• Basic security vulnerabilities in Web applications
• Skills needed in security testing
• Who should be performing security assessments
 Wednesday, May 16, 2007 3:00 p.m.
W11
Test Management
Employ Tomorrow’s Customers to Staff Your Testing Team Today
Alex Dietz, Vital Images
 
Regression testing of Vital Images’ medical imaging software was a continual challenge. Poor product testability, a challenging automation implementation, tester shortages, and low process discipline contributed to an environment in which regression testing was often completed only after the Beta site release. Even then, testing was incomplete and failed to cover the growing product feature scope. Alex Dietz describes how, through a stroke of inspiration, he created a new team just for regression testing. Rather than turning to outsourcing, he hired future users of his product. Alex describes the unique labor pool he used to staff the team, the costs incurred, personnel levels, metrics, and the management approach he adopted while still meeting FDA requirements for class II medical devices. Although the model Alex describes was applied to medical imaging software, it is not specific to his industry—and could be used successfully in yours.

• Benefits of employing users for regression testing
• A comparison of costs for traditional outsourcing vs. user testing
• Measurements and metrics to evaluate customers as testers
W12
Test Techniques
Risk-Based Testing: From Theory to Practice
Susan Herrick, EDS Global Quality Assurance
 
With mounting pressure to deliver high-quality applications at breakneck speed, the need for risk-based testing has increased dramatically. In fact, now practically everyone involved in testing claims to be doing risk-based testing. But are you really? Drawing on real-life examples, Susan Herrick guides you through a six-step, risk-based testing approach: ambiguity analysis to reduce the risk of misunderstood requirements; risk analysis to determine testing scope and develop the “right” testing strategy; systematic test design to support development and execution of the “right” tests; requirements traceability to measure and manage test coverage; test metrics collection and reporting to provide information that supports corrective action; and testing close down to communicate any remaining quality risks and support effective decision-making regarding application readiness. Susan also describes where the risk-based testing process fits into the project life cycle, regardless of the development methodology selected for the project.

• A definition of risk-based testing
• A proven six-step process for risk-based testing
• How to introduce this risk-based testing approach into your organization
W13
Test Automation
Business Rules-Based Test Automation
Harish Krishnankutty, Infosys Technologies Limited
 
All business applications implement business rules. Unfortunately, the rules can be very dynamic due to changing requirements from external organizations and internal forces. Wise application designers and developers do not embed the implementation of specific business rules within applications but define, store, and maintain them as data outside the applications that use them. Likewise, wise testers now use a similar approach called business rules-based test automation, in which automated test scripts are written against the business rules rather than against the application. This process incorporates technical components such as a robust testing keyword library, a business-friendly user interface, and automated script generators to accelerate the test automation work and cover more business scenarios than the conventional approach allows. Harish Krishnankutty guides you through the underlying concepts of business rules-based test automation, describes a roadmap for implementing it, and discusses the benefits of adopting this unique approach.

• Identify business rules used within your organization
• How to provide better test coverage at lower cost
• Increase confidence in the reliability of your systems
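Keeping rules as data outside the tests, and checking records against them, might look like the following minimal data-driven sketch. The rule format, field names, and descriptions are invented for illustration and are not the session's actual design:

```python
# Business rules kept as data, outside both the application and the tests.
RULES = [
    {"field": "age", "op": "ge", "value": 18,
     "desc": "customer must be an adult"},
    {"field": "balance", "op": "ge", "value": 0,
     "desc": "balance may not be negative"},
]

# A small keyword library of comparison operators the rules can reference.
OPS = {"ge": lambda a, b: a >= b, "lt": lambda a, b: a < b}

def check_record(record, rules=RULES):
    """Evaluate every business rule against one record; return the
    descriptions of the rules that fail."""
    return [r["desc"] for r in rules
            if not OPS[r["op"]](record[r["field"]], r["value"])]

failures = check_record({"age": 16, "balance": 100})
```

When a rule changes, only the data table is edited; no test script needs to be rewritten, which is the maintenance advantage the session describes.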
W14
Special Topics
Testing the Heathrow Terminal 5 Baggage Handling System (Before It Is Built)
Roger Derksen, Transfer Solutions BV
 
London Heathrow Terminal 5 will open in March 2008. This new terminal will handle 30 million passengers a year, and all of these passengers will expect their baggage to accompany them on their flights. To achieve this end, a new baggage handling system is being built that will handle more than 100,000 bags a day. The challenge of testing the integrated software is related not only to its size and complexity but also to the limited time that will be available to test the software in its actual environment. Roger Derksen explains the vital role of factory integration testing using models that emulate the full system. Roger discusses the limitations of these techniques and explains what can—and cannot—be done in the factory environment and what issues still must be addressed on site.

• A testing strategy for use on very large, complex systems
• How to use models for testing when physical systems are unavailable
• Advantages and disadvantages of these testing techniques



 
 
Send us Your Feedback
Software Quality Engineering  •  330 Corporate Way, Suite 300  •  Orange Park, FL 32073
Phone: 904.278.0524  •  Toll-free: 888.268.8770  •  Fax: 904.278.4380  •  Email: [email protected]
© 2007 Software Quality Engineering, All rights reserved.