STARWEST 2007 Concurrent Sessions


 Wednesday, October 24, 2007 11:30 a.m.
W1
Test Management

The Tester’s Critical C’s: Criticism, Communication, Confidence
Dorothy Graham, Grove Consultants
 
Testers are professional critics. Our job is to evaluate and criticize other people’s work. Although criticism can have a positive meaning, it is more often taken as negative. When we communicate our criticism to other people, we are sometimes misunderstood, and this can lead to serious problems, including losing confidence in ourselves. Dorothy Graham examines how our delivery of criticism and the ways we communicate can make us more effective—and not damage our interpersonal relationships. Dorothy presents a communications model that helps explain how and why personal interactions can go wrong. Both the “push” and “pull” styles of influencing can help us communicate better with our managers. Dorothy explains how your confidence level affects your ability to constructively criticize others’ work and communicate test results. She concludes with valuable tips for increasing your confidence.


• Give and receive criticism effectively
• How communication can go wrong and how to improve it
• Increase your confidence to improve your effectiveness

W2
Test Techniques

Cause-Effect Graphing
Gary Mogyorodi, Software Testing Services
 
Cause-Effect Graphing is a powerful but little-known technique for test case design. Rather than trying to manually create a comprehensive set of test cases, the tester models the problem with cause-effect graphs, from which decision tables are generated automatically based on the inputs, outputs, and relationships among the data. From the decision tables, the technique then identifies the necessary and sufficient set of test cases that covers 100% of the functionality described for the problem. Gary Mogyorodi has had the rare opportunity to compare the test coverage obtained with Cause-Effect Graphing to that of a set of manually created test cases previously derived for the same application. He reports on the difference in coverage between the two approaches to the same problem.


• The process of Cause-Effect Graphing for test case design
• Functional test coverage measures
• Advantages and disadvantages of Cause-Effect Graphing
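
To make the underlying idea concrete, here is a minimal sketch in Python of how a decision table falls out of cause-effect relationships; the causes, effects, and rules below are hypothetical illustrations, not Gary's example, and real cause-effect graphing tools also prune infeasible and redundant combinations.

from itertools import product

# Hypothetical causes (input conditions) for a log-in screen.
causes = ["valid_username", "valid_password", "account_locked"]

# Hypothetical effects, each defined as a Boolean rule over the causes.
effects = {
    "grant_access": lambda c: c["valid_username"] and c["valid_password"] and not c["account_locked"],
    "show_error":   lambda c: not (c["valid_username"] and c["valid_password"]),
    "show_lockout": lambda c: c["account_locked"],
}

# Build the decision table: one column per cause combination, one row per effect.
rows = []
for values in product([True, False], repeat=len(causes)):
    combo = dict(zip(causes, values))
    outcome = {name: rule(combo) for name, rule in effects.items()}
    rows.append((combo, outcome))

# Each distinct outcome pattern needs at least one test case to cover it.
distinct = {tuple(sorted(outcome.items())): combo for combo, outcome in rows}
for outcome, combo in distinct.items():
    print(dict(outcome), "<-", combo)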

W3
Metrics

Measures and Metrics for Your Biggest Testing Challenges
Ed Weller, Integrated Productivity Solutions, LLC
 
Over the course of many STAR conferences, Ed Weller has collected a list of your biggest challenges in testing—lack of time, unrealistic deadlines, lack of resources, inadequate requirements, last-minute changes, knowing when to stop testing, and poor-quality code from development. Using this list and Victor Basili's "Goal, Question, Metric" approach to measurement, Ed identifies the measurements and metrics that help test managers and engineers objectively evaluate and analyze their biggest problems. With these in hand, you can map out improvement options and make a strong business case for the resources and funding you need. By giving management objective evidence rather than subjective opinions (which managers often dismiss as "whining"), you will improve your chances for success. Just as importantly, you will be able to use these measurements to guide your improvements and communicate your progress with meaningful data.


• The top testing challenges and the measurements to quantify them
• Measurement data to guide your improvements
• Metrics to present needs and show progress
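
As a small illustration of the Goal-Question-Metric chain, one of the listed challenges might be quantified as follows; the goal, questions, and counts are invented for illustration, not taken from Ed's material.

# Hypothetical Goal-Question-Metric breakdown for the "poor quality code
# from development" challenge; all counts are invented for illustration.
goal = "Reduce the defects that escape development into system test"
questions = [
    "How many defects does system test find per release?",
    "How many defects still escape to production?",
]

defects_found_in_test = 120        # assumed count for one release
defects_found_in_production = 30   # assumed count for the same release

# Metric: Defect Detection Percentage (DDP), the share of known defects
# caught before release.
ddp = defects_found_in_test / (defects_found_in_test + defects_found_in_production)
print(f"{goal}: DDP = {ddp:.0%}")  # -> 80%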

W4
Testing the New Web

Testing for Security in the Web 2.0 World
Michael Sutton, SPI Dynamics, Inc.
 
While many are extolling the virtues of the next generation of Internet and Web technologies, others are warning that it could turn the Internet into a hacker’s dream. Web 2.0 promises to make applications more usable and connect us in ways that we’ve never imagined. We’ve just begun to digest a host of exciting technologies such as AJAX, SOAP, RSS, and “mashups.” Are we making a big mistake by increasing the complexity of Web applications without taking security into account? Michael Sutton discusses the major security issues we must address when implementing Web applications with the newest technologies and describes poor coding practices that can expose security defects in these applications. Most importantly, Michael discusses testing techniques for finding security defects—before they bite—in this new world.


• The new technologies of Web 2.0
• Major security issues exposed within these technologies
• Techniques for finding Web 2.0 security flaws
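
As one concrete example of the kind of check Michael describes, a tester can probe for reflected cross-site scripting by submitting a marker payload and looking for it unescaped in the response. This is only a sketch: the URL and parameter are hypothetical, and real security testing goes far beyond a single probe.

import requests  # third-party HTTP library

# Hypothetical endpoint and parameter; substitute the page under test.
URL = "http://localhost:8080/search"
PROBE = "<script>alert('xss-probe')</script>"

response = requests.get(URL, params={"q": PROBE}, timeout=10)

# If the probe comes back verbatim (unescaped), the page is likely
# vulnerable to reflected cross-site scripting.
if PROBE in response.text:
    print("Possible reflected XSS: probe echoed without encoding")
else:
    print("Probe appears to be encoded or filtered")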

W5
Performance Testing

Preparing for the Madness: Load Testing the 2007 College Bracket Challenge
Eric Morris, Microsoft
 
For the past two seasons, the Windows Live development team has run the Live.com College Bracket Challenge, which hosts brackets for scores of customers during the “March Madness” NCAA basketball tournament. March Madness is the busiest time of the year for most sports Web sites. So, how do you build your Web application and test it for scalability to potentially millions of customers? Eric Morris guides you through the process his team uses to model users, establish performance goals for the application, define test data, and construct realistic operational scenarios. Learn how the tests were conducted, the specific database performance and locking problems encountered, and how these problems were isolated and fixed. Finally, Eric demonstrates the custom reporting solution the team developed to report results to stakeholders.


• How to establish performance goals and requirements
• Ways to accurately model user behavior and load
• Performance testing data analysis and reporting

 Wednesday, October 24, 2007 1:45 p.m.
W6
Test Management

Bringing Shrek to Life: Software Testing at DreamWorks
Anna Newman, DreamWorks Animation
 
Want to take a behind-the-scenes look at testing at DreamWorks Animation? Learn what happens when you have a tiny QA team, release deadlines that cannot slip even a day, and a crew of crazy animators using software in ways most developers never imagined. You just make it work! Anna Newman discusses how to leverage your development team to create and even execute tests on your behalf and how best to prioritize testing areas. Find out how a small team operates successfully when a software release cycle is only a few weeks long, rather than months as in many other industries. Anna explains her communications strategies for building better partnerships with customers, developers, and senior management in the absence of formal development specs and test plans. Break out of your testing box and get that “happily ever after” (or is it “happily ogre after”?) feeling in your test group.
 
 

• Small team testing issues and solutions
• Free automation tools for testing graphical images
• Strategies for better communications in a non-traditional environment 
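
One simple way to automate checks on rendered images, in the spirit of the second takeaway above (though not necessarily one of the tools Anna covers), is a pixel-level comparison against a known-good reference frame, for example with the open source Pillow library; the file names and tolerance here are placeholders.

from PIL import Image, ImageChops  # Pillow, the open source imaging library

def images_match(reference_path, actual_path, tolerance=0):
    """Return True if the rendered image matches the reference within tolerance."""
    reference = Image.open(reference_path).convert("RGB")
    actual = Image.open(actual_path).convert("RGB")
    if reference.size != actual.size:
        return False
    diff = ImageChops.difference(reference, actual)
    # getextrema() returns a (min, max) pair per channel; the largest max is
    # the biggest single-channel deviation anywhere in the image.
    worst = max(channel_max for _, channel_max in diff.getextrema())
    return worst <= tolerance

# Placeholder file names for a rendered frame and its approved reference.
print(images_match("frame_reference.png", "frame_under_test.png", tolerance=2))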

W7
Test Techniques

A Pair of Stories about All-Pairs Testing
Jonathan Bach, Quardev Inc.
 
What do you do when you’re faced with testing a million or more possible combinations, all manually? Easy—just declare the problem so big and the time so short that testing is impossible. But what if there were an analytic method that could drastically reduce the number of combinations to test while reducing risks at the same time?  All-pairs testing, the pairing up of testable elements, is one way to create a reasonable number of test cases while reducing the risk of missing important defects. Unfortunately, as Jonathan Bach demonstrates, this technique can also be used incorrectly, thus creating more risk, not less. Jonathan shares his experiences on two projects—one success and one failure—that employed all-pairs analysis and describes the reasons behind the results. Start down the path to all-pairs success for your next big testing project.
  

• Learn the rationale behind pairwise data analysis
• Use two free tools that create the pairings
• Understand the risks and rewards of all-pairs testing
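
To show what pairing up the testable elements means in practice, here is a minimal greedy all-pairs sketch in Python; the parameters and values are hypothetical, and the free tools Jonathan mentions use faster, smarter algorithms than this brute-force version.

from itertools import combinations, product

# Hypothetical test parameters: every value must appear together with every
# value of every other parameter at least once.
parameters = {
    "browser": ["IE6", "IE7", "Firefox"],
    "os": ["XP", "Vista"],
    "locale": ["en-US", "fr-FR", "ja-JP"],
}
names = list(parameters)

# All parameter-value pairs that still need to be covered.
uncovered = {
    ((a, va), (b, vb))
    for a, b in combinations(names, 2)
    for va in parameters[a]
    for vb in parameters[b]
}

def pairs_of(case):
    return {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)}

# Greedy construction: repeatedly pick the combination that covers the most
# not-yet-covered pairs until every pair has been covered.
tests = []
while uncovered:
    best = max(
        (dict(zip(names, values)) for values in product(*parameters.values())),
        key=lambda case: len(pairs_of(case) & uncovered),
    )
    tests.append(best)
    uncovered -= pairs_of(best)

print(f"{len(tests)} all-pairs tests instead of "
      f"{len(list(product(*parameters.values())))} exhaustive combinations")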

 

W8 is a Double-Track Session! 

W8
Metrics

Test Metrics: The Good, the Bad, and the Ugly
John Fodeh, Hewlett-Packard
 
Appropriate metrics used correctly can play a vital role in software testing. We use metrics to track progress, assess situations, predict events, and more. However, measuring often creates “people issues,” which, if ignored, become obstacles to success and can even destroy a metrics program, a project, or an entire team. Metrics programs can also be distorted by the way metrics are depicted and communicated. In this interactive session, John Fodeh invites you to explore the good, the bad, and the ugly sides of test metrics. John shows how to identify and use metrics for assessing the state and quality of the system under test. When being measured, people can react with creative, sophisticated, and unexpected behaviors, so our well-intentioned efforts may have a counter-productive effect on individuals and the organization as a whole. The ugly side of metrics appears when people manipulate them. In this double-track session, explore the pros and cons of applying and using metrics.

  


• Key metrics needed for testing and test management
• “People issues” encountered when implementing a metrics program
• How to present and communicate metrics to avoid “malpractice”

W9
Testing the New Web

Ensuring Quality in Web Services
Chris Hetzler, Intuit
 
As Web service-based applications become more prevalent, testers must understand how the unique properties of Web services affect their testing and quality assurance efforts. Chris Hetzler explains that testers must focus beyond functional testing of the business logic implemented in the services. Quality of Service (QoS) characteristics—security, performance, interoperability, and asynchronous messaging technology—are often more important and more complicated than in classical applications. Unfortunately, these characteristics are often poorly defined and documented. In addition, Web services can be implemented using a number of technologies—object-oriented programming, XML documents, and databases—and can employ multiple communications protocols, each requiring different testing skills. Take back a list of infrastructure and supporting tools—some of which you may need to build yourself—that are necessary to effectively test Web services.


• Quality of Service (QoS) characteristics for Web services
• How to apply your current skills and tools to Web services testing
• New skills and tools you need for testing Web services
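
As a small illustration of testing beyond the business logic, a single check can assert on both the payload and a Quality of Service property such as response time. The endpoint, SOAP envelope, and time budget below are hypothetical placeholders.

import requests  # third-party HTTP library

# Hypothetical SOAP endpoint and request envelope.
ENDPOINT = "http://localhost:8080/services/QuoteService"
ENVELOPE = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body><GetQuote><symbol>ABC</symbol></GetQuote></soap:Body>
</soap:Envelope>"""

response = requests.post(
    ENDPOINT,
    data=ENVELOPE,
    headers={"Content-Type": "text/xml; charset=utf-8"},
    timeout=10,
)

# Functional check: the call succeeded and returned a SOAP body.
assert response.status_code == 200 and "Body" in response.text

# Quality-of-Service check: an assumed response-time budget of two seconds.
assert response.elapsed.total_seconds() < 2.0, "QoS: response slower than 2 s"
print("Functional and response-time checks passed")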

W10
Performance Testing

Ten Indispensable Tips for Performance Testing
Gary Coil, IBM
 
Whether you are inexperienced with performance testing or an experienced performance tester who is continuously researching ways to optimize your process and deliverables, this session is for you. Based on his experience with dozens of performance testing projects, Gary Coil discusses the ten indispensable tips that he believes will help ensure the success of any performance test. Find out ways to elicit and uncover the underlying performance requirements for the software-under-test. Learn the importance of  a production-like test environment, and methods to create suitable environments without spending a fortune. Take back valuable tips on how to create representative workload-mix profiles that accurately simulate the expected production load. And more! Gary has developed and honed these practical and indispensable tips through many years of leading performance testing engagements. 


• A set of practices that will ensure better performance testing
• How to make your performance data work for you
• How to report succinct and understandable performance test findings

 Wednesday, October 24, 2007 3:00 p.m.
W11
Test Management

Results-Driven Testing: Adding Value to Your Organization
Derk-Jan de Grood, Collis
 
Software testers often have great difficulty quantifying and explaining the value of their work. One consequence is that many testing projects receive insufficient resources and, therefore, are unable to deliver the best value. Derk-Jan de Grood believes we can improve this situation, although it requires changing our mindset to “results-driven testing.” Results-driven testing is based on specific principles: (1) understand, focus on, and support the goals of the organization; (2) do only those things that contribute to business goals; and (3) measure and report on testing’s contribution to the organization. Keeping these principles at the forefront binds and guides the team. Join this session to find out how the test team at Collis has adopted these principles and developed a testing organization that generates trust and provides valuable insight into the quality of their organization’s products.



• The philosophy of results-driven testing
• How to align your testing to your organization’s business goals
• A program to incorporate results-driven principles into your test organization

W12
Test Techniques

Bugs Bunny on Bugs! Hidden Testing Lessons from the Looney Tunes Gang
Rob Sabourin, AmiBug.com, Inc.
 
Bugs Bunny, Road Runner, Foghorn Leghorn, Porky Pig, Daffy Duck, and Michigan J. Frog provide wonderful metaphors for the challenges of testing. From Bugs Bunny we learn about personas and the risks of taking the wrong turn at Albuquerque. Michigan J. Frog teaches valuable lessons about defect isolation. The “duck season or rabbit season?” debate shows how ambiguous pronouns can dramatically change the meaning of our requirements. The Tasmanian Devil teaches us about the risks of following standard procedures and shows us practical approaches to stress and robustness testing. From Yosemite Sam we learn about boundary conditions and defying physics. And, of course, the Coyote seems to put a bit too much confidence in the latest tools and technologies from ACME. The Looney Tunes Gang teaches lessons for the young at heart—novice and experienced testers alike! Rob Sabourin shares some powerful heuristic models for testing that you can apply right away.

• How metaphors can help us understand and communicate
• The value of personas for testing
• Heuristic models that are not only useful—they’re fun!

W13
Testing the New Web

Testing AJAX Applications with Open Source Selenium
Patrick Lightbody, Gomez, Inc.
 
Today's rich AJAX applications are much more difficult to test than the simple Web applications of yesterday. With this rich new user interface comes new challenges for software testers—not only are the platforms on which applications run rapidly evolving, but test automation tools are having trouble keeping up with new technologies. Patrick Lightbody introduces you to Selenium, an open source tool designed from the ground up to work on multiple platforms and to support all forms of AJAX testing. In addition, he discusses how to develop AJAX applications that are more easily testable using frameworks such as Dojo and Scriptaculous. Learn the importance of repeatable data fixtures with AJAX applications and how automated testing must evolve with the arrival of AJAX. Get ahead of the curve by encouraging the development of more testable AJAX software and adding new automation tools to your bag of testing tricks.
 


• How Web applications are moving from a page-centric to a more granular paradigm
• Frameworks for developing testable AJAX-based Web applications
• Open source Selenium’s basic functionality
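
The central AJAX-testing problem Patrick describes, waiting for content that arrives asynchronously instead of assuming the page is ready, can be sketched as follows. Note that this sketch uses Selenium's later WebDriver-style Python bindings rather than the 2007-era Selenium Core/RC interface, and the URL and element IDs are hypothetical.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
try:
    driver.get("http://localhost:8080/search")            # hypothetical application
    driver.find_element(By.ID, "query").send_keys("selenium")
    driver.find_element(By.ID, "search-button").click()   # triggers an XMLHttpRequest

    # Do not assert immediately: wait until the AJAX callback has populated
    # the results container, or fail after ten seconds.
    results = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.ID, "results"))
    )
    assert "selenium" in results.text.lower()
finally:
    driver.quit()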

W14
Performance Testing

Load Generation Capabilities for Effective Performance Testing
John Scarborough, Aztecsoft
 
To carry out performance testing of Web applications, you must ensure that sufficiently powerful hardware is available to generate the required load. At the same time, you need to avoid investing in unnecessarily expensive hardware “just to be sure.” A valid model for estimating the load generation capabilities of performance testing tools on different hardware configurations will help you generate the load you need with the minimum hardware. John Scarborough believes the models provided by most tool vendors are too simplistic for practical use. In fact, in addition to the hardware configuration, the load generation capabilities of any tool are a function of many factors: the number of users, the frequency and time distribution of requests, data volume, and think time. John presents a model for the open source load generation tool JMeter, which you can adapt for any performance testing tool.
 


• Model the load generating capabilities of your performance test tools
• Experimental designs to verify a load generation model
• How to purchase or allocate just the right amount of hardware for a performance test
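
The sizing arithmetic such a model formalizes can be sketched in a few lines; every number below is an invented placeholder, and a real model is calibrated by measuring your own tool (JMeter or otherwise) on your own hardware.

# Back-of-the-envelope load-generator sizing; all figures are placeholders.
target_concurrent_users = 5000      # virtual users the test must simulate
think_time_s = 10.0                 # average pause between user requests
avg_response_time_s = 0.5           # expected server response time

# Little's law: throughput = concurrency / time per user iteration.
requests_per_second = target_concurrent_users / (think_time_s + avg_response_time_s)

# Assumed capacity of one load-generator machine for this particular script.
max_threads_per_generator = 800     # limited by memory per virtual-user thread
max_rps_per_generator = 200         # limited by CPU and network on the generator

generators_for_threads = -(-target_concurrent_users // max_threads_per_generator)  # ceiling
generators_for_rps = -(-int(requests_per_second) // max_rps_per_generator)         # ceiling

print(f"Target load: {requests_per_second:.0f} requests/second")
print(f"Load-generator machines needed: {max(generators_for_threads, generators_for_rps)}")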


