STARWEST 2007 Concurrent Sessions


 Friday, October 26, 2007 10:00 a.m.
F1
Test Management

Beyond the Rock and the Hard Place
Andy Kaufman, Institute for Leadership Excellence & Development, Inc.
 
One stakeholder says “Zig”.  The other says “Zag”. No compromise is in sight, and the project deadline looms nearer. The rock and the hard place—welcome to the test manager’s world! How do you deal with an overly emotional stakeholder or a developer who is ignoring your requests? Few of us like conflict, but our ability to navigate conflict goes a long way toward determining how successfully we can deliver quality projects. Andy Kaufman introduces you to “conflict handling modes” that describe different approaches you can take to deal with conflict. Understanding these different modes can help you get beyond your typical responses to conflict to those that can be more effective. Join Andy as he discusses real-world project conflicts, and learn practical ideas to improve your ability to manage them.
 


• Different conflict handling modes you can use to manage issues
• How to understand your own personal tendencies for dealing with conflict
• Ways to improve your ability to manage conflict successfully

F2
Agile Testing

How Testers Can Help Drive Agile Development
Lisa Crispin, ePlan Services, Inc.
 
Although some experts say that testers are not needed in an agile development environment, Lisa Crispin knows differently. Testers want to make sure customers get what they need; they look at the “big picture” and work to ensure the best experience for the user. Unfortunately, even in the agile development world, business needs and the users’ experience often are disconnected from the delivered software. Professional testers can help agile developers deliver what stakeholders want—the first time. Lisa describes how she uses test cases to create a common language that business customers, users, and developers all understand. She explains the techniques for eliciting examples to define features and describes how to turn examples into executable tests. These tests define the scope of a feature, making it easier for everyone to envision how the feature should look, feel, and work. Lisa also shows how to write tests that guide programmers toward delivering well-designed, well-tested systems.
 

• How tests can be the common language for business, users, and developers
• Elicit examples of features and convert them into executable tests
• Use tests to define the scope of features for development
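
A quick illustration of the examples-into-executable-tests idea: the sketch below is not from Lisa’s talk and assumes a hypothetical fee-calculation feature (calculate_fee) plus a hand-written table of business examples; it simply shows how the same concrete rows that define a feature’s scope can also run as automated checks.

```python
# Minimal sketch of example-driven testing: business-readable example rows
# drive automated checks against the feature they describe.
# calculate_fee and the example values are illustrative assumptions only.

def calculate_fee(plan: str, balance: float) -> float:
    """Hypothetical feature: monthly fee by plan, waived for large balances."""
    if balance >= 10_000:
        return 0.0
    return {"basic": 5.0, "premium": 12.0}[plan]

# Examples a business customer, a user, and a developer can all read.
EXAMPLES = [
    ("basic",      500.00,  5.0),   # typical basic account
    ("premium",    500.00, 12.0),   # typical premium account
    ("basic",   10_000.00,  0.0),   # fee waived at the threshold
]

def test_fee_examples():
    for plan, balance, expected in EXAMPLES:
        assert calculate_fee(plan, balance) == expected, (plan, balance)

if __name__ == "__main__":
    test_fee_examples()
    print("All fee examples pass")
```

Teams often keep such example tables in a format the business side can edit directly; the essential point is that one set of examples both defines the feature and verifies it.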

F3
Test Automation

50 Ways to . . . Improve Test Automation
Mark Fewster, Grove Consultants
 
Although this session is not about Paul Simon’s famous song, “50 Ways to Leave Your Lover”, it will be most entertaining nonetheless. In this fast-paced presentation, Mark Fewster shares fifty test automation ideas for you to consider, adopt, or adapt to your organization’s needs—management, metrics, organizational structure, scripting methods, comparison techniques, testware architecture, and many more. These ideas will give you fresh insight into your current processes and help you identify actions to reverse undesirable trends, correct ailing procedures, and magnify the benefits of test automation. Although time does not permit discussing each idea in great detail, you will leave with enough information to understand and apply them. So join Mark—become informed, enthusiastic, and even entertained by this whirlwind of test automation ideas.
 

• Key areas of test automation success or failure
• Weaknesses with many test automation projects
• Ideas for correcting and improving test automation projects and practices

F4
Reviews and Inspections

Lightweight Peer Code Reviews
Jason Cohen, Smart Bear, Inc.
 
Peer code reviews can be one of the most effective ways to find bugs. However, developers will not accept a heavy process, and it's easy to waste time using poor methods. Jason Cohen describes how lightweight code review practices can succeed where more cumbersome, formal inspections fail. He shares the results from the largest case study of peer reviews ever conducted. You will gain new insights on how much time to spend in review, how much to code review in one session, and how author preparation practices can increase the efficiency of a review. Jason offers tips on the mechanics of lightweight code reviews and compares five common styles of review. He provides advice on how to build checklists and describes what metrics can actually tell us. Learn how to conduct practical, time-efficient code reviews while avoiding the most common mistakes.
 


• Why lightweight reviews work where formal inspections fail
• The social issues of reviews and how to overcome them
• What code review metrics mean and what they do not mean

F5
Special Topics

Testing Hyper-Complex Systems: What Can We Know?
Lee Copeland, Software Quality Engineering
 
Throughout history, humans have built systems of dramatically increasing complexity. In simpler systems, defects at the micro level are mitigated by the macro-level structure. In complex systems, failures at the micro level cannot be compensated for at a higher level, often with catastrophic results. Now we are building hyper-complex computer systems, so complex that faults can create totally unpredictable behaviors. For example, systems based on the Service Oriented Architecture (SOA) model can be dynamically composed of reusable services of unknown quality, created by multiple organizations and communicating through many technologies across the unpredictable Internet. Lee Copeland explains that claims about quality require knowledge of test “coverage,” an unknowable quantity in hyper-complex systems. Are testers now going beyond their limits in trying to provide useful information about the quality of such systems to their clients? Join Lee for a look at your testing future as he describes new approaches needed to measure test coverage in these complex systems and lead your organization to better quality—despite the challenges.


• Simple, complex, and hyper-complex systems defined
• Why hyper-complex systems fail unpredictably and sometimes catastrophically
• Failures caused by the “Butterfly Effect”
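
To see why coverage becomes unknowable at this scale, a back-of-the-envelope sketch (not from Lee’s talk, and using invented numbers) counts the end-to-end configurations of a dynamically composed system; even small per-service variation multiplies into a space no test effort can enumerate, let alone measure coverage against.

```python
# Rough illustration of combinatorial growth in a dynamically composed system.
# All counts below are assumptions chosen only to show the arithmetic.

services_per_transaction = 8       # services composed into one business transaction
implementations_per_service = 4    # interchangeable providers/versions per service
network_conditions = 3             # e.g., nominal, degraded, partitioned

configurations = (implementations_per_service ** services_per_transaction) * network_conditions
print(f"Distinct end-to-end configurations: {configurations:,}")           # 196,608

# One test per configuration at one second each is already days of runtime,
# and the composition can change before the measurement is even finished.
print(f"Days for a single exhaustive pass: {configurations / 86_400:.1f}")  # ~2.3
```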

 Friday, October 26, 2007 11:15 a.m.
F6
Test Management

Toot Your Own Horn: Hyper-visibility in Software Testing
Barrett Nuzum, Valtech Technologies
 
Too often software projects are provided insufficient resources for testing. Perhaps the project is underfunded, and testing is the first thing to get cut. Maybe the schedule is tight, and testing scope is reduced to allow for more developers. Barrett Nuzum believes the underlying problem is that the typical test team only makes itself known—and valued—when quality is poor and defects are obvious. It doesn’t have to be that way! Barrett reviews ways to make your team hyper-visible to your business stakeholders and the entire development team—large, visible charts of test team metrics; aggregation of existing test results into development updates; fun and extreme feedback devices for everyone to see and enjoy; and more. Discover innovative ways of “tooting your own horn” to make the service and value of testing and QA impossible to ignore.
 


• Why making a business case for testing is more important today
• Ways to improve the visibility of testing’s contributions and value
• Maximizing the return on your visibility investment
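
One of the tactics above, aggregating existing test results into updates stakeholders actually see, takes very little tooling. The sketch below is an illustrative assumption rather than Barrett’s approach: it rolls JUnit-style XML result files found under a hypothetical results/ directory into a single headline figure for a big visible chart or status mail.

```python
# Summarize JUnit-style XML reports (results/*.xml) into one visible number.
# The directory name and report format are assumptions; adapt to your tools.
import glob
import xml.etree.ElementTree as ET

total = failed = 0
for path in glob.glob("results/*.xml"):
    root = ET.parse(path).getroot()
    # Reports may use a single <testsuite> or a wrapping <testsuites> element.
    suites = root.iter("testsuite") if root.tag == "testsuites" else [root]
    for suite in suites:
        total += int(suite.get("tests", 0))
        failed += int(suite.get("failures", 0)) + int(suite.get("errors", 0))

if total:
    print(f"Regression suite: {total - failed}/{total} passing "
          f"({100 * (total - failed) / total:.1f}%)")
else:
    print("No test results found under results/")
```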

F7
Agile Testing

Perils and Pitfalls of the New “Agile” Tester
Janet Gregory, DragonFire, Inc.
 
If your background is testing on traditional projects, you are used to receiving something called “requirements” to develop test cases—and sometime later receiving an operational system to test. In an agile project, you are expected to test continually changing code based on requirements that are being uncovered in almost real time. Many perils and pitfalls await testers new to agile development. For example, a tester new to agile might think, “I’ll test the latest ‘stories’ on Tuesday when I get my next build.” And you would be WRONG! Waiting for a new build will almost always put you an iteration behind the developers and in a schedule hole from which you cannot recover. To avoid this trap, you must get involved with a feature story as soon as the developers pick it up, even before coding begins. Janet Gregory discusses the new when’s, how’s, and what’s of agile testing and helps you begin to change your mindset so you can become the new agile tester in such high demand today.
 


• Pitfalls of agile development that lie in wait for unsuspecting testers
• Ways to avoid traps that test teams fall into when agile practices are introduced
• Tools and techniques for testing in an agile development environment

F8
Test Automation

Component-Based Test Automation
Vincenzo Cuomo, ST Incard
 
Creating software applications by assembling pre-built components has proved to be very successful on many development projects. Just as component-based development can reduce the time-to-market of high quality software, the same concept is equally applicable to automated testing. Vincenzo Cuomo introduces an approach to test automation called Component-based Testing. Using this method, you design and create reusable, highly configurable test components that can be assembled into application-specific test scripts. Vincenzo presents a case study to illustrate Component-based Testing concepts and demonstrates how you can build test components that are application independent and self-contained. In Vincenzo’s experience, Component-based Testing has resulted in higher test case reusability (up to 80%) and a remarkable reduction of testing time and cost (up to 50%).
 


• How to rethink test script creation in terms of components
• The differences between Component-based Testing and other approaches
• Achieve significant reductions in testing time and costs
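
The assembly idea can be pictured with a small sketch. This is not Vincenzo’s framework: it assumes a generic setup in which application-independent components (here, parameterized functions wrapped in a tiny Component class) are chained into an application-specific test script.

```python
# Minimal sketch of component-based test automation: reusable, configurable
# components assembled into an application-specific script. The Component
# class and the sample steps are illustrative assumptions, not a real tool.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Component:
    name: str
    action: Callable[[Dict[str, str]], None]  # receives this step's configuration
    config: Dict[str, str]

    def run(self) -> None:
        print(f"[{self.name}] config={self.config}")
        self.action(self.config)

# Application-independent components (stubs standing in for real drivers).
def open_session(cfg: Dict[str, str]) -> None: ...    # connect to the system under test
def send_command(cfg: Dict[str, str]) -> None: ...    # issue a configurable command
def verify_response(cfg: Dict[str, str]) -> None: ... # compare against an expected value

# An application-specific test script is just an ordered assembly of components.
script: List[Component] = [
    Component("open_session",    open_session,    {"host": "test-env-1"}),
    Component("send_command",    send_command,    {"command": "SELECT_FILE"}),
    Component("verify_response", verify_response, {"expect": "9000"}),
]

for step in script:
    step.run()
```

Because each component is self-contained and configured only through its parameters, the same components can be reassembled, in a different order and with different configuration, for other applications, which is where the reuse described in the abstract comes from.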

F9
Reviews and Inspections

Client Verification Sessions: A Low Cost, High Payback Approach
Mette Bruhn-Pedersen, XPonCard Service Systems
 
Want to improve the quality of your products? Of course you do. But how? Mette Bruhn-Pedersen uses a simple but effective method that includes both clients and users in the development process. Her company organizes and conducts verification sessions early in development. These sessions consist of two parts: the first is a demonstration of the implemented functionality using test cases as examples; the second is a “play” session in which the customer is given control of the system to explore the functionality from a business perspective. By observing the client, testers gain a better understanding of what functionality is most important to the client and increase their knowledge of the software’s intended use. Sometimes the clients find important new defects during the session. And almost always, testers learn they need to add new test scenarios based on their observations during the play session.
 


• Find missing or misunderstood functionality faster and more cheaply
• How to improve test suites with client input
• A subtle way to set realistic customer expectations early in development

F10
Special Topics 

Challenges and Benefits of Test Process Assessments
Gopinath Mandala, Tata Consultancy Services Ltd.
 
When you need to make improvements in your test practices, a formal test process assessment can help you understand your current situation and direct you toward better testing. One assessment model is Test Process Improvement (TPI®). Gopinath Mandala reports that the TPI® model has been used successfully to achieve distinct benefits for his customers. He explains the difference between a model and a methodology, and he describes the assessment methodology he uses—identifying stakeholders, interviewing, analyzing the results, and preparing and presenting recommendations. Gopinath discusses the need to set clients’ expectations before the assessment begins and suggests ways to empower them to implement recommendations after the assessment.

 


• Benefits of performing a test process assessment
• Test Process Assessment methodology
• Approaches to make an assessment successful

TPI®  is a registered trademark of Sogeti USA, LLC.




 