STARWEST Software Testing Analysis & Review
 
STARWEST 2010 Concurrent Sessions



Concurrent Sessions for Thursday, September 30, 2010, 9:45 a.m.
T1  
Taking Your Testing Team Global 
Jane Fraser, Electronic Arts
 
With pressure to downsize local teams in favor of offshore or outsourced testing, you may be faced with taking your team global. Jane Fraser discusses the good, the bad, and the ugly of having to outsource or offshore testing. She talks about the pitfalls of hiring across cultures, such as when “Yes” means “We don't understand, but we'll try.” Jane shares ways to maintain your team processes and standards with a distributed team. She examines the issues and benefits of insourcing and outsourcing—and the difference between the two. Using her experience setting up insourced offices in China and India, and outsourced offices in Argentina, Vietnam, India, and China, Jane shares her transition plan to move 70% of her main development studio to five countries around the world. Whether you decide to offshore testing or it’s decided for you, join Jane to discover how to successfully transfer some or all of your testing to a remote team.  
Learn more about Jane Fraser  
 
T2  
Go Sleuthing with the Right Test Technique
Derk-Jan de Grood, Valori
 
Although much information is available on test design techniques, very little is written on how to select which techniques to use for the job at hand. Derk-Jan de Grood believes that many testers find it difficult to select the right techniques and very often use a technique simply because they know it. A better reason to choose a technique is that it is likely to discover the most important errors quickly. Derk-Jan shares his insights on test technique selection and poses three questions you should ask yourself when selecting a technique: What types of errors do I want to find? What impact do these errors have in production? Is the information needed to perform these tests available? He then lays out a list of common test techniques and discusses which error types they are most likely to discover. Take back a new understanding of test technique choice and selection to become a better software defect sleuth.

Learn more about Derk-Jan de Grood 

 
 
T3  
Testing Dialogues: Automation Issues
Dorothy Graham, Independent Consultant, and Mieke Gevers, AQIS
 
What problems are you facing in test automation right now? Just getting started? Trying to choose the right tool set? Working to convince executive managers of the value of automation? Dealing with excessive maintenance of scripts? Worrying about usability and security testing? Something else? Based on the problems and topics you and fellow automators bring to this session, Dorothy Graham and Mieke Gevers, both experienced test automation experts, will explore many of the most vexing test automation issues facing testers today. Join with other participants in small groups to discuss your situation, share your experiences, learn from your peers, and get the experts’ views from Dorothy and Mieke. As you learn and share, each group will create a brief presentation to give at the conclusion of the session. Dot and Mieke will structure the discussions to keep them focused and moving forward while allowing the freedom to explore issues in depth. Take back solid advice and new insights from the crowd to improve your automation efforts.     
Learn more about Dorothy Graham and Mieke Gevers
 
T4  

Agile Testing: Facing the Challenges Beyond the Easy Contexts 
Bob Galen, iContact

 
Don’t let anyone tell you otherwise—doing testing well on agile teams is hard work! First, you have to get management over the misconception that you don’t need specialist testers within agile teams. Next, you have to integrate testers with the developers and provide holistic, high-quality results. Those are just the easy challenges you face. Then comes the hard part! Bob Galen explores more difficult agile testing contexts—how to attack a total lack of test automation, how to remain agile in highly regulated environments, how to serve your PMO or Testing COE while remaining agile, how to organize testing when your agile team is globally dispersed, how to blend traditional testing processes with their agile counterparts, and more. If you’re in a difficult testing context within an agile development environment, come and join the conversation. You’ll find examples and options, but no silver bullets. Remember—it’s HARD!
Learn more about Bob Galen  
 

T5

 
The Power of the Crowd: Mobile Testing for Scale and Global Coverage 
John Carpenter, Mob4Hire, Inc.
 
Crowdsourced testing of mobile applications, a middle ground between in-house and outsourced testing, has many advantages: scale, speed, coverage, lower capital costs, reduced staffing costs, and no long-term commitments. However, crowdsourced testing of any application—mobile or not—should augment your professional testing resources, not replace them. Most importantly, crowdsourced testing has to be done well or it’s a waste of time and money. John Carpenter reviews the applications and ways he’s outsourced testing to the crowd. Focusing on adopting crowdsourcing for both functional and usability testing in mobile applications, John describes scenarios in which you can leverage the crowd to lower costs and increase product quality, including scaling the application to large populations of global users. John also reviews scenarios in which crowdsourcing can add unnecessary time and cost to a project and ways to avoid those pitfalls. Learn about the power of the crowd and when to use it to propel your testing project forward.   

 Learn more about John Carpenter 

 
 
T6  
Patterns for Test Asset Reusability
Vishal Chowdhary, Microsoft
 
Typically, testers write a test case for the single component or subsystem they are testing, thus limiting its value. What if you could repurpose and reuse previously developed test assets across several components and subsystems? Vishal Chowdhary shares three test patterns he has encountered many times while testing various .NET Framework components at Microsoft. The “Test One, Get One Free” pattern describes how to test features that are layered—where a feature enhances an inner core feature. The “Features Get Tested Alike” pattern describes how to test similar features that are exposed via different interfaces to the user (e.g., through an API vs. the user interface). The “Standing on the Shoulders of Giants” pattern lays down design guidelines to optimally build reusable components that can be leveraged between feature teams testing large software products. Explaining their context, usage, and implementation details, Vishal presents these patterns so that you can adapt them for use on many different applications and software products.
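As a rough, hypothetical illustration of the “Features Get Tested Alike” idea (not Vishal’s actual implementation), the pytest-style sketch below reuses one test body against two interchangeable adapters—one standing in for an API driver, one for a UI driver—so the same behavior is checked through both interfaces.

import pytest


class ApiAdapter:
    """Drives the (hypothetical) cart feature through its library API."""
    def add_item(self, cart, item):
        cart.append(item)
        return len(cart)


class UiAdapter:
    """Stand-in for driving the same feature through the user interface."""
    def add_item(self, cart, item):
        # A real suite would script the UI here; this stub just mimics
        # the same observable behavior so the example stays runnable.
        cart.append(item)
        return len(cart)


@pytest.fixture(params=[ApiAdapter, UiAdapter], ids=["api", "ui"])
def adapter(request):
    # One fixture, two interfaces: every test that uses it runs twice.
    return request.param()


def test_adding_items_updates_the_count(adapter):
    cart = []
    assert adapter.add_item(cart, "book") == 1
    assert adapter.add_item(cart, "pen") == 2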
Learn more about Vishal Chowdhary  
 Concurrent Sessions for Thursday, September 30, 2010, 11:15 a.m.
T7  
Don't Be the Quality Gatekeeper: Just Hold Up the Mirror 
Mfundo Nkosi, Micro to Mainframe
 
One of the greatest temptations of test managers and their teams is to be the quality gatekeeper—the ones who raise the gate when testing reveals little and keep it closed when they believe that defects (found and unfound) risk the project. Invariably, this role creates an expectation from stakeholders that if the release fails or a major flaw occurs in production, the test team is at fault. Based on his sometimes-painful experiences, Mfundo Nkosi shares his insights on how testing teams can maintain credibility and increase their influence by holding a mirror up to the project rather than becoming the quality police. Mfundo describes the process of maintaining a risks and issues log, writing an informative test closure report, and clearly communicating status—the good and the bad—in a non-threatening way. By focusing your testing mirror on the positives, negatives, issues, concerns, recommendations, and lessons learned, your test team will provide a critical tool to help business and stakeholders make informed decisions.   
Learn more about Mfundo Nkosi  
 
T8  
Operational Testing: Walking a Mile in the User's Boots
Gitte Ottosen, Systematic
 

Often, it is a long way from the system’s written requirements to what the end user really needs. When testing is based on the requirements and focuses solely on the features being implemented, one critical perspective may be forgotten—whether or not the system is fit for its intended purpose and does what the users need it to do. Gitte Ottosen rediscovered this fact when participating in the development of a command and control system for the army, leading the test team to take testing to the trenches and implement operational testing—also called scenario-based testing. Although her team had used domain advisors extensively when designing the system and developing requirements, they decided to design and execute large operational scenarios with real users doing the testing. Early on, they were able to see and experience the system in action from the trenches and give developers needed feedback. Gitte introduces you to the process they employed to develop operational tests and shares the results they achieved.

  

Learn more about Gitte Ottosen

 
 
T9  
Requirements Based Testing on Agile Projects 
Richard Bender, Bender RBT, Inc.
 
If your agile project requires documented test case specifications and automated regression testing, this session is for you. Cause-effect graphing—a technique for modeling requirements to confirm that they are consistent, complete, and accurate—can be a valuable tool for testers within agile environments. Whether the source material is story cards, use cases, or lightly documented discussions, you can use cause-effect graphing to confirm user requirements and automatically generate robust test cases. Dick Bender explains how to deal with short delivery times, rapid iterations, and the way requirements are documented and communicated on agile projects. By updating the cause-effect graph models from sprint to sprint as requirements emerge, you can immediately regenerate the related test cases. This approach is far more workable than attempting to maintain the test specifications manually. As requirements stabilize, you can automate the tests to increase the efficiency of ongoing regression testing.
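To make the idea concrete, here is a minimal, hypothetical sketch of a cause-effect model for a single invented rule (“grant the discount only when the customer is a member and the order exceeds $100”), with expected outcomes enumerated from the causes. Real cause-effect graphing tools prune the combinations rather than enumerating them all; this toy version simply brute-forces the two causes.

from itertools import product

CAUSES = ["is_member", "total_over_100"]


def discount_applies(is_member, total_over_100):
    # The modeled requirement: both causes must hold for the effect.
    return is_member and total_over_100


def generate_test_cases():
    """Enumerate every cause combination with its expected effect."""
    cases = []
    for values in product([False, True], repeat=len(CAUSES)):
        inputs = dict(zip(CAUSES, values))
        cases.append({**inputs, "expect_discount": discount_applies(**inputs)})
    return cases


if __name__ == "__main__":
    for case in generate_test_cases():
        print(case)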
Learn more about Richard Bender  
 

T10
 
Testing Embedded Software Using an Error Taxonomy 
Jon Hagar, Consultant
 
Just like the rest of the software world, embedded software has defects. Today, embedded software is pervasive—built into automobiles, medical diagnostic devices, telephones, airplanes, spacecraft, and really almost everything. Because defects in embedded software can cause constant customer frustration, complete product failure, and even death, it would seem critical to collect and categorize the types of errors that are typically found in embedded software. Jon Hagar describes the few error studies that have been done in the embedded domain and the work he has done to turn that data into a valuable error taxonomy. After explaining the concept of a taxonomy and how you can use it to guide test planning for embedded software, he discusses ways to design tests to exploit the taxonomy and find important defects in your embedded system. Finally, Jon presents some implications and conclusions he’s drawn from the data and makes recommendations for future error studies the embedded community needs.   
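As a purely illustrative sketch (the categories and mappings below are assumptions invented for the example, not Jon’s published taxonomy), a taxonomy can be as simple as a structure that maps error categories to the kinds of tests likely to expose them:

# Illustrative only: a toy embedded-error taxonomy used to drive test planning.
EMBEDDED_ERROR_TAXONOMY = {
    "timing": {
        "examples": ["missed deadline", "race condition", "interrupt jitter"],
        "test_ideas": ["load the scheduler to its worst case", "vary interrupt rates"],
    },
    "hardware interface": {
        "examples": ["wrong register mask", "sensor calibration drift"],
        "test_ideas": ["boundary values on A/D inputs", "fault-inject bus errors"],
    },
    "resource exhaustion": {
        "examples": ["stack overflow", "heap fragmentation"],
        "test_ideas": ["long-duration soak tests", "monitor stack high-water marks"],
    },
}


def plan_tests(taxonomy):
    """Turn the taxonomy into a simple test-planning checklist."""
    for category, detail in taxonomy.items():
        for idea in detail["test_ideas"]:
            print(f"[{category}] {idea}")


if __name__ == "__main__":
    plan_tests(EMBEDDED_ERROR_TAXONOMY)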
Learn more about Jon Hagar  
 

T11
 
Testing with Emotional Intelligence (EI) 
Thomas McCoy, Australian Department of Families, Housing, Community Services and Indigenous Affairs
 
Our profession can have an enormous emotional impact—on others as well as on us. We're constantly dealing with fragile egos, highly charged situations, and pressured people playing a high-stakes game under conditions of massive uncertainty. On top of this, we're often the bearers of bad news and are sometimes perceived as the "critics," activating people's primal fear of being judged. Emotional Intelligence (EI), the concept popularized by Harvard psychologist and science writer Daniel Goleman, has much to offer our profession. Key EI skills include self-awareness, self-management, social awareness, and relationship management. Explore the concept of EI, assess your own levels of EI, and look at ways in which EI can help in areas including anger management, controlling negative thoughts, constructive criticism, and dealing with conflict, all discussed within the context of the testing profession. This lively session is grounded in real-life examples, giving you concrete ideas to take back to work.
Learn more about Thomas McCoy  
 Concurrent Sessions for Thursday, September 30, 2010, 1:30 p.m.

T12
 
Grassroots Quality: Changing the Organization One Person at a Time 
Frank Lassiter, SAS Institute, Inc.
 
Throughout its history, SAS has valued innovation and agility over formal processes. Attempts to impose corporate-wide policies have been viewed with suspicion and skepticism. For quality analysts and test groups with a quality mission, the challenge is to marry innovation with the structures expected from a quality-managed development process. Frank Lassiter shares the experiences of his group’s working within the corporate culture rather than struggling against it. He describes the services his group provides to individual contributors—mentoring, facilitating meetings, exploring best practices, and technical writing support. With a reputation for adding real, immediate value to the daily tasks of individuals on R&D teams, Frank’s group is enthusiastically invited into projects. Learn why testers and test teams may have the ideal background and credibility to provide pragmatic advocacy for quality processes and discover how word-of-mouth can “spread the quality message” across your organization—one person at a time.   
Learn more about Frank Lassiter  
 

T13
 
Testing the System's Architecture 
Peter Zimmerer, Siemens AG
 
The architecture is a key foundation for developing and maintaining flexible, powerful, and sustainable products and systems. Experience has shown that deficiencies in the architecture cause too many project failures. Who is responsible for adequately validating that the architecture meets the objectives for a system? And how does architecture testing differ from unit testing and component testing? Even in the ISTQB glossary, architecture testing is not defined. Peter Zimmerer describes what architecture testing is all about and shares a list of practices to implement this type of testing within test and development organizations. Peter offers practical advice on the required tasks and activities as well as the roles, contributions, and responsibilities of software architects and others in the organization. Learn what architecture testing really means and how to establish and lead it in your projects. Early architecture testing not only results in better quality systems but also accelerates development and decreases long-term maintenance efforts.  
Learn more about Peter Zimmerer  
 

T14
 
Futility-based Test Automation 
Clinton Sprauve, Borland (a Micro Focus company)
 
Developers and other project stakeholders are paying increased attention to test automation because of its promise to speed development and reduce the costs of systems over their complete lifecycle. Unfortunately, flawed test automation efforts have prevented many teams from achieving the productivity and savings that their organizations expect and demand. Clint Sprauve shares his real-world experiences, exposing the most common bad habits that test automation teams practice. He reveals the common misconceptions about keyword-driven testing, test-driven development, behavior-driven development, and other methodologies that can lead to futility-based test automation. Regardless of your test automation methodology or whether you operate in a traditional or agile development environment, Clint offers solutions for avoiding the “crazy cycle” of script maintenance and ways to incrementally improve your test automation practices. Take back tips and new ideas for avoiding futility in this critical part of software development projects.
Learn more about Clinton Sprauve  
 

T15
 
Debunking Agile Testing Myths 
Geoff Horne, iSQA
 
What do the Agile Manifesto and various agile development lifecycle implementations really mean for the testing profession? Extremists say they mean “no testers”; others believe it’s just “business as usual” for testers. As a test manager who has been around the block a few times, Geoff Horne has participated in countless test projects, both agile and traditional. Some of his traditional thinking about testing was turned on its ear and challenged by the key precepts of agile development. He’s discovered that traditional projects can achieve many benefits of the agile testing approach. In this revealing session, Geoff identifies and dispels the myths surrounding agile testing and demonstrates how traditional and agile methods can co-exist within a single project. For testers not versed in agile, Geoff offers suggestions for being prepared to work on an agile project when the opportunity arises. Whether or not you work in an agile environment, this session can help testers and test teams become more agile in their own right.  
Learn more about Geoff Horne  
 

T16
 
A Customer-driven Approach to Software Metrics 
Wenje Lai & J.P. Chen, Cisco Systems
 
In their drive to delight customers, organizations initiate testing and quality improvement programs and define metrics to measure their success. In many cases, we see organizations declare success even though their customers do not see much improvement in either products or services. Wenje Lai and J.P. Chen share their approach to identifying quality improvement needs and defining the appropriate metrics that link improvement goals to customer experiences. As a result, the resources allocated to internal quality improvement efforts maximize the value to the business. Their approach is a simple three-step procedure that any test or development organization can easily adopt. It starts with using customer survey data to understand the organization’s customer pain points and ends with identifying the metrics that are linked to the customer experience and actionable by development and test teams inside the organization.
Learn more about Wenje Lai and J.P. Chen
 

T17
 
Variations on a Theme: Performance Testing and Functional Unit Testing 
André Bondi, Siemens Corporate Research
 
The right types of performance tests can reveal functionality problems that would not usually be detected during unit testing. For example, concurrency and thread safety problems can manifest themselves in poor performance or deadlocks, leading to incorrect output. Because unit tests inherently lack concurrent activity, these problems rarely manifest themselves in functional tests. André Bondi describes test structures based on rudimentary models that reveal valuable insights about system scalability, performance, and system function. For example, to ensure that resource utilization increases linearly with the load—a necessary condition for scalability—transactions should be submitted to systems for long periods of time at different rates. Conversely, when the load is constant, performance measures should be constant. Deviations from these expectations are symptomatic of programming or design problems such as software bottlenecks, memory leaks, or system instability due to defects. André illustrates these concepts with example results and shows you how to automate the analysis of performance test results.
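As a rough sketch of how such an analysis might be automated (the data, threshold, and check are assumptions for illustration, not André’s method), the snippet below computes the CPU cost per transaction at each load level; under linear scaling that cost should stay roughly constant, so runs that drift from the low-load baseline are flagged for investigation.

def per_transaction_cost(rates, utilizations):
    """CPU-seconds consumed per transaction at each offered rate."""
    return [util / rate for rate, util in zip(rates, utilizations)]


def check_scalability(rates, utilizations, tolerance=0.20):
    """Flag load levels whose per-transaction cost drifts more than
    `tolerance` (relative) from the cost observed at the lowest rate."""
    costs = per_transaction_cost(rates, utilizations)
    baseline = costs[0]
    for rate, cost in zip(rates, costs):
        if abs(cost - baseline) / baseline > tolerance:
            print(f"{rate}/s: {cost:.4f} CPU-s per transaction vs "
                  f"baseline {baseline:.4f} -- possible bottleneck or leak")


if __name__ == "__main__":
    rates = [10, 20, 40, 80, 160]          # transactions per second
    cpu = [0.05, 0.10, 0.21, 0.46, 0.99]   # fraction of one CPU busy
    check_scalability(rates, cpu)          # only the 160/s run is flagged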
Learn more about André Bondi  
 Concurrent Sessions for Thursday, September 30, 2010, 3:00 p.m.

T18
 
Developing a Testing Center of Excellence 
Mona Lane, Aetna
 
In spite of well-established testing processes, many organizations are still struggling to achieve consistent, reliable testing results. Are testing deliverables completed incorrectly? Is your organization slow to react to change? A Testing Center of Excellence (TCOE) provides oversight of the testing efforts across the enterprise to help provide the best testing services possible and adapt more rapidly to innovations and challenges. Mona Lane shares the strategy Aetna followed to build a successful TCOE. Originally focused on one specific area—test tools—it evolved and continues to expand to encompass all aspects of testing. She shares the checklists they’ve developed to review testing artifacts for consistency and how these reviews are helping Aetna improve quality. Join Mona to explore the types of test metrics Aetna uses to identify knowledge gaps and how these metrics are helping improve both their training plans and testing processes.
Learn more about Mona Lane  
 

T19
 
End-to-End Testing—When the Middle is Moving 
Ruud Teunissen, POLTEQ IT Services BV
 
State-of-the-art development technologies and methods have increased our ability to rapidly implement new systems to support continuously changing business needs. These technologies include Web services and services that encapsulate legacy systems, as well as SOA, SaaS, cloud computing, agile practices, and new test sourcing options. Testers are being pushed to create suites of end-to-end tests in which all parts of the system are tested together. Ruud Teunissen explores ways to create end-to-end tests that are integrated, production-like, automated, continuously running, and cover the full application landscape. Ruud presents strategies and tools for developing, executing, and maintaining these tests including issues surrounding the test environment and test data. Although the appropriate tools to support end-to-end testing are essential, the most critical success enabler is finding the right people who understand the end-to-end application environment. Join Ruud to learn about the next step in the evolution of testing.  
Learn more about Ruud Teunissen  
 

T20
 
Handling Failures in Automated Acceptance Tests 
Alexandra Imrie, BREDEX GmbH
 
One of the aims of automated functional testing is to run many tests and discover multiple errors in one execution of the test suite. However, when an automated test runs into unexpected behavior—system errors, wrong paths taken, incorrect data stored, and more—the test fails. When a test fails, additional errors, called inherited errors, can result or the entire test can stop unintentionally. Either way, some portion of the system remains untested, and either the error must be corrected or the automation changed before proceeding. Alexandra Imrie describes proven approaches to ensure that as many tests as possible continue running despite the errors encountered. She begins by sharing a specific way of designing tests to minimize the disturbance from an error. Using this test design as a foundation, Alex describes the strategies she exploits for handling and recovering from error events that occur during automated functional tests. In conclusion, Alex explains some process considerations for dealing with errors within the project team.
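A minimal, hypothetical sketch of that recovery idea (the fake application and tests below are invented for the example, not Alexandra’s actual approach): each test starts from a known state, failures are caught and logged, and the harness resets the application before moving on, so one failing test neither stops the suite nor contaminates the tests that follow.

import traceback


class FakeApp:
    """Stand-in for the application under test (hypothetical)."""
    def __init__(self):
        self.screen = "home"

    def reset(self):
        self.screen = "home"

    def open_orders(self):
        self.screen = "orders"


def run_suite(app, tests):
    results = {}
    for test in tests:
        try:
            app.reset()                      # start each test from a known state
            test(app)
            results[test.__name__] = "PASS"
        except Exception:
            results[test.__name__] = "FAIL"  # record the failure, then keep going
            traceback.print_exc()
    return results


def test_orders_screen_opens(app):
    app.open_orders()
    assert app.screen == "orders"


def test_deliberately_failing(app):
    assert app.screen == "payments", "fails on purpose to show recovery"


if __name__ == "__main__":
    print(run_suite(FakeApp(), [test_orders_screen_opens, test_deliberately_failing]))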
Learn more about Alexandra Imrie  
 

T21
 
Multi-level Testing in Agile Development 
Roi Carmel, Hewlett-Packard
 
Before they could begin automated testing, test teams used to wait on the sidelines for developers to produce a stable user interface. Not anymore. Agile development environments and component-based applications challenge testers to contribute value earlier and continuously throughout development. Although most agile testers currently focus on unit and integration testing, they also need to test the application’s business and service layers—all the way to the system level. Roi Carmel guides you step-by-step through these stages, describing which practice—GUI or non-GUI automated testing—is the right choice and why. The incorrect choice can lead to iteration delays, lower team productivity, and additional problems. Roi explores who should be responsible for multi-level testing, why regression test suites are important in agile environments, how the test engineer skill-set is evolving, and the most common mistakes test teams make when testing GUIs in an agile environment.  
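As a small, hypothetical example of testing below the GUI (the service and business rule here are invented for illustration), a business-layer check like the following can be automated in the first days of an iteration, long before any user interface is stable; GUI-level automation is then reserved for presentation concerns.

class DiscountService:
    """Invented business-layer component used only for illustration."""
    def price_after_discount(self, price, is_member):
        # Business rule: members get 10 percent off.
        return round(price * 0.9, 2) if is_member else price


def test_member_gets_ten_percent_off():
    assert DiscountService().price_after_discount(100.0, is_member=True) == 90.0


def test_non_member_pays_full_price():
    assert DiscountService().price_after_discount(100.0, is_member=False) == 100.0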
Learn more about Roi Carmel  
 

T22
 
The Test Manager's Dashboard: Making It Accurate and Relevant 
Lloyd Roden, Grove Consultants
 
Gathering and presenting clear information about quality—both product and process—may be the most important part of the test manager’s job. Join Lloyd Roden as he challenges your current progress reports—probably full of lots of difficult-to-understand numbers—and asks you to replace them with a custom Test Manager’s Dashboard containing a series of graphs and charts with clear visual displays. Your dashboard needs to report quality and progress status that is accurate, useful, easily understood, predictive, and relevant. Learn about Lloyd’s favorite dashboard graphs—test efficiency, risk progress, quality targets, and specific measures of the test team’s well-being. Learn to correlate and interpret the various types of dashboard data to reveal the complete picture of the project and test progress. By creating a Test Manager’s Dashboard, you will provide significant long-term benefits to both the test team and the organization—and make your job easier and more fulfilling.
Learn more about Lloyd Roden  
 

T23
 
Software Performance Testing: Beyond Record and Playback 
Alim Sharif, Ultimate Software Group
 
Predictable software performance is crucial to the success of almost every enterprise system and, in some cases, to the success of the company itself. Before deploying systems into production, management is demanding comprehensive performance testing and reliable test results. This has created significant pressure on development managers and performance test engineers. Alim Sharif guides you through the basic steps for planning, creating, executing, and reporting performance tests. He explains how to actively involve stakeholders—application developers, database administrators, network engineers, IT infrastructure groups, and senior managers—to identify and resolve performance issues. Alim discusses how to maintain the balance between these stakeholder interests during each step and demonstrates how to effectively lead the performance test effort. Avoid the wars that can break out between stakeholders and developers when they discover performance issues at the eleventh hour. Learn how to identify performance problems in the application code, database schema, network topology, and hardware environment before it’s too late.  
Learn more about Alim Sharif  
