STARWEST Software Testing Analysis & Review
 
STARWEST 2011
Thursday Concurrent Sessions
Thursday, October 06, 2011 9:45 AM
T1
Test Management
Servant Leadership in Agile: The End of Command and Control
Dale Emery, DHE
The switch from traditional, top-down management to agile project practices poses a dilemma for managers and the team, including test managers and testers. If agile teams self-manage their work, what does a test manager actually do now? And without strong guidance from a traditional manager, how do teams organize their work? Dale Emery describes how successful agile teams resolve these conundrums—by adopting a seemingly paradoxical way of collaboration called “servant leadership.” A servant leader leads by serving and serves by leading. On high-performing agile teams, everyone is a servant leader in one way or another. There are no followers in the traditional sense and no command-and-control managers. Everyone leads—all the time. Everyone serves—all the time. Learn the principles and practices of servant leadership, and how servant leadership helps you apply your existing experience and skills in new ways to contribute to the success of your projects.
Learn more about Dale Emery
Thursday, October 06, 2011 9:45 AM
T2
Test Techniques
Teach Your Acceptance Tests to Speak “Business”
Richard Lawrence, Humanizing Work
Acceptance Test-Driven Development (ATDD) uses specification by example to define expressive automated tests that facilitate project communication and drive product design. Acceptance testing tools such as Cucumber, FitNesse, and Concordion are powerful means of building a common, ubiquitous language connecting business people, developers, and testers. A common language leads to common understanding and ultimately to better software that meets business needs and delights customers. Unfortunately, too many teams write tests that use excessive technical jargon and miss out on this opportunity. Learn from Richard Lawrence, an expert in acceptance test-driven development, how to use acceptance test scenarios to develop a common language among stakeholders. Discover approaches you can use to refactor your existing test scenarios to speak “business.” Although the focus of this session is on Cucumber, participants who use other tools will be able to apply the lessons to their own test environments.
Learn more about Richard Lawrence
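As background for readers new to ATDD tooling: Cucumber binds business-readable scenarios to step definitions in code. The sketch below is an illustration only, not material from the session; it uses the Python behave library (a Cucumber-style BDD tool) and a hypothetical PricingService, hypothetical step wording, and a hypothetical discount rule to show a step written in business language rather than technical jargon.

```python
# Illustrative only: business-readable steps ("Given a returning customer with
# 3 prior orders") bound to Python step definitions with behave.
# PricingService, the step wording, and the discount rule are hypothetical.
from behave import given, when, then


class PricingService:
    """Stand-in for the system under test."""
    def quote(self, prior_orders: int) -> float:
        base = 100.0
        return base * (0.9 if prior_orders >= 3 else 1.0)


@given("a returning customer with {count:d} prior orders")
def step_returning_customer(context, count):
    context.prior_orders = count


@when("the customer requests a quote")
def step_request_quote(context):
    context.quote = PricingService().quote(context.prior_orders)


@then("the loyalty discount of {percent:d}% is applied")
def step_check_discount(context, percent):
    assert context.quote == 100.0 * (1 - percent / 100)
```

The matching feature file would read in plain business terms, which is the kind of shared vocabulary the session advocates.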
Thursday, October 06, 2011 9:45 AM
T3
Performance Testing
Performance Testing the SMART Way
Mieke Gevers, AQIS
Although testers know the ins and outs of functional testing, many of us don’t have a smart process for doing software performance testing. To improve her personal performance testing skills, Mieke Gevers looked at processes from other disciplines—automobile manufacturing, medical rehabilitation, and project management. It was there she found SMART, which stands for Specific, Measurable, Attainable, Realistic, and Timely. Learn how Mieke’s organization used SMART to deal with chaotic performance testing situations—lack of clear requirements, discrepancies between business objectives and reality, running out of time, and changes in technology. SMART has helped her organization save time, react quickly to production requests for developing and running tests, develop reproducible performance tests, and create better test results documentation. Take back new knowledge of how to use SMART objectives to become a smarter tester and improve your performance testing.
Learn more about Mieke Gevers
Thursday, October 06, 2011 9:45 AM
T4
Metrics
A Holistic Way to Measure Quality
Jennifer Bonine, Up Ur Game Learning Solutions
Have your executives ever asked you to measure product quality? Is there a definitive way to measure application quality in relation to customer satisfaction? Have you observed improving or excellent defect counts and, at the same time, heard from customers about software quality issues? If you are in a quandary about quality metrics, Jennifer Bonine may have what you are looking for. Join her to explore the Problems per User Month (PUM) and the Cost of Quality (CoQ) metrics that take a holistic approach to measuring quality and guiding product and process improvements. Learn what data is required to calculate these metrics, discover what they tell you about the quality trends in your products, and learn how to use that information to make strategic improvement decisions. By understanding both measures, you can present to your executives the information they need to answer their questions about quality.
Learn more about Jennifer Bonine
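The session's exact formulas aren't given in the abstract. As a hedged illustration, PUM is commonly computed as total problems reported divided by total user-months of use, and CoQ as the sum of prevention, appraisal, and failure costs; the sketch below uses hypothetical placeholder figures only.

```python
# Illustrative only: common textbook definitions, not necessarily the exact
# formulas used in the session. All figures below are hypothetical placeholders.

def problems_per_user_month(problems_reported: int, users: int, months: float) -> float:
    """PUM = total problems reported / total user-months of use."""
    return problems_reported / (users * months)

def cost_of_quality(prevention: float, appraisal: float,
                    internal_failure: float, external_failure: float) -> float:
    """CoQ = prevention + appraisal + internal failure + external failure costs."""
    return prevention + appraisal + internal_failure + external_failure

if __name__ == "__main__":
    print(problems_per_user_month(problems_reported=120, users=2_000, months=1))  # 0.06 PUM
    print(cost_of_quality(prevention=40_000, appraisal=60_000,
                          internal_failure=25_000, external_failure=15_000))      # 140000.0
```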
Thursday, October 06, 2011 9:45 AM
T5
Test Labs
Virtualization of Test Labs
Frank Lanciotto, Aetna, Inc.
Frank Lanciotto shares his experience with Aetna’s creation of “world class” quality testing platforms using virtual technology in conjunction with physical devices. Aetna’s Quality Assurance Lab, which uses virtual technology in both server and desktop environments, has transformed from 95 percent physical test devices in 2009 to only 25 percent today. The rise in virtualization to the 75 percent level has benefited their organization with lower costs and better process management, accommodating increased testing needs worldwide and avoiding licensing issues due to the location of the physical devices or software. Frank shares how Aetna used virtualization to integrate its reservation and administrative management systems for all test devices. By continuing to expand its virtual test lab program, Aetna will be able to provide more and different testing platforms for its testers, enabling them to significantly support Aetna’s drive toward zero defects. Learn from Aetna’s successes and mistakes to accelerate your test lab’s path to virtualization.
Learn more about Frank Lanciotto
Thursday, October 06, 2011 9:45 AM
T6
Special Topics
Offshore and Outsourced Test Automation Adventures
Hans Buwalda, LogiGear
Organizations look at two ways to reduce repetitive testing costs—automation and offshoring. Although either can work, combining the two promises even greater savings by freeing up employees for more creative testing. Because both automation and offshoring are complex operations in their own right, combining them adds risks and challenges that can lead to disappointment and a “double backlash” instead of a “double benefit” if not implemented with proven approaches. Test automation pioneer Hans Buwalda shares his personal “adventures” with offshoring and outsourcing automated testing. Organized around the major challenges he’s faced—methodology, automation technology, cultural differences, long distances, and hard-to-manage time differences—Hans presents a set of failure patterns that are common in offshoring and offers practical suggestions for how to overcome them. Whether your organization is already invested in outsourcing or offshoring or is still weighing its options, this presentation will give you the information and tools you need to seriously consider an automation option, too.
Learn more about Hans Buwalda
Thursday, October 06, 2011 11:15 AM
T7
Test Management
Crowdsourced Testing: An Emerging Model for Serious Testing
Manoj Narayanan, Cognizant Technology Solutions
Crowdsourcing has emerged as a startlingly effective by-product of social networking and the web. Manoj Narayanan describes the many ways businesses are using crowdsourcing as a cost and quality lever in their most important software testing projects. Learn about crowdsourcing and how the value delivered can differ when testing a web application, mobile device, gaming app, or other types of systems. Manoj compares the business model practiced by organizations such as uTest to traditional testing practices. He examines the different approaches organizations are taking today to integrate crowdsourced testing, ranging from using it only for ad hoc releases to making it an integral part of the overall testing strategy. Manoj concludes by exploring how greater integration between social networking and crowdsourcing can further enhance the testing business model—the creation of “social clubs” for domain-focused testing and the use of social search channels to build “tribal” knowledge across software releases.
Learn more about Manoj Narayanan
Thursday, October 06, 2011 11:15 AM
T8
Test Techniques
Model-based Testing: The Next Generation
Alexander Andelkovic, Spotify
Spotify is a music streaming service offering high-quality, instant access to music from a range of major and independent record labels. Model-based testing (MBT) is an important test technique Spotify uses to ensure that its systems deliver quality service. Spotify has discovered new ways to use MBT for effective testing in support of its user base of more than ten million. Alexander Andelkovic shares the challenges of implementing and integrating new MBT solutions and convincing company management that MBT is both efficient and effective. Explore the choice Spotify made between buying and building an advanced MBT tool, the benefits of using MBT in new ways, and the increased visibility that comes from improved quality. Whether your organization employs automated testing or not, Alexander shows you how to successfully integrate advanced MBT techniques with traditional test methods.
Learn more about Alexander Andelkovic
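Neither Spotify's tool nor its models are described in the abstract. As a generic illustration of the MBT idea, the sketch below encodes a tiny state-machine model of a hypothetical playback feature and generates test sequences by walking its transitions; it is not the speaker's approach.

```python
# Illustrative only: a toy model-based-testing generator, not Spotify's tool.
# The states and actions model a hypothetical playback feature.
import random

# Model: state -> {action: next_state}
MODEL = {
    "stopped": {"play": "playing"},
    "playing": {"pause": "paused", "stop": "stopped"},
    "paused":  {"play": "playing", "stop": "stopped"},
}

def generate_test(model, start="stopped", steps=6, seed=None):
    """Random walk over the model, yielding an executable action sequence."""
    rng = random.Random(seed)
    state, sequence = start, []
    for _ in range(steps):
        action = rng.choice(list(model[state]))
        sequence.append((state, action, model[state][action]))
        state = model[state][action]
    return sequence

if __name__ == "__main__":
    for before, action, after in generate_test(MODEL, seed=42):
        print(f"{before} --{action}--> {after}")
```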
Thursday, October 06, 2011 11:15 AM
T9
Performance Testing
Performance Testing in the Cloud
Alim Sharif, Ultimate Software Group
An explosion of new web-based products, new customers, and new technology has challenged many organizations with extreme system loads, resulting in poor performance. At the same time, managers who are under pressure to reduce operational expenses while meeting SLAs are turning to cloud computing to develop and deploy software. Alim Sharif, performance test architect at Ultimate Software Group, guides you through the benefits, challenges, and realities of performance testing in the cloud environment. Alim explains how the elastic nature of the cloud can skew your performance baseline results and shares his experience with performance testing of cloud-based systems. Take back a bag full of tips and tricks to enhance your performance testing experience. With the wisdom you’ll gain in this session, you can make your performance testing effort a success in the cloud.
Learn more about Alim Sharif
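One hypothetical illustration of the skew the abstract mentions (not a method attributed to the speaker): if autoscaling changes the number of instances between runs, raw throughput numbers are not comparable, and normalizing by instance count is one simple correction.

```python
# Illustrative only: normalizing throughput by active instance count so that
# baseline comparisons are not skewed by autoscaling. Figures are hypothetical.

def normalized_throughput(requests_per_sec: float, active_instances: int) -> float:
    """Throughput per instance, a crude way to compare runs taken at different scale."""
    return requests_per_sec / active_instances

baseline = normalized_throughput(requests_per_sec=1200, active_instances=4)   # 300 req/s per instance
candidate = normalized_throughput(requests_per_sec=1500, active_instances=6)  # 250 req/s per instance
print(f"per-instance change: {100 * (candidate - baseline) / baseline:+.1f}%")  # -16.7%
```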
Thursday, October 06, 2011 11:15 AM
T10
Metrics
Quantifying the Value of Static Analysis
William Oliver, Lawrence Livermore National Laboratory
During the past ten years, static analysis tools have become a vital part of software development for many organizations. However, the question arises, "Can we quantify the benefits of static analysis?" William Oliver presents the results of a study performed at Lawrence Livermore National Laboratory to do just that. They measured the cost of finding software defects using formal testing on a system without static analysis; then, they integrated a static analysis tool into the process and, over a period of time, recalculated the cost of finding software defects. Join William as he reveals the results of the study and discusses the value and benefits of testing with static analysis tools. Learn how commercial and open source analysis tools can perform sophisticated, interprocedural source code analysis over large code bases. Take back to your organization the proof that employing static analysis can reduce the time and cost of finding defects and subsequent debugging and, ultimately, reduce the number of defects making their way into your releases.
Learn more about William Oliver
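The study's actual figures are not reproduced here. As a sketch of the kind of cost-per-defect comparison the abstract describes, with placeholder values only:

```python
# Illustrative only: comparing cost per defect found before and after adding
# static analysis to the process. All inputs are hypothetical placeholders,
# not data from the Lawrence Livermore study.

def cost_per_defect(effort_hours: float, hourly_rate: float, defects_found: int) -> float:
    return (effort_hours * hourly_rate) / defects_found

before = cost_per_defect(effort_hours=400, hourly_rate=100, defects_found=50)  # formal testing only
after = cost_per_defect(effort_hours=60, hourly_rate=100, defects_found=30)    # static analysis pass
print(f"testing only: ${before:,.0f}/defect; static analysis: ${after:,.0f}/defect")
```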
Thursday, October 06, 2011 11:15 AM
T11
Test Labs
Hardware Bound: Testing with Limited Access to Resources
Scott Miles, Gatan, Inc.
If you are challenged to test software applications with limited or no access to the hardware on which they operate in production, this session is for you. Gatan’s DigitalMicrograph software, the industry standard for use on electron microscopes controlling proprietary cameras and imaging filters, is highly specialized software requiring expensive hardware for testing. This equipment must be shared by many individuals and organizations—including test. Even Gatan, the manufacturer of this equipment, often does not have all the hardware available in-house to test software revisions. Scott Miles shares his experience living with limited access to hardware and the approaches he has used to strengthen Gatan’s test strategies. Join in the discussion and take back examples of how to build relationships with your manufacturing facility, OEM suppliers, and customers to leverage their hardware and resources for everyone’s benefit.
Learn more about Scott Miles
Thursday, October 06, 2011 11:15 AM
T12
Special Topics
Testing on the Toilet: A Success Story from STARWEST 2007
Mette Bruhn-Pedersen, Bruhn-Pedersen Consulting
As testers, we often need to inform and educate our colleagues about the fundamentals of testing. The challenge is not just to get their attention for five minutes; the goal is to continually reinforce the benefits and techniques of testing. In their STARWEST 2007 keynote, Googlers Bharat Mediratta and Antoine Picard introduced the idea of “Testing on the Toilet”—a testing newsletter posted in toilets throughout Google’s development campus. In this story of putting a great idea to work in her own organization, Mette Bruhn-Pedersen describes how she adapted the idea to spread the testing message. Instead of a testing newsletter, Mette created a “Testing on the Toilet” quiz with questions based on the fundamentals-of-testing material in the ISTQB Foundation syllabus. Learn how to customize this approach to your organization and discover how to create your own, unique testing message.
Learn more about Mette Bruhn-Pedersen
Thursday, October 06, 2011 1:30 PM
T13
Test Management
Establishing a Testing Center of Excellence: The Pros and Cons
Raja Neravati, AppLabs
Many testing organizations view implementing a Testing Center of Excellence (TCoE) as a positive step toward providing better service to their clients. They understand how a TCoE can define and promote standard testing practices, consolidate testing tools, reduce costs, define testing boundaries, and provide specialized testing services. Unfortunately, many organizations, as they work to establish their TCoE, face problems and don’t achieve the promised benefits. Raja Neravati explores approaches for overcoming the challenges of implementing a TCoE. He describes a “tested” TCoE implementation process to address business goals, stakeholder interests, span of control, budgets, and quality problems. Because not every organization can benefit by adopting a TCoE, Raja describes situations—specific organizational cultures, development methodologies, geographic dispersion, resource limitations, and testing volumes—where decentralization of testing can yield better results. If your organization is considering or has implemented a TCoE, join in to accelerate your journey toward improved testing and quality through Raja’s experiences and guidance.
Learn more about Raja Neravati
Thursday, October 06, 2011 1:30 PM
T14
Test Techniques
Risk-based Testing: When You Can’t Test It All
Reán Young, The Kroger Company
Testers everywhere have experienced this scenario—the development cycle slips, and now testing gets two weeks instead of four to complete its work. How do you systematically determine what to test and what not to test in this time-constrained situation? How do you determine the right amount of testing so that you are doing neither too much nor too little? Reán Young shows how a risk-based approach helps identify test strategy options based on a combination of business and technical factors. The team evaluates the risks in each area of the application and devises a test plan that ensures the most critical components will be tested before the deadline. This approach encourages the entire project team to take ownership of determining what should be tested. By implementing risk-based testing in your organization, your test team will have the tools to target, prioritize, and maximize the value of testing—especially when time is short and the pressure is on.
Learn more about Reán Young
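A common way to rank areas, offered here as a generic sketch rather than the speaker's specific scheme, is to score each area's likelihood of failure and business impact and test the highest-scoring areas first. All areas and scores below are hypothetical.

```python
# Illustrative only: a simple likelihood x impact risk ranking, not necessarily
# the scheme presented in the session. Scores (1-5) and areas are hypothetical.

areas = {
    "checkout":       {"likelihood": 4, "impact": 5},
    "loyalty points": {"likelihood": 3, "impact": 3},
    "store locator":  {"likelihood": 2, "impact": 2},
    "weekly ad":      {"likelihood": 3, "impact": 2},
}

ranked = sorted(areas.items(),
                key=lambda item: item[1]["likelihood"] * item[1]["impact"],
                reverse=True)

for name, score in ranked:
    print(f"{name:15s} risk={score['likelihood'] * score['impact']}")
```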
Thursday, October 06, 2011 1:30 PM
T15
Test Process Improvement
Test Process Improvement on a Shoestring
Martin Pol, Polteq Testing Services BV
In most organizations, cost reduction is still the number one motivation for test process improvement. Although several formal improvement models are popular, they require formal assessments, process change working groups, extensive implementation programs, and new organizational structures. Instead, you can quickly implement measures that improve your testing process incrementally within your day-to-day activities. Martin Pol presents a low-budget way to select and implement a set of measures that can rapidly improve testing’s contribution to your project’s success—simple risk analysis, proactive test design, coverage targeting, and novel ways to reuse tools, environments, expertise, and existing testware. Learn how low-budget test process improvement can become a natural behavior for your testing staff. Achieve quick wins by working more closely with development and by using what you have—instead of buying or creating new tools.
Learn more about Martin Pol
Thursday, October 06, 2011 1:30 PM
T16
Personal Excellence
Be the Tester Your Dog Thinks You Are
Eric Jacobson, Turner Broadcasting System, Inc.
Most of us grew up wanting to be firemen or astronauts or teachers—not testers. Eric Jacobson, an average guy and not incredibly technical, loves software testing and his career in testing as much as his dog loves him. Using videos and candid photos of his test team at work, Eric shares the top ten skills and practices he has developed and honed over the years to make himself a test leader. He explains how he helps his team establish reasonable goals and then meet them. Find out why testing broadly first and deeper later keeps the programmer busy and takes some of the guesswork out of test estimation. Watch Eric as he shows you how to use whiteboarding to explore technical systems and help programmers find their own mistakes. Take back to work ten ideas you can employ immediately to help you be the tester your dog thinks you are.
Learn more about Eric Jacobson
Thursday, October 06, 2011 1:30 PM
T17
Mobile Testing
Mobile Testing: Old Wine in a New Bottle?
Manish Mathuria, InfoStretch Corporation
In the enterprise, mobile adoption is increasing at a fast pace—and so are the concerns about security, reliability, and quality for the software that drives mobile devices. Some of the unique and unfamiliar challenges faced while testing mobile applications are usability, network connectivity, online/offline content, call interruptions, varying form factors, networks, and device providers. Manish Mathuria describes how mobile testing differs from testing traditional enterprise systems. He provides practical tips on how to quickly and smoothly transition from traditional software testing to mobile testing while striving to deliver the same level of quality. Manish explores the nascent mobile testing tools available today to test enterprise mobile applications that are quickly reaching the same complexity as their desktop counterparts. Leave with a toolkit full of tricks, best practices, tools recommendations, automation approaches, and resourcing models to maximize return on your mobile testing efforts.
Learn more about Manish Mathuria
Thursday, October 06, 2011 1:30 PM
T18
Security Testing
Practical Threat Modeling: Engaging Testers Early
Edward Bonver, Symantec
Threat modeling is one of the most important activities development and test teams should perform as part of a security development lifecycle. Although threat modeling can be difficult to get started for a team with little or no security experience, it can be critical to your products and your project. Edward Bonver explores the process behind modeling threats to systems and demonstrates the resulting models. He explains how the process has been successfully implemented and followed across Symantec, where development teams and environments vary dramatically across hundreds of products. Learn how the Symantec development and test teams build a comprehensive security profile of the software, providing a guide for secure development as well as for the testing focus and strategy. In addition, find out how they use threat models as a learning method to make sure, early on, that testers develop a thorough understanding of the system under test.
Learn more about Edward Bonver
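Symantec's process is not reproduced in the abstract. As a generic illustration, the sketch below enumerates STRIDE threat categories against a couple of hypothetical system elements, the kind of structured checklist a threat model gives testers to plan security tests from; the components, mitigations, and test ideas are invented for illustration.

```python
# Illustrative only: a minimal STRIDE-style enumeration, not Symantec's process.
# The components, mitigations, and test ideas are hypothetical.

STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service", "Elevation of privilege"]

components = ["login endpoint", "report upload service"]

# Seed a checklist: every component is considered against every threat category,
# and the team records applicability, mitigations, and test ideas per cell.
threat_model = {component: {threat: {"applies": None, "mitigation": "", "tests": []}
                            for threat in STRIDE}
                for component in components}

threat_model["login endpoint"]["Spoofing"].update(
    applies=True,
    mitigation="account lockout plus MFA",
    tests=["credential stuffing with a breached-password list"],
)

for component, threats in threat_model.items():
    open_items = [t for t, row in threats.items() if row["applies"] is None]
    print(f"{component}: {len(open_items)} categories still to assess")
```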
Thursday, October 06, 2011 3:00 PM
T19
Test Management
Managing Intrateam Dysfunction
Dawn Haynes, PerfTestPlus, Inc.
Inspired by her years of consulting with large and small test teams, Dawn Haynes shares her observations of the most common and troublesome dysfunctions within software project teams—absence of trust, fear of conflict, lack of commitment, avoidance of accountability, and inattention to results. Often team members and managers are so heads-down in the day-to-day tasks that they aren’t even aware of their problems. Without an understanding of the dysfunctions and their root causes, improvement is a non-starter. Dawn provides a roadmap to identify the dysfunctions in any team and maps those issues to a set of recommendations for remediation. Using real project scenarios as examples, Dawn highlights debilitating dynamics—and solutions in context—among software project managers, development teams, and internal/external QA/test teams. Gain new perspectives to turn your development or test team’s current dysfunctions into tremendous improvement opportunities.
Learn more about Dawn Haynes
Thursday, October 06, 2011 3:00 PM
T20
Test Techniques
xBTM: Taking Full Advantage of Exploratory Testing
Michael Albrecht, AddQ Consulting, and Christin Wiedemann, AddQ Consulting
Exploratory testing provides both flexibility and speed, which have become increasingly important as more and more projects adopt agile methods, where scripted tests struggle to keep up with the quick pace of short iterations. So, how do you retain traceability back to requirements in exploratory testing without losing your creativity? Christin Wiedemann and Michael Albrecht share their experiences using a combination of session-based test management and thread-based test management, which they call xBTM. With session-based test management, Michael and Christin structure and document exploratory testing in sessions. Sometimes, however, the work environment is too hectic or chaotic and requires more flexibility and freedom, which thread-based test management provides. Why not get the best of both techniques? xBTM unites the two exploratory approaches to gain the full advantage of each—from test planning to test reporting. By using xBTM, Christin and Michael are able to spend more time actually testing while creating the required documentation as a by-product of the process.
Learn more about Michael Albrecht and Christin Wiedemann
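Session-based test management typically records each timeboxed session on a charter sheet. The sketch below shows one hypothetical way such a record could be captured as data; it is offered as illustration only and is not the speakers' xBTM templates.

```python
# Illustrative only: one possible shape for a session-based-test-management
# record; not the xBTM documentation the speakers use. Field values are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestSession:
    charter: str                 # mission for the timeboxed session
    tester: str
    duration_minutes: int
    areas: List[str] = field(default_factory=list)
    bugs: List[str] = field(default_factory=list)
    notes: List[str] = field(default_factory=list)

session = TestSession(
    charter="Explore playlist sharing for permission problems",
    tester="tester-1",
    duration_minutes=90,
    areas=["sharing", "permissions"],
    bugs=["shared link still works after playlist is made private"],
)
print(f"{session.charter} ({session.duration_minutes} min): {len(session.bugs)} bug(s) logged")
```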
Thursday, October 06, 2011 3:00 PM
T21
Test Process Improvement
Test Process Improvement with TMMi®
Erik van Veenendaal, Improve Quality Services BV
The Test Maturity Model integration® (TMMi®) model, developed to complement the CMMI® framework, is rapidly becoming the test process improvement model of choice in Europe, Asia, and the US. Erik van Veenendaal, one of the developers of TMMi, describes the model’s five maturity levels—Initial, Managed, Defined, Management and Measurement, and Optimization—and the key testing practices required at each level. The model’s definition of maturity levels provides the basis for standardized TMMi assessments and certification, enabling companies to consistently deploy testing practices and collect industry metrics. The benefits of using the TMMi model include an improvement of testing methods, reduction in costs, and improved product quality. Based on his experiences with TMMi, Erik describes the critical success factors for your test process improvement efforts and provides the information you need to justify and establish a TMMi test process improvement project.
® TMMi is the registered trademark of the TMMi Foundation.
® CMMI is registered in the U.S. Patent and Trademark Office by Carnegie Mellon University.
Learn more about Erik van Veenendaal
Thursday, October 06, 2011 3:00 PM
T22
Personal Excellence
Better Testing through Cultural Change
Cliff Morehead, ThoughtWorks
Even though you employ the best testing processes, techniques, people, and tools, the overall effectiveness of your testing effort will always be bounded by your organization's commitment to quality. Cliff Morehead describes techniques he uses for assessing an organization's quality culture and shares approaches for influencing positive cultural change. Organizations have two major dimensions that determine if change takes hold: the driving force—top-down versus bottom-up—and the quality focus—externally oriented versus internally oriented. Cliff discusses specific tactics you can use to increase the effectiveness of your test improvement efforts for different organization types, with a focus on how front-line team members can influence their organization's culture. Take back the lessons Cliff has learned from his experiences in improving testing—what has worked for him and what hasn't.
Learn more about Cliff Morehead
Thursday, October 06, 2011 3:00 PM
T23
Mobile Testing
Selecting Mobile Application Automation Tools
Pradeep Govindasamy, Cognizant Technology Solutions
Today’s mobile application market holds massive promise for devices and applications that exceed user expectations. Despite the hurry-to-market pressures of mobile development, proper testing is vital to differentiating an application in a highly competitive market. Pradeep Govindasamy describes three areas to consider in choosing test automation tools: browser/platform, screen resolution/input mechanism, and external system interface. The first involves selecting different browser and platform combinations—iPhone iOS, Android SDK, etc.—and evaluating GUI mapping, object recording, and reusability. The second area defines the types of interfaces needed for classes, modules, or libraries to ensure that the automation tools provide support. The third area focuses on a testing framework that includes a class for user interface events—such as keystrokes and mouse clicks—and explores how the form factor of each device influences tool selection. If you are challenged to test mobile applications and are looking for automation support, this session is for you.
Learn more about Pradeep Govindasamy
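As a hypothetical illustration of the third area, a class for user-interface events, the sketch below defines a device-independent event type that a framework could translate into keystrokes or taps per platform; it is not the API of any particular mobile automation tool, and all names are invented.

```python
# Illustrative only: a hypothetical device-independent UI event class; not the
# API of any particular mobile automation tool.
from dataclasses import dataclass
from enum import Enum

class EventKind(Enum):
    TAP = "tap"
    KEYSTROKES = "keystrokes"
    SWIPE = "swipe"

@dataclass
class UIEvent:
    kind: EventKind
    target: str        # logical element name, mapped per platform
    payload: str = ""   # text for keystrokes, direction for swipes, etc.

    def describe(self) -> str:
        suffix = f" with '{self.payload}'" if self.payload else ""
        return f"{self.kind.value} on '{self.target}'{suffix}"

script = [
    UIEvent(EventKind.TAP, "login button"),
    UIEvent(EventKind.KEYSTROKES, "username field", "qa_user"),
    UIEvent(EventKind.SWIPE, "results list", "up"),
]
for event in script:
    print(event.describe())
```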
Thursday, October 06, 2011 3:00 PM
T24
Security Testing
Can You Hear Me Now? Yes … and Everyone Else Can, Too
Jon Hagar, Independent Consultant
Mobile devices—connected to the world through the Internet, web, networks, and messaging—are everywhere and expanding rapidly in numbers, functionality, and, unfortunately, security threats. Not too many years ago, little attention was paid to mobile device security. Now, we hear reports almost daily of phone emails and messages being hacked, apps carrying worms, phishing via smart devices, and smart device fraud. People often keep personal data, records, and financial information on their mobile devices or accessible through them. In addition, organizations are using these devices to conduct critical business. Jon Hagar shares and analyzes case histories and examples of mobile application security failures. Based on this analysis, Jon summarizes these attacks and describes how to expose security bugs within these devices. Learn how to perform security testing on mobile applications and avoid being the next news headline about a serious mobile-based security breach.
Learn more about Jon Hagar

