STARWEST Software Testing Analysis & Review
 
 
 
 
STARWEST 2011
Tuesday Tutorials
Tuesday, October 04, 2011 8:30 AM
TA
Key Test Design Techniques (Full-Day)
Lee Copeland, Software Quality Engineering
All testers know that we can identify many more test cases than we will ever have time to design and execute. The major problem in testing is choosing a small, “smart” subset from the almost infinite number of possibilities available. Join Lee Copeland to discover how to design test cases using formal black-box techniques, including equivalence class and boundary value testing, decision tables, state-transition diagrams, and all-pairs testing. Explore white-box techniques with their associated coverage metrics. Evaluate more informal approaches, such as random and hunch-based testing, and learn the importance of using exploratory testing to enhance your testing ability. Choose the right test case design approaches for your projects. Use the test results to evaluate the quality of both your products and your test designs.
Learn more about Lee Copeland
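Boundary value testing, one of the black-box techniques this tutorial covers, can be illustrated with a small sketch. The `accept_age` function and its 18–65 valid range below are invented for illustration, not drawn from the tutorial materials:

```python
# Hypothetical system under test: accepts ages in the valid range 18..65.
def accept_age(age):
    return 18 <= age <= 65

# Boundary value analysis: test each boundary and its nearest neighbors,
# rather than arbitrary values from inside the equivalence classes.
boundary_cases = {
    17: False,  # just below the lower boundary
    18: True,   # lower boundary
    19: True,   # just above the lower boundary
    64: True,   # just below the upper boundary
    65: True,   # upper boundary
    66: False,  # just above the upper boundary
}

for age, expected in boundary_cases.items():
    assert accept_age(age) == expected, f"boundary case failed at {age}"
```

The six cases above exercise both equivalence classes and every boundary; values deep inside a class (say, 40) add little once the boundaries pass.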
Tuesday, October 04, 2011 8:30 AM
TB
SOLD OUT
Test Automation for Mobile Apps (Full-Day)
Julian Harty, eBay, Inc.
Even if you are not already involved in testing software aimed at mobile devices such as smartphones, that day may be just around the corner. Testing mobile software is particularly challenging, especially if you are unfamiliar with the impact of network connectivity, phone providers, software platforms, etc., or if you have to deal with multiple platforms and devices. Because test automation in this field is still immature, there are few references and examples to guide you toward quick and effective test techniques. In this interactive session, Julian Harty helps you understand how and when automation can aid your mobile wireless application testing and explains how to test mobile apps. Based on the several years Julian has spent testing mobile applications at Google and his collaboration with industry experts, the simple and practical tips you’ll learn for the typical and unusual problems mobile applications testers encounter may just make the difference between your success and failure.
Learn more about Julian Harty
Tuesday, October 04, 2011 8:30 AM
TC
Becoming an Influential Test Team Leader (Full-Day)
Randy Rice, Rice Consulting Services, Inc.
Have you been thrust into the role of test team leader? Are you in this role now and want to hone your leadership skills? Test team leadership has many unique challenges, and many test team leaders—especially new ones—find themselves ill-equipped to deal with the problems they face. The test team leader must motivate and support the team while keeping testing on track, within time and budget constraints. Randy Rice focuses on how you can grow as a leader, influence your team and those around you, and positively impact those outside your team. Learn how to become a person of influence, deal with interpersonal issues, and help your team build their skills and value to the team and the organization. Discover how to communicate your team’s value to management, how to stand firm when asked to compromise principles, and how to learn from your successes and failures. Develop your own action plan to become an influential test team leader.
Learn more about Randy Rice
Tuesday, October 04, 2011 8:30 AM
TD
Critical Thinking for Software Testers (Full-Day)
James Bach, Satisfice, Inc.
Critical thinking is the kind of thinking that specifically looks for problems and mistakes. Regular people don't do a lot of it. However, if you want to be a great tester, you need to be a great critical thinker, too. Critically thinking testers save projects from dangerous assumptions and ultimately from disasters. The good news is that critical thinking is not just innate intelligence or a talent—it's a learnable and improvable skill you can master. James Bach shares the specific techniques and heuristics of critical thinking and presents realistic testing puzzles that help you practice and increase your thinking skills. Critical thinking begins with just three questions—Huh? Really? and So?—that kick-start your brain to analyze specifications, risks, causes, effects, project plans, and anything else that puzzles you. Join this interactive, hands-on session and practice your critical thinking skills. Study and analyze product behaviors and experience new ways to identify, isolate, and characterize bugs.
Learn more about James Bach
Tuesday, October 04, 2011 8:30 AM
TE
Software Performance Testing: Preparing for a Successful Test (Full-Day)
Dale Perry, Software Quality Engineering
What does it take to properly plan, analyze, design, and implement a software performance test? How do you report the results? What factors need to be considered? What is your performance test tool telling you? Do you really need a performance test? Is it worth the cost? These questions plague all performance testers. Many performance tests do not appear to be worth the time it takes to run them, and the results never seem to resemble—let alone predict—production system behavior. Performance tests are some of the most difficult tests to create and run, and most organizations don’t fully appreciate the time and effort required to properly perform them. Dale Perry discusses the key issues and realities of performance testing—what can and cannot be done with a performance test, what is required to do a performance test, and how to present what the test “really” tells you.
Learn more about Dale Perry
Tuesday, October 04, 2011 8:30 AM
TF
Usability Testing in a Nutshell (New, Morning)
Julie Gardiner, Grove Consultants
Because our systems are becoming more complex and the market is becoming saturated with competing brands, usability can be a differentiating factor in purchasing decisions. A classic system requirement is “The system must be user-friendly,” but what does that mean, and more importantly, how do we test for this requirement? Join Julie Gardiner as she describes usability techniques you can employ to demonstrate a user interface’s efficiency and effectiveness. Find out how to predict and test for usability, and, most importantly, for user satisfaction before the software’s release. Take back new knowledge of proven usability testing techniques: heuristic evaluation, cognitive walkthroughs, focus groups, user personas, contextual task analysis, usability labs, and satisfaction surveys. Learn how to define usability goals and how to get management to take usability defects seriously. If you want to improve your skills in usability testing, this session is for you.
Delegates are encouraged to bring a laptop to this session.
Learn more about Julie Gardiner
Tuesday, October 04, 2011 8:30 AM
TG
Acceptance Test-driven Development: Mastering Agile Testing (New, Morning)
Nate Oster, CodeSquads, LLC
On agile teams, testers can struggle to “keep up” with the pace of development if they continue employing a waterfall-based verification process—finding bugs after development. Nate Oster challenges you to question waterfall assumptions and replace this legacy verification testing with Acceptance Test-driven Development (ATDD). With ATDD, you “test first” by writing executable specifications for a new feature before development begins. Learn to switch from “tests as verification” to “tests as specification” and to guide development with acceptance tests written in the language of your business. Get started by joining a team for a simulation and experience how ATDD helps build in quality instead of trying to test out defects. Then progress to increasingly more realistic scenarios and practice the art of specifying intent with plain-language and table-based formats. These paper-based simulations give you meaningful practice with how ATDD changes the way you think about tests and collaborate as a team. Leave empowered with a kit of exercises to advocate ATDD with your own teams!
Learn more about Nate Oster
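The table-based specification format mentioned above can be sketched in plain Python. Dedicated ATDD tools (FitNesse, Cucumber, and the like) do this with richer syntax; the discount rule and `price_after_discount` function here are invented purely to show the "specification first, implementation second" flow:

```python
# A table-based acceptance specification, written in business terms
# BEFORE the feature exists. Each row: order total in cents, member
# flag, expected price. (The 10% member discount rule is hypothetical.)
spec_table = [
    # total_cents, is_member, expected_cents
    (10000, False, 10000),  # no discount for non-members
    (10000, True,   9000),  # members get 10% off
    (    0, True,      0),  # an empty order stays free
]

def price_after_discount(total_cents, is_member):
    """Implementation written afterward to make the specification pass."""
    return total_cents - total_cents // 10 if is_member else total_cents

# Running the table verifies the implementation against the specification.
for total_cents, is_member, expected in spec_table:
    assert price_after_discount(total_cents, is_member) == expected
```

The point is the inversion: the table is the test *and* the requirement, so development is guided by it rather than verified after the fact.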
Tuesday, October 04, 2011 8:30 AM
TH
Essential Test Management and Planning (Morning)
Rick Craig, Software Quality Engineering
The key to successful testing is effective and timely planning. Rick Craig introduces proven test planning methods and techniques, including the Master Test Plan and level-specific test plans for acceptance, system, integration, and unit testing. Rick explains how to customize an IEEE-829-style test plan and test summary report to fit your organization’s needs. Learn how to manage test activities, estimate test efforts, and achieve buy-in. Discover a practical risk analysis technique to prioritize your testing and become more effective with limited resources. Rick offers test measurement and reporting recommendations for monitoring the testing process. Discover new methods and develop renewed energy for taking your organization’s test management to the next level.
Learn more about Rick Craig
Tuesday, October 04, 2011 8:30 AM
TI
SOLD OUT
Planning Your Agile Testing: A Practical Guide (Morning)
Janet Gregory, DragonFire, Inc.
Traditional test plans are incompatible with agile software development because we don't know all the details about all the requirements up front. However, in an agile software release, you still must decide what types of testing activities will be required—and when you need to schedule them. Janet Gregory explains how to use the Agile Testing Quadrants, a model identifying the different purposes of testing, to help your team understand your testing needs as you plan the next release. Janet introduces you to alternative, lightweight test planning tools that allow you to plan and communicate your big picture testing needs and risks. Learn how to decide who does what testing—and when. Determine what types of testing to consider when planning an agile release, the infrastructure and environments needed for testing, what goes into an agile “test plan,” how to plan for acquiring test data, and lightweight approaches for documenting your tests and recording test results.
Learn more about Janet Gregory
Tuesday, October 04, 2011 8:30 AM
TJ
Using Visual Models for Test Case Design (Morning)
Rob Sabourin, AmiBug.com
Designing test cases is a fundamental skill that all testers should master. Rob Sabourin shares a graphical technique he has employed to design powerful test cases that will surface important bugs quickly. These skills can be used in exploratory, agile, or engineered contexts—anytime you are having problems designing a test. Rob illustrates how you can use mind maps to visualize test designs and better understand variables being tested, one at a time and in complex combinations with other variables. He presents the Application-Input-Memory (AIM) heuristic through a series of interactive exercises. We’ll use FreeMind, a widely available free, open source tool, to help implement great test cases and focus our testing on what matters to quickly isolate critical bugs. If you are new to testing, these techniques will remove some of the mystery of good test case design. If you’re a veteran tester, these techniques will sharpen your skills and give you some new test design approaches.
Learn more about Rob Sabourin
Tuesday, October 04, 2011 1:00 PM
TK
Reliable Test Effort Estimation (Afternoon)
Ruud Teunissen, Polteq Test Services BV
How do you estimate your test effort? And how reliable is that estimate? Ruud Teunissen presents a practical and useful test estimation technique directly related to the maturity of your test and development process. A reliable effort estimation approach requires five basic elements:
1. Strategy – Determine what to test (performance, functionality, etc.) and how thoroughly it must be tested.
2. Size – Yes, it does matter—not only the size of the system but also the scope of your tests.
3. Expected Quality – What factors have been established to define quality?
4. Infrastructure and Tools – Define how fast you can test. Without the proper organizational support and the necessary tools, you’ll need time you may not have.
5. Productivity – How experienced and efficient is your team?
Join Ruud to improve your test estimations and achieve more realistic goal setting and test strategies.
Learn more about Ruud Teunissen
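As a rough illustration only, the five elements above can be combined into a simple multiplicative estimate. The weights and numbers below are invented for this sketch, not taken from Ruud's technique; a real estimate would calibrate them against historical project data:

```python
# Illustrative test-effort estimate combining the five elements.
# Every value here is hypothetical.
size_test_points = 200              # Size: scope of the system and the tests
strategy_factor = 1.3               # Strategy: thoroughness required
quality_factor = 1.1                # Expected quality: stricter targets cost more
infrastructure_factor = 1.2         # Infrastructure & tools: poor support adds time
productivity_hours_per_point = 0.5  # Productivity: team speed per test point

effort_hours = (size_test_points * strategy_factor * quality_factor
                * infrastructure_factor * productivity_hours_per_point)
print(round(effort_hours))  # 172 hours for this invented example
```

The shape of the formula matters more than the numbers: each element scales the estimate independently, so ignoring any one of them (say, infrastructure) silently shrinks the figure.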
Tuesday, October 04, 2011 1:00 PM
TL
Testing Web-based Applications: An Open Source Solution (New, Afternoon)
Mukesh Mulchandani, ZENTest Labs
Need a jump start on functional test automation using open source test tools? Mukesh Mulchandani leads a focused, hands-on workshop on how to automate testing web-based applications using, as the demonstration tool, Selenium, a popular open source testing tool. He draws on his experience with large-scale test automation to show you how to record scripts, drive test scenarios with data, insert verification points, and create reusable scripts. Mukesh explains various design approaches—and the benefits and limitations of each—for functional test automation including data-driven, functional decomposition, and keyword-driven. In addition, he discusses alternatives to Selenium and explains how to choose the best tool set for your situation. Learn how to quickly discover what parts of your application can and should be automated with these tools. Take back an approach for creating automated smoke and regression test suites that execute automatically with every new release.
Delegates are encouraged to bring a laptop to this session.
Learn more about Mukesh Mulchandani
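One of the design approaches named above, keyword-driven testing, can be sketched without a browser. In a real Selenium suite the action functions would wrap WebDriver calls; here they log to a list so the structure is visible, and the URL and field names are invented:

```python
# Minimal keyword-driven test runner. Actions are plain functions;
# test cases are data, so non-programmers can author them as tables.
log = []

def open_page(url):
    log.append(f"open {url}")        # would be driver.get(url) in Selenium

def type_text(field, text):
    log.append(f"type '{text}' into {field}")

def click(element):
    log.append(f"click {element}")

keywords = {"open": open_page, "type": type_text, "click": click}

# A test case expressed as rows of (keyword, arguments).
login_test = [
    ("open", ["https://example.com/login"]),
    ("type", ["username", "tester"]),
    ("type", ["password", "secret"]),
    ("click", ["submit"]),
]

for keyword, args in login_test:
    keywords[keyword](*args)

print(len(log))  # 4 steps executed
```

Because the test case is data, new scenarios need no new code, and swapping the logging actions for real WebDriver wrappers changes nothing in the tables.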
Tuesday, October 04, 2011 1:00 PM
TM
Measurement and Metrics for Test Managers (Afternoon)
Rick Craig, Software Quality Engineering
To be most effective, test managers must develop and use metrics to help direct the testing effort and make informed recommendations about the software’s release readiness and associated risks. Because one important testing activity is to “measure” the quality of the software, test managers must measure the results of both the development and testing processes. Collecting, analyzing, and using metrics is complicated because many developers and testers are concerned that the metrics will be used against them. Join Rick Craig as he addresses common metrics—measures of product quality, defect removal efficiency, defect density, defect arrival rate, and testing status. Learn the guidelines for developing a test measurement program, rules of thumb for collecting data, and ways to avoid “metrics dysfunction.” Rick identifies several metrics paradigms, including Goal-Question-Metric, and discusses the pros and cons of each. Delegates are urged to bring their metrics problems and issues for use as discussion points.
Learn more about Rick Craig
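One metric named above, defect removal efficiency, is commonly computed as the fraction of all known defects that were caught before release. The defect counts below are invented for illustration:

```python
# Defect removal efficiency (DRE): what share of defects the test
# process caught before release. Numbers are hypothetical.
defects_found_in_testing = 95
defects_found_after_release = 5

dre = defects_found_in_testing / (defects_found_in_testing
                                  + defects_found_after_release)
print(f"DRE = {dre:.0%}")  # DRE = 95%
```

Note the denominator grows as field defects trickle in, so DRE for a release is only trustworthy after the software has been in production for a while.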
Tuesday, October 04, 2011 1:00 PM
TN
Exploratory Testing Is Now in Session (Afternoon)
Jon Bach, eBay, Inc.
The nature of exploration, coupled with the ability of testers to rapidly apply their skills and experience, makes exploratory testing a widely used test approach—especially when time is short. Unfortunately, exploratory testing often is dismissed by project managers who assume that it is not reproducible, measurable, or accountable. If you have these concerns, you may find a solution in a technique called Session-Based Test Management (SBTM), developed by Jon Bach and his brother James to specifically address these issues. In SBTM, testers are assigned areas of a product to explore, and testing is time boxed in “sessions” that have mission statements called “charters” to create a meaningful and countable unit of work. Jon discusses—and you practice—the skills of exploration using the SBTM approach. He demonstrates a freely available, open source tool to help manage your exploration and prepares you to implement SBTM in your test organization.
Learn more about Jon Bach
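The "countable unit of work" idea can be sketched as a simple session record: a time-boxed charter plus the counts that make exploratory work reportable. The field names below are illustrative, not the official SBTM session-sheet format:

```python
from dataclasses import dataclass, field

# Sketch of an SBTM-style session record. A real session sheet also
# breaks down time spent on test design, bug investigation, and setup.
@dataclass
class TestSession:
    charter: str                 # mission statement for the session
    tester: str
    duration_minutes: int        # the time box
    bugs_found: int = 0
    notes: list = field(default_factory=list)

session = TestSession(
    charter="Explore the checkout flow for input-handling bugs",
    tester="jon",
    duration_minutes=90,
)
session.bugs_found += 1
session.notes.append("Quantity field accepts negative values")
```

Because each session is a uniform, bounded record, managers can count sessions, compare charters against coverage goals, and report exploratory progress the same way they would scripted test execution.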
Tuesday, October 04, 2011 1:00 PM
TO
SOLD OUT
Agile Requirements Exploration with Tester Collaboration (New, Afternoon)
Janet Gregory, DragonFire, Inc.
In agile projects where roles are often blurred, it is not always clear how and where testers can best help—especially early in the project. Janet Gregory has discovered that having testers participate in requirements analysis improves the requirements and makes testing more productive. In this experiential tutorial, Janet leads participants to employ agile analysis models and specify acceptance criteria as a way to verify and validate requirements. Experience how incorporating your testing mindset and using test design techniques during requirements exploration accelerate test planning, improve specifications, and enhance product quality while uncovering erroneous, missing, conflicting, and unnecessary requirements.
Learn more about Janet Gregory
Tuesday, October 04, 2011 1:00 PM
TP
How Google Tests Software (New, Afternoon)
James Whittaker, Google
Google is a company that releases complex software rapidly to millions of users worldwide. Have you ever wondered how Google does testing? James Whittaker unveils the secret sauce of Google test practices. From developer-oriented unit testing to the mystical role of the Software Engineer in Test to the role of Test Engineer, which even many Googlers don't fully comprehend, no testing topic is off limits. James shares how Google achieves high quality in a developer-centric company where the developer-to-tester ratio is incredibly lopsided. He then explores how Google executes with a test organization that is independent of product teams. At the end of the session, James previews the tools Google testers are releasing to the open source community and what all this can mean for testing in your organization.
Learn more about James Whittaker

