Thursday Concurrent Sessions
Thursday, October 04, 2012 9:45 AM
T1
Test Management
The Vital P’s of Testing: Purpose, People, and Process
Mike Hendry, Unum
When building a testing organization, where do you start? Technical skills? Domain knowledge? Testing experience? The cheapest resource? A set of testing tools? A formal test process? Mike Hendry suggests that before looking for staff or tools, you must ask—and answer—fundamental questions about the planned organization. Drawing on the collective wisdom of many management, leadership, and testing gurus, and on his experience building three testing centers of excellence in the past ten years, Mike shares his successes and failures, tips and traps in building a successful team. This includes determining the purpose of the organization, the types of people needed, and the test processes to be used. Although every organization is different, and what works in one organization may not work in another, Mike is confident that at least one of his “learnings” will resonate with you.
Thursday, October 04, 2012 9:45 AM
T2
Test Techniques
Combinatorial Test Design: Beyond the Gee-whiz Numbers
Karen Rosengren, IBM
Combinatorial Test Design (CTD) is a great technique to ensure that your tests cover your test space thoroughly at a depth that matches the level of risk. Although it is entertaining to consider the huge number of tests required to test all combinations and compare that to the small number that CTD selects, there is so much more to learn. Karen Rosengren takes you on a journey through a program she led inside IBM. Its objective was to minimize the number of tests being run; however, in the process the team learned much more about their testing efforts. They found ways to measure the effectiveness of their testing and to clearly show the complexity of feature creep, and they saw how their understanding of the test space drove better low-level designs in the product code and how the generated test designs created a better foundation for test automation. Join Karen to learn about these additional—and valuable—benefits of CTD.
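For a concrete feel for the numbers Karen alludes to, here is a small illustration (not from the session; the configuration space and the greedy algorithm are invented for this sketch, and real CTD tools use stronger algorithms):

```python
# Exhaustive testing multiplies out every parameter value; pairwise (2-way)
# coverage needs far fewer tests. All parameters here are hypothetical.
from itertools import combinations, product

parameters = {
    "browser": ["Chrome", "Firefox", "IE", "Safari"],
    "os": ["Windows", "macOS", "Linux"],
    "locale": ["en", "de", "ja"],
    "account": ["free", "paid"],
}
names = list(parameters)

exhaustive = 1
for values in parameters.values():
    exhaustive *= len(values)
print(f"Exhaustive combinations: {exhaustive}")  # 4 * 3 * 3 * 2 = 72

def pairs_of(test):
    """All parameter-value pairs exercised by one candidate test."""
    return set(combinations(zip(names, test), 2))

# Greedy pairwise cover: repeatedly pick the candidate test that covers
# the most not-yet-covered parameter-value pairs.
uncovered = {
    ((n1, v1), (n2, v2))
    for n1, n2 in combinations(names, 2)
    for v1 in parameters[n1]
    for v2 in parameters[n2]
}
tests = []
while uncovered:
    best = max(product(*parameters.values()),
               key=lambda t: len(uncovered & pairs_of(t)))
    tests.append(best)
    uncovered -= pairs_of(best)

print(f"Tests for full pairwise coverage: {len(tests)}")  # roughly a dozen
```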
Thursday, October 04, 2012 9:45 AM
T3
Test Automation
Engineering an Enterprise Selenium Framework
Brian Kitchener, Pearson eCollege
Manipulating a web page is only the beginning of test automation. A large test automation suite contains an enormous amount of information about a system. Creating and maintaining such a system can be a nightmare if done incorrectly. If you don't have a feature-rich test automation framework, you're making your life more difficult than it needs to be. Drawing on his years of experience, Brian Kitchener discusses the benefits and pitfalls of engineering an enterprise test automation framework using Selenium. He discusses overall design schemes and common problems and solutions, and shares specific code examples you can take back with you. Leave knowing how to quickly establish a framework, create hooks and provide features to your entire organization, determine the basic features any automation framework should have, and identify some potential problems to avoid. A great framework helps everyone in your organization create cleaner, stronger, and more reliable tests.
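To make the framework idea concrete, here is a minimal page-object sketch in Python Selenium; this is a generic illustration, not Brian's framework, and every URL, locator, and class name in it is hypothetical:

```python
# Page objects hide locators and page mechanics behind an intention-revealing
# API, which is one common building block of an enterprise Selenium framework.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


class LoginPage:
    URL = "https://example.com/login"  # placeholder URL

    def __init__(self, driver):
        self.driver = driver
        self.wait = WebDriverWait(driver, timeout=10)

    def open(self):
        self.driver.get(self.URL)
        return self

    def log_in(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()
        # Framework hook: wait for the post-login landing page to render.
        self.wait.until(EC.presence_of_element_located((By.ID, "dashboard")))
        return DashboardPage(self.driver)


class DashboardPage:
    def __init__(self, driver):
        self.driver = driver

    def greeting(self):
        return self.driver.find_element(By.ID, "greeting").text


if __name__ == "__main__":
    driver = webdriver.Chrome()
    try:
        dashboard = LoginPage(driver).open().log_in("tester", "secret")
        assert "tester" in dashboard.greeting()
    finally:
        driver.quit()
```

The payoff of the pattern is that tests read as intentions ("log in, check the greeting") while locators and waits live in one maintainable place.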
Thursday, October 04, 2012 9:45 AM
T4
Mobile Testing
Mobile Test Automation: Lessons Learned in the Trenches
Manish Mathuria, InfoStretch; Sha Mohammed, Sabre Airline Solutions
With mobile applications becoming more and more mission critical for the enterprise, testing teams are increasingly looking to automate the test cases for their mobile applications. However, with mobile testing still in its nascent stages, it is no surprise that organizations find the implementation of mobile test automation to be both time-consuming and expensive, often negating any benefits of the efficiencies it should bring to the testing process. Manish Mathuria shares key lessons learned from implementing tools, technologies, and frameworks that help mobile testing teams take full advantage of test automation—without increases in cost or time. Manish presents real-life examples and case studies using open source tools like Selenium and commercial tools like QTP, explaining how various organizations have benefited from mobile test automation. Learn the overall landscape of mobile test automation, classifications of automation tools, and how to select the right tool for your needs. For each class of tools, Manish describes typical gotchas and ways to maximize your ROI.
Thursday, October 04, 2012 9:45 AM
T5
Performance Testing
Performance Testing in an Agile World
Sai Subramanian, Cognizant Technology Solutions
Like agile development, performance testing is an iterative process in which new problems and risks are identified, and appropriate steps are taken to fix issues or mitigate risks. Experience tells us that in an agile environment, performance testing must be built into the development process, owned by the entire team, and coordinated by the performance analyst. Agile programs spanning multiple scrums running in parallel and sharing a common infrastructure present a unique set of problems that can challenge any performance analyst. Sai Subramanian shares his experience in the evolution of performance testing within agile teams. Sai discusses the strengths and challenges posed by the different levels of collaboration between the performance analysts and other project stakeholders. Join Sai to learn about the Scrum of Scrums approach for coordinating performance testing, the path that led to the adoption of this approach, and the benefits and the lessons learned from alternative approaches.
Thursday, October 04, 2012 9:45 AM
T6
Special Topics
Cross-platform Testing at Microsoft
Jean Hartmann, Microsoft
Microsoft Office products have been available on Windows-based and Apple Mac personal computers for many years. Now, with the rapid proliferation of mobile device platforms, current testing strategies are more difficult to justify because it is not feasible to implement test suites from scratch for each new mobile platform that arrives in the market. Join Jean Hartmann in an exploration of the platform-agnostic testing space and learn about current cross-platform strategies, tools, and processes being developed within Microsoft Office. Jean presents examples to help you gain a better understanding of the benefits, challenges, and trade-offs that you must make when considering such approaches. To start the process of developing the new strategies, tools, and processes, you’ll need to create portable tests in which testers define their core test scenarios once and then leverage them for different platforms and devices. By adopting new cross-platform testing approaches, you and your team can address the challenges of mobile platform validation.
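The portable-test idea generalizes well beyond Office. As a generic sketch (invented names, not Microsoft's tooling), one core scenario can be written once against an abstract driver that each platform implements:

```python
# One scenario, many platforms: the test knows nothing about the UI stack.
from abc import ABC, abstractmethod


class AppDriver(ABC):
    """Platform-agnostic surface that the scenario is written against."""

    @abstractmethod
    def open_document(self, name: str) -> None: ...

    @abstractmethod
    def type_text(self, text: str) -> None: ...

    @abstractmethod
    def read_body(self) -> str: ...


def scenario_edit_roundtrip(app: AppDriver) -> None:
    """Core scenario, defined once, runnable on any platform."""
    app.open_document("notes.txt")
    app.type_text("hello")
    assert "hello" in app.read_body()


class DesktopDriver(AppDriver):
    # A real framework would call UI-automation APIs here; this driver just
    # simulates state so the sketch runs as-is.
    def __init__(self):
        self.body = ""
    def open_document(self, name): self.body = ""
    def type_text(self, text): self.body += text
    def read_body(self): return self.body


class MobileDriver(DesktopDriver):
    pass  # would wrap a device-automation API instead


for driver in (DesktopDriver(), MobileDriver()):
    scenario_edit_roundtrip(driver)  # the same test on two "platforms"
```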
Thursday, October 04, 2012 11:15 AM
T7
Test Management
Maslow’s Hierarchy of Quality: Realigning Your Thinking
Anu Kak, PayPal
Maslow’s Hierarchy of Needs is a popular model that describes the stages of human psychological development. Anu Kak shares how Maslow’s work can be applied to align the quality thinking of a software development organization through a “Hierarchy of Quality.” This approach builds a quality-centric culture, enhances the quality of products before they are released, and helps the organization learn quickly from mistakes. Anu describes a path that begins at the basic needs for high quality—test plans, defects, regression tests, etc.—and progresses to define what is needed to achieve high levels of sustainable customer satisfaction. Anu describes how “self-actualization” in customer quality can be achieved by moving up the hierarchy of needs while sustaining the lower tiers of the model. Learn the techniques to evangelize the adoption of the Hierarchy of Quality across all stakeholders, and how to analyze and navigate the quality thinking across the tiers of this quality model.
Thursday, October 04, 2012 11:15 AM
T8
Test Techniques
Code Coverage in the Internet Age
Michael Portwood, The Nielsen Company
With the proliferation of mobile devices, cloud computing, and client-side scripting—coupled with web services—how do you guarantee adequate code coverage for your applications? Basic tests inadequately cover many of these technologies, leading to defects and disappointing user experiences. Michael Portwood describes the importance of unit test coverage and then presents techniques, tips, and tricks to simplify the process of achieving more complete coverage for Internet-enabled solutions. Michael shares tips for automation and techniques for testing both client- and server-side scripting. Gain insight into identifying code that requires complex testing techniques and explore ideas for covering it. Michael describes complex testing situations—like those found in multi-threaded and distributed code—where test coverage alone may provide misleading results. Michael illustrates these approaches with specific quick-start and real-world rollout strategies to help you identify, isolate, and then remove latent uncovered code—before your customer tells you about it.
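As background for the unit-coverage discussion, here is a minimal sketch of gathering statement and branch coverage with Python's coverage.py; the module under test is hypothetical:

```python
# Measure coverage programmatically with coverage.py (pip install coverage).
import coverage

cov = coverage.Coverage(branch=True)  # branch=True also tracks branch coverage
cov.start()

# Exercise the code under test; normally a test runner does this step.
import mymodule  # hypothetical module under test
mymodule.do_work()

cov.stop()
cov.save()
cov.report(show_missing=True)  # per-file coverage with the lines never hit
```

The same measurement is more commonly wired into the test runner instead, e.g. `coverage run -m pytest` followed by `coverage report`.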
Thursday, October 04, 2012 11:15 AM
T9
Test Automation
How to Break Web Software
Dawn Haynes, PerfTestPlus, Inc.
If you're new to testing Web applications or facing new challenges, you may feel overwhelmed by the terminology and multiple technologies of today's Web environments. Web testing today requires more than just exercising the functionality of applications. Each system is composed of a customized mix of various layers of technology, each implemented in a different programming language and requiring unique testing strategies. This “stew” often leads to puzzling behavior across browsers; performance problems due to page design and content, server locations, and architecture; and the inconsistent operation of the bleepin' Back button! Dawn Haynes shares a Web testing strategy for discovering and targeting new areas to test and an extensive set of test design ideas and software attacks. Dawn demonstrates tools to help you take your Web testing to the next level as you test HTML syntax, page layout, download speeds, 508 compliance, readability, and more.
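A couple of the checks Dawn lists can be approximated in a few lines. The sketch below (placeholder URL; a real Section 508 audit needs a dedicated checker) times a page download and scans crudely for images missing alt text:

```python
# Two quick web checks: download time and a naive accessibility probe.
import re
import time
import requests

url = "https://example.com/"  # placeholder

start = time.perf_counter()
response = requests.get(url, timeout=30)
elapsed = time.perf_counter() - start
print(f"{url}: HTTP {response.status_code}, "
      f"{len(response.content)} bytes in {elapsed:.2f}s")

# <img> tags with no alt attribute are one common Section 508 concern.
# A real audit would use an HTML parser and a WCAG/508 tool instead of regex.
imgs = re.findall(r"<img\b[^>]*>", response.text, flags=re.IGNORECASE)
missing_alt = [tag for tag in imgs if "alt=" not in tag.lower()]
print(f"{len(missing_alt)} of {len(imgs)} <img> tags lack alt text")
```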
Thursday, October 04, 2012 11:15 AM
T10
Mobile Testing
Innovative Tools for Your Mobile Testing Toolkit
Eing Ong, Intuit, Inc.
Automating mobile testing faces challenges from a huge variety of devices, resolutions, user interactions, and operating systems. While there is no single solution in the market that can solve all your testing needs, Eing Ong has found a few innovative open source tools that stand out. Learn how Sikuli, ImageMagick, and MOET (Mobile End-to-end Testing) can address the limitations of instrumentation techniques and how you can leverage them individually or together to test your native mobile applications. Eing begins by demonstrating Sikuli's innovative visual technology for iPhone user behavior testing and MonkeyRunner for Android testing. Then she shows how you can use ImageMagick to crop and convert images as well as fine-tune image comparisons for image verification. Learn how to apply MOET's “write once, run anywhere” design patterns to use only one IDE, one language, and one test execution tool to run the same set of tests on Android, iPhone, and other devices. Take back practical code samples to jump-start your mobile testing.
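As a flavor of the ImageMagick step, this sketch drives its convert and compare commands from Python; the file names, crop geometry, and pass threshold are invented:

```python
# Crop a region from a device screenshot, then fuzzily compare it to a
# baseline image. Requires ImageMagick's convert/compare on PATH.
import subprocess

# Crop geometry is WIDTHxHEIGHT+X_OFFSET+Y_OFFSET.
subprocess.run(
    ["convert", "screenshot.png", "-crop", "320x80+0+40", "actual.png"],
    check=True,
)

# -fuzz tolerates small per-pixel color differences; -metric AE reports the
# count of differing pixels on stderr. Exit code 1 just means "images differ".
result = subprocess.run(
    ["compare", "-metric", "AE", "-fuzz", "5%",
     "baseline.png", "actual.png", "diff.png"],
    capture_output=True, text=True,
)
differing_pixels = float(result.stderr.split()[0])
print("PASS" if differing_pixels < 100 else f"FAIL ({differing_pixels:.0f} px)")
```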
Thursday, October 04, 2012 11:15 AM
T11
Performance Testing
Database Load Testing and Performance Analysis: New Approaches for Fast Results
Ron Warshawsky, Enteros, Inc.
Ever wonder exactly how a database becomes the bottleneck for an entire application? Ever think about replaying real production load to see if your application can stand the pressure? Ron Warshawsky presents novel methods for performing load testing at the database level to quickly identify database performance problems. Ron shares the best ways to load test the database side of applications and to analyze the performance of databases. To identify the root causes of database performance bottlenecks, Ron demonstrates how you can correlate load test data with database performance analysis results. Although Oracle is used as the case study database, the same methods are applicable to other databases, including DB2, SQL Server, MySQL, and PostgreSQL. Benefit from Ron’s years of architecting, troubleshooting, and supporting the world’s largest transactional database systems, and get on the fast path to improving database performance in your shop.
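A toy version of load testing at the database level might look like the following sketch, with sqlite3 standing in for the production database and an invented two-statement workload; swap in your real driver and a captured statement mix:

```python
# Replay a statement mix from several threads and record per-query latency.
import os
import sqlite3
import statistics
import threading
import time

DB = "load_test.db"
if os.path.exists(DB):
    os.remove(DB)

captured_workload = [  # invented stand-in for a captured production load
    "SELECT COUNT(*) FROM orders",
    "SELECT * FROM orders WHERE id = 42",
]
latencies = []
lock = threading.Lock()

# Seed the stand-in database.
conn = sqlite3.connect(DB)
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i * 1.5) for i in range(1, 1001)])
conn.commit()
conn.close()

def worker(iterations=200):
    conn = sqlite3.connect(DB)  # one connection per simulated "user"
    for _ in range(iterations):
        for sql in captured_workload:
            start = time.perf_counter()
            conn.execute(sql).fetchall()
            with lock:
                latencies.append(time.perf_counter() - start)
    conn.close()

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

lat_ms = sorted(x * 1000 for x in latencies)
print(f"queries: {len(lat_ms)}  "
      f"median: {statistics.median(lat_ms):.2f} ms  "
      f"p95: {lat_ms[int(len(lat_ms) * 0.95)]:.2f} ms")
```

Correlating these client-side latencies with server-side wait and resource statistics, gathered over the same interval, is what points to the root cause.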
Thursday, October 04, 2012 11:15 AM
T12
Special Topics
The Missing Integration at Best Buy: Agile, Test Management, and Test Execution
Frank Cohen, PushToTest
What can you do when test tools from proprietary vendors don’t seem to support your organization’s processes and open source tools are too narrowly focused? Best Buy, the world's largest electronics retailer, faced this very situation. With hundreds of agile development projects running concurrently, they needed an integrated test management and test execution tool set that would scale up easily. Frank Cohen describes how he helped Best Buy integrate open source functional and load test tools, vendor-supplied test management tools, and repository tools with their agile software development methodology. Now, with this integrated solution, business managers, testers, developers, and IT Ops managers click the “Start” button to perform a thorough set of automated tests, verify the results, and produce an informative dashboard of results. Frank shares the best—and worst—practices of building this integration, how to evaluate the results, and how much effort it took to make it all work.
Thursday, October 04, 2012 1:30 PM
T13
Test Management
A Test Manager’s Transformation Toolkit
Mari Kawaguchi, Bank of America
If you have had your testing window reduced or you are being challenged to do more with less, this session is for you. Mari Kawaguchi shares how she and her team embraced these challenges to transform their testing operating model. Sharing her extensive experience, Mari details the road map that elevated her testing organization to a valued and strategic partner within the company. Mari describes the strategic components of testing—people, process, and technology—and shares how to assess your team’s skills, build subject matter experts, and ensure the right mindset to drive innovation and change. From a process and technology perspective, she outlines early testing engagement and collaboration, risk-based testing, root cause analysis of escaped defects, and performance scorecards. Take back a new toolkit of ways to transform your test team into strategic business partners.
Thursday, October 04, 2012 1:30 PM
T14
Test Techniques
The Art of Designing Test Data
Rajini Padmanaban, QA InfoTech
Test data generation is an important preparatory step in software testing. It calls for a tester’s creativity just as much as test case design does. Focusing on the type of testing to be performed and designing data to support it yields the greatest success in finding defects. For example, security testing largely requires negative test data to attempt to gain access to a system as a hacker would. Localization testing requires very specific test data in the areas of date, time, and currency. Rajini Padmanaban describes how test data generation is a reverse engineering process, in which one first focuses on the end goal and then works back to determine what kind of data should be created. Rajini describes data sets for various types of testing, ideas to keep in mind when reusing test data, and ways to share data across the product team to save time without stifling the team’s creative thinking.
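Working backward from the testing goal can be sketched concretely. The data sets below are illustrative of the kinds Rajini describes, not her actual sets:

```python
# Goal-first test data: pick the kind of testing, then generate data for it.
import datetime

# Localization: the same instant rendered in formats a date parser must handle.
moment = datetime.datetime(2012, 10, 4, 9, 45)
localized_dates = [
    moment.strftime("%m/%d/%Y %I:%M %p"),  # en-US: 10/04/2012 09:45 AM
    moment.strftime("%d/%m/%Y %H:%M"),     # en-GB: 04/10/2012 09:45
    moment.strftime("%Y-%m-%d %H:%M"),     # ISO 8601
]

# Currency edge cases: separators, symbol position, zero and negative amounts.
currency_samples = ["$1,234.56", "1.234,56 €", "¥1234", "-£0.01", "0"]

# Security: negative data a hacker might submit in a text field.
hostile_inputs = [
    "' OR '1'='1",                    # classic SQL injection probe
    "<script>alert(1)</script>",      # reflected XSS probe
    "A" * 10_000,                     # buffer/length handling
    "../../etc/passwd",               # path traversal
]

for value in localized_dates + currency_samples + hostile_inputs:
    print(repr(value))  # feed each into the field or API under test
```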
Thursday, October 04, 2012 1:30 PM
T15
Metrics
The Metrics Minefield
Michael Bolton, DevelopSense, Inc.
In many organizations, management demands measurements to help assess the quality of software products and projects. Are those measurements backed by solid metrics? How do we make sure that our metrics are reliably measuring what they're supposed to? What skills do we need to do this job well? Measurement is the art and science of making reliable and significant observations. Michael Bolton describes some common problems and risks with software measurement, and what we can do to address them. Learn to think critically about numbers, what they appear to measure and how they can be distorted. Improve the quality of the information that we're gathering to understand the relationship between observation, measurement, and metrics. Evaluate your measurements by asking probing questions about their validity. Join Michael to find out how measurement can be used to illustrate—not tell—the project team’s stories, and how those stories can help you make better decisions.
Thursday, October 04, 2012 1:30 PM
T16
Mobile Testing
Automating Mobile Application Testing with MonkeyTalk
Stu Stern, Gorilla Logic
As enterprises scramble for competitive advantage by rapidly creating and deploying compelling mobile applications, testing professionals have been challenged to quickly adopt new tools and processes to provide effective testing. Failing to meet this challenge can often result in "one-star" user ratings that doom the application to failure. While many automation engineers have mastered the available tools for automating web application testing, mobile applications require new kinds of tools that understand the richer palette of user interface components and gestures that comprise modern mobile application interfaces. Stu Stern introduces MonkeyTalk (formerly FoneMonkey and FlexMonkey), a free and open source tool that lets testers record, play back, edit, and manage comprehensive functional test automation suites for native Android, iOS, HTML5, and Adobe Flex applications. In addition to the basic operations, learn how to parameterize and data-drive test scripts, create reusable testing libraries, and create cross-platform tests that work with both iOS and Android applications.
Thursday, October 04, 2012 1:30 PM
T17
Security Testing
Security Testing: Thinking Like an Attacker
Frank Kim, ThinkSec
Compared to traditional functional testing, security testing requires testers to develop the mindset of real attackers and proactively look for security vulnerabilities throughout the software development lifecycle. Using live demos, Frank Kim shows you how to think—and act—like a hacker. Rather than just talking about issues such as Cross Site Scripting (XSS), SQL Injection, and Cross Site Request Forgery (CSRF), Frank shows—live and in color—how hackers abuse potentially devastating defects by finding and exploiting vulnerabilities in a live web application. Find out how attackers approach the problem of gaining unauthorized access to systems. Discover the tools hackers have that you don't even know exist and how you can find critical security defects in your production apps. In this revealing session, you’ll learn how to become a better tester and find serious security vulnerabilities in your systems before the bad guys do.
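One of the simplest attacker-style probes of this kind checks whether a query parameter is reflected back unescaped, a symptom of reflected XSS. A minimal sketch, with a placeholder URL (probe only systems you are authorized to test):

```python
# Reflected-XSS probe: submit a marker payload and check whether the page
# echoes it back verbatim instead of HTML-escaping it.
import requests

target = "https://example.com/search"   # placeholder target
marker = "<script>alert('xss-probe')</script>"

response = requests.get(target, params={"q": marker}, timeout=30)
if marker in response.text:
    print("Parameter 'q' is reflected unescaped: likely XSS vulnerable")
else:
    print("Payload not reflected verbatim; output escaping may be in place")
```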
Thursday, October 04, 2012 1:30 PM
T18
Special Topics
Test Automation as a Service
Jonathon Wright, Trafigura
We’ve all heard the claims that cloud computing will, without any up-front investment, provide instant scalability, flexibility, and availability for testing-on-demand. But how well does this work in practice? Is it really the perfect environment to create powerful testing solutions? If so, why would your business invest in creating its own custom test automation frameworks to support your solution lifecycle management when there’s a solution to meet everyone’s needs? Based on his experience, describing recent successes and the pitfalls to be avoided, Jonathon Wright explores these questions and the pros and cons of using the “Testing as a Service” model. Sharing real-world examples, he shows how you can transform the automation services in your organization and how you can take your company on the journey into the cloud regardless of your automation maturity. Influence and contribute to the future direction of the “Testing as a Service” community.
Thursday, October 04, 2012 3:00 PM
T19
Test Management
Using Agile Techniques to Manage Testing—Even on Non-agile Projects
Brian Osman, OsmanIT
Sometimes, test managers and teams can get bogged down in rigid processes, excessive documentation, and simply too much data, resulting in less actual testing getting done. The good news is that there is a better way! Brian Osman describes how he applies agile and lean practices within his test team, even on non-agile projects. Brian shares his experiences with low-tech, high-value techniques such as big visible charts to track and share information. He was able to cut down on long meetings and eliminate complex status reports while still helping his team stay focused and on track. They improved visibility and communications with management, development, and business stakeholders while reducing interruptions. Brian explains how his new approach positively influenced other projects around them and how it helped everyone stay on task. Learn ways your test team can “go agile”—even if the rest of the project has not.
Thursday, October 04, 2012 3:00 PM
T20
Test Techniques
Big Data and Quality
Ken Johnston and Reena Agarwal, Microsoft
Big Data systems—those in which data sets grow so large that they become awkward to work with using traditional tools—are a huge problem for testers. To address the Big Data problem, testers must throw the traditional approach of creating and executing fixed test cases out the window. Ken Johnston and Reena Agarwal share how testing is changing and adapting within Microsoft on the Bing search platform and elsewhere. There, they are implementing sophisticated analysis techniques to validate system output and measure data quality. Join this leading-edge session to learn what Big Data is all about and the technologies driving these new systems. Explore examples from different industries—including Big Data testing within Microsoft and Bing—and explore the new skills you and your team will need to test in the era of Big Data. When you hear about the petabytes, exabytes, zettabytes, or yottabytes of data coming your way, you’ll be ready.
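One small example of analysis-based validation is scoring field completeness while streaming over a data set too large to load whole. The file name and schema below are hypothetical:

```python
# Stream a large log file row by row and measure how complete each
# required field is, one simple data-quality metric.
import csv

required_fields = ["query", "timestamp", "result_count"]
rows = 0
missing = {field: 0 for field in required_fields}

with open("search_log.csv", newline="") as f:  # could be many GB; streamed
    for record in csv.DictReader(f):
        rows += 1
        for field in required_fields:
            if not (record.get(field) or "").strip():
                missing[field] += 1

for field, count in missing.items():
    rate = 100 * (1 - count / rows) if rows else 0.0
    print(f"{field}: {rate:.2f}% complete over {rows} rows")
```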
Thursday, October 04, 2012 3:00 PM
T21
Metrics
The Dangers of the Requirements Coverage Metric
Lee Copeland, Software Quality Engineering
When testing a system, one question that always arises is, “How much of the system have we tested?” Coverage is defined as the ratio of “what has been tested” to “what there is to test.” One of the basic coverage metrics is requirements coverage—measuring the percentage of the requirements that have been tested. Unfortunately, the requirements coverage metric comes with some serious difficulties: Requirements are difficult to count; they are ideas, not physical things, and come in different formats, sizes, and quality levels. In addition, making a complete count of “what there is to test” is impossible in today’s hyper-complex systems. The imprecision of this metric makes it unreliable or even undefined and unusable. What is a test manager to do? Join Lee Copeland to discover how to reframe the idea of coverage from a quantitative to a qualitative measure, recognizing that coverage does not measure our testing of a product, but our testing of a model that represents the product, with many details omitted.
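The counting problem is easy to demonstrate with invented numbers; the same tests produce wildly different "coverage" depending on how the requirements are counted:

```python
# Two honest analysts count the same specification differently.
requirements_as_written = 40    # counting the numbered "shall" statements
requirements_decomposed = 173   # the same spec split into testable items

tested_as_written = 36          # statements with at least one test
tested_decomposed = 81          # the same tests, recounted against the finer breakdown

print(f"Coverage, coarse count: {tested_as_written / requirements_as_written:.0%}")  # 90%
print(f"Coverage, fine count:   {tested_decomposed / requirements_decomposed:.0%}")  # 47%
```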
Thursday, October 04, 2012 3:00 PM
T22
Mobile Testing
Testing Mobile Apps: Three Dilemmas Solved
Yoram Mizrachi, Perfecto Mobile
The fragmentation and unpredictability of the mobile market present new challenges and risks for the business—and the development team. Testers must assure application quality across multiple platforms and help deliver new products almost every day. Using his experiences implementing automated mobile testing for clients, Yoram Mizrachi analyzes three fundamental mobile testing dilemmas encountered when enterprises go mobile. First, learn how and when to use emulators and real mobile devices, when to rely on each, and how many devices you will need in each stage of development. Second, understand the differences between testing on local devices versus remote devices in the cloud and how the differences affect test coverage, scalability, logistics, risk, and security liability. Third, discover how to reduce the learning curve and time-to-market by extending your existing software application lifecycle management environment to mobile, preserving your organization's tools, processes, and knowledge. Join Yoram to increase your ability to quickly identify an efficient, feasible mobile testing strategy for your organization.
Thursday, October 04, 2012 3:00 PM
T23
Security Testing
Penetration Testing Demystified
Edward Bonver, Symantec
Penetration testing is a method of evaluating the security of a system by maliciously attacking it and analyzing its possible weaknesses. Penetration testing uses a suite of tests, generally performed in a gray-box fashion, to attack the system as real attackers would—approaching the system with attacker eyes, knowledge, skills, and tools. Edward Bonver explains why and how penetration testing should be done on any mission-critical system as part of a comprehensive security testing strategy. He describes the factors that influence the success of penetration testing, including testing environment readiness, technical information, and the availability of the product teams’ key contacts. You’ll learn the details behind penetration testing, common approaches, testing options, and best practices. Discover what technologically diverse product teams can expect of the process, how it gets done at Symantec, and how you can prepare to perform an active penetration test on your system.
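As one tiny, concrete illustration of the reconnaissance step that typically opens such an engagement (not Edward's method; the host is a placeholder, and you should scan only systems you are explicitly authorized to test):

```python
# Check which common TCP ports answer on a target host.
import socket

host = "test.example.com"  # placeholder target
common_ports = {21: "ftp", 22: "ssh", 25: "smtp", 80: "http",
                443: "https", 3306: "mysql", 8080: "http-alt"}

for port, service in sorted(common_ports.items()):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        status = "open" if s.connect_ex((host, port)) == 0 else "closed/filtered"
    print(f"{port:>5}/tcp ({service}): {status}")
```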
Thursday, October 04, 2012 3:00 PM
T24
Special Topics
Prime Directive: Improve Dev Testing Skills
Andrew Prentice, Atlassian
In many development organizations today, quality is the responsibility of everyone on the project—both developers and testers. However, getting devs fully engaged in this testing continues to be a challenge. Andrew Prentice describes two approaches—blitz testing and mentored testing—that help Atlassian’s developers gain the skills they need to improve code quality before the testers get their hands on the application. He shares how they structure and organize blitz tests (group test sessions), the various roles participants play in them, and how to foster their viral adoption. Andrew also describes mentored testing, including its potential risks and mitigation options. He examines the skills and tools testing teams need to implement these techniques with their devs. You’ll learn about the whole-team accomplishments that came about as a result of their innovative efforts: improved accountability, elimination of bottlenecks, and testers having time to focus on the most complex testing issues.

