Explore all that STARWEST has to offer in the conference schedule. Filter by job function or interest area to find the topics that matter the most to you.
You're under tight time pressure and have barely enough information to proceed with testing. How do you test quickly and inexpensively, yet still produce informative, credible, and accountable results? Rapid Software Testing, adopted by context-driven testers worldwide, offers a field-proven answer to this all-too-common dilemma. In this one-day sampler of the approach, Paul Holland introduces you to the skills and practice of Rapid Software Testing through stories, discussions, and "minds-on" exercises that simulate important aspects of real testing problems.
Large-scale testing projects can severely stress many of the testing practices we have grown used to over the years, resulting in less-than-optimal outcomes. A number of innovative ideas and concepts have emerged to support industrial-strength testing of large and complex projects. Hans Buwalda shares his experiences and the strategies he's developed and used for testing on large projects. Learn how to design tests specifically for automation and how to successfully incorporate keyword testing.
Whether you are new to testing or looking for a better way to organize your test practices and processes, the Systematic Test and Evaluation Process (STEP™) offers a flexible approach to help you and your team succeed. Dale Perry describes this risk-based framework—applicable to any development lifecycle model—to help you make critical testing decisions earlier and with more confidence. The STEP™ approach helps you decide how to focus your testing effort, what elements and areas to test, and how to organize test designs and documentation.
In response to increasing market demand for high performance applications, many organizations implement performance testing projects, often at great expense. Sadly, these solutions alone are often insufficient to keep pace with emerging expectations and competitive pressures. With specific examples from recent client implementations, Scott Barber shares the fundamentals of implementing T4APM™, a simple and universal approach that is valuable independently or as an extension of existing performance testing programs.
A test strategy is the set of ideas that guides your test design. It's what explains why you test this instead of that, and why you test this way instead of that way. Strategic thinking matters because testers must make quick decisions about what needs testing right now and what can be left alone. You must be able to work through major threads without being overwhelmed by tiny details. James Bach describes how test strategy is organized around risk but is not defined before testing begins. Rather, it evolves alongside testing as we learn more about the product.
Many organizations never achieve the significant benefits that are promised from automated test execution. Surprisingly often, this is due not to technical factors but to management issues. Dot Graham describes the most important management concerns the test manager must address for test automation success, and helps you understand and choose the best approaches for your organization—no matter which automation tools you use or your current state of automation.
To be most effective, test managers must develop and use metrics to help direct the testing effort and make informed recommendations about the software’s release readiness and associated risks. Because one important testing activity is to “measure” the quality of the software, test managers must measure the results of both the development and testing processes. Collecting, analyzing, and using metrics is complicated because many developers and testers are concerned that the metrics will be used against them.
Exploratory testing is an approach to testing that emphasizes the freedom and responsibility of testers to continually optimize the value of their work. It is the process of three mutually supportive activities—learning, test design, and test execution—done in parallel. With skill and practice, exploratory testers typically uncover an order of magnitude more problems than when the same amount of effort is spent on procedurally scripted testing. All testers conduct exploratory testing in one way or another, but few know how to do it systematically to obtain the greatest benefits.
Has this happened to you? You try to implement a change in your organization and it doesn’t get the support that you thought it would. And, to make matters worse, you can't figure out why. Or, you have a great idea but can’t get the resources required for successful implementation. Jennifer Bonine shares a toolkit of techniques to help you determine which ideas will—and will not—work within your organization.
In today’s market, global outreach, quick time to release, and a feature-rich design are the major factors that determine a product’s success. Organizations are constantly on the lookout for innovative testing techniques to match these driving forces. Crowdsourced testing is a paradigm increasing in popularity because it addresses these factors through its scale, flexibility, cost effectiveness, and fast turnaround.
Data warehouses are critical systems for collecting, organizing, and making information readily available for strategic decision making. The ability to review historical trends and monitor near real-time operational data is a key competitive advantage for many organizations. Yet the methods for assuring the quality of these valuable assets are quite different from those of transactional systems. Ensuring that appropriate testing is performed is a major challenge for many enterprises.
The nature of exploration, coupled with the ability of testers to rapidly apply their skills and experience, makes exploratory testing a widely used test approach—especially when time is short. Unfortunately, exploratory testing often is dismissed by project managers who assume that it is not reproducible, measurable, or accountable. If you have these concerns, you may find a solution in a technique called session-based test management (SBTM), developed by Jon Bach and his brother James specifically to address these issues.
The key to successful testing is effective and timely planning. Rick Craig introduces proven test planning methods and techniques, including the Master Test Plan and level-specific test plans for acceptance, system, integration, and unit testing. Rick explains how to customize an IEEE-829-style test plan and test summary report to fit your organization’s needs. Learn how to manage test activities, estimate test efforts, and achieve buy-in. Discover a practical risk analysis technique to prioritize your testing and become more effective with limited resources.
Are you overwhelmed by the number of mobile devices you need to test? The device market is large and new devices become available almost weekly. Karen Johnson discusses three key mobile testing challenges—device selection, user interface, and device and application settings—and leads you through each. Learn how to select which devices to test and how to keep up-to-date in the ever-changing mobile market. Need to learn about user interface testing on mobile? Karen reviews mobile UX concepts and design. Wonder what device settings can impact your mobile app testing?
Test reporting is something few testers take time to practice. Nevertheless, it's a fundamental skill—vital for your professional credibility and your own self-management. Many people think management judges testing by bugs found or test cases executed. Actually, testing is judged by the story it tells. If your story sounds good, you win. A test report is the story of your testing. It begins as the story we tell ourselves, each moment we are testing, about what we are doing and why. We use the test story within our own minds, to guide our work.
In the tradition of James Whittaker’s book series How to Break … Software, Jon Hagar applies the testing “attack” concept to the domain of embedded software systems. Jon defines the sub-domain of embedded software and examines the issues of product failure caused by defects in that software. Next, he shares a set of attacks against embedded software based on common modes of failure that testers can direct against their own software. For specific attacks, Jon explains when and how to conduct the attack, as well as why the attack works to find bugs.
Test estimation is one of the most difficult software development activities to do well. The primary reason is that testing is not an independent activity and is often plagued by upstream destabilizing dependencies. Julie Gardiner describes common problems in test estimation, explains how to overcome them, and reveals six powerful ways to estimate test effort. Some estimation techniques are quick but can be challenged easily; others are more detailed and time consuming to use.
Critical thinking is the kind of thinking that specifically looks for problems and mistakes. Regular people don't do a lot of it. However, if you want to be a great tester, you need to be a great critical thinker. Critically thinking testers save projects from dangerous assumptions and ultimately from disasters. The good news is that critical thinking is not just innate intelligence or a talent—it's a learnable and improvable skill you can master.
All testers know that we can identify many more test cases than we will ever have time to design and execute. The key problem in testing is choosing a small, “smart” subset from the almost infinite number of possibilities available. Join Lee Copeland to discover how to design test cases using formal black-box techniques, including equivalence class and boundary value testing, decision tables, state-transition diagrams, and all-pairs testing. Explore white-box techniques with their associated coverage metrics.
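To make those black-box techniques concrete, here is a minimal sketch (not taken from Lee's course materials) of equivalence class and boundary value design applied to a hypothetical ticket-pricing rule; the rule, the boundaries, and the function names are invented for illustration.

```python
# A minimal sketch of equivalence class and boundary value test design,
# assuming a hypothetical discount rule: ages 0-12 get a child rate,
# 13-64 the standard rate, and 65+ a senior rate.
import pytest

def ticket_rate(age: int) -> str:
    """Hypothetical system under test."""
    if age < 0:
        raise ValueError("age cannot be negative")
    if age <= 12:
        return "child"
    if age <= 64:
        return "standard"
    return "senior"

# One representative value per equivalence class, plus the values on
# either side of each boundary (12/13 and 64/65).
@pytest.mark.parametrize("age,expected", [
    (6, "child"),        # middle of the child class
    (12, "child"),       # upper boundary of child
    (13, "standard"),    # lower boundary of standard
    (40, "standard"),    # middle of the standard class
    (64, "standard"),    # upper boundary of standard
    (65, "senior"),      # lower boundary of senior
])
def test_ticket_rate(age, expected):
    assert ticket_rate(age) == expected

def test_negative_age_is_rejected():
    with pytest.raises(ValueError):
        ticket_rate(-1)
```

A handful of rows like these covers every equivalence class and every boundary while keeping the test suite small, which is the point of choosing a "smart" subset.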
As applications for smartphones and tablets become incredibly popular, organizations encounter increasing pressure to quickly and successfully deliver testing for these devices. When faced with a mobile testing project, many testers find it tempting to apply the same methods and techniques used for desktop applications. Although some of these concepts transfer directly, testing mobile applications presents its own special challenges. Jonathan Kohl says if you follow the same practices and techniques as you have before, you will miss critical defects.
Although many training classes and conference presentations describe processes and techniques meant to help you find bugs, few explain what to do when you find a good one. How do you know what the underlying problem is? What do you do when you find a bug, and the developer wants you to provide more information? How do you reproduce those pesky, intermittent bugs that come in from customer land? In this hands-on class, Jon Bach helps you practice your investigation and analysis skills—questioning, conjecturing, branching, and backtracking.
Feel your testing’s stuck in a rut? Looking for new ways to discover test ideas? Wondering if your testers have constructive methods to discover different approaches for testing? In this interactive session, Karen Johnson explains how to use heuristics to find new ideas. After a brief discussion, Karen has you apply and practice with a variety of heuristics. Need to step back and consider some of your testing challenges from a fresh perspective?
It is not enough to verify that software conforms to requirements by passing established acceptance tests. Successful software products engage, entertain, and support the users' experience. While goals vary from project to project, no matter how robust and reliable your software is, if your users do not embrace it, business can slip from your hands. Rob Sabourin shares how to elicit effective usability requirements with techniques such as storyboarding and task analysis.
You don’t have to be a social butterfly to succeed with social networking. As a manager, tester, or QA professional, you need to differentiate yourself from the pretenders. If you are a “doer,” it’s time to start building your reputation at work and extending your reach on social networking sites and discussion forums, through online participation, and at conferences like STAR. Whether you are searching for a new job, recruiting a candidate, or looking for new ways to solve problems, you need to know how to network.
Testers often encounter problems when automating test execution. The surprising thing is that many testers encounter the very same problems, over and over again. These problems often have known solutions, yet many testers are not aware of them. Recognizing the commonality of these test automation issues and their solutions, Seretta Gamba and Dorothy Graham have organized them into a set of test automation patterns. A pattern is a general, reusable solution to a commonly occurring problem.
You name the testing topic, and Alan Page has an opinion on it, hands-on practical experience with it—or both. Spend the morning with Alan as he discusses a variety of topics, trends, and tales of software engineering and software testing. In an interactive format loosely based on discovering new testing ideas—and bringing new life to some of the old ideas—Alan shares experiences and stories from his twenty-year career as a software tester.
Have you ever worked on a project where you felt testing was thorough and complete—all of the features were covered and all of the tests passed—yet in the first week in production the software had serious issues and problems? Join Dawn Haynes to learn how to inject robustness testing into your projects to uncover those issues before release. Robustness—an important and often overlooked area of testing—is the degree to which a system operates correctly in the presence of exceptional inputs or stressful environmental conditions.
Testing in production for online applications has evolved into a critical component of successful performance testing strategies. Dan Bartow explains the fundamentals of cloud computing, its application to full-scale performance validation, and the practices and techniques needed to design and execute a successful testing-in-production strategy. Drawing on his experiences, Dan describes the methodology he has used for testing numerous online applications in a production environment with minimal disruption.
Innovation is a word tossed around frequently in organizations today. The standard cliché is “Do more with less.” People and teams want to be innovative but often struggle with how to define, prioritize, implement, and track their innovation efforts. Jennifer Bonine shares the "Innovation Types" model to give you new tools to evolve and expand your innovation capabilities. Find out if your innovation ideas and efforts match your team and company goals. Learn how to classify your innovation and improvement efforts as core (to the business) or context (essential but non-revenue generating).
Today’s software applications are often security-critical, making security testing an essential part of a software quality program. Unfortunately, most testers have not been taught how to effectively test the security of the software applications they validate. Join Jeff Payne as he shares what you need to know to integrate effective security testing into your everyday software testing activities. Learn how software vulnerabilities are introduced into code and exploited by hackers. Discover how to define and validate security requirements.
In both agile and traditional projects, keyword-driven testing has proven to be a powerful way to attain a high level of automation—when it is done correctly. Many testing organizations use keyword-driven testing but aren’t realizing the full benefits of scalability and maintainability that are essential to keep up with the demands of testing today’s software. Hans Buwalda outlines how you can meet what he calls the “5 percent challenges”—automate 95 percent of your tests with no more than 5 percent of your total testing effort—using his proven, keyword-driven test method.
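As a rough illustration of the keyword-driven idea (this is not Hans Buwalda's actual framework; the keywords, actions, and test case below are hypothetical), test cases become rows of keywords plus arguments, and a thin interpreter dispatches each keyword to an action function.

```python
# Minimal keyword-driven sketch: testers author keyword rows (often in a
# spreadsheet); only the interpreter and the action functions require
# programming effort, which is what makes the approach scale.
def open_app(name):
    print(f"opening {name}")

def enter_text(field, value):
    print(f"typing '{value}' into {field}")

def check_value(field, expected):
    print(f"verifying {field} == {expected}")

ACTIONS = {
    "open app": open_app,
    "enter": enter_text,
    "check": check_value,
}

# A test case expressed purely in business-level keywords.
test_case = [
    ("open app", "invoicing"),
    ("enter", "amount", "100.00"),
    ("check", "total", "100.00"),
]

def run(rows):
    for keyword, *args in rows:
        ACTIONS[keyword](*args)

run(test_case)
```

Because the test cases contain no automation code, maintenance concentrates in a small set of action functions rather than in thousands of scripts.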
In our increasingly agile world, the new buzzword is collaboration—so easy to preach but difficult to do well. Testers are challenged to work directly and productively with customers, programmers, business analysts, writers, trainers, and pretty much everyone in the business value chain. Testers and managers have many touch points of collaboration: grooming stories with customers, sprint planning with team members, reviewing user interaction with customers, troubleshooting bugs with developers, whiteboarding with peers, and buddy checking.
When testing web applications, you may feel overwhelmed by the technologies of today's web environments. Web testing today requires more than just exercising a system’s functionality. Each system is composed of a customized mix of various layers of technology, each implemented in a different programming language and requiring unique testing strategies. This “stew” often leads to puzzling behavior across browsers; performance problems due to page design and content, server locations, and architecture; and inconsistent operation of navigation controls.
When leading a test team or working in an agile team, becoming a trusted advisor to other stakeholders is paramount. This requires three key skills: earning trust, giving advice, and building relationships. Join Julie Gardiner as she explores each of these skills, describing why and how a trusted advisor develops different “mindsets.” Julie shares a framework of “quick-wins” for test managers and team leaders who need to show the value of testing on projects. To help provide timely, relevant information to stakeholders, she shares seven powerful monitoring and predicting techniques.
Kick off the conference with a welcome reception! Mingle with experts and colleagues while enjoying complimentary food and beverages.
Are you a new star speaker or aspiring to be one in the future? Join us at this workshop on making effective conference presentations. Learn the secrets of developing content, identifying the Big Message, preparing slides with just the right words and images, presenting your message, handling questions from the audience, and being ready when things go wrong. Lee Copeland, a professional speaker since birth, shares ideas that will help you be a better speaker, no matter what the occasion.
Professional testers and test managers are feeling the pressures of low-cost competition and tools that claim to replace them through automation. So, how can test teams add more value to their projects and organization? In a recent survey of executives and testers, Mike Kelly and Jeanette Thebeau found major disconnects between what executives and testers believe are most important to the business. They explore new insights into the risks and concerns executives perceive and what you should do differently.
Testing a game console isn’t all fun and games. However, with more than 50 million Xbox 360 consoles sold, and the amazing success of the Kinect sensor, it’s certainly a hotbed of excitement for software developers and testers alike. Veteran tester Alan Page is having a blast on the Xbox console team and shares an insider’s view of what it’s like to test one of the most popular entertainment systems ever created. Learn the details of testing the Xbox from the guts of the operating system to the latest applications—and everything in between.
Network with colleagues and speakers in the Expo.
A number of test automation ideas that at first glance seem very sensible actually contain pitfalls and problems that you should avoid. Dot Graham describes five of these “intelligent mistakes”—automated tests will find more bugs more quickly; spending a lot on a tool must guarantee great benefits; it’s necessary to automate all of our manual tests; tools are expensive so we have to show a substantial return on investment; and testing tools must be used by the testers. Dot points out that automation doesn’t find bugs; tests do.
Load testing is just one—but the most frequently discussed—aspect of performance testing. Luckily, much of performance testing does not demand the same expensive tools, special skills, environments, or time as load testing does. Scott Barber developed the Rapid Performance Testing (RPT) approach to help individuals and teams with the non-load aspects of performance testing.
The IEEE 829 Test Documentation standard is thirty years old this year. Boris Beizer’s first book on software testing also turned thirty. Testing Computer Software, the best-selling book on software testing, is twenty-five. During the last three decades, hardware platforms have evolved from mainframes to minis to desktops to laptops to tablets to smartphones. Development paradigms have shifted from waterfall to agile. Consumers expect more functionality, demand higher quality, and are less loyal to brands. The world has changed dramatically and testing must change to match it.
The demand to deliver more software in less time is increasing. Give in to the pressure without thinking, and you end up facing burnout, stress, business risk, and, most likely, even more demands. Refuse, fight the good fight, and it is likely the business will replace you with someone else. Matt Heusser tackles head-on the problem of pressure, sharing his favorite concepts from the book How to Reduce the Cost of Software Testing.
And now for something completely different...Monty Python's Flying Circus revolutionized comedy and brought zany British humor to a worldwide audience. However, buried deep in the hilarity and camouflaged in its twisted wit lie many important testing lessons—tips and techniques you can apply to real world problems to deal with turbulent projects, changing requirements, and stubborn project stakeholders.
If you've worked on an agile project, delivering to production on a regular basis, then you've struggled with the challenge of fitting in all the big tasks—performance, security, usability, and compatibility testing. To make matters worse, over time it becomes more and more challenging just to fit in all the functional testing that needs to take place, and that's even with rigorous unit and acceptance test automation. So how do you fit all that testing into the backlog when it doesn't tie nicely to one specific feature?
The time is now to take a fresh look at the way you and your teams define, build, test, and deliver applications.
Sometime in your career as a test manager, you’ll be assigned to lead the effort for a program so large that the CEO and board of directors monitor it. These are programs that bet the organization’s future and come with a high degree of risk, visibility, pressure, and fixed deadlines. Internal audit and external third-party reviews become de rigueur. Your upstream partners—analysis, design, development, and suppliers—all appear (at least to you) to miss their deadlines with no apparent consequences.
Today’s data warehouses are complex and contain heterogeneous data from many different sources. Testing these warehouses is complex, requiring exceptional human and technical resources. So how do you achieve the desired testing success? Geoff Horne believes the answer lies in test planning that includes technical artifacts such as data models, business rules, data mapping documents, and data warehouse loading design logic. He shares planning checklists, a test plan outline, concepts for data profiling, and methods for data verification.
Model-based testing can be a powerful alternative to just writing test cases. However, modeling tools are specialized and not suitable for everyone. On the other hand, keyword-driven test automation has gained wide acceptance as a powerful way to create maintainable automated tests, and, unlike models, keywords are simple to use. Hans Buwalda demonstrates different ways that keyword testing and models can be combined to make model-based testing more readily accessible. Learn how you can use keywords to create the models directly.
Historically, performance tests are run long after the code has been checked in, making performance issues time consuming to resolve and thus not a good fit in the agile process. Ivan Kreslin presents a solution that he’s implemented to address this problem. Learn how Ivan integrates the functionality in Microsoft Performance Profiling tools into a test automation framework to capture performance-related issues during continuous integration.
Code reviews are often thought of as anti-agile, cumbersome, and disruptive. However, done correctly, they enable agile teams to become more collaborative and effective, and ultimately to produce higher quality software faster. Mark Hammer describes how lightweight code review practices succeed where more cumbersome methods fail. Mark offers tips on the mechanics of lightweight code reviews and compares five common styles of review. He looks at real-world examples and reveals impressive results.
When implementing software quality metrics, we need to first understand the purpose of the metrics and who will be using them. Will the metric be used to measure people or the process, to illustrate the level of quality in software products, or to drive toward a specific objective? QA managers typically want to deliver productivity metrics to management but management may want to see metrics that describe customer or user satisfaction. Philip Lew believes that software quality metrics without actionable objectives toward increasing customer satisfaction are a waste of time.
The ability to rapidly release new product features is vital to the success of today's businesses. To accelerate development and software delivery, teams are adopting agile practices and leveraging service-oriented architectures to integrate legacy applications with other systems. At the same time, testing these composite applications can take longer and cost more. Al Wagner explains the whys and hows of service virtualization and shares ways testers can employ this technology to simulate parts of complex systems and begin testing earlier.
Get ahead of the quality assurance (QA) curve with a preview of results from the 5th edition of the World Quality Report, a series of annual surveys examining the state of application quality and testing practices across industries and geographies. Since 2009, the Capgemini Group and HP have published the annual report to provide insight into the latest trends in application quality, methodologies, tools, and processes across multiple industries and geographies. In this presentation, Mary, Sultan, and Michael discuss some of these trends, including:
In the test lab and in production everything hinges on looking at the right performance metrics. A common problem for engineering teams is that they don’t know what metrics they should be analyzing. It’s easy to get lost in an ocean of data from disparate monitoring tools and end up with no answers to the simplest questions about performance and capacity. The reality is that to build an effective capacity model, engineers only need to track three key metrics from each tier.
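The abstract does not name the three metrics, so as a hedged sketch only, the example below assumes the commonly used trio of throughput, per-request service time, and utilization for a tier, tied together by the utilization law U = X * S. All names and numbers are illustrative.

```python
# Hedged capacity-model sketch (assumed metrics, not the speaker's):
# utilization of a tier from its throughput and service time, plus the
# remaining headroom before a target utilization ceiling.
def utilization(throughput_rps: float, service_time_s: float, servers: int) -> float:
    """Fraction of tier capacity in use (0.0 to 1.0)."""
    return throughput_rps * service_time_s / servers

def headroom(current_rps: float, service_time_s: float, servers: int,
             max_utilization: float = 0.75) -> float:
    """Additional requests/sec the tier can absorb before hitting the ceiling."""
    max_rps = max_utilization * servers / service_time_s
    return max_rps - current_rps

# Example: an app tier of 8 servers, 30 ms of service time per request,
# currently handling 120 requests/sec.
print(utilization(120, 0.030, 8))   # 0.45 -> tier is 45% busy
print(headroom(120, 0.030, 8))      # 80.0 -> ~80 more requests/sec available
```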
All too often an agile iteration resembles a mini-waterfall cycle with developers coding for the duration of the iteration and then throwing code “over the wall” to the test team. This results in the all-too-familiar “test squeeze” with testers often testing code after the iteration has already finished. When testing occurs after an iteration’s end, the agile principle of potentially releasable is violated and negatively impacts the next iteration. To avoid these problems we must ensure that all testing is completed before the end of the iteration. But how can we achieve this?
For decades, software development tools and methods have evolved with an emphasis on modeling. Standards like UML and SysML are now used to develop some of the most complex systems in the world. However, test design remains a largely manual, intuitive process. Now, a significant opportunity exists for testing organizations to realize the benefits of modeling. Adam Richards describes how to leverage model-based testing to dramatically improve both test coverage and efficiency—and lower the overall cost of quality.
Many of our stakeholders don't understand testing like we do, especially those whose focus is on making sales, growing revenues, and watching the bottom line. As testers, how can we support them in their efforts to be successful? How can we provide useful, timely information that helps them make important decisions? Pradeep Soundararajan shares his experiences with changing perceptions of testing for those in sales and the ripple effect it had on the testers’ freedom and responsibilities.
In this agile world, as the expectations for rapid mobile application development and delivery get shorter every day, the users’ patience with a buggy app has become almost nonexistent. Elizabeth Taylor shares how to reduce iOS application testing time and gain confidence in your code: use Xcode Instruments with JavaScript to automate your functional tests; verify potentially missed UI elements with manual testing including copy, labels, and images; and learn how to stress test your app.
Exploratory testing (ET) consists of simultaneous learning, test design, test execution, and optimization. Most people are able to adopt the outward behaviors of ET but struggle to adopt an ET mindset. Griffin Jones explains that this mindset requires reflecting on four basic questions: Am I learning and adapting? Am I working on the correct mission? Should I redesign the task? Should I change how I perform the task? Sharing his experiences across project roles, Griffin explains why courage and freedom are critical ingredients in answering those four questions.
Throughout the years, Lightning Talks have been a popular part of the STAR conferences. If you’re not familiar with the concept, Lightning Talks consist of a series of five-minute talks by different speakers within one presentation period. Lightning Talks are the opportunity for speakers to deliver their single biggest bang-for-the-buck idea in a rapid-fire presentation. And now, lightning has struck the STAR keynotes. Some of the best-known experts in testing will step up to the podium and give you their best shot of lightning.
Apply a shift-left QA strategy with a focus on service virtualization, early automation, and continuous integration. In a recent survey of more than 300 organizations, 97% of senior IT executives said they are increasing investments in early testing, and 63% of IT managers reported that a lack of collaboration between QA and development has increased their project risks.
In the February issue of Fortune magazine, eBay made the cover with the title “eBay is Back!” The article cited improvements in the look and feel of the site, strategic investments in fulfillment, and technology partnerships with retailers to establish it as more than just an online auction service. Jon Bach joined just as eBay was making big bets to make notable and visible gains with this strategy. Jon recounts his two and a half years as a quality engineering director and introduces a concept he calls Live Site Quality.
Gaming is a multibillion-dollar industry, and good testing is critical to any game’s success. Game testing has traditionally been black-box through the client—a method clearly insufficient with increasingly more complex software incorporating 3D physics, thousands of linked and interacting assets, large databases, and client-server architecture.
As organizations implement their mobile strategy, testing teams must support new technologies while still maintaining existing systems. Melissa Tondi describes the major trends and innovations in mobile technology, usage, and equipment that you should consider when transitioning existing test teams or starting new ones.
Many development organizations are experimenting—but getting mixed results—with lean development techniques. As a test or development manager, you have the power to help eliminate defects—the largest source of waste in development—and the enormous rework costs they incur. Bill Curtis discusses Jidoka, another pillar of lean, which uses automation to help developers detect and eliminate defects during development.
Thanks to the massive adoption of cloud and mobile applications, web APIs are moving to center stage for many business and technology teams. As a direct result, the need to deliver a high-quality API experience is essential. When it comes to quality aspects of web APIs, there is more than meets the eye. Apart from obvious characteristics related to functionality, performance, and security, several not-so-obvious traits of APIs are crucial for their adoption—many related to the context of the end user and how the API is to be consumed.
Regarded as one of the most important advances in software development, code refactoring is a disciplined technique to improve the design, readability, and maintainability of source code. You can learn to apply the same refactoring concepts to automated functional test scripts. Zhimin Zhan introduces functional test refactoring, a simple and highly effective approach to refine and maintain automated test scripts.
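As a small sketch of the idea (not Zhimin Zhan's own examples; the stub driver, locators, and helper are hypothetical), the classic "extract method" refactoring can be applied to a functional test script just as it is to production code.

```python
# Extract-method refactoring applied to an automated functional test.
# FakeDriver stands in for a real browser driver so the example runs anywhere.
class FakeDriver:
    def type(self, locator, text):
        print(f"type {text!r} into {locator}")
    def click(self, locator):
        print(f"click {locator}")

# Before: every test repeats the same low-level login steps, so a change
# to the login page breaks many scripts.
def test_view_invoice_unrefactored(driver):
    driver.type("#user", "demo")
    driver.type("#password", "secret")
    driver.click("#sign-in")
    driver.click("#invoices")

# After: the duplicated steps live in one intention-revealing helper;
# tests read at the business level and only the helper needs maintenance
# when the login flow changes.
def login(driver, user="demo", password="secret"):
    driver.type("#user", user)
    driver.type("#password", password)
    driver.click("#sign-in")

def test_view_invoice(driver):
    login(driver)
    driver.click("#invoices")

test_view_invoice(FakeDriver())
```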
How do we improve ourselves as software testers? What are the thinking skills we should develop? How do we refine these skills? Observing is one of the essential skills for software testers. We need to detect changes and differences even when they are subtle. Visual imaging helps us to imagine software that doesn’t exist, to plot testing possibilities. Abstracting helps us to see the outline of a product while not losing focus on small details. Managing distraction and focusing are also vital skills. Recognizing patterns enhances a tester’s ability to detect software defects.
Today, consumers spend more time on mobile apps than on the web. With this increased demand and paradigm shift toward mobile devices, the role of the software tester is evolving and becoming more complex. Since mobile testing is a relatively new domain, software testers face the challenge of understanding not only what to test but how to test. Clint Sprauve focuses on real-world strategies and techniques for mobile app testing including device provisioning, mobile network virtualization, multi-OS platform coverage, and hybrid app testing.
Many believe that regression testing an application with minimal data is sufficient. With big data applications, the data testing methodology becomes far more complex. Testing can now be done within the data fabrication process as well as in the data delivery process. Today, comprehensive testing is often mandated by regulatory agencies—and more importantly by customers. Finding issues before deployment and saving your company’s reputation—and in some cases preventing litigation—are critical.
Test status reporting is a key factor in the success of test projects. Stephan Obbeck shares some ideas on how to communicate more than just a red-yellow-green status report to executive management and discusses how the right information can influence their decisions. Testers often create reports that are too technical, losing crucial information in a mountain of detailed data. Management needs to make decisions—based on data they do understand—that support the test project.
The number of software test tools keeps expanding, and individual tools are continuously becoming more advanced. However, there is no doubt that a tester’s most important—yet often neglected and underused—tool is the mind. As testers, we need to employ our intelligence, imagination, and creativity to gain information about the system under test. Humans are biologically designed to learn through play, and even as adults we can exploit this and harness the power of play to encourage and drive our creativity.
You provide your clients a service and product, designed so that each component is customizable and can be dynamically changed right down to screen layout and field location. This greatly increases the amount of testing you have to perform on a release since there could be more than fifty variations of the component. So how do you ensure high quality outcomes with so much testing to be performed under tight timeframes? You automate the testing, of course.
The practice of agile software development requires a clear understanding of business needs. Misunderstanding requirements causes waste, slipped schedules, and mistrust within the organization. Developers implement their perceived interpretation of requirements; testers test against their perceptions. Disagreement can arise about implementation defects, when the cause is really a disagreement about the requirement. Ken Pugh shows how acceptance tests decrease requirements misunderstandings by both developers and testers.
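To illustrate how an acceptance test can pin down a requirement before code is written, here is a hedged example that is not from Ken Pugh's materials; the business rule, threshold, and function names are hypothetical.

```python
# An acceptance test as an executable requirement for the (hypothetical)
# rule "orders of $100 or more ship free." Writing the boundary case down
# as a concrete test settles whether $100.00 exactly qualifies, before
# developers and testers form different interpretations.
def shipping_cost(order_total: float) -> float:
    return 0.0 if order_total >= 100.00 else 7.50

def test_order_at_threshold_ships_free():
    # Given an order totaling exactly $100.00
    # When shipping is calculated
    # Then the customer pays no shipping
    assert shipping_cost(100.00) == 0.0

def test_order_below_threshold_pays_flat_rate():
    assert shipping_cost(99.99) == 7.50
```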
During the past decade, test engineers have become experts in browser compatibility testing. Just when we thought everything was under control, along come native mobile applications that need to run across platforms far more diverse than the desktop browser landscape has ever been. The variety of OSs, screen sizes, and hardware technology combine to create hundreds of configurations that need some testing. Manual testing across so many deployment targets will drive anyone crazy.
Having difficulties getting your organization to recognize the value of QA? Is your “salmon team” losing to currents that impede continuous improvement and strategic planning? Colleen Kirtland and Harish Krishnankutty share their two-year uphill struggle to elevate QA to the position of trusted business partner. Move QA upstream before testing begins by aligning requirements to a business capability model (BCM). Translate the BCM model into key implementation assets with story maps.
Due to the sensitive nature of the personal information often stored on mobile phones, security testing is vital when building mobile applications. Jeff Payne discusses some of the characteristics that make testing mobile applications unique and challenging. These characteristics include how mobile devices store data, fluid trust boundaries due to untrusted applications installed on the device, different and unique aspects of device security models, and differences in the types of threats one must be concerned with. Jeff shares hints and tips for effectively testing mobile applications.
Crowdsourcing has become widely acknowledged as a productivity solution across numerous industries. However, for companies incorporating crowdsourcing into existing business practices, specific issues must be addressed: What problem are we trying to solve? How do we control the process? How do we incentivize people to achieve our goals? Ultimately, the key to successfully employing a crowdsourcing model is to move beyond the realm of the “mob” to create an engaged, interactive community of diverse and skilled professionals.
Feeling fatigued, frustrated, and stressed at work? Wondering how you can stay relevant and highly valued in this fast-changing software development domain? David Rosskopf shares how you can become more productive through a non-traditional approach for automating testing—and much more. David, a self-admitted automation addict, confesses he is easily bored with repetitive tasks and frustrated with inefficiencies. Learn from David how to identify inefficiencies in your workplace and how to develop the right tool to fit each need.
Adding user acceptance testing (UAT) to your testing lifecycle can increase the probability of finding defects before software is released. The challenge is to fully engage users and assist them in becoming effective testers. Help achieve this goal by involving users early and setting realistic expectations. Showing how users add value and taking them through the UAT process strengthens their ability and commitment. Conducting user acceptance testing sessions as software functionality becomes available helps to build confidence and capability—and find defects earlier.
• How do you test important areas of the application that can't be automated, such as third-party or custom controls?
• How do you manage automated test case portfolios that require too much technical and business-oriented maintenance prior to execution? With the testing window so short, how do you drive ROI for automation?
• How do you empower testers who are business and test experts, when current automation tools require skilled and experienced programmers?
Software testing standards—who cares, anyway? You should! The new ISO/IEC/IEEE 29119 software testing standard, driven by representatives from twenty countries and under development for the past five years, will be released soon. As a professional tester, you need to know about this standard and how it may apply to your environment. Jon Hagar describes the standard, how it was developed, and what types of projects will be impacted by it. This new standard offers a risk-based approach to software testing that can be applied to both traditional and agile projects.
We all know the power of Google—or do we? Two types of people use Google: normal users like you and me, and the not-so-normal users—the hackers. What types of information can hackers collect from Google? How severe is the damage they can cause? Is there a way to circumvent this hacking? As a security tester, Kiran Karnad uses the GHDB (Google Hacking Database) to ensure his company's product will not be the next target for hackers.
Organizations with a mobile presence today face a major challenge of building robust automated tests around their mobile applications. However, organizations often have limited testing resources for these increasingly complex projects, and stakeholders worry about the quality of the product. So how do you plan a mobile test automation project, recognizing the failure rate of such efforts? Discover how Tarun Bhatia used big data analytics to understand where customers spend most of their time out in the wild on their apps.
Are you running automated tests during development yet not providing automated feedback to the project stakeholders? Vikas Bhupalam approached this problem by leveraging and integrating monitoring, logging, and defect tracking systems to provide automatic feedback to stakeholders. Tests are executed using a Java-based framework, and the results are sent to a monitoring tool that shows up as traffic lights on a dashboard. The dashboard links to logs on the server that provide insights into failing tests and root causes of problems. Alerts can be triggered for specific conditions.
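The session describes a Java-based framework; purely as a sketch of the shape of that feedback loop (with a hypothetical dashboard URL, payload fields, and alert rule), the idea could look something like this in Python.

```python
# Hedged sketch: after a suite finishes, post a traffic-light summary to a
# monitoring dashboard so stakeholders see results without reading raw logs.
import json
from urllib import request

DASHBOARD_URL = "http://monitor.example.local/api/results"  # hypothetical endpoint

def publish_results(suite: str, passed: int, failed: int, log_url: str) -> None:
    status = "red" if failed else "green"           # traffic-light status
    payload = {
        "suite": suite,
        "passed": passed,
        "failed": failed,
        "status": status,
        "log_url": log_url,                          # drill-down link to server logs
    }
    req = request.Request(
        DASHBOARD_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)                             # raises if the dashboard rejects the post
    if failed:
        print(f"ALERT: {failed} failures in {suite}, see {log_url}")

# Example usage (commented out; requires a reachable dashboard):
# publish_results("checkout-regression", passed=142, failed=3,
#                 log_url="http://ci.example.local/logs/4711")
```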
Just as those in the software world are getting their hands around agile practices, leading software organizations are going beyond continuous delivery for acceptance testing and now adopting continuous deployment—the practice of immediately releasing new code from development into production without human intervention. Continuous delivery promises to provide higher business value through faster deployment and leaner, more productive development and operations (DevOps). Many DevOps teams are concerned about what will happen to quality when they move to continuous deployment.
If you work in a large-scale environment, you know how difficult it is to have all the systems “code complete” and ready for testing at the same time. In order to fully test end-to-end scenarios, you must be able to validate results in numerous systems. But what if all those systems are not available for you to begin testing? Chris Reites describes “decoupled testing,” an enterprise-level solution for managing interface data for capture, injection, simulation, and comparison all along your testing paths.
Software testing has conventionally been a manual activity, with only a handful of automation introduced during the test execution phase. Adopting optimization and automation across all test lifecycle activities—namely requirements analysis, estimation, design, execution, and reporting—enhances quality, improves productivity, and reduces time to market. Organizations with a central Application Lifecycle Management platform integrated with Test Lifecycle Automation can reap immense benefits in their QA journey.
When you think of a bounty, do you think of Dog the Bounty Hunter, a reality series featuring a biker dude with a bad mullet, or maybe Django Unchained, Quentin Tarantino’s latest film about a slave-turned-bounty-hunter? Shaun Bradshaw doesn’t have a mullet and isn’t a movie star, but he has witnessed his fair share of bounty-style incentives used to motivate test teams to find more bugs, in hopes of improving software quality.
More information coming soon!
Join us at The Workshop on Regulated Software Testing (WREST)—a free, full-day bonus session held on Friday after the conference concludes. A unique peer workshop, WREST is dedicated to improving the practice of testing regulated systems. We define regulated software as any system that is subject to an internal or external review.