You're under tight time pressure and have barely enough information to proceed with testing. How do you test quickly and inexpensively, yet still produce informative, credible, and accountable results? Rapid Software Testing, adopted by context-driven testers worldwide, offers a field-proven answer to this all-too-common dilemma. In this one-day sampler of the approach, Michael Bolton introduces you to the skills and practice of Rapid Software Testing through stories, discussions, and "minds-on" exercises that simulate important aspects of real testing problems.
Whether you are new to testing or looking for a better way to organize your test practices and processes, the Systematic Test and Evaluation Process (STEP™) offers a flexible approach to help you and your team succeed. Dale Perry describes this risk-based framework—applicable to any development lifecycle model—to help you make critical testing decisions earlier and with more confidence. The STEP™ approach helps you decide how to focus your testing effort, what elements and areas to test, and how to organize test designs and documentation.
Selenium WebDriver is an open source automation tool for test driving browsers. People sometimes find the API daunting and their initial automation code brittle and poorly structured. In this introduction, Alan Richardson provides hints and tips gained from his years of experience both using WebDriver and helping others improve their use of the tool. Alan starts at the beginning, explaining the basic WebDriver API capabilities—simple interrogation and navigation—and then moves on to synchronization strategies and working with AJAX applications.
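The synchronization strategies mentioned above boil down to polling for a condition rather than sleeping for a fixed time. Here is a rough, library-free sketch of that explicit-wait idea; the `wait_until` helper and `FakePage` class are illustrative stand-ins and not part of the WebDriver API (Selenium's own equivalent is `WebDriverWait`):

```python
import time

def wait_until(condition, timeout=10.0, poll_interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    This mirrors the explicit-wait idea behind Selenium's WebDriverWait:
    instead of a fixed sleep, keep re-checking until the page is ready.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Simulate an AJAX-style element that only appears after a short delay.
class FakePage:
    def __init__(self):
        self._ready_at = time.monotonic() + 0.2
    def find_element(self):
        return "element" if time.monotonic() >= self._ready_at else None

page = FakePage()
element = wait_until(page.find_element, timeout=2.0, poll_interval=0.05)
```

A fixed `time.sleep(5)` either wastes time or is too short; polling with a deadline is why explicit waits make AJAX-heavy tests both faster and less brittle.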
Many organizations never achieve the significant benefits that are promised from automated test execution. Surprisingly often, this is not due to technical factors but to management issues. Dot Graham describes the most important management issues you must address for test automation success, and helps you understand and choose the best approaches for your organization—no matter which automation tools you use or your current state of automation.
Have you ever needed a way to measure your leadership IQ? Or been in a performance review where the majority of time was spent discussing your need to improve as a leader? If you have ever wondered what your core leadership competencies are and how to build on and improve them, Jennifer Bonine shares a toolkit to help you do just that.
The practice of agile software development requires a clear understanding of business needs. Misunderstanding requirements causes waste, slipped schedules, and mistrust within the organization. Jared Richardson shows how good acceptance tests can reduce misunderstanding of requirements. A testable requirement provides a single source that serves as the analysis document, acceptance criteria, regression test suite, and progress-tracker for any given feature. Jared explores the creation, evaluation, and use of testable requirements by the business and developers.
To be most effective, test managers must develop and use metrics to help direct the testing effort and make informed recommendations about the software’s release readiness and associated risks. Because one important testing activity is to “measure” the quality of the software, test managers must measure the results of both the development and testing processes. Collecting, analyzing, and using metrics is complicated because many developers and testers are concerned that the metrics will be used against them.
Exploratory testing is an approach to testing that emphasizes the freedom and responsibility of testers to continually optimize the value of their work. It is the process of three mutually supportive activities—learning, test design, and test execution—done in parallel. With skill and practice, exploratory testers typically uncover an order of magnitude more problems than when the same amount of effort is spent on procedurally scripted testing. All testers conduct exploratory testing in one way or another, but few know how to do it systematically to obtain the greatest benefits.
Large-scale and complex testing projects can stress the testing and automation practices we have learned through the years, resulting in less than optimal outcomes. However, a number of innovative ideas and concepts are emerging to better support industrial-strength testing for big projects. Hans Buwalda shares his experiences and strategies he's developed for organizing and managing testing on large projects. Learn how to design tests specifically for automation, including how to incorporate keyword testing and other techniques.
A test strategy is the set of ideas that guides your test design. It's what explains why you test this instead of that, and why you test this way instead of that way. Strategic thinking matters because testers must make quick decisions about what needs testing right now and what can be left alone. You must be able to work through major threads without being overwhelmed by tiny details. James Bach describes how test strategy is organized around risk but is not defined before testing begins. Rather, it evolves alongside testing as we learn more about the product.
Data warehouses have become a popular mechanism for collecting, organizing, and making information readily available for strategic decision making. The ability to review historical trends and monitor near real-time operational data has become a key competitive advantage for many organizations. Yet the methods for assuring the quality of these valuable assets are quite different from those of transactional systems. Ensuring that the appropriate testing is performed is a major challenge for many enterprises.
The key to successful testing is effective and timely planning. Rick Craig introduces proven test planning methods and techniques, including the Master Test Plan and level-specific test plans for acceptance, system, integration, and unit testing. Rick explains how to customize an IEEE-829-style test plan and test summary report to fit your organization’s needs. Learn how to manage test activities, estimate test efforts, and achieve buy-in. Discover a practical risk analysis technique to prioritize your testing and become more effective with limited resources.
Requirements are essential for the success of projects―or are they? As testers, we often demand concrete requirements, specified and documented in minute detail. However, does the business really know what they want early in the project? Can they actually produce such a document? Is it acceptable to test with limited or vague requirements? Lloyd Roden challenges your most basic beliefs, explaining how detailed requirements can damage and hinder the progress of testing.
Test reporting is something few testers take time to practice. Nevertheless, it's a fundamental skill—vital for your professional credibility and your own self-management. Many people think management judges testing by bugs found or test cases executed. Actually, testing is judged by the story it tells. If your story sounds good, you win. A test report is the story of your testing. It begins as the story we tell ourselves, each moment we are testing, about what we are doing and why. We use the test story within our own minds, to guide our work.

You name the testing topic, and Alan Page has an opinion on it, hands-on practical experience with it—or both. Spend the afternoon with Alan as he discusses a variety of topics, trends, and tales of software engineering and software testing. In an interactive format loosely based on discovering new testing ideas—and bringing new life to some of the old ideas—Alan shares experiences and stories from his twenty-year career as a software tester.
As test managers and test professionals we can have an enormous emotional impact on others. We're constantly dealing with fragile egos, highly charged situations, and pressured people playing a high-stakes game under conditions of massive uncertainty. We're often the bearers of bad news and are sometimes perceived as critics, activating people's primal fear of being judged. Emotional intelligence (EI), the concept popularized by Harvard psychologist and science writer Daniel Goleman, has much to offer test managers and testers.
All testers know that we can identify many more test cases than we will ever have time to design and execute. The key problem in testing is choosing a small, “smart” subset from the almost infinite number of possibilities available. Join Lee Copeland to discover how to design test cases using formal black-box techniques, including equivalence class and boundary value testing, decision tables, state-transition diagrams, and all-pairs testing. Explore white-box techniques with their associated coverage metrics.
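The black-box techniques named above can be made concrete in a few lines. The sketch below pairs equivalence classes (below-range, in-range, above-range) with boundary-value picks around each edge; the 18–65 age rule is a made-up example for illustration, not from the session:

```python
def boundary_values(lo, hi):
    """Classic boundary-value picks for a valid integer range [lo, hi]:
    just below, at, and just above each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def accepts_age(age):
    # Hypothetical system under test: valid ages are 18..65 inclusive.
    return 18 <= age <= 65

# The range splits the input space into three equivalence classes:
# below-range, in-range, and above-range. Boundary values probe the
# edges of those classes, where off-by-one defects tend to live.
cases = boundary_values(18, 65)
results = {age: accepts_age(age) for age in cases}
```

Six tests cover the edges of all three classes, which is the "small, smart subset" idea in miniature: testing 30, 40, and 50 as well would add effort but almost no new information.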
Critical thinking is the kind of thinking that specifically looks for problems and mistakes. Regular people don't do a lot of it. However, if you want to be a great tester, you need to be a great critical thinker. Critically thinking testers save projects from dangerous assumptions and ultimately from disasters. The good news is that critical thinking is not just innate intelligence or a talent—it's a learnable and improvable skill you can master.
In today's fast-paced IT world we are often told to deliver higher quality systems to our customers under challenging time schedules, with fewer resources, and reduced budgets. As test managers and team leaders, we must become more effective and efficient with the resources we are given. We should begin questioning whether all those testing processes really must be executed and whether all that documentation should be produced, or whether some, if not all, can be streamlined. Are test plans really important? Are detailed scripts really useful? How can we create highly productive teams?
Innovation is a word tossed around frequently in organizations today. The standard clichés are Do more with less and Be creative. Companies want to be innovative but often struggle with how to define, implement, prioritize, and track their innovation efforts. Using the Innovation to Types model, Jennifer Bonine will help you transform your thinking regarding innovation and understand if your team and company goals match their innovation efforts. Learn how to classify your activities as "core" (to the business) or "context" (essential, but non-revenue generating).
Cloud computing has changed the environment of testing. Its use is increasing for hosting business applications (SaaS) and testing (TaaS). Martin Pol and Jeroen Mengerink focus on SaaS, describing the relevant infrastructure and platform services (IaaS and PaaS). How do we test the performance of the cloud itself? How do we make sure that the continuity of services is guaranteed? How do we cope with elasticity and the philosophy of bring-your-own-device (BYOD)? Martin and Jeroen discuss the risks that arise when implementing cloud computing―some traditional, but others completely new.
Automating system level test execution can result in many problems. It is surprising to find that many people encounter the same problems, yet they are not aware of common solutions that have worked well for others. These problem/solution pairs are called “patterns.” Seretta Gamba recognized the commonality of these test automation issues and their solutions and, together with Dorothy Graham, has organized them into Test Automation Patterns. Although unit test patterns are well known, Seretta and Dorothy’s patterns address more general issues.
In both agile and traditional projects, keyword-driven testing—when done correctly—has proven to be a powerful way to attain a high level of automation. Many testing organizations use keyword-driven testing but aren't realizing the full benefits of scalability and maintainability that are essential to keep up with the demands of testing today's software. Hans Buwalda describes the keyword approach, and how you can use it to meet the very aggressive goal that he calls the "5 percent challenge"―automate 95 percent of your tests with no more than 5 percent of your total testing effort.
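At its core, the keyword approach separates test cases (data: rows of keywords and arguments, often maintained in spreadsheets) from the automation code that interprets them. A minimal sketch of such an interpreter, using an invented banking domain (the keyword names, `run_test`, and the account model are all illustrative assumptions):

```python
# Keyword library: each keyword maps to a small action function.
def open_account(state, name):
    state["accounts"][name] = 0

def deposit(state, name, amount):
    state["accounts"][name] += int(amount)

def check_balance(state, name, expected):
    assert state["accounts"][name] == int(expected), name

KEYWORDS = {
    "open account": open_account,
    "deposit": deposit,
    "check balance": check_balance,
}

def run_test(rows):
    """Interpret a keyword-driven test: each row is (keyword, *arguments)."""
    state = {"accounts": {}}
    for keyword, *args in rows:
        KEYWORDS[keyword](state, *args)
    return state

# A test case expressed purely as data, as a tester might write it
# in a spreadsheet without touching the automation code:
final = run_test([
    ("open account", "alice"),
    ("deposit", "alice", "100"),
    ("deposit", "alice", "50"),
    ("check balance", "alice", "150"),
])
```

The scalability claim rests on this split: testers add rows without programming, and when the application changes, only the small keyword library needs maintenance, not thousands of test cases.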
It is not enough to verify that software conforms to requirements by passing established acceptance tests. Successful software products engage, entertain, and support the users' experience. Goals vary from project to project, but no matter how robust and reliable your software is, if your users do not embrace it, business can slip from your hands. Rob Sabourin shares how to elicit effective usability requirements with techniques such as story boarding and task analysis.
We have opportunities to coach people all the time. Much of what we see as coaching is actually undercover training. Real coaching is richer—offering support while explaining options. In this interactive session, Johanna Rothman invites you to explore how to coach, regardless of your position in the organization. Teaching is just one option for coaching. You have many other options, depending on your coaching stance. You may select a counselor’s stance if you are managing up or a partner’s stance if you are a peer.
So you’ve “gone agile” and have been relatively successful for a year or so. But how do you know how well you’re really doing? And how do you continuously improve your practices? When things get rocky, how do you handle the challenges without reverting to old habits? You realize that the path to high-performance agile testing isn’t easy or quick. It also helps to have a guide. So consider this workshop your guide to ongoing, improved, and sustained high performance. Join Bob Galen and Mary Thorn as they share lessons from their most successful agile testing transitions.
As the need for testing mobile applications increases, so does the need to understand and apply test practices that cover more than just functional correctness. Randy Rice leads you through techniques for designing the right tests for your mobile applications, whether they are on the device or on a website. Learn how to know which items of functionality are important to test based on relative risk. Randy presents his visual method of how to rank important attributes including usability, compatibility, accessibility, and security, and then how to design tests for them.
In response to increasing market demand for high performance applications, many organizations implement performance testing projects, often at great expense. Sadly, these solutions alone are often insufficient to keep pace with emerging expectations and competitive pressures. With specific examples from recent client implementations, Scott Barber shares the fundamentals of implementing T4APM™, a simple and universal approach that is valuable independently or as an extension of existing performance testing programs.
Today’s software applications are often security critical, making security testing an essential part of a software quality program. Unfortunately, most testers have not been taught how to effectively test the security of the software applications they validate. Join Jeff Payne as he shares what you need to know to integrate effective security testing into your everyday software testing activities. Learn how software vulnerabilities are introduced into code and exploited by hackers. Discover how to define and validate security requirements.
Most of the time, as testers, our primary responsibility is to find problems. But, have we paused to consider what a “problem” is? In this interactive, hands-on workshop, Michael Bolton leads delegates in examining and mapping out ideas about problems. What constitutes a problem? How do we recognize one? What are the factors or dimensions of a problem?
Regardless of your role in the software lifecycle, challenges and roadblocks will stand in your way. How can you deal with difficult people who are obstacles to your ability to deliver? How can you influence someone to act on your priorities even when you don’t have organizational authority? How can you find time to network when you’re overwhelmed with day-to-day work? Andy Kaufman shares “The Dirty Little Secret of Business.” You won’t learn this secret in school, yet it is critical to your success. The secret is simple—it’s all about relationships.
In many cases, we choose solutions to problems without sufficient analysis of the underlying causes. This results in implementing a cover-up of the symptoms rather than a solution to the real underlying problem. When we do this, the problem is likely to resurface in one disguise or another, and we may mishandle it again—just as we did initially. Getting to the root of the problem is the better way to solve the current problem, and save time and money in the future.
Anyone who has ever attempted to estimate software testing effort realizes just how difficult the task can be. The number of factors that can affect the estimate is virtually unlimited. The key to good estimates is to understand the primary variables, compare them to known standards, and normalize the estimates based on their differences. This is easy to say but difficult to accomplish because estimates are frequently required even when very little is known about the project and what is known is constantly changing.
How often have you been in a situation where you could see the solution and yet did not have the authority to make a change? You tried persuasion; you tried selling your ideas; you might have even tried friendly manipulation to get your way. And nothing worked. Here’s a new plan. We can learn to develop and use personal power and influence to effect positive changes in our companies.
Communication is at the heart of our profession. No matter how advanced our testing capabilities are, if we can’t convey our concerns in ways that connect with key members of the project team, our contribution is likely to be ignored. Because we act solely in an advisory capacity, rather than being in command, our power to exert influence is almost entirely based on our communication skills. With people deluged with emails and suffering information overload, it is more important than ever that we craft succinct and effective messages, using a range of communication modalities.
Kick off the conference with a welcome reception! Mingle with experts and colleagues while enjoying complimentary food and beverages.
Are you a new STAR speaker or aspiring to be one in the future? Join us at this workshop on making effective conference presentations. Learn the secrets of developing content, identifying the Big Message, preparing slides with just the right words and images, presenting your message, handling questions from the audience, and being ready when things go wrong. Lee Copeland, a professional speaker since birth, shares ideas that will help you be a better speaker, no matter what the occasion.
It’s one thing to be exposed to new techniques from conferences and training courses, but it’s quite another thing to apply them in real life. A major reason is that people tend to focus on learning the technique without first grasping the underlying principles. Basic testing principles, such as the pesticide paradox of software defects and defect clustering, have been known for many years. Other principles, such as “Test automation is not automatic” and “Not every software failure is a defect,” are learned by experience.
We live in interesting times. Knowledge is available at our fingertips, no matter where we are. Social networks enable communication around the world. However, along with these marvels of the information age come weapons of mass distraction. With so many things competing for our attention—and so little time to focus on real work—it’s a wonder we get anything done at all. What does this mean for testers? A common belief is that only focused concentration leads to productive work—and conversely, that distraction causes procrastination and stifles creativity.
Network with colleagues and speakers in the Expo.
• Learn why testing apps on mobile devices in the public cloud can threaten enterprise data
• Find out how to mitigate risk using a secure, private mobile device cloud
• Learn how you can deliver quality mobile apps and improve mobile device access among on-shore, off-shore, and near-shore testing resources while maintaining control of your mobile testing assets
• Why considering different network conditions and devices is crucial
• Best practices for load testing native mobile apps
• Native mobile app load test demo with WAN emulation and device simulation
• Cloud testing for maximum performance and scale
• Remarkable user experience for ease of use from any platform
• Enhanced mobile testing with realistic performance and functional mobile testing
We’ve all heard test management myths such as: “Utilize everyone, all the time”, “Don’t let people come to you without solutions to problems,” “Training time is useless,” and my all-time favorite “Work smarter.” Are you supposed to believe them? Much of what you may have heard about management is myth—based not on evidence, but on something from the Industrial Revolution, or something that someone else read in a book that does not fit your context. And it may be wrong—dead wrong. As with many myths, they do contain a tiny nugget of truth.
And now for something completely different. Monty Python's Flying Circus revolutionized comedy and brought zany British humor to a worldwide audience. However, buried deep in the hilarity and camouflaged in its twisted wit lie many important testing lessons—tips and techniques you can apply to real world problems to deal with turbulent projects, changing requirements, and stubborn project stakeholders.
When working on test automation, it seems that even though you have done everything right—good architecture, efficient framework, and good tools—you still don’t make progress. The product Seretta Gamba’s team was to automate had become so successful that anyone with even a little domain knowledge was sent to the field, while those left on the automation team didn’t really know the full application.
Far too often, agile transformations focus just on development teams, agile frameworks, or technical practices as adoption strategies unfold. Often the testing activity and the testing teams are left behind in agile strategy development or worse yet, they are only along for the ride. That’s simply not an effective transformation strategy. Join experienced agile coach Bob Galen as he shares the Three Pillars Framework for establishing a balanced strategic plan to effectively implement agile quality and testing.
Many testers feel that their organizations do not treat them with the same level of professionalism and respect that their development peers receive. Testers attribute this to the fact that testing is a relatively “new” profession, that few universities grant a formal degree in software testing, and all sorts of other external factors—things beyond their control. But, to be perceived as professionals, we need to start by becoming more professional.
Current Test Process Improvement (TPI) models have proven to be a mismatch when used to assess testing in an agile context, since it is significantly more difficult to describe how to become more flexible than it is to describe how to become more structured. So what’s missing in the current models and how can we help organizations improve their testing in an agile environment? Jeroen Mengerink introduces a systematic model to improve the testing in agile software development.
• Come see how to harness the power of the Cloud to test your web applications
• Test across all major browser types without browser setup or hardware configuration
• Visually ensure your application works across desktop and mobile browsers…in minutes
• The complete “how-to” guide for manual testing on mobile devices
• Tools and processes to test your native, hybrid, or web-based mobile applications
• Ways to organize and document your team's test cases—no more spreadsheets
• Proprietary vs. open source testing tools
• Addressing mobile test requirements beyond functional testing
• Creating sustainable, dedicated, secure mobile ecosystems
• Attendees will get an exclusive peek into massive testing activities going on inside Oracle
• Customers and partners can take advantage of our productized lessons learned
• Reduce the cost and effort to test changes by more than 80% by using these products
Large technology transformations and undertakings are challenging because they cut across multiple systems and domains of technology and solutions. They involve multiple organizations—from corporate to operations—making communication and collaboration challenging. This complication is amplified when the IT organization in these large enterprises engages multiple vendors. Krishna Murthy shares his experience on how to tackle such situations with customized amalgamations of the best traditional and agile program management practices―Golden Rules of Engagement.
The demand to accelerate software delivery and for teams to continuously test and release high quality software sooner has never been greater. However, whether your release strategy is based on schedule or quality, the entire delivery process hits the wall when agility stops at testing. When software/services that are part of the delivered system or required environments are unavailable for testing, the entire team suffers. Al Wagner explains how to remove these testing interruptions, decrease project risk, and release higher quality software sooner.
As online activities create more revenue than ever, organizations are turning to Selenium both to test their web applications and to reduce costs. Since Selenium is open source, there is no licensing fee. However, as with purchased tools, the same automation challenges remain, and users do not have formal support and maintenance. Proper strategic planning and the use of advanced automation concepts are a must to ensure successful Selenium automation efforts.
Many projects implicitly use some kind of risk-based approach for prioritizing testing activities. However, critical testing decisions should be based on a product risk assessment process using key business drivers as its foundation. For agile projects, this assessment should be both thorough and lightweight. PRISMA (PRoduct RISk MAnagement) is a highly practical method for performing systematic product risk assessments. Learn how to employ PRISMA techniques in agile projects using risk-poker.
No one wishes to see himself as different or treat other people differently because of his uniqueness. Unfortunately, we are frequently judged and our skills presumed based on our ethnicity, beliefs, politics, appearance, lifestyle, gender, or sexual orientation. Our professional success and our projects’ success can be derailed because of lack of understanding, stereotyping, or fear. Our professional environment includes us all―brown, black, white, tall, short, male, female, straight, gay, extroverts, and introverts.
If users can’t figure out how to use your mobile application and what’s in it for them, they’re gone. Usability and UX are key factors in keeping users satisfied, so understanding, measuring, testing, and improving these factors are critical to the success of today’s mobile applications. However, sometimes these concepts can be confusing—not only differentiating them but also defining and understanding them. Philip Lew explores the meanings of usability and UX, discusses how they are related, and then examines their importance for today’s mobile applications.
• Compliance with ISO, FDA, CMMI, FMEA, SPICE…
• Support for multiple environments: agile, waterfall, regulatory, and hybrids
• Guaranteed traceability
This session will focus on achieving continuous unattended testing using cloud-based real devices. Key takeaways:
• Learn how to save time when you include testing in every sprint
• Learn how to manage device state during testing
• Learn how to move beyond automating test cases to automate the entire test scenario
We QA professionals know that the ideal is to build quality into a product rather than to test defects out of it. We know about the overhead associated with defects and how costs grow over time the later in the development process we find defects. If prevention is better than cure, shouldn’t we invest more time and effort in preventing defects? Kirk Lee shares the things we testers can do before coding begins to keep defects from being created in the first place. Kirk explains how to involve QA at the very beginning of the development process where prevention is most valuable.
The stakes in the mobile app marketplace are very high, with thousands of apps vying for the limited space on users’ mobile devices. Organizations must ensure that their apps work as intended from day one; to do that, they must implement a successful mobile testing strategy that leverages in-the-wild testing. Matt Johnston describes how to create and implement a tailored in-the-wild testing strategy to boost app success and improve user experience.
With the behavior-driven development (BDD) methodology, development teams write high level, plain natural language tests to describe and exercise a system. Unfortunately, it is difficult to develop BDD tests that encompass all interfaces and write tests that can be reused in multiple scenarios. Specifying BDD tests to run as part of different test scenarios without duplicating work frequently requires substantial effort and rework. But Cucumber provides a robust framework for writing BDD tests.
Are you embarking on a large-scale, globally distributed, multi-team scrum project? Have you already identified the potential testing challenges that lie ahead? Or have you belatedly encountered them and are now working on them in real-time?
Teams are a fundamental part of the way we all work. Understanding the ins and outs of team decision making makes us better employees, better co-workers, and even better people. As developers and testers, we continuously make decisions. Most decisions are based on how the decision maker perceives the information at hand. That perception is driven by many factors including cognitive biases—the mental shortcuts we use that lead us to simplify, make quick decisions, and ultimately mess up when we’re trying to attack new problems.
As testers and test managers, we are frequently asked to report on the progress and results of our testing. The question “How is testing going?” may seem simple enough, but our answer is ultimately based on our ability to extract useful metrics from our work and present them in a meaningful way. This is particularly important in agile environments, where clear, concise, and up-to-date metrics are potentially needed multiple times per day.
Throughout the years, Lightning Talks have been a popular part of the STAR conferences. If you’re not familiar with the concept, a Lightning Talks session consists of a series of five-minute talks by different speakers within one presentation period. Lightning Talks are the opportunity for speakers to deliver their single biggest bang-for-the-buck idea in a rapid-fire presentation. And now, lightning has struck the STAR keynotes.
Businesses today are looking at the quality assurance function as a primary enabler of quality-led business differentiation. Leading organizations are transforming their QA functions to create increased value for business and customers. QA transformation involves identifying improvement opportunities and introducing new capabilities to enhance overall SDLC maturity and achieve top-quartile performance. True transformation is achieved by challenging the status quo, reinventing solutions through innovative approaches, and making a significant impact through business outcomes.
Software runs the business. The modern testing organization aspires to be a change agent and an inspiration for quality throughout the entire lifecycle. To be a change agent, the testing organization must have the right people and skill sets, the right processes in place to ensure proper governance, and the right technology to aid in the delivery of software in support of the business line. Traditionally, testing organizations have focused on the people and process aspect of solving quality issues.
• How is testing being transformed in a connected, digital world?
• What are the tools and techniques that will help bring agility?
• How can we move towards a user-centric test approach?
• In this session, Virtusa will showcase its abstracted-language approach to automation
• This tool-agnostic approach highlights how automated test scripts can be made maintainable and portable
• This approach, along with other techniques, has been shown to reduce quality costs by as much as 45% on average
Join us in this session, hosted by Tata Consultancy Services (TCS), to learn about the vital role of testing in the DevOps era and in ensuring a superior customer experience. Topics include:
• The emerging trends in DevOps
• The necessity of a mindset change while testing in a DevOps landscape
• Assurance practices in a DevOps era
On large enterprise projects, the user acceptance test (UAT) is often envisioned to be a grand event where the users accept the software, money is paid, and the congratulations and champagne flow freely. UAT is expected to go well, even though some minor defects may be found. In reality, acceptance testing can be a very political and stressful activity that unfolds very differently than planned.
Many books, articles, classes, and conference presentations tout equivalence class partitioning and boundary value analysis as core testing techniques. Yet many discussions of these techniques are shallow and oversimplified. Testers learn to identify classes based on little more than hopes, rumors, and unwarranted assumptions, while the "analysis" consists of little more than adding or subtracting one to a given number. Do you want to limit yourself to checking the product's behavior at boundaries?
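The "plus or minus one" arithmetic the abstract critiques can be made concrete with a small sketch. The validity rule below (an age field accepting 18 through 65) is a hypothetical example chosen for illustration, not something taken from the session:

```python
# Boundary value analysis for a hypothetical input rule: an age
# field that accepts values 18 through 65 inclusive. The rule is
# an illustrative assumption, not taken from the session.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 65

# Classic boundary candidates: each boundary itself, plus the
# values just inside and just outside it.
candidates = [17, 18, 19, 64, 65, 66]
results = {age: is_valid_age(age) for age in candidates}
```

A deeper equivalence analysis would also ask what classes exist beyond the numeric range, such as non-integers, missing values, and absurd magnitudes, rather than stopping at the boundaries themselves.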
Many organizations are introducing test automation only to discover it is more difficult than they anticipated. The fact is that good test automation requires good coding practices. Good test automation requires good design. Anything else leads to spaghetti code that is hard to maintain or update. If you’re new to coding or new to automation, it is difficult to know where to begin. Join Cheezy as he describes and demonstrates lessons he has learned while helping numerous organizations adopt test automation.
Most app teams aim for 4 stars. Why not 5? Because delivering and maintaining a high-quality app becomes more challenging every day. The requirements of agile and continuous integration put more pressure than ever on testers and quality-focused developers. Add to that the raw complexity of device and platform fragmentation, new sensors, app store processes, star ratings and reviews, increased app competition, mobile automation frameworks that only half work, and users who expect the app to not just work flawlessly but also be intuitive and even beautiful and fun.
With the increasing market demand for “always on” high performance applications, many organizations find that their traditional load testing programs have failed to keep pace with expectations and competitive pressures. Agile development practices and DevOps concepts of continuous delivery cause old load testing approaches to become unacceptable bottlenecks in the delivery process.
Load testing is often one of the most difficult testing efforts to set up—in both deployment time and the cost of the additional hardware needed. Using cloud-based software, you can transform this most difficult task into one of the easiest. Charles Sterling explains how load testing fits into the relatively new practice of DevOps. Then, by reusing the tests created in the load testing effort to monitor applications, the test team can help solve the challenges of measuring, monitoring, and diagnosing applications—not just in development and test but also in production.
• How to bring interactivity to web test automation
• How to reduce testing time using the technique of parallel script execution on multiple nodes
• How to solve “Captcha Automation” with run-time synchronous communication between script and web application
• How we capture customers’ requirements and track them through development and QA
• How to define test requirements through QMetry using agile practices
• How to ensure coverage status of requirements with a full feedback loop to the customer
• Develop, execute, and scale to a very large volume of high-quality tests
• Systematically manage and maintain test cases, even when the application under test changes
• Validate business-level test objectives for maximum application return
Know any testers who have bugs that were opened more than a year ago and are still sitting in their defect queue? More than two years ago? Three? The fact is that many software development efforts are focused on delivering new features and functionality, leaving workarounds in place for bugs released in prior versions of applications. Often these defects seem relatively minor—we all have some workarounds for customers—but they are still bugs and ultimately should be dealt with.
In today’s cost conscious marketplace, solution providers gain advantage over competitors when they deliver measurable benefits to customers and partners. Systems of even small scope often involve distributed hardware/software elements with varying execution parameters. Testing organizations often deal with a complex set of testing scenarios, increased risk for regression defects, and competing demands on limited system resources for a continuous comprehensive test program. Learn how designing a testable system architecture addresses these challenges.
In agile projects, when the cycle from ideas to production shortens from months to hours, each software development activity—including testing—is impacted. Reaching this level of agility in testing requires massive automation. But test execution is only one side of the coin. How do we design and maintain tests at the required speed and scale? Testing should start very early in the development process and be used as acceptance criteria by the project stakeholders.
With the expansion of mobile platforms, software development and testing services have expanded also. A wide variety of applications are entering the consumer world as native, mobile web, and hybrid applications. Adding to this complexity are multiple operating systems, browsers, networks, and BYOD (bring your own device) policies. Successful deployment and adoption of these applications in the consumer world requires a robust, flexible, and scalable testing strategy.
Application performance is the first aspect of quality that every customer experiences. It can mean the difference between winning and losing a customer—between a 5-star app and a 2. No matter how sexy your application is, if it doesn’t load quickly, customers will turn to your competitor. Quality is core to agile, but agile doesn’t mention performance testing specifically. The challenge is that generally user stories don’t include the phrase “…in 3 seconds or less” and developers just focus on developing.
As the world of software development changes, software testing organizations are challenged to be more innovative to match the speed at which software releases are being deployed. The new software industry buzzword is DevOps, so you might wonder if your software testing organization is still important and how it fits into this new industry trend. Erik Stensland shares his research into what the DevOps model is, the three ways of implementing DevOps, testing solutions for DevOps, and the benefits of DevOps.
• Learn what service virtualization entails.
• Gain an understanding of how to eliminate constraints associated with static staged test environments.
• Learn the efficiencies to be gained by integrating service virtualization into your existing process.
• How past attempts to change software development and testing approaches, in order to meet the increasing customer demand for better quality, aren’t really working.
• What test teams really need to become leaner, test earlier, and test continuously with agile test environments using service virtualization.
• How to integrate continuous testing with deployment automation, accelerating the software delivery lifecycle and achieving continuous feedback.
• How to speed up the creation and maintenance of automated tests in UFT by eliminating special scripting or coding
• How to leverage data-driven technology to reduce test cases and expand coverage of your business scenarios
• How to expand the applications and technologies that UFT can test against through the use of framework extensibility
Studies show that at least half of all software defects are rooted in poor, ambiguous, or incomplete requirements. For decades, testers have complained about the lack of solid, concrete requirements, claiming that this makes our task more difficult and in some instances impossible. Lloyd Roden challenges these beliefs and explains why having detailed requirements can be damaging to both testing and the business.
Manual functional testing is a slow, tedious, and error-prone process. As we continue to incrementally build software, the corresponding regression test suite continues to grow. Rarely is time allotted to consolidate these test cases and keep them in sync with the product under development. If these test cases are used as the basis for automation, the resulting suite is composed of very granular tests that are often quite brittle in nature.
Today, organizations are rapidly deploying mobile versions of their customer-facing and internal applications. With the prevalence of more agile-based approaches and the challenge of an ever-increasing diversity of devices and OS versions, testers are being asked to accomplish more testing in less time. Rachel Obstler shares how leading enterprises are improving the efficiency of their mobile testing using automation, and how they identify the right processes and tools for the job.
Today’s test organizations often have sizable investments in test automation. Unfortunately, running and maintaining these test suites represents another sizable investment. All too often this hard work is abandoned and teams revert to a more costly, but familiar, manual approach. Jared Richardson says a more practical solution is to integrate test automation suites with continuous integration (CI). A CI system monitors your source code and compiles the system after every change. Once the build is complete, test suites are automatically run.
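The build-then-test loop the abstract describes can be sketched in a few lines. The `build` and `run_tests` callables below are placeholder stand-ins for whatever your CI server actually invokes, not the API of any real CI product:

```python
# Minimal sketch of a CI cycle: notice a source change, rebuild,
# then run the test suite automatically. build() and run_tests()
# are hypothetical callables supplied by the caller.
import hashlib

def source_fingerprint(files):
    """Hash file names and contents so any change is detected."""
    digest = hashlib.sha256()
    for name in sorted(files):
        digest.update(name.encode())
        digest.update(files[name].encode())
    return digest.hexdigest()

def ci_cycle(files, last_fingerprint, build, run_tests):
    current = source_fingerprint(files)
    if current == last_fingerprint:
        return current, None             # no change: skip build and tests
    artifact = build(files)              # compile after every change
    return current, run_tests(artifact)  # then run the suite on the build
```

Real CI servers replace the fingerprint check with commit hooks or repository polling, but the essential contract is the same: every change triggers a build, and every successful build triggers the tests.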
Analytics are an increasingly important capability of any large web site or application. When a user selects an option or clicks a button, dozens—if not hundreds—of behavior-defining “beacons” fire off into a black box of “big data” to be correlated with the usage patterns of thousands of other users. In the end, all these little data points form a constellation of information your organization will use to determine its course. But what if it doesn’t work?
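One way to make such beacons testable is to route them through a seam the test can observe. The recorder, event name, and field names below are illustrative assumptions, not a specific analytics vendor's API:

```python
# Sketch: route analytics beacons through a recorder so tests can
# assert that the expected events fired with the expected fields.
# Event and field names here are hypothetical.
class BeaconRecorder:
    def __init__(self):
        self.sent = []

    def fire(self, event, **fields):
        self.sent.append({"event": event, **fields})

def on_buy_button_click(beacons):
    # Real application code would also perform the purchase flow;
    # this sketch only emits the tracking beacon.
    beacons.fire("button_click", control="buy", page="checkout")

recorder = BeaconRecorder()
on_buy_button_click(recorder)
```

In production the same seam would point at the real beacon endpoint, while tests swap in the recorder and assert on `recorder.sent`.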
Planning, designing, implementing, and tracking results for QA and test automation can be challenging. It is vital to ensure that any tools selected will work well with other application lifecycle tools, driving the adoption of automation across multiple project teams or departments, and communicating the quantitative and qualitative benefits to key stakeholders. Mike Sowers discusses his experiences creating an automation architecture, establishing tool deployment plans, and selecting and reporting tool metrics at a large financial institution.
In this session, Tata Consultancy Services (TCS) will discuss the emerging trends in the digital age, their impact on testing and assurance, and how you can gain an advantage with a robust Quality Assurance strategy.
Are you frustrated by the false expectation that we can test quality into a product? By the time an application is delivered to testing, our ability to introduce quality principles is generally limited to defect detection. So how do you begin to shift your team’s perceptions into a true quality assurance organization? Susan Schanta shares her approach to Shift Quality Left by performing ambiguity reviews against requirements documents to reduce requirement defects at the beginning of the project.
Have you had customers report issues that cannot be reproduced in the test environment? Have you had defects leak into production because your test environment is not equivalent to production? In the past, the eBay test environment didn’t mirror production data and had security, feature, and service fidelity issues. Kamini Dandapani shares how eBay solved these problems. They now dedicate a portion of their production environment to enable eBay engineers to do more realistic testing.
DevOps is gaining popularity as a way to quickly and successfully deploy new software. With all the emphasis on deployment, software quality can sometimes be overlooked. In order to understand how DevOps and software testing mesh, Glenn Buckholz demonstrates a fully implemented continuous integration/continuous delivery (CI/CD) stack. After describing the internals of how CI/CD works, Glenn identifies the touch points in the stack that are important for testing organizations.
Many companies develop strong software development practices that include ongoing testing throughout the development lifecycle but fail to account for the testing of security-related use cases. This leads to security controls being tacked on to an application just before it goes to production. With security controls implemented in this manner, more security vulnerabilities will be found with less time to correct them.
Sports video games are generally on a short cycle time—tied to the start of a particular sport’s season. Like all video games, the pressure is always on to add more features to sell more games, and the list of “cool” features is endless. Getting buy-in to implement automated testing in this environment can be a challenge. And once you get that buy-in, your next challenge is to ensure it provides significant value to the game team. Fazeel Gareeboo shares the lessons they learned at EA Sports—lessons you can take back to your project.
Mobile apps bring a new set of challenges to testing—fast-paced development cycles with multiple releases per week, multiple app technologies and development platforms to support, dozens of devices and form factors, and additional pressure from enterprise and consumers who are less than patient with low quality apps. And with these new challenges comes a new set of mistakes testers can make! Fred Beringer works with dozens of mobile test teams to help them avoid common traps when building test automation for mobile apps.
Technologies, testing processes, and the role of the tester have evolved significantly over the past several years. As testing professionals, it is critical that we evaluate and evolve ourselves to continue to add tangible value to our organizations. In your work, are you focused on the trivial or on real "game changers"? Jennifer Bonine describes critical elements that, like a skilled painter, help you artfully blend people, process, and technology into a masterpiece, woven together to create a synergistic relationship that adds value to your organization.
Join us at The Workshop on Regulated Software Testing (WREST)—a free, full-day bonus session held on Friday after the conference concludes. A unique peer workshop, WREST is dedicated to improving the practice of testing regulated systems. We define regulated software as any system that is subject to an internal or external review.