Les Hatton, Kingston University
As professionals, we have always known that exhaustive testing is rarely feasible or affordable. Thus, we must find more efficient and effective approaches to testing. Discovering these approaches depends on the availability of data about defects—and this is where testers run into real problems. Few testers create experiments to measure their own testing effectiveness. Even fewer examine their results for statistical significance. Thus starved of sound data, we are forced to rely on intuition. However, strong evidence indicates that today’s software failure patterns are very similar to past patterns that have been studied. Exploiting past work is highly beneficial to the practice and economics of today’s testing, allowing us to concentrate our tests where they are likely to be most fruitful. Join Les Hatton as he presents failure patterns from commercial case studies and recent experiments with sophisticated data mining techniques. Patterns extracted from the Common Vulnerabilities and Exposures (CVE) database and other similar sources help us become more effective testers.
Les Hatton earned a Ph.D. in computational fluid dynamics and currently holds the Chair in Forensic Software Engineering at Kingston University in the UK. A popular speaker at EuroSTAR and the STAR conferences, Les is the author of Software Faults and Failure: Avoiding the Avoidable and Living with the Rest and Safer C: Developing Software for High-Integrity and Safety-Critical Systems. A computer scientist by day, Les is a rock and blues guitarist by night, playing with the Juniper Hills Blues Band (available for weddings, pubs, company functions, etc.). He is also an athletics coach and still competes a bit. There are no words to describe how bad Les is at plumbing.
Randall Rice, Rice Consulting
Risk-based testing has become an important part of the tester’s strategy in balancing the scope of testing against the time available. Although risk-based methods have always been helpful in prioritizing testing, it is vital to remember that we can be fooled in our risk analysis. Risk, by its very nature, contains a degree of uncertainty. We estimate the probability of a risk, but what is the probability that we are accurate in our estimate? Randall Rice describes twelve ways that risk assessment and risk-based methods may fail. In addition, he draws parallels to risk-based activities in other industries and discusses the important role of contingencies as a safety net when the unexpected occurs. Gain a greater awareness of safer ways to apply risk-based approaches so that you will be less likely to be misled by risk.
Randall Rice is a leading author, speaker, and consultant in the field of software testing and software quality. A Certified Software Quality Analyst, Certified Software Tester, and Certified Software Test Manager, Randall has worked with organizations worldwide to improve the quality of their information systems and to optimize their testing processes. Randall is co-author of Surviving the Top Ten Challenges of Software Testing.
Steven Splaine, Nielsen Media Research
Test automation teams are often founded with high expectations from senior management—the proverbial "silver bullet" remedy for a growing testing backlog, perceived schedule problems, or low-quality applications. Unfortunately, many test automation teams fail to meet these lofty expectations and subsequently die a slow organizational death—their regression test suites are not adequately maintained and corrode, software licenses for tools are not renewed, and ultimately test engineers move on to greener pastures. In many cases, the demise of the test automation team can be traced back to the unrealistic expectations originally used to justify the business case for test automation. In other words, the team is doomed to failure from the beginning. Steven Splaine describes a creative approach to organizing a test automation effort, an approach that overcomes many of the traditional problems automation teams face in establishing themselves. Steven’s solution is not theory—it is a concrete, "proven in battle" approach introduced and adopted in his organization.
Steven Splaine is a chartered software engineer with more than twenty years of experience in developing software systems: Web/Internet, client/server, mainframe, and PCs. He is an experienced project manager, tester, developer, and presenter who has consulted with more than one hundred companies in North America and Europe. In addition, Steven is a regular speaker at software testing conferences, lead author of The Web Testing Handbook and Testing Web Security, and an advisor/consultant to several Web testing tool vendors and investors.
Lloyd Roden, Grove Consultants
As an experienced test manager, Lloyd Roden believes that test estimation is one of the most challenging and misunderstood aspects of test management. In estimation, we must deal with destabilizing dependencies such as poor quality code received by testers, unavailability of promised resources, and “missing” subject matter experts. Often test managers do not estimate test efforts realistically because they feel pressure—both external from other stakeholders and internal from their own desire to be “team” players—to stay on schedule. Lloyd presents seven powerful ways to improve your test estimation effort and really help the team succeed with honest, data-driven estimating methods. Some are quick and easy but prone to abuse; others are more detailed and complex and perhaps more accurate. Lloyd discusses FIA (Finger in the Air), Formula or Percentage, Historical, Parkinson’s Law vs. Pricing-to-Win estimates, Work Breakdown Structures, Estimation Models, and Assessment Estimation. Come discover how to make the painful experience of test estimation (almost) painless.
With more than twenty-five years in the software industry, Lloyd Roden has worked as a developer, managed an independent test group within a software house, and joined Grove Consultants in 1999. Lloyd has been a speaker at STAREAST, STARWEST, EuroSTAR, AsiaSTAR, Software Test Automation, Test Congress, and Unicom conferences as well as Special Interest Groups in Software Testing in several countries. He was Program Chair for both the tenth and eleventh EuroSTAR conferences.
Geoff Horne, iSQA
It’s the life challenge of a test manager—leading testing while keeping the work under control. If it’s not poor code, it’s configuration glitches. If it’s not defect management problems, it’s exploding change requests. When the projects are large, complex, and constrained, it can be almost impossible to keep ahead of the “gotchas” while ensuring testing progress. IT projects have long used the concept of a Project Management Office (PMO), providing administrative services to allow Project Managers to focus on their key responsibilities. In the same way, a Test Management Office (TMO) can help test managers focus on their key testing activities. Join Geoff Horne as he describes the functions encompassed by the TMO; how establishing a TMO can benefit your organization; the management structure and resources needed for success; and how to prevent the TMO from becoming a dumping ground for issues and people no one else wants to handle.
Based in New Zealand, Geoff Horne has more than twenty-eight years of experience in IT, including software development, sales and marketing, and IT and project management. In the IT industry he has founded and run two testing companies that have brought a full range of testing consultancy services to an international clientele. Recently, in the capacity of a program test manager, Geoff has focused on a few select clients running complex test projects. Geoff has written a variety of white papers on the subject of software testing and has been a regular speaker at the STAR testing conferences.
Mike Andrews, Foundstone
We're all familiar with network security—protecting the perimeter of your company with firewalls and intrusion detection systems. Similarly, we're doing something about application security—hardening against attack the software on which companies rely. However, what about the "soft" assets of a company? (And we’re not talking about the sofas and potted plants dotted around the office.) How prone to attack are the people who work for your company? Mike Andrews departs from the traditional talk of testing software to discuss testing human beings. Will people give up their passwords for a candy bar? How often do people actually check the site to which they are connecting? What tricks are in the arsenal of wily and unethical social engineers as they attempt to obtain information and con their way into the often unsecured inner sanctum of a company’s network and application software? You’ll be amazed, you’ll be surprised, and you’ll be shocked. You’ll be shaking your head at the stupidity of some people—and you may discover it could easily have happened to you. Technology isn't always to blame—people often are the weakest link.
Mike Andrews is a senior consultant at Foundstone, where he specializes in software security, leads Web application security assessments, and teaches Ultimate Web Hacking classes. He brings a wealth of commercial and educational experience from both sides of the Atlantic and is a widely published author and frequent speaker. His book How to Break Web Software (co-authored with James Whittaker, Addison-Wesley, 2006) is currently one of the most popular books on Web-based application security.