Software Testing Analysis & Review
STAREAST 2007 Keynote Presentations

Wednesday, May 16, 2007 8:45 a.m.

Failure Patterns: A Powerful Tool to Optimize Your Testing
Les Hatton, University of Kingston

 
As professionals, we have always known that exhaustive testing is rarely feasible or affordable. Thus, we must find more efficient and effective approaches to testing. Discovering these approaches depends on the availability of data about defects—and this is where testers run into real problems. Few testers create experiments to measure their own testing effectiveness. Even fewer examine their results for statistical significance. Starved of sound data, we are forced to rely on intuition. However, strong evidence indicates that today’s software failure patterns are very similar to past patterns that have been studied. Exploiting past work is highly beneficial to the practice and economics of today’s testing, allowing us to concentrate our tests where they are likely to be most fruitful. Join Les Hatton as he presents failure patterns from commercial case studies and recent experiments with sophisticated data mining techniques. Patterns extracted from the Common Vulnerabilities Database and other similar sources help us to be more effective testers.

Les Hatton
Les Hatton earned a Ph.D. in computational fluid dynamics and currently holds the Chair in Forensic Software Engineering at the University of Kingston in the UK. A popular speaker at EuroStar and the STAR conferences, Les is the author of Software Faults and Failure: Avoiding the Avoidable and Living with the Rest and Safer C: Developing Software for High-Integrity and Safety-Critical Systems. A computer scientist by day, Les is a rock and blues guitarist by night, playing with the Juniper Hills Blues Band (available for weddings, pubs, company functions, etc.). He is also an athletics coach and still competes a bit. There are no words to describe how bad Les is at plumbing.
Wednesday, May 16, 2007 10:00 a.m.

The Risks of Risk-Based Testing
Randall Rice, Rice Consulting

 
Risk-based testing has become an important part of the tester’s strategy in balancing the scope of testing against the time available. Although risk-based methods have always been helpful in prioritizing testing, it is vital to remember that we can be fooled in our risk analysis. Risk, by its very nature, contains a degree of uncertainty. We estimate the probability of a risk, but what is the probability that we are accurate in our estimate? Randall Rice describes twelve ways that risk assessment and risk-based methods may fail. In addition, he draws parallels to risk-based activities in other industries and discusses the important role of contingencies as a safety net when the unexpected occurs. Gain a greater awareness of safer ways to apply risk-based approaches so that you will be less likely to be misled by risk.

Randall Rice
Randall Rice is a leading author, speaker, and consultant in the field of software testing and software quality. A Certified Software Quality Analyst, Certified Software Tester, and Certified Software Test Manager, Randall has worked with organizations worldwide to improve the quality of their information systems and to optimize their testing processes. Randall is co-author of Surviving the Top Ten Challenges of Software Testing.
Wednesday, May 16, 2007 4:30 p.m.

Positioning Your Test Automation Team as a Product Group
Steven Splaine, Nielsen Media Research

 
Test automation teams are often founded with high expectations from senior management—the proverbial "silver bullet" remedy for a growing testing backlog, perceived schedule problems, or low quality applications. Unfortunately, many test automation teams fail to meet these lofty expectations and subsequently die a slow organizational death—their regression test suites are not adequately maintained and subsequently corrode, software licenses for tools are not renewed, and ultimately test engineers move on to greener pastures. In many cases, the demise of the test automation team can be traced back to the unrealistic expectations originally used to justify the business case for test automation. In other words, the team is doomed to failure from the beginning. Steven Splaine describes a creative approach to organizing a test automation effort, an approach that overcomes many of the traditional problems automation teams face in establishing themselves. Steven’s solution is not theory—it is a concrete, "proven in battle" approach introduced and adopted in his organization.

Steven Splaine
Steven Splaine is a chartered software engineer with more than twenty years of experience developing software systems: Web/Internet, client/server, mainframe, and PC. He is an experienced project manager, tester, developer, and presenter who has consulted with more than one hundred companies in North America and Europe. In addition, Steven is a regular speaker at software testing conferences, lead author of The Web Testing Handbook and Testing Web Security, and an advisor/consultant to several Web testing tool vendors and investors.
Thursday, May 17, 2007 8:30 a.m.

Test Estimation: A Pain or . . . Painless?
Lloyd Roden, Grove Consultants

 
As an experienced test manager, Lloyd Roden believes that test estimation is one of the most challenging and misunderstood aspects of test management. In estimation, we must deal with destabilizing dependencies such as poor quality code received by testers, unavailability of promised resources, and “missing” subject matter experts. Often test managers do not estimate test efforts realistically because they feel pressure—both external from other stakeholders and internal from their own desire to be “team” players—to stay on schedule. Lloyd presents seven powerful ways to improve your test estimation effort and really help the team succeed with honest, data-driven estimating methods. Some are quick and easy but prone to abuse; others are more detailed and complex and perhaps more accurate. Lloyd discusses FIA (Finger in the Air), Formula or Percentage, Historical, Parkinson’s Law vs. Pricing-to-Win estimates, Work Breakdown Structures, Estimation Models, and Assessment Estimation. Come discover how to make the painful experience of test estimation (almost) painless.

Lloyd Roden
With more than twenty-five years in the software industry, Lloyd Roden has worked as a developer, managed an independent test group within a software house, and joined Grove Consultants in 1999. Lloyd has been a speaker at STAREAST, STARWEST, EuroSTAR, AsiaSTAR, Software Test Automation, Test Congress, and Unicom conferences as well as Special Interest Groups in Software Testing in several countries. He was Program Chair for both the tenth and eleventh EuroSTAR conferences.
Thursday, May 17, 2007 4:15 p.m.

Building the Test Management Office
Geoff Horne, iSQA

 
It’s the life challenge of a test manager—leading testing while keeping the work under control. If it’s not poor code, it’s configuration glitches. If it’s not defect management problems, it’s exploding change requests. When the projects are large, complex, and constrained, it can be almost impossible to keep ahead of the “gotchas” while ensuring testing progress. IT projects have long used the concept of a Project Management Office (PMO), providing administrative services to allow Project Managers to focus on their key responsibilities. In the same way, a Test Management Office (TMO) can help test managers focus on their key testing activities. Join Geoff Horne as he describes the functions encompassed by the TMO; how establishing a TMO can benefit your organization; the management structure and resources needed for success; and how to prevent the TMO from becoming a dumping ground for issues and people no one else wants to handle.

Geoff Horne
Based in New Zealand, Geoff Horne has more than twenty-eight years of experience in IT including software development, sales and marketing, and IT and project management. In the IT industry he has founded and run two testing companies that have brought a full range of testing consultancy services to an international clientele. Recently, in the capacity of a program test manager, Geoff has focused on a few select clients running complex test projects. Geoff has written a variety of white papers on the subject of software testing and has been a regular speaker at the STAR testing conferences.
Friday, May 18, 2007 8:30 a.m.

Social Engineering: Testing the Organization as Well as the Code
Mike Andrews, Foundstone

 
We're all familiar with network security—protecting the perimeter of your company with firewalls and intrusion detection systems. Similarly, we're doing something about application security—hardening the software on which companies rely against attacks. However, what about the "soft" assets of a company? (And we’re not talking about the sofas and potted plants dotted around the office.) How prone to attack are the people who work for your company? Mike Andrews departs from the traditional talk of testing software to discuss testing human beings. Will people give up their passwords for a candy bar? How often do people actually check the site to which they are connecting? What tricks are in the arsenal of wily and unethical social engineers as they attempt to obtain information and con their way into the often unsecured inner sanctum of a company’s network and application software? You’ll be amazed, you’ll be surprised, and you’ll be shocked. You’ll be shaking your head at the stupidity of some people—and you may discover it could easily have happened to you. Technology isn't always to blame—people often are the weakest link.

Mike Andrews
Mike Andrews is a senior consultant at Foundstone where he specializes in software security, leads Web application security assessments, and teaches Ultimate Web Hacking classes. He brings a wealth of commercial and educational experience from both sides of the Atlantic and is a widely published author and frequent speaker. His book How to Break Web Software (co-authored with James Whittaker, Addison Wesley 2006) is currently one of the most popular books on Web-based application security.
Software Quality Engineering  •  330 Corporate Way, Suite 300  •  Orange Park, FL 32073
Phone: 904.278.0524  •  Toll-free: 888.268.8770  •  Fax: 904.278.4380  •  Email: [email protected]
© 2007 Software Quality Engineering, All rights reserved.