Software Testing Analysis & Review
STAREAST 2007 Thursday Concurrent Sessions

  Go To:   Wednesday  |  Thursday  |  Friday  

 Thursday, May 17, 2007 9:45 a.m.
 T1 is a Double-Track Session!
T1
Test Management
Crucial Test Conversations
Robert Galen, RGCG, LLC
 
Many test managers feel that Development or Management or The Business does not understand or support the contributions of their test teams. You know what? They’re probably right! However, once we accept that fact, we should ask: Why? Bob Galen believes the root cause is our ineffectiveness at 360º communication, in other words, our failure to “sell” ourselves, our abilities, and our contributions. We believe that our work should speak for itself or that everyone should inherently understand our worth. Wrong! We need to work hard to create crucial conversations in which we communicate our impact on the product and the organization. Bob shares with you specific techniques for improving the communication skills of test managers and testers so that others in your organization will better understand your role and contributions. He also focuses on improving your cross-team communication and feedback skills, a key to creating highly effective teams. Come prepared to engage and communicate.

• High impact “conversations” to effectively communicate your test team’s worth
• How to craft better status reports to tell your story
• Effective feedback conversations to improve individual performance
T2
Test Techniques
Testing Requirements: Ensuring Quality Before the Coding Begins
Joe Marasco, Ravenflow
 
Software that performs well is useless if it ultimately fails to meet user needs and requirements. Requirements errors are the number one cause of software project failures, yet many organizations continue to create requirements specifications that are unclear, ambiguous, and incomplete. What's the problem? All too often, requirements quality gets lost in translation between business people who think in words and software architects and engineers who prefer visual models. Joe Marasco discusses practical approaches for testing requirements to verify that they are as complete, accurate, and precise as possible, a process that requires new, collaborative approaches to requirements definition, communication, and validation. Additionally, Joe explores the challenges of developing “requirements-in-the-large” by involving a broad range of stakeholders, including analysts, developers, business executives, and subject matter experts, in a process complicated by continual change.

• Why many common requirements specification techniques fail
• Bridging the gap between requirements written in prose and visual models
• Measures of requirements quality
T3
Test Automation
Keyword-Driven Test Automation Illuminated
Mark Fewster, Grove Consultants
 
Test automation has come a long way in the last twenty years. During that time many of today's most popular test execution automation tools have come into use, and a variety of implementation methods have been tried and tested. Many successful organizations began their automation efforts with a data-driven approach and later evolved them into what is now called keyword-driven test automation. Many versions of the keyword-driven test execution concept have been implemented, and some are difficult to distinguish from their data-driven predecessors. So what is keyword-driven test automation? Mark Fewster provides an objective analysis of keyword-driven test automation by examining the various implementations, the advantages and disadvantages of each, and the benefits and pitfalls of this automation concept. Find out if keyword-driven test automation is what you are looking for or if it is an empty promise for your test organization.

• Differences between data-driven and keyword-driven test automation
• Benefits and drawbacks of keyword-driven testing
• Alternative keyword-driven automation implementations
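As a rough illustration of the concept only (not Mark's material and not any particular tool's API; the class, keyword, and method names below are hypothetical), a keyword-driven framework reduces to a small interpreter that maps keyword rows, editable as plain data, onto executable actions:

import java.util.*;
import java.util.function.Consumer;

// A minimal keyword-driven "interpreter": each row of test data names a
// keyword plus its arguments, and the framework maps keywords to actions.
public class KeywordRunner {

    private final Map<String, Consumer<List<String>>> keywords = new HashMap<>();

    public void register(String name, Consumer<List<String>> action) {
        keywords.put(name.toLowerCase(), action);
    }

    public void run(List<List<String>> testTable) {
        for (List<String> row : testTable) {
            String keyword = row.get(0).toLowerCase();
            List<String> args = row.subList(1, row.size());
            Consumer<List<String>> action = keywords.get(keyword);
            if (action == null) {
                throw new IllegalArgumentException("Unknown keyword: " + keyword);
            }
            action.accept(args);   // execute the step
        }
    }

    public static void main(String[] args) {
        KeywordRunner runner = new KeywordRunner();
        // These actions just print; a real framework would drive the
        // application under test through a GUI or API layer.
        runner.register("login",  a -> System.out.println("log in as " + a.get(0)));
        runner.register("search", a -> System.out.println("search for " + a.get(0)));
        runner.register("verify", a -> System.out.println("expect result " + a.get(0)));

        // The test "script" is pure data, editable by non-programmers.
        runner.run(List.of(
                List.of("login", "testuser"),
                List.of("search", "keyword-driven testing"),
                List.of("verify", "1 result found")));
    }
}

In a data-driven setup the script is fixed and only the values vary; here the rows choose the actions themselves, which is the essential difference the session examines.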
T4
Model-based Testing
Build a Model-Based Testing Framework for Dynamic Automation
Ben Simo, Standard & Poor's
 
The promises of faster, better, and cheaper testing through automation are rarely realized. Most test automation scripts simply repeat the same test steps every time. Join Ben Simo as he shares his answers to some thought-provoking questions: What if your automated tests were easier to create and maintain? What if your test automation could go where no manual tester had gone before? What if your test automation could actually create new tests? Ben says model-based testing can do all of this. With model-based testing, testers describe the behavior of the application under test and let computers generate and execute the tests. Instead of writing test cases, the tester can focus on the application's behavior. A simple test generator then creates and executes tests based on the application’s modeled behavior. When the application changes, the tester updates the behavioral model rather than manually changing every test case impacted by the change.

• Definition and benefits of model-based testing
• How to create a framework to support your model-based testing
• Model behavior rather than write static tests
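A minimal sketch of the idea, assuming a toy state-transition model and a random-walk test generator (the states, actions, and class names are invented for illustration and are not Ben's framework):

import java.util.*;

// A toy behavioral model: states, actions, and expected next states.
// A generator walks the model at random and emits executable test steps,
// so updating the model regenerates the tests.
public class ModelBasedSketch {

    // transition table: state -> (action -> next state)
    private static final Map<String, Map<String, String>> MODEL = Map.of(
            "LoggedOut",     Map.of("login", "LoggedIn"),
            "LoggedIn",      Map.of("logout", "LoggedOut", "openReport", "ViewingReport"),
            "ViewingReport", Map.of("close", "LoggedIn"));

    public static void main(String[] args) {
        Random random = new Random(42);   // seeded so the generated walk is repeatable
        String state = "LoggedOut";

        for (int step = 0; step < 10; step++) {
            List<String> actions = new ArrayList<>(MODEL.get(state).keySet());
            Collections.sort(actions);    // stable ordering keeps generation deterministic
            String action = actions.get(random.nextInt(actions.size()));
            String expected = MODEL.get(state).get(action);

            // In a real framework this would call the application under test
            // and compare its observed state with the model's expectation.
            System.out.printf("step %d: in %s, do %s, expect %s%n",
                    step, state, action, expected);
            state = expected;
        }
    }
}

Because the tests are derived from the model at generation time, a change in the application's behavior means editing the transition table rather than hand-maintaining each scripted test.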
T5
Special Topics
Lightning Talks: A Potpourri of 5-Minute Presentations
Matthew Heusser, Excelon Development (Facilitator)
 
Lightning Talks are nine five-minute talks in a fifty-minute time period. They represent a much smaller investment of time than track speaking and offer the chance to try conference speaking without the heavy commitment. A Lightning Talk is an opportunity to present your single, biggest bang-for-the-buck idea quickly. Use it to give a first-time talk or to present a new topic for the first time. Maybe you just want to ask a question, invite people to help you with your project, boast about something you did, or tell a short cautionary story. These things are all interesting and worth talking about, but there might not be enough to say about them to fill a full track presentation. Contact the conference organizers for information on submitting a Lightning Talk for a future event.
 Thursday, May 17, 2007 11:15 a.m.
T6
Test Techniques
Finding Success in System Testing
Nathan Petschenik, STS Consulting
 
To achieve success in system testing—efficiently preventing important defects from reaching users—technical excellence is certainly necessary but it is not sufficient. Even more important are the skills to influence the project and team behavior to prevent defects from ever reaching the system test. Nathan Petschenik shares his insights into the technical skills you need for a successful system test. In addition, he explains how system test leaders can and must change project attitudes and influence behavior to significantly impact the quality of the software that reaches the system test team. Among other recommendations, Nathan explains how getting developers to fulfill their testing role is one way system test team leaders can influence quality on projects. By nurturing front-loaded quality—quality designed in and built in, not tested later—within the project, system testers can multiply their efforts and ensure a successful system test.

• Technical skills for system testing
• Importance of front-loaded quality to system testing
• Identifying and eliminating obstacles to front-loaded quality
T7
Test Automation
Unit Testing Code Coverage: Myths, Mistakes, and Realities
Andrew Glover, Stelligent
 
You’ve committed to an agile process that encourages test-driven development. That decision has fostered a concerted effort to actively unit test your code. But you may be wondering about the effectiveness of those tests. Experience shows that while the collective confidence of the development team is increased, defects still manage to rear their ugly heads. Are your tests really covering the code adequately, or are big chunks remaining untested? And are those areas that report coverage really covered with robust tests? Andrew Glover explains what code coverage represents, how to effectively apply it, and how to avoid its pitfalls. Code coverage metrics can give you an unprecedented understanding of how your unit tests may or may not be protecting you from sneaky defects. In addition, Andrew describes the differences between code-based coverage and specification-based coverage and examines open-source and commercial tools available to gather these metrics.

• The meaning of code coverage for unit testing
• Benefits and pitfalls of code coverage
• Code coverage tools available for Java and .NET
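A small, hypothetical example of the gap Andrew describes between raw coverage numbers and meaningful coverage, written against JUnit 4 (the class and method names are invented):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Coverage numbers can mislead: the test below executes most lines of
// applyDiscount() yet never exercises the discount branch, and a test with
// no assertions would still "cover" the code without protecting anything.
public class DiscountTest {

    static double applyDiscount(double amount, boolean isPreferredCustomer) {
        double total = amount;
        if (isPreferredCustomer) {
            total = amount * 0.9;   // this branch is never reached by the test below
        }
        return total;
    }

    @Test
    public void regularCustomerPaysFullPrice() {
        // Line coverage looks high, but branch coverage reveals the gap:
        // the isPreferredCustomer == true path is untested.
        assertEquals(100.0, applyDiscount(100.0, false), 0.001);
    }
}

A coverage tool would report most of applyDiscount's lines as executed, while branch coverage, or a quick look at the assertions, shows the discount path is never tested.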
T8
Model-based Testing
Harnessing the Power of Randomized Unit Testing
James Andrews, University of Western Ontario
 
It is a problem all testers have had. We write tests believing we know how the system should behave, what inputs will precede others, and which calls will be made first and which will be made last. Unfortunately, the system may not operate that way, and as a result our tests are inadequate. However, there is a solution to this problem: randomized unit testing helps you find bugs in places you wouldn't even think to look by selecting call sequences and parameter values randomly. James Andrews explains the power and potential of randomized testing with demonstrations and case studies of real-world software defects it has uncovered. He presents RUTE-J, a free Java package modeled after JUnit, which can help you develop code for testing Java units in a randomized way. James explains how assertion style, parameter range selection, and method weight selection can make randomized testing more effective and thorough.

• Why randomized unit tests find defects in unexpected places
• Adding power to your testing with randomization
• Open source tool RUTE-J for randomized unit testing
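The flavor of the approach can be sketched in plain Java without RUTE-J itself (the class below and its invariant check are hypothetical and do not represent the package's API):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Random;

// Randomized unit testing in miniature: drive a unit with randomly chosen
// calls and randomly chosen parameters, and check an invariant after every
// step instead of scripting one fixed call sequence.
public class RandomizedDequeTest {

    public static void main(String[] args) {
        long seed = System.currentTimeMillis();
        Random random = new Random(seed);
        System.out.println("seed = " + seed);   // log the seed so failures can be replayed

        Deque<Integer> deque = new ArrayDeque<>();
        int expectedSize = 0;

        for (int i = 0; i < 10_000; i++) {
            int choice = random.nextInt(3);
            if (choice == 0) {
                deque.addFirst(random.nextInt(1000));
                expectedSize++;
            } else if (choice == 1) {
                deque.addLast(random.nextInt(1000));
                expectedSize++;
            } else if (!deque.isEmpty()) {
                deque.removeFirst();
                expectedSize--;
            }
            // Invariant: our simple model of the size must always match the unit's view.
            if (deque.size() != expectedSize) {
                throw new AssertionError("size mismatch at step " + i + " (seed " + seed + ")");
            }
        }
        System.out.println("10,000 random steps passed");
    }
}

Logging the seed is what keeps random testing debuggable: any failing sequence can be replayed exactly.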
T9
Special Topics
Automated Software Audits for Assessing Product Readiness
Susan Kunz, Solidware Technologies, Inc.
 
Rather than continually adding more testing, whether manual or automated, how can you assess the readiness of a software product or application for release? By extracting and analyzing the wealth of information available from existing data sources (software metrics, measures of code volatility, and historical data), you can significantly improve release decisions and overall software quality. Susan Kunz shares her experiences using these measures to decide when, and when not, to release software. Susan describes how to derive quality index measures for risk, maintainability, and architectural integrity through the use of automated static and dynamic code analyses. Find out how to direct limited testing resources to error-prone code and to the code that really matters in a system under test. Take back new tools to make your test efforts more efficient.

• How to apply adaptive analysis to evaluate software quality
• Effective use of quality indices and indicators
• Application of finite element analysis to software testing
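As a loose illustration only (the metrics, normalizations, weights, and threshold below are invented and are not Susan's model), a release-readiness index can be built as a weighted blend of normalized code measures:

// An illustrative, weighted "quality index" combining a few per-module
// measures; the inputs, weights, and threshold are made up for this sketch.
public class ReleaseReadiness {

    static double qualityIndex(double cyclomaticComplexity,
                               double codeChurnPerKloc,
                               double branchCoverage,
                               double defectDensity) {
        // Normalize each measure roughly to 0..1, where 1 is "good".
        double complexityScore = Math.max(0, 1 - cyclomaticComplexity / 50.0);
        double churnScore      = Math.max(0, 1 - codeChurnPerKloc / 100.0);
        double coverageScore   = branchCoverage;              // already 0..1
        double defectScore     = Math.max(0, 1 - defectDensity / 10.0);

        // Weighted blend; in practice the weights would be calibrated against history.
        return 0.25 * complexityScore + 0.25 * churnScore
             + 0.30 * coverageScore   + 0.20 * defectScore;
    }

    public static void main(String[] args) {
        double index = qualityIndex(32, 45, 0.72, 3.5);
        System.out.printf("quality index = %.2f%n", index);
        System.out.println(index >= 0.70 ? "candidate for release"
                                         : "direct testing at this module");
    }
}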
 Thursday, May 17, 2007 1:30 p.m.
T10
Test Management
How to Fake a Test Project
James Bach, Satisfice, Inc.
 
It has never been easier to fool your manager into thinking that you're doing a great job testing! James Bach covers all of today’s most respected test fakery. These techniques include: misleading test case metrics, vapid but impressive looking test documentation, repeatedly running old tests "just in case they find something," carefully maintaining obsolete tests, methodology doublespeak, endless tinkering with expensive test automation tools, and taking credit for a great product that would have been great even if no one had tested it. James covers best practices for blame deflection. By the time you're through, your executive management will not know whether to fire the programmers or the customers. But, you know it will not be you. (Disclaimer: It could be you if an outsourcing company fakes it more cheaply than you do.)

• Cautionary true stories of test fakery, both purposeful and accidental
• Why surprisingly common practices often go surprisingly wrong
• Signs that your testing may be fake
T11
Test Techniques
When There’s Too Much to Test: Ask Pareto for Help
Claire Caudry, Perceptive Software
 
Preventing defects has been our goal for years, but the changing technology landscape (architectures, languages, operating systems, databases, Web standards, software releases, service packs, and patches) makes perfection impossible to reach. The Pareto Principle, which states that for many phenomena 80% of the consequences stem from 20% of the causes, often applies to defects in software. Employing this principle, Claire Caudry describes ways to collect and analyze potential risks and causes of defects through technology analysis, customer surveys, T-Matrix charting, Web trends reports, and more. Then, Claire discusses ways to provide adequate testing without a huge financial investment, including the use of virtual machines, hardware evaluation programs, vendor labs, and pre-release beta programs. Finally, she discusses approaches to minimize customer risk through proactive communication of known technology and third-party issues without getting into a “blame game” with your vendor partners.

• Applying the 80/20 rule to testing priorities
• Methods to collect and analyze changing technology and customer platform requirements
• Testing options without additional automation and hardware purchases
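A small sketch of the Pareto step itself, using invented defect counts rather than Claire's data: tally defects by component, sort, and find the few components that account for roughly 80% of the total:

import java.util.*;

// A simple Pareto analysis: tally defects by component, sort descending, and
// report the components that account for ~80% of the defects. The data is invented.
public class ParetoDefects {

    public static void main(String[] args) {
        Map<String, Integer> defectsByComponent = Map.of(
                "ReportEngine", 150, "Importer", 80, "WebUI", 25,
                "Scheduler", 15, "AdminConsole", 10, "Help", 5);

        int total = defectsByComponent.values().stream().mapToInt(Integer::intValue).sum();

        List<Map.Entry<String, Integer>> sorted = new ArrayList<>(defectsByComponent.entrySet());
        sorted.sort(Map.Entry.<String, Integer>comparingByValue().reversed());

        int cumulative = 0;
        for (Map.Entry<String, Integer> e : sorted) {
            cumulative += e.getValue();
            double share = 100.0 * cumulative / total;
            System.out.printf("%-14s %4d defects  cumulative %5.1f%%%n",
                    e.getKey(), e.getValue(), share);
            if (share >= 80.0) {
                System.out.println("-> focus test effort on the components listed above");
                break;
            }
        }
    }
}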
T12
Test Automation
Verification Points for Better Testing Efficiency
Dani Almog, Amdocs
 
More than one-third of all testing time is spent verifying test results: determining whether the actual result matches the expected result within some pre-determined tolerance. Sometimes actual test results are simple, such as a value displayed on a screen. Other results are more complex: a database that has been properly updated, a state change within the application, or an electrical signal sent to an external device. Dani Almog suggests a different approach to results verification: separating the design of verification from the design of the tests. His test cases include “verification points,” with each point associated with one or more verification methods that can later be used in different test cases and on different occasions. Some verification methods are simple numerical or textual comparisons; others are complex, such as photo comparison. Dani describes a large test automation project in which he used verification points and reports his success in terms of reduced cost and time and increased accuracy.

• The benefits of verification points to test efficiency and accuracy
• Techniques for designing verification points
• Evaluate the use of verification points within your testing
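One hypothetical way to express the separation Dani describes, with verification points as reusable objects that many test cases can reference (the interface and class names below are invented and are not his framework):

import java.util.List;

// Sketch of "verification points" decoupled from test design: each point
// encapsulates one way of checking an outcome and can be reused across tests.
public class VerificationPointsSketch {

    interface VerificationPoint {
        boolean verify();
        String describe();
    }

    // A simple numeric comparison with a tolerance.
    record NumericPoint(String name, double expected, double actual, double tolerance)
            implements VerificationPoint {
        public boolean verify() { return Math.abs(expected - actual) <= tolerance; }
        public String describe() { return name + ": expected " + expected + ", got " + actual; }
    }

    // A simple exact-text comparison.
    record TextPoint(String name, String expected, String actual)
            implements VerificationPoint {
        public boolean verify() { return expected.equals(actual); }
        public String describe() { return name + ": expected \"" + expected + "\", got \"" + actual + "\""; }
    }

    public static void main(String[] args) {
        // A test case supplies the outcomes; verification is delegated to a
        // list of points that other test cases can reuse on other occasions.
        List<VerificationPoint> points = List.of(
                new NumericPoint("invoice total", 107.50, 107.49, 0.05),
                new TextPoint("status field", "PAID", "PENDING"));

        for (VerificationPoint p : points) {
            System.out.println((p.verify() ? "PASS  " : "FAIL  ") + p.describe());
        }
    }
}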
T13
Personal Excellence
The Nine Forgettings
Lee Copeland, Software Quality Engineering
 
People forget things. Simple things like keys and passwords and the names of friends long ago. People forget more important things like passports and anniversaries and backing up data. But Lee Copeland is concerned with things that the testing community is forgetting—forgetting our beginnings, the grandfathers of formal testing and the contributions they made; forgetting organizational context, the reason we exist and where we fit in our company; forgetting to grow, to learn and practice the latest testing techniques; and forgetting process context, the reason that a process was first created but which may no longer exist. Join Lee for an explanation of the nine forgettings, the negative effects of each, and how we can use them to improve our testing, our organization, and ourselves.

• Why we must constantly rediscover what we already know
• How each forgetting limits our personal and organizational ability
• The power we have to grow and to improve
T14
SOA Testing
A Unique Testing Approach for SOA Systems
Ed Horst, Amberpoint
 
Service Oriented Architecture (SOA) systems most often use services that are shared across different applications. Some services may even be supplied by third parties, outside the direct control of a project, system, or organization. As these services evolve, organizations face the issue of ensuring the continuing proper functionality and performance of their ever-changing SOA systems. The implications of even a minor change to a service are often not fully understood until the systems dependent on that service operate in production and then fail. Creating an environment in which all SOA systems dependent on a particular service can be tested is virtually impossible. However, Ed Horst presents a unique approach to testing services that does not require detailed knowledge of the systems that use each service. Ed shares real-world examples of organizations that have successfully managed service changes.

• Pitfalls of changing an SOA system without adequate testing
• A cost-effective way to test an entire set of dependent applications
• Plan for change in a world of interconnecting service oriented architectures
 Thursday, May 17, 2007 3:00 p.m.
T15
Test Management
From Start Up to World Class Testing
Iris Trout, Bloomberg
 
So you have been asked to start or improve a testing group within your organization. Where do you start? What services should you provide? Who are the right people for the job? Iris Trout presents a framework of best practices needed to implement or rapidly improve your testing organization. Hear how Bloomberg LP, a large financial reporting institution, tackled the issue of implementing a new testing organization. Iris describes how she built a strong testing process in minimal time and achieved exceptional results. She shares her interviewing techniques, automation how-tos, and many other ways to achieve quick successes. Learn to create Service Level Agreements, and discuss the value of peer reviews and how to evaluate their results. Iris shares handouts full of user-friendly ideas to help you get started.

• The essential components of a strong testing organization
• What makes up a good Service Level Agreement
• Examples of strong quality metrics that help drive your goals
T16
Test Techniques
Essential Regression Testing
Deakon Provost, State Farm Insurance
 
You are responsible for testing application releases, and the demand for quality is high. You must ensure that new functionality is adequately tested and that existing functionality is not negatively impacted when applications are modified. If you plan to conduct formal regression testing, you must answer a multitude of questions: What exactly is regression testing? What resources do I need? How can I justify the cost of regression testing? How can I quantify the benefits? Learn the “who, what, when, where, why, and how” of regression testing as Deakon Provost describes how to organize a regression test team, how to obtain funding for that team and their work, what methods you can use to save the organization money while regression testing, and how to quantify the value that regression testing provides.

• How to implement regression testing in your organization
• The skills and tools needed for a successful regression testing effort
• Win management’s approval for a regression test strategy
T17
Test Automation
Top Ten Reasons Test Automation Projects Fail
Shrini Kulkarni, iGATE Global Solutions Limited
 
Test automation is the perennial “hot topic” for many test managers. The promises of automation are many, yet many test automation initiatives fail to achieve them. Shrini Kulkarni explores ten classic reasons why test automation fails. Starting with Number Ten . . . having no clear objectives. Often people set off down different, uncoordinated paths; with no objectives, there is no defined direction. At Number Nine . . . expecting immediate payback. Test automation requires a substantial investment of resources that is not recovered immediately. At Number Eight . . . having no criteria to evaluate success. Without defined success criteria, no one can really say whether the effort was successful. At Number Seven . . . Join Shrini for the entire Top Ten list and discover how you can avoid these problems.

• Why so many automation efforts fail
• A readiness assessment to begin test automation
• Learn from the mistakes other organizations have made
T18
Personal Excellence
The Great Testers of Our Time and Times Past
Clive Bates, Grove Consultants
 
What can today’s software testers learn from present and past testing masters, many of whom have put their own lives on the line to make amazing contributions to the world in which we live? Clive Bates is thinking about testers such as Chuck Yeager, Yuri Gagarin, Andy Green, Leonardo da Vinci, and Isambard Kingdom Brunel. Isambard who? Isambard Kingdom Brunel was one of the greatest engineers in British history. A designer of bridges, tunnels, viaducts, docks, and ships, Brunel constantly battled resistance from established authorities, lack of adequate funding, changes in requirements, and project delays (sound familiar?). In researching the achievements of past testing masters, Clive has identified important traits and characteristics that made them successful. If we acknowledge and adopt these traits in our lives, we may become more successful in our work.

• The testing secrets of masters in other disciplines
• How to adopt their practices to your work
• Embrace their enthusiasm and courage to promote innovation
T19
SOA Testing
Will Your SOA Systems Work in the Real World?
Jacques Durand, Fujitsu Software
 
The fundamental promise of Service Oriented Architecture (SOA) and Web services depends on consistent and reliable interoperability. Despite this promise, existing Web services standards and emerging specifications present an array of challenges for developers and testers alike. Because these standards and specifications often permit multiple acceptable implementation alternatives or usage options, interoperability issues often result. The Web Services Interoperability Organization (WS-I) has focused on providing guidance, tools, and other resources to developers and testers to help ensure consistent and reliable Web services. Jacques Durand focuses on the WS-I testing tools that are used to determine whether the messages exchanged with a Web service conform to WS-I guidelines. These tools monitor the messages and analyze the resulting log to identify any known issues, thus improving interoperability between applications and across platforms.

• WS-I interoperability conformance guidelines and policies
• How to use each of the WS-I testing tools
• Ways to use the results of the conformance tool tests

Software Quality Engineering  •  330 Corporate Way, Suite 300  •  Orange Park, FL 32073
Phone: 904.278.0524  •  Toll-free: 888.268.8770  •  Fax: 904.278.4380  •  Email: [email protected]
© 2007 Software Quality Engineering, All rights reserved.