Testing physical systems—circuits and machines—is straightforward. Put them through every stress they’re likely to encounter, find what breaks, find why it broke, and figure out how to keep that stress from breaking it the next time.
Testing software is another matter. The stresses are more diverse, the interactions more complex, and the operating environments (hardware, other software, and user input) are often less predictable. Crashes can be obvious, but failures that produce reasonable-looking yet erroneous results are not. Failure of a physical system leaves its traces in broken parts and blown components, but a software failure might wreak its havoc while leaving the program code blandly unaffected.
In an increasingly software-driven world, software testing, verification, and validation become ever more critical and widespread. “Companies are now more aware that software testing offers a positive return on investment—10 years ago, most software companies didn’t think so,” says IEEE Member Jeff Offutt, professor of software engineering at George Mason University, in Fairfax, Va. “The primary goal is to find the mistakes [while the product is still] in the software company, before programs go to users. Fixing it later is much more expensive, either because users say they won’t buy from that company again or because fixing it once it’s out in the field is much harder.”
Developing and validating the tests themselves is also critical, and the number of researchers in the field is growing, though only a modest number show up at software conferences. The IEEE International Conference on Software Testing, Verification and Validation, to be held from 21 to 25 March in Berlin, expects about 200 attendees, while other software engineering conferences are about half that size, Offutt says. The IEEE Computer Society sponsors the conference.
The conference’s first and last days are devoted to specialized workshops covering such topics as model-based security testing, benchmarks for event-driven software, and testing of variability-intensive systems. In between, some 40 main paper and poster sessions are scheduled. The sessions are expected to include papers on original research and on advances in practical testing and quality improvements.
Human factors also are on the agenda. As the call for papers points out, “no other engineering artifact is more closely intertwined with human activity, resulting in complex hybrid systems that involve software, human judgment, and, sometimes, political, legal, and social processes.”
A symposium on one of the workshop days invites doctoral students to present their early research to a panel of established scientists for critiquing and suggestions. “It’s a way to draw young scientists into the field and give them some help at an early stage in their careers,” Offutt says.
New technologies for building software bring new problems for testers. For example, “We still are not very good at testing Web applications,” Offutt says. “They are widely used, and most have problems.”
The sheer size of a program can be another source of testing problems. “We’re building collections of huge software systems that are integrated collections of programs that have problems we haven’t discovered yet,” Offutt says. “But I think we’re now very good at testing individual functions in programs.”
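To illustrate what testing an individual function typically involves (a generic sketch, not an example from the conference or from Offutt): the function under test is exercised with representative and boundary inputs, and each result is checked against the expected value. The `clamp` function and its tests below are hypothetical.

```python
# Illustrative sketch: unit-testing a single function with Python's
# standard unittest module. The clamp() function is a made-up example.
import unittest


def clamp(value, low, high):
    """Restrict value to the inclusive range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))


class ClampTests(unittest.TestCase):
    def test_value_within_range_is_unchanged(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_value_below_range_is_raised_to_low(self):
        self.assertEqual(clamp(-3, 0, 10), 0)

    def test_value_above_range_is_lowered_to_high(self):
        self.assertEqual(clamp(42, 0, 10), 10)

    def test_inverted_bounds_are_rejected(self):
        with self.assertRaises(ValueError):
            clamp(5, 10, 0)


if __name__ == "__main__":
    unittest.main()
```

Such function-level tests are tractable precisely because the inputs, outputs, and failure modes are local; the integration problems Offutt describes arise when many such tested pieces interact.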
Testing large-scale, complex systems of systems will be covered in keynote addresses by Ian Sommerville, professor of software engineering at St. Andrews University, in Scotland, and Wolfram Schulte, founding manager of Microsoft’s Research in Software Engineering team. Sommerville’s topic is “Designing for Failure,” and Schulte’s is “Software Engineering and Testing at Microsoft: A Research Perspective.” The third keynoter, Walter F. Tichy of the Karlsruhe Institute of Technology, in Germany, is addressing the additional complexities caused by today’s multicore processors in “Tunable Architectures, or How to Get the Most Out of Your Multicore.”
Not all problems in the field are technical. Offutt reports there is a lot less funding for software-testing research in North America than a decade ago. “Yet we have more need for testing research and more scientists—which means more competition for fewer resources,” he says. “In Europe, and in some Asian countries, research has been increasing for a while. They see the decline in U.S. research funding as a chance to establish themselves in software testing.”