Underlying Observations
- Software is and will be, for the
foreseeable future, the best way to augment the effectiveness of people,
societies, economies, and cultures.
- The ability to change software -- that
is, the "softness" of software -- is where its true power resides.
- Complaining about the difficulty and
cost of changing software is reasonable because we need to continue
to improve.
- It is simultaneously unreasonable
because there is no medium that is easier to change; software often
changes precisely because it is far more costly to change
systems in which software is embedded and/or hardware upon which
software runs. At the very least, some rational objectives
should underlie these complaints: for example, what relative and
absolute costs are acceptable for software testing? For software
maintenance?
- Proper assessment and evaluation of
research results in software engineering is not a rote activity.
To the first order, a result must be sufficiently interesting and then
have evidence provided commensurate with the result. Both parts
generally demand a discriminating palate; this scares many people away
from software engineering research because they conflate the terms
qualitative and subjective.
- Many results that I have been involved
in have come from extensive conversations, with students and other
colleagues, about "what is unnecessarily difficult and costly in
software engineering?" At some level, we try to reduce the gap
between (in Brooks' words) essential and accidental complexity.
- Software engineering research results
that I don't like often suffer from addressing an uninteresting problem
or from trying to get people to do things computers are better at, and
vice versa.
Ongoing Research Projects
Configurable
software systems add combinatorial complexity to the already
difficult problems of testing and analysis: configurations can have
arbitrarily different behaviors from one another even though they share
considerable source code and intended behavior. Software systems
commonly define enough configuration options and option settings (such
as which underlying network protocol to use or which architecture to
target) to embody thousands of possible configurations, each of which
must generally be tested and analyzed individually.
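To make the combinatorics concrete, here is a minimal sketch (the option names and settings are hypothetical, chosen only for illustration) showing how a handful of independent options already yields hundreds or thousands of configurations:

```python
from itertools import product

# Hypothetical configuration options; names and settings are
# illustrative only, not drawn from any particular system.
options = {
    "network_protocol": ["tcp", "udp", "sctp"],
    "target_arch": ["x86_64", "arm64", "riscv"],
    "optimization": ["O0", "O1", "O2", "O3"],
    "logging": ["off", "errors", "verbose"],
    "cache": ["none", "lru", "lfu"],
    "threading": ["single", "pool", "work_stealing"],
}

# Every combination of settings is a distinct configuration.
configurations = [
    dict(zip(options, values)) for values in product(*options.values())
]

# Six options with 3-4 settings each give 3*3*4*3*3*3 = 972
# configurations, each a candidate for separate testing and analysis.
print(len(configurations))  # 972
```

Even this toy example makes the point: each additional option multiplies the configuration space, so testing every configuration individually quickly becomes infeasible.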
We wish to increase confidence in the
properties of software configurations while reducing the cost of doing
so, based on two central and related observations.
- Although configurations can in
principle be arbitrarily far from one another in behavior, this is
typically not the case – indeed, they are considered configurations
because of their sharing of source code and of intended behavior.
That is, the expectation is that the behavior of two related
configurations will usually be “close.”
- The source code that is shared
across configurations – a resource that is rarely used for testing
and analyzing more than individual configurations – affords an
opportunity to extend evidence gathered during the test and analysis
of established configurations to less-investigated configurations.
Regression testing
is generally intended to approximate rerunning all tests
from a program P on the modified program P' to ensure that no behaviors
intended to be common to both have been changed. Our intuition is that the
differences between P and P' are usually small, but our regression
testing approaches generally assume that the distance could be
arbitrary. As testing is a constrained resource, we are
considering alternative approaches that consider the long-term history
of success or failure of a regression test over a sequence of program
versions as a way to better use this resource. As a simplistic
example, is it better to run a regression test that has "passed" on
fifty versions in a row, or is it better to run a randomly selected test
from the test suite that has not been run over that time?
Although, like financial investments, past performance is not a
guarantee of future performance, we wish to explore whether more
aggressive use of past performance can improve the way we do software
testing.
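One way to operationalize this intuition is sketched below, under assumed data structures: the per-test pass/fail/skip history is hypothetical, and the scoring is a deliberately simple illustration rather than a validated prioritization scheme. It orders tests so that those about which we have the least recent evidence run first:

```python
def prioritize(tests, history):
    """Order tests so those with the least recent evidence run first.

    `history` maps a test name to its outcomes over past versions:
    "pass", "fail", or "skip" (not run). This is an illustrative
    sketch, not a validated prioritization scheme.
    """
    def staleness(test):
        outcomes = history.get(test, [])
        # Count versions since the test last actually ran.
        since_run = 0
        for outcome in reversed(outcomes):
            if outcome == "skip":
                since_run += 1
            else:
                break
        # Count the current streak of consecutive passes.
        streak = 0
        for outcome in reversed(outcomes):
            if outcome == "pass":
                streak += 1
            else:
                break
        # Long-unrun tests, then short pass streaks, are "stalest":
        # we know the least about them, so schedule them first.
        return (since_run, -streak)

    return sorted(tests, key=staleness, reverse=True)


# Hypothetical histories, most recent outcome last.
history = {
    "t_parse": ["pass"] * 50,            # passed fifty versions in a row
    "t_io": ["pass", "skip", "skip"],    # not run for two versions
    "t_net": ["fail", "pass", "pass"],   # recently recovered from a failure
}
print(prioritize(["t_parse", "t_io", "t_net"], history))
# → ['t_io', 't_net', 't_parse']
```

Under this scoring, the test with a fifty-version pass streak is deferred in favor of the test that has not run recently, matching the question posed above; whether such a policy actually finds more faults per test executed is exactly the empirical question at stake.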
Publications and related information
- My ACM Digital Library
author
page.
- My own list
(mostly with abstracts, not complete)
- Contact me directly if you need
something else from me.
Questions that would be great to answer (at
some level or another)
- Brooks talks about the difference
between accidental and essential complexity. How can we get a
concrete handle on this idea?
- Is there an appropriate balance of
assessing the product, assessing the process, and credentialing
individuals that leads to better software?