A failed test reveals a potential bug in the code under test. Before developers can start fixing the bug, they need to understand which parts of the test are relevant to the failure. This project investigates automated techniques that explain why a test fails. As an initial result, we presented a fully automated technique, and its tool implementation called FailureDoc, that infers explanatory documentation. FailureDoc augments a failed test with explanatory documentation in the form of code comments. The comments indicate changes to the test that would cause it to pass, helping programmers understand why the test fails.
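To make the idea concrete, the sketch below shows the kind of annotated output the technique produces. It is a hypothetical illustration, not actual FailureDoc output: the test, the ShoppingCart and Item classes, and the comment wording are all invented for exposition.

```java
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

// Minimal subject classes so the example compiles; a real
// program under test would be far larger.
class Item {
    final String name;
    final int price;
    Item(String name, int price) { this.name = name; this.price = price; }
}

class ShoppingCart {
    private final List<Item> items = new ArrayList<>();
    void add(Item item) { items.add(item); }
    int totalPrice() { return items.stream().mapToInt(i -> i.price).sum(); }
}

public class ShoppingCartTest {
    @Test
    public void testTotalPrice() {
        ShoppingCart cart = new ShoppingCart();
        cart.add(new Item("book", 10));
        // FailureDoc-style comment: the test would pass if this call were removed.
        cart.add(new Item("pen", 10));
        // FailureDoc-style comment: the test would pass if the expected value were 20.
        assertEquals(10, cart.totalPrice()); // fails: totalPrice() returns 20
    }
}
```

Each comment points at a statement whose modification would make the test pass, which localizes the mismatch between the test's expectation and the program's actual behavior without requiring the programmer to step through the whole test.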

We evaluated FailureDoc on failed tests from five real-world programs. FailureDoc generated meaningful comments for most of the failed tests; the inferred comments were concise and revealed important debugging clues. We also conducted a user study, whose results showed that FailureDoc helps programmers diagnose bugs.