Using Aspects as a Test Coverage Tool



I was working on the Groovy project and wanted to know which tests covered a method I was modifying. I tried using Clover for this, but it gave me way too much information, and most of it wasn't that useful. Maybe AspectJ could help?

We are using JUnit for Groovy, so all tests happen to extend a particular base class and each test method follows a particular naming convention. My initial design was simple:

1) define a pointcut for test methods
2) define a pointcut for all methods in the test method's call stack
3) keep track of which methods are called for which tests
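
For reference, the tests we're matching all look roughly like this. This is a made-up example, not a class from the Groovy code base, but it shows the shape the pointcuts below rely on: a class in the JUnit TestCase hierarchy whose test methods are public void test*() methods.

import junit.framework.TestCase;

// Hypothetical test, purely for illustration.
// The real tests extend a particular base class (see above), but the aspect
// only cares that it ultimately derives from junit.framework.TestCase.
public class MarkupTest extends TestCase {
    public void testSimpleMarkup() {
        // ... exercise the code under test ...
    }
}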

This is actually quite easy in AspectJ:

private static final boolean enabled = Boolean.getBoolean("groovy.aspects.coverage");

pointcut inTestClass(TestCase testCase) : this(TestCase) && execution(void test*()) && this(testCase);

private Map coverage;

before(TestCase testCase) : if(enabled) && cflowbelow(inTestClass(testCase)) && execution(* *(..)) {
    String testname = testCase.getClass().getName();
    String methodSignature = thisJoinPointStaticPart.getSignature().toString();
    Set tests = (Set) coverage.get(methodSignature);
    if (tests == null) {
        tests = new HashSet();
        coverage.put(methodSignature, tests);
    }
    tests.add(testname);
}

This gets me most of what I need. Unfortunately, in our Groovy build each JUnit test is run in a separate VM, so we can't just build up one big map and be done with it. I thought about a few different ways to deal with this: I could use an external persistence mechanism, or I could write one output file per test. I didn't like the idea of having a million little files all over the place, because it would be hard to search them quickly. So I downloaded Berkeley DB, got about 5 pages into the API, and realized some sort of crazed non-Java C programmer wrote it. Well, that was out. Instead I brute-forced it. I added two more pieces of advice:

before(TestCase testCase) : if(enabled) && inTestClass(testCase) {
    try {
        File file = new File("results.ser");
        if (file.exists()) {
            ObjectInputStream ois = new ObjectInputStream(new FileInputStream(file));
            coverage = (Map) ois.readObject();
            ois.close();
        } else {
            coverage = new HashMap();
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}

after(TestCase testCase) : if(enabled) && inTestClass(testCase) {
    try {
        File file = new File("results.ser");
        ObjectOutputStream oos = new ObjectOutputStream(new FileOutputStream(file));
        oos.writeObject(coverage);
        oos.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
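
As an assembly note for anyone reconstructing this: all of these fragments are members of a single aspect. A rough skeleton might look like the following (the aspect name and imports are my guesses, not taken from the actual source):

import java.io.*;
import java.util.*;
import junit.framework.TestCase;

// Hypothetical wrapper for the fragments shown in this post; the name is made up.
public aspect TestCoverage {
    // ... the enabled flag, the coverage map, the inTestClass pointcut,
    // and the three pieces of advice from this post go here ...
}

It only kicks in when the test VM is started with -Dgroovy.aspects.coverage=true, which is what the Boolean.getBoolean check is for.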

Good old-fashioned serialization to the rescue. Before running a test in a test case I load the old results; after running the test, I write the new results back out to disk. It gets pretty slow towards the end of the run, but I figure I can optimize it later. The easiest optimization would be to append each map to the file and then crunch them all together when you load (a rough sketch of that idea is at the end of this post). That would save a lot of swapping the map in and out of memory, but this is just a prototype.

After making this exquisite gem, I apply it to groovy.jar, junit.jar, and the test classes. Notice that because Groovy compiles down to Java class files, this works for Groovy methods as well. Isn't having one bytecode format grand! So I run all the tests and get my "results.ser" file. What to do with it? Well, process it with Groovy, of course! Here is the simplest script I could come up with to do what I want:

import java.io.*;

map = new ObjectInputStream(new FileInputStream(args[0])).readObject();
map.findAll {
    if (it.key =~ args[1]) {
        return it;
    }
}.each {
    println it.key + ": " + it.value;
}

You pass it the "results.ser" file and a regular expression to match against method signatures, and you get back the matching signatures along with all the tests that exercise them. Here is an example of the output:

Groovy:> groovy coverage.groovy results.ser bind
Object org.codehaus.groovy.sandbox.markup.StreamingMarkupBuilder.bind(Object): [org.codehaus.groovy.sandbox.markup.StreamingMarkupTest]
Object org.codehaus.groovy.sandbox.markup.BaseMarkupBuilder.bind(Closure): [DOMTest, org.codehaus.groovy.sandbox.markup.StreamingMarkupTest]
Object org.codehaus.groovy.sandbox.markup.StreamingDOMBuilder.bind(Object): [DOMTest]

So if I was changing the BaseMarkupBuilder.bind method, I would know that I have to run at least DOMTest and StreamingMarkupTest to make sure I didn't regress anything. This is the kind of feature that could readily go into an IDE like Eclipse. You modify a method; the IDE looks at the call hierarchy of the tests (or at this runtime-generated file), determines which tests need to be run, and then launches them in the background after you build. If anything breaks, you get red squiggles on your method with the results of the test attached. Talk about iterative development! The XP people can even go the other way: write all your tests and keep fixing the code until the squiggles go away, and not only does it build, it runs! I'm telling you, something like this is the next step the IDEs will have to take.
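
A footnote for anyone who wants to try the append-and-merge optimization I hand-waved about earlier: the sketch below is just an illustration of the idea (the class and method names are mine), not something the prototype actually does. Each test VM would append its own serialized map to the results file, and the reporting step would read the chunks back and merge them.

import java.io.*;
import java.util.*;

// Hypothetical helper, not part of the prototype above.
public class CoverageResults {

    // Each test VM appends its own map; every append writes its own
    // serialization stream header followed by the map.
    public static void append(Map coverage, File file) throws IOException {
        ObjectOutputStream oos = new ObjectOutputStream(new FileOutputStream(file, true));
        oos.writeObject(coverage);
        oos.close();
    }

    // The reporting step reads the chunks back, creating a fresh
    // ObjectInputStream per chunk, and merges them into one map.
    public static Map merge(File file) throws IOException, ClassNotFoundException {
        Map merged = new HashMap();
        FileInputStream fis = new FileInputStream(file);
        while (fis.available() > 0) {
            Map chunk = (Map) new ObjectInputStream(fis).readObject();
            for (Iterator i = chunk.entrySet().iterator(); i.hasNext();) {
                Map.Entry entry = (Map.Entry) i.next();
                Set tests = (Set) merged.get(entry.getKey());
                if (tests == null) {
                    tests = new HashSet();
                    merged.put(entry.getKey(), tests);
                }
                tests.addAll((Set) entry.getValue());
            }
        }
        fis.close();
        return merged;
    }
}

That way each test pays only the cost of writing its own little map, and the expensive merge happens once, when you generate the report.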