Recently, I was writing an Annotation Processor for the @Composite project. In good TDD fashion, that first and foremost meant writing some tests.
Although I eventually came up with something fairly workable, it was trickier than one might have hoped.
Mocked by mocks
The javax.lang.model API consists almost entirely of interfaces. Predestined for mocking, one might say.
Here we go then, with the code under test:
[java]
LeafAnnotationElementValidator(ExecutableElement element, Types typeUtils,
        Elements elementUtils) {
    // the method must be a member of a composite annotation
    Element enclosingElement = element.getEnclosingElement();
    if (!ElementUtils.isAnnotation(enclosingElement)
            || (enclosingElement.getAnnotation(CompositeAnnotation.class) == null)) {
        …
    }
    …
}
[/java]
element – right, that’s the annotated element we’d like to process for this test. OK, one mock there. typeUtils and elementUtils, two more mocked parameters. Then the call to element.getEnclosingElement(), which returns yet another mock. And so on.
Five minutes, umpteen mocks and even more mock expectations later, I gave up.
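To illustrate the kind of wiring this involved, here is a hypothetical sketch using plain JDK dynamic proxies rather than a mocking framework (the class and helper names are invented for illustration): every object the code under test touches needs its own stub with canned answers, and a stub for the stub's answers, and so on.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

import javax.lang.model.element.Element;
import javax.lang.model.element.ExecutableElement;

// Hypothetical sketch of the mock wiring: each javax.lang.model interface is
// stubbed with canned answers, one stub per object the code under test touches.
public class ModelStubSketch {

    /** Creates a stub of the given interface that answers only the configured methods. */
    static <T> T stub(Class<T> type, final Map<String, Object> cannedAnswers) {
        InvocationHandler handler = new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] args) {
                if (cannedAnswers.containsKey(method.getName())) {
                    return cannedAnswers.get(method.getName());
                }
                throw new UnsupportedOperationException("unexpected call: " + method.getName());
            }
        };
        return type.cast(Proxy.newProxyInstance(
                type.getClassLoader(), new Class<?>[] { type }, handler));
    }

    public static void main(String[] args) {
        // one stub for the enclosing element...
        Map<String, Object> enclosingAnswers = new HashMap<String, Object>();
        enclosingAnswers.put("getAnnotation", null); // no @CompositeAnnotation present
        Element enclosingElement = stub(Element.class, enclosingAnswers);

        // ...another for the annotated method itself, and so on, one per mock
        Map<String, Object> methodAnswers = new HashMap<String, Object>();
        methodAnswers.put("getEnclosingElement", enclosingElement);
        ExecutableElement element = stub(ExecutableElement.class, methodAnswers);

        System.out.println(element.getEnclosingElement() == enclosingElement); // prints "true"
    }
}
```

Multiply this by every method the validator calls and the problem becomes apparent: the "code" being validated exists only as a diffuse cloud of canned answers.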
The wrong cocktail
Why did this approach feel so wrong? It wasn’t just the many lines of code, which far outnumbered the "actual" test logic. I was trying to test the processor’s behaviour on certain fragments of code, but this code was nowhere to be seen. At best, one could glimpse its blurry reflection in the plethora of mock expectations, but one would have to be pretty well versed in the model API to make much sense of those.
What I really wanted was a Mirrorgarita1, something that would allow me to write something like:
[java]
@Test
public void validatorTest() throws Exception {
    Element mockElement = Mirrorgarita.createMockElement(TestAnnotation.class.getMethod(…));
    LeafAnnotationElementValidator validator = new LeafAnnotationElementValidator(
            mockElement, Mirrorgarita.newTypesMock(), Mirrorgarita.newElementsMock());
    assertSomething(validator…);
}
[/java]
Here, it’s a bit clearer that I’m trying to see how the validator behaves when processing the model API representation of a certain method. And if the code for this method is in the test class (TestAnnotation could be declared in the test), I can examine this code snippet and, better still, modify it if required. Certainly more straightforward than tweaking the mock expectations.2
Unfortunately, however, Mirrorgarita does not seem to exist – at least, my cursory searches weren’t able to find her. Yes, there is Elements.getTypeElement, which returns the TypeElement for (i.e. the model API representation of) a class or interface of a given name. But the only way to get hold of an Elements instance is via ProcessingEnvironment.getElementUtils, and unfortunately that is only available…when you’re actually processing annotations, i.e. inside your processor. Not in a test class. Sigh.
The real McCoy
So we’re missing a way of conveniently converting segments of code into their model API representation. Well, thankfully there’s one old friend one can always turn to for such transformations…javac.
What, Runtime.exec? With all the system-dependent brittleness that brings? Violating the holy principle of platform independence…in a test?? Well, luckily this is Java 6, so none of that is necessary. The compiler API comes to our rescue.
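Getting hold of an in-process compiler really is a one-liner; the only caveat is that ToolProvider.getSystemJavaCompiler returns null when running on a plain JRE rather than a JDK.

```java
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

// Obtain the in-process Java compiler introduced with Java 6 (JSR 199):
// no Runtime.exec, no platform-dependent process handling.
public class CompilerLookup {
    public static void main(String[] args) {
        // returns null if no compiler is available, i.e. on a JRE without javac
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        System.out.println(compiler != null);
    }
}
```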
Of course, invoking the compiler means compiling entire classes, as opposed to code snippets, and only allows us to test the annotation processor as a whole. If you’ve modularized your annotation processor, these are really integration rather than unit tests. But since you control the source that is being compiled, it should be easy to come up with examples that test individual parts of the processor’s functionality.
Slightly more inconvenient is the fact that this approach only allows those code paths to be tested that influence the result of the compilation, e.g. by throwing an error or raising a warning, or that produce some side effect, e.g. by writing (to) a file. Any "internal" logic that does neither is not verifiable in this manner.
Further, when checking for the expected result of the compilation, "there should be one error" usually isn’t enough. For instance, in the case of this validation processor, I would like to be sure that the error returned really is caused by the code that is in error, not by a bug in the processor which accepts the wrong code but mistakenly reports an error elsewhere.
Since checking for error messages is horrifically brittle, the best one can do here appears to be to expect errors at a specific location, i.e. line of code. This is marginally better and seems to work, but still smells rather fishy.
In code
Enough talk. Here’s a code sample:
[java]
public class CompositeAnnotationValidationProcessorTest extends
        AbstractAnnotationProcessorTest {

    @Override
    protected Collection<Processor> getProcessors() {
        return Arrays.<Processor> asList(new CompositeAnnotationValidationProcessor());
    }

    @Test
    public void leafAnnotationOnNonCompositeMember() {
        assertCompilationReturned(Kind.ERROR, 22,
                compileTestCase(InvalidLeafAnnotationUsage.class));
    }

    @Test
    public void validCompositeAnnotation() {
        assertCompilationSuccessful(compileTestCase(ValidCompositeAnnotation.class));
    }
}
[/java]
Here, getProcessors returns the annotation processor instances that are supposed to be called during the compilation. The first test expects the compilation of the InvalidLeafAnnotationUsage class to return an error in line 22, whilst the second expects the compilation of ValidCompositeAnnotation to be successful, i.e. contain no errors3.
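The assertion helpers named in the test might look roughly like the following; this is only a sketch of the idea (iterate over the collected diagnostics, checking kind and line number), and the actual implementations in AbstractAnnotationProcessorTest may well differ.

```java
import java.util.List;

import javax.tools.Diagnostic;
import javax.tools.JavaFileObject;

// Sketch of the assertion helpers: a compilation is "successful" if it produced
// no errors, and "returned" a diagnostic if one of the given kind appears at
// the given line.
public class CompilationAssertions {

    /** Asserts that no diagnostic of kind ERROR was collected. */
    static void assertCompilationSuccessful(
            List<Diagnostic<? extends JavaFileObject>> diagnostics) {
        for (Diagnostic<? extends JavaFileObject> diagnostic : diagnostics) {
            if (diagnostic.getKind() == Diagnostic.Kind.ERROR) {
                throw new AssertionError("unexpected error: " + diagnostic.getMessage(null));
            }
        }
    }

    /** Asserts that a diagnostic of the given kind was reported at the given line. */
    static void assertCompilationReturned(Diagnostic.Kind kind, long line,
            List<Diagnostic<? extends JavaFileObject>> diagnostics) {
        for (Diagnostic<? extends JavaFileObject> diagnostic : diagnostics) {
            if (diagnostic.getKind() == kind && diagnostic.getLineNumber() == line) {
                return;
            }
        }
        throw new AssertionError("expected a " + kind + " at line " + line);
    }
}
```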
The compileTestCase method of the AbstractAnnotationProcessorTest base class, meanwhile, looks like this4:
[java]
protected List<Diagnostic<? extends JavaFileObject>> compileTestCase(
        String... compilationUnitPaths) {
    Collection<File> compilationUnits;
    try {
        compilationUnits = findClasspathFiles(compilationUnitPaths);
    } catch (IOException exception) {
        throw new IllegalArgumentException(…);
    }

    DiagnosticCollector<JavaFileObject> diagnosticCollector =
            new DiagnosticCollector<JavaFileObject>();
    StandardJavaFileManager fileManager =
            COMPILER.getStandardFileManager(diagnosticCollector, null, null);

    CompilationTask task = COMPILER.getTask(null, fileManager, diagnosticCollector,
            Arrays.asList("-proc:only"), null,
            fileManager.getJavaFileObjectsFromFiles(compilationUnits));
    task.setProcessors(getProcessors());
    task.call();

    try {
        fileManager.close();
    } catch (IOException exception) {}

    return diagnosticCollector.getDiagnostics();
}
[/java]
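The findClasspathFiles helper isn't shown above; one plausible implementation, sketched here under the assumption that the paths are resolved relative to the test classpath (the real method may differ), would be:

```java
import java.io.File;
import java.io.IOException;
import java.net.URL;
import java.net.URLDecoder;
import java.util.ArrayList;
import java.util.Collection;

// Sketch: resolve classpath-relative paths (e.g. "path/to/test/samples/Foo.java",
// placed there by the testResources trickery described below) to Files.
public class ClasspathFiles {

    static Collection<File> findClasspathFiles(String... paths) throws IOException {
        Collection<File> files = new ArrayList<File>(paths.length);
        ClassLoader loader = Thread.currentThread().getContextClassLoader();
        for (String path : paths) {
            URL url = loader.getResource(path);
            if (url == null) {
                throw new IOException("Not found on the test classpath: " + path);
            }
            // works for file: URLs, i.e. resources in a directory on the classpath
            files.add(new File(URLDecoder.decode(url.getPath(), "UTF-8")));
        }
        return files;
    }
}
```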
Whilst the annotation processor was very much a nice-to-have, the validations it was carrying out most certainly were not. When I started on the processor, therefore, I was quietly hoping to find a way of reusing the runtime validation code that operated on the compiled classes and annotations.
Clearly, this wasn’t going to be entirely straightforward, because the information available at compile time does differ (in some areas, e.g. generics, substantially) from what’s known at runtime. But in most areas it is similar enough to lead one to think that, for “simple” things like getting all annotations of a class or the modifiers of a method, it might be possible to come up with an implementation suitable for both compile- and runtime.
Alas, it was not to be (and it was perhaps naive to expect as much). The model and runtime reflection APIs are almost completely distinct. A few methods do cross the divide, for instance Element.getAnnotation, but the JavaDoc comments accompanying it certainly make you aware of the associated difficulties.
As it is, you end up with a moderately frustrating amount of duplication (Types.isAssignable(subType, superType) for classForSuperType.isAssignableFrom(classForSubType) etc.). More awkwardly, most of the utility methods that have found their way into the various Class- and ReflectionUtils are not available for the model API. There are the Elements and Types helper classes, but they’re more limited than those available for the runtime reflection API.
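To make the duplication concrete, here is the runtime-reflection half of the subtype check just mentioned; its model-API twin can only run inside a processor, so it appears as a comment.

```java
// The runtime half of the duplicated subtype check. Note that the argument
// order is reversed relative to Types.isAssignable(subType, superType).
public class AssignabilityExample {
    public static void main(String[] args) {
        // runtime reflection: superType.isAssignableFrom(subType)
        boolean assignable = Number.class.isAssignableFrom(Integer.class);
        System.out.println(assignable); // prints "true"

        // model API equivalent, only usable inside a processor:
        // typeUtils.isAssignable(integerTypeMirror, numberTypeMirror)
    }
}
```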
In the end, I wrote an ElementUtils class to address this. It’s still limited in scope to what was required for the validation processor, but hopefully it’s at least a useful foundation for something rather more comprehensive.
- Apologies for the dreadful pun on the name of the rather more well-known cocktail framework.
- Note how comparatively straightforward it is to do this for the runtime reflection API. If I want to test some code that requires the Method representation of a private, static method, I just declare private static void myMethod() somewhere and use MyMethodHolderClass.class.getDeclaredMethod to retrieve it.
- In order to compile the test cases, the source (.java) files need to be on the test classpath. In Maven, this requires a bit of trickery similar to
[xml]
<build>
  <testResources>
    <testResource>
      <directory>${basedir}/src/test/resources</directory>
    </testResource>
    <!-- add a directory of source files which need to be compiled during a test -->
    <testResource>
      <directory>${project.build.testSourceDirectory}/path/to/test/samples</directory>
      <targetPath>path/to/test/samples</targetPath>
    </testResource>
  </testResources>
  …
</build>
[/xml]
which has the unfortunate side effect of confusing the Maven Eclipse plugin into adding an additional source folder. This needs to be removed from the build path in Eclipse.
- At first glance, the fifth, classes argument of the (rather sparsely documented) JavaCompiler.getTask method would seem perfect for running annotation processing without going through a full compilation. Unfortunately, it doesn’t quite do the trick: annotations on the classes named are passed to the processor, but if the named classes are themselves annotations (and thus might need to be validated), they are not accessible via the RoundEnvironment, presumably because they are not being compiled.