Measure the number of WTFs per package or project

I created a library that measures the number of WTFs per package or project. The library is a by-product of what I was doing a few days ago:

I was dabbling with Java and decided to take a break. I came across a well-known image that depicts a code review session behind closed doors, titled “The only valid measurement of code quality: WTF/minute”. I decided to build an extension to that concept.

This library lets you mark code smells in your source code (classes, methods, fields, etc.) with a ‘WTF’ annotation. The WTF annotation accepts an arbitrary message; if none is provided, the default message ‘Dude.. WTF?!’ is used instead. When the source compiles, the compiler generates a warning using the message from the detected annotation(s) and information about the annotated element. The following is sample output from compiling a class containing WTF annotations:

Warning: : In CLASS [wtf.per.project.model.DummyPojoImpl] : CLASS level => WTF?! Are you for real?! This naming convention is bad!
Warning: : In CLASS [wtf.per.project.model.DummyPojoImpl] : FIELD 'SOME_CONSTANT' => WTF?! What is this non-descriptive name?
Warning: : In CLASS [wtf.per.project.model.DummyPojoImpl] : CONSTRUCTOR 'DummyPojoImpl(java.lang.String)' => WTF?! Dude.. WTF?!

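For reference, a class like the following would produce warnings of that shape. This is only a sketch, not the library’s actual test fixture: the annotation’s package and its message element name are assumptions here.

package wtf.per.project.model;

// Assumed import; the actual package of the WTF annotation may differ.
// The annotation is assumed to expose a String value() with a default of 'Dude.. WTF?!'.
// import wtf.per.project.annotation.WTF;

@WTF("Are you for real?! This naming convention is bad!")
public class DummyPojoImpl {

    @WTF("What is this non-descriptive name?")
    public static final String SOME_CONSTANT = "42";

    @WTF // no message given, so the default 'Dude.. WTF?!' is reported
    public DummyPojoImpl(String name) {
        // ...
    }
}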
The library also provides a custom JUnit test runner. The runner takes a package name, an annotation class and a search filter from the @Grep annotation (used in conjunction with @RunWith). It scans the .class files under the given package, its sub-packages and any JARs for occurrences of the given annotation (for example, WTF.class). If a String regex pattern is provided in @Grep, only classes whose names match the pattern are scanned. Internally, the runner uses a test class to assert whether the code is still infested with WTFs (or with any other annotation class set in @Grep).

The analysis of the .class files within the given package, its sub-packages and any JAR files found is done using reflection. At first I used a third-party library called ‘Reflections’ for this task (which is a very good tool, by the way!), but I ended up dropping it: I did not want third-party dependencies, so I implemented my own metadata analysis to keep the library small and lean. In the near future I will extract the metadata-analysis logic into a separate library. It should be quite flexible, since there are different .class file scanners in place: for example, a scanner for constructors only, for method parameters, for fields only, etc.
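To illustrate the general idea, here is a minimal sketch of locating annotated elements of an already loaded class with plain reflection. It is not the library’s actual scanner, and it assumes the target annotation has runtime retention so it is visible to reflection.

import java.lang.annotation.Annotation;
import java.lang.reflect.Constructor;
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public final class AnnotationScanner {

    // Collects a human-readable entry for every element of the class
    // that carries the given annotation.
    public static List<String> scan(Class<?> clazz, Class<? extends Annotation> annotation) {
        List<String> hits = new ArrayList<String>();
        if (clazz.isAnnotationPresent(annotation)) {
            hits.add("CLASS " + clazz.getName());
        }
        for (Field field : clazz.getDeclaredFields()) {
            if (field.isAnnotationPresent(annotation)) {
                hits.add("FIELD " + field.getName());
            }
        }
        for (Constructor<?> ctor : clazz.getDeclaredConstructors()) {
            if (ctor.isAnnotationPresent(annotation)) {
                hits.add("CONSTRUCTOR " + ctor.getName());
            }
        }
        for (Method method : clazz.getDeclaredMethods()) {
            if (method.isAnnotationPresent(annotation)) {
                hits.add("METHOD " + method.getName());
            }
        }
        return hits;
    }
}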

So, if the runner’s test assertion fails (that is, an annotation such as @WTF is found in the code), the test class generates metrics about how many WTFs there are and where. These metrics are appended to the assertion failure message. The following is an example of the custom JUnit runner:

@RunWith(WTFsPerProject.class)
@Grep(packageName = "wtf.per.project.model",
      classNameFilter = ".*",
      annotationClass = WTF.class)
public final class WTFsPerProjectRunner { }

I have a few POJOs marked with the WTF annotation; the following is the output produced after running the above runner:

Another example of the custom JUnit runner:

@RunWith(WTFsPerProject.class)
//Grep only inner classes
@Grep(packageName = "wtf.per.project.model",
      classNameFilter = ".*[$].*",
      annotationClass = WTF.class)
public final class WTFsPerProjectRunner { }

Disclaimer

I created this library for fun. Nothing more. If someone actually decides to use it – great.

If you want, you can fork it on GitHub.

Please report if you experience any problems :)

Unfamiliarity Causes Rejection

Recently I listened to a talk given by an ex-ThoughtWorker, Simon Harris. One of the things Simon talked about was how we developers (and, generally speaking, human beings) sometimes tend to reject what is unfamiliar to us. In a software development context, that can be an existing/legacy application or a module that we need to extend and that is difficult to understand.

Really, how many times have we looked at someone else’s work (e.g. that of a developer who left the company a long time ago) and thought “Dude, this is so weak … come on”?

Instead of just pointing fingers, maybe we should stop for a moment and try to think and understand: what were the reasons for producing that mediocre piece of code? Look at the software’s current state from a different angle. Sure, sometimes poorly written software is simply that: poorly written software without any particular reason. But at other times, perhaps there were unknown variables in the equation that prevented the developers from producing something of higher quality: technical limitations? Internal politics? Tight deadlines? The environment?

Understanding the historical and current state of an application can only help us come up with better results in the long run. I really enjoyed Simon‘s talk; he clearly draws from his extensive experience.

Is TDD Only for … Junior Developers??

Just before the Easter holidays, I had a discussion with two senior developers from my project about TDD. These guys are experienced software engineers who have been around for some time: they have 11 and 20 years of experience in software development under their belts.

I don’t claim to be an advocate for TDD. Currently I do not practice it (yet), but I do recognize and appreciate its importance. TDD really “forces” you to have a clear understanding of the business requirements to be implemented. You can’t implement what you do not understand, right? Because you have to write the test first, your code becomes more solid and less bug-prone, and you get better test coverage.

Sure, writing the test first is an interesting concept: you start writing a test and discover that you need a class, and possibly a method. You create the class and the method and keep writing the test. Basically, you write a few lines of test, then a few lines of code, a few more lines of test and a few more lines of code, and so on.
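To make that rhythm concrete, here is a tiny, purely illustrative JUnit 4 example. The Account class, its deposit() method and the amounts are hypothetical; they exist only because the test asks for them.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class AccountTest {

    @Test
    public void depositIncreasesBalance() {
        Account account = new Account();   // compiler error drives you to create the class
        account.deposit(100);              // and then to add the method
        assertEquals(100, account.getBalance());
    }
}

// The minimal production code, written only after the test demands it.
class Account {
    private int balance;

    void deposit(int amount) {
        balance += amount;
    }

    int getBalance() {
        return balance;
    }
}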

OK, I think I have diverted a bit; back to the topic :) The discussion took an interesting turn, and I still keep thinking about it. My question to them was: what do you think about TDD? The responses I received totally surprised me.

One of the claims they supported was that TDD does not serve much purpose, and that a developer will end up refactoring the unit tests eventually anyway, as a result of refactoring the concrete classes as development goes on. If that is the case, my colleagues argued, there is no point in writing unit tests first; it is better to finish writing the concrete class before attempting to write any unit tests.

Also, one of the developers claimed that many books and articles written on TDD discuss examples that are quite simple to implement, and that in reality it is quite difficult to use TDD for complex business scenarios.

Another claim was that TDD should be used to teach junior developers the importance of unit testing, and that experienced developers don’t really need to use it; they should follow KISS principles instead.

I respected their opinions, but it seemed fundamentally wrong to me that such experienced developers would claim that TDD is basically overrated. The feeling I got from them was that a lot of developers and software engineers in the IT industry have really got it wrong.

It got me wondering how, after more than ten years in the industry, one can fail to appreciate one of the best software engineering practices around …

Of course, having said all that, I must point out that TDD is not suitable for every case. TDD can be effective only when it is clear what needs to be implemented and there is a deep understanding of the requirements.