A couple of days ago I stumbled upon this post. Now, I won’t discuss the post itself (which I think is well structured and surely worth a read) but just its title, which states: it’s OK not to write unit tests.
I’ve been working in C# lately: unit testing in a static language is hell. The clutter created by all those interfaces you’re forced to create solely for mocking is reason enough to give up testing… and that’s just the beginning. So yes, if you’re working in C# (and I think it applies to Java too) it is totally OK not to write unit tests; on the contrary, you’ve gotta have a damn good reason to aim for 100% coverage on a C# project. And by damn good I mean the customer is paying for it because, honestly, maintaining those tests is gonna be a huge waste of time, and you’d better make sure you can bill that time to someone.
Things change big time when talking about a dynamic language like ruby. Testing in ruby is so easy that it’s a waste not to do it. The tools are also much better than their .NET counterparts (think RSpec). You still have problems, especially during refactoring, but personally I didn’t feel like throwing my test suite away every time I had to break something. Is it OK not to write unit tests in ruby? That depends on your situation, but the cost of testing is surely much lower.
Because that’s the point, and that’s what made me think about the title in the first place: testing has a cost, and it’s much higher than what the so-called agile mentors often promise… maybe because the “implementation details” – and the deadlines involved – are usually left to be fulfilled by someone else. Telling someone that the cost of testing is always lower than the cost of the problems caused by not testing is lying: it varies greatly according to the situation.
So of course it’s OK not to write unit tests. I wonder how we got to the point where somebody has to argue that. I feel like agile has acquired this silver bullet status, and we know how much management loves those… let’s stop this before bad things happen – the one who has to maintain the next enterprise behemoth with 2k+ tests that break at every gust of wind might be you.
Here follows a dummy controller scenario:
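The original snippet didn’t survive in this copy, so here is a minimal sketch in ruby of the kind of controller the rest of the post discusses. Every name here (OrdersController, @view, @service, refresh_clicked) is hypothetical:

```ruby
# Hypothetical reconstruction: a controller that does too much in one callback.
class OrdersController
  def initialize(view, service)
    @view = view        # both collaborators injected before the callback fires
    @service = service
  end

  # Fired when the user hits "refresh": fetches the data, filters it,
  # formats it and drives the view -- all in one place.
  def refresh_clicked
    orders = @service.fetch_orders
    pending = orders.select { |o| o[:status] == :pending }
    if pending.empty?
      @view.show_message("Nothing to do")
    else
      @view.render_rows(pending.map { |o| "##{o[:id]} - #{o[:total]}" })
    end
  end
end
```

The callback pulls data, filters it, formats it and updates the view all by itself, which is exactly what makes it awkward to test.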
While testing this method is certainly possible, given that the collaborators (like @service) are injected at some point before the callback is fired, the tests are not going to be lean:
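The original test code is missing from this copy too, so here is a sketch of the kind of mock-heavy test being described, written with a hand-rolled mock instead of RSpec so it runs on its own; all names are hypothetical, and the controller is repeated so the sketch stands alone:

```ruby
# Minimal (hypothetical) controller under test.
class OrdersController
  def initialize(view, service)
    @view, @service = view, service
  end

  def refresh_clicked
    pending = @service.fetch_orders.select { |o| o[:status] == :pending }
    pending.empty? ? @view.show_message("Nothing to do") : @view.render_rows(pending)
  end
end

# --- the test: almost everything below is environment setup ---
view = Object.new
calls = []
view.define_singleton_method(:render_rows)  { |rows| calls << [:render_rows, rows] }
view.define_singleton_method(:show_message) { |msg|  calls << [:show_message, msg] }

service = Object.new
service.define_singleton_method(:fetch_orders) do
  [{ id: 1, status: :pending, total: 10 }, { id: 2, status: :done, total: 5 }]
end

controller = OrdersController.new(view, service)

controller.refresh_clicked                       # the actual call we care about
raise unless calls.first&.first == :render_rows  # the expectation we care about
```

Note how only the last two lines express the behaviour being tested; everything else exists just so the method can execute at all.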
Mock objects are a necessary evil: when there’s no output to check and no clean way to access the modified state, you can at least check that the correct calls are made on your mocks. Sure, it looks better than no testing at all, but the price to pay is pretty high:
- Verifying expectations means testing how and not what – and the what is what you really care about.
- The tests become much less valuable since they are almost useless as documentation: it takes much less effort to go straight to the code than to understand what’s going on by reading an integration test.
- The majority of the test code will be about setting up the environment. In the example, the only two lines we care about are the one where the expectation is set and the last one, the actual method call on the controller. All the remaining code is just there to allow the method to execute.
- Since you’re testing how, mocking basically means duplicating the logic under test. To add insult to injury, it also makes your tests extremely fragile, because even a single change in your logic will show up as lots of broken tests whose setups no longer work.
When mocking becomes a source of too much hassle, it’s often useful to stop testing and reconsider the code under test: there’s a good chance something’s wrong with the underlying design – most likely too much work is being done by a single actor, usually the controller (or presenter, viewmodel, you name it). For instance, assuming that the view mechanics are a given and cannot be modified, the example code would be a little better if refactored like this:
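The refactored code didn’t survive in this copy either; here is a hedged sketch in ruby of what such a refactoring could look like (all names hypothetical): the filtering and formatting move into a small dedicated object, leaving the controller as plain wiring.

```ruby
# Hypothetical sketch: the query/formatting work moves out of the controller
# into a small object that can be unit tested without any view involved.
class PendingOrders
  def initialize(service)
    @service = service
  end

  # Pure data in, pure data out: no mocks needed to test this.
  def rows
    @service.fetch_orders
            .select { |o| o[:status] == :pending }
            .map    { |o| "##{o[:id]} - #{o[:total]}" }
  end
end

# The controller shrinks to wiring: one line, no ifs, nothing worth mocking.
class OrdersController
  def initialize(view, service)
    @view = view
    @pending = PendingOrders.new(service)
  end

  def refresh_clicked
    @view.render_rows(@pending.rows)
  end
end
```

All the interesting logic now lives in PendingOrders, which tests cleanly against plain data, while the controller method is trivial enough not to test at all.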
Controller methods should be a maximum of three lines long, with no ifs. No kidding! I also drop testing completely on controllers if I can get methods that simple: the tests add no value and are a pain to keep green, especially in the early stages of development.
In the end I think it’s important to remember that mocking (and unit testing, agile, and whatever else) is just a tool, and the moment it gets in your way you should carefully reconsider what the benefits are: methodology is always out there looking for a chance to strike, don’t let your guard down 🙂
Learning is a very subjective matter and there are surely many different ways one can improve her skills, ranging from attending lectures all the way to opening your favourite editor and hacking randomly until you get it. Anyway, no matter how you learn the craft, the goal of practice is to add new tools to your belt so you can deal better with Real World™ problems.
Much easier said than done, though. There is often a wide gap between what you learn “by yourself” and what needs to be done in a production environment: that new technology/pattern/feature you just learnt about might look so cool that you really have to use it everywhere, only to find out some time later that your approach was too naïve and needs a rewrite, or that the technology was just not suited to the problem and you need to switch back to the old one, or that the guy in charge of support really has no clue what the code you wrote is supposed to do (you know, the fact that you are willing to give up some of your spare time to learn new things outside work doesn’t mean everyone is).
One way to find out whether you’re doing something that makes sense outside your devbox is to go and take a look at what others have done in a similar situation, and – guess what – open source projects are a great source of tested and reliable code. But the size of these projects can sometimes be overwhelming: at first you just don’t know where to start looking and, without a sound approach, you can spend a lot of time before finding out how a feature is implemented from start to end.
I’ve been delving into open source code for a while now. I didn’t find much guidance around so I had to learn the hard way. I hope the following tips will be of some help to you:
start from tests – tests are often a great starting point: if they’re well written you can understand what a method is supposed to do without even looking at the implementation. You can use tests as if they were a map (or an index) of the project.
start small – pick a minor feature at first, or even just a method if you can’t find a feature small enough. It might not be extremely interesting to know that datamapper checks for properties to be present on the model before querying, but at least now you’ve got a handle, and you can explore deeper from there.
when in doubt, debug – more often than not the code will not be clear enough, or not commented enough, to be understandable. In those cases debugging a test makes everything easier: both VS and NetBeans have great debugging capabilities, and if you don’t already know how to take advantage of the debugger, I strongly suggest you learn. If you are debugging ruby with NetBeans you should install the ruby-debug-ide gem for greatly improved debugging speed.
tinker with the code – a great way to know whether you’ve learnt your way around the project is to make some small changes and see if the resulting change in behaviour is the one you were expecting. In this particular case I suggest implementing your changes strictly with TDD, and making sure all the project tests are green before and after your modifications, to lower the chances of creating uncaught side effects.
bug fixes come later – I believe that rushing through the learning stage and trying to create a patch to submit without first having a deep understanding of the whole project is a waste not just of your time but also of the time of the guy who has to check your broken patch, so get a grip on your ego and take it easy. I think bug reports are much more useful to submit, and if you’re learning the right way you’ll probably have plenty of them to send well before you’re ready to fix anything yourself.
As you might know, Jeff Atwood and Joel Spolsky, during one of their stackoverflow podcasts, argued against the SOLID principles (calling them “idiotic”), going as far as saying “quality does not matter that much”. This triggered a blog reaction from Uncle Bob himself, which then led to another podcast and, well, you can keep up with the story yourself from this point.
Of course I’m no match for these giants: I’ve got neither the right nor the intention to criticize their points of view. Still, I couldn’t really understand: why are they (both sides) so mad at methodology?
Let me explain. It seems that just a small share of the so-called programmers can actually program. But there’s more: even among this elite, the differences between the best and the worst are of orders of magnitude. And worse: you can’t really learn it.
So, what the hell? There’s a huge amount of software that needs to be written out there – who’s gonna do that? Just the elite of the elite? Wait, here’s the solution: Methodology! You don’t need the gift, just blindly and faithfully follow the Instructions and everything’s fine! What a breakthrough!
And I’m serious about that. I’ve seen it in action: it costs – a LOT – in terms of productivity, but you can actually have people who have no clue about programming build quality software. Take a Methodology and enforce it brutally: there’s no “maybe I’ll do this my way” or “let’s just patch it up for the next release”. Everything must be done by the book. It works. And I think that’s not a bad thing – unless, of course, Jeff and Joel are right and quality really doesn’t matter: but I like to think that’s not true, even knowing that I’m probably wrong.