It’s OK not to write unit tests – of course it is

A couple of days ago I stumbled upon this post. Now, I won’t discuss the post itself (which I think is well structured and surely worth a read) but just its title, that states: it’s OK not to write unit tests.

I’ve been working in C# lately: unit testing in a static language is hell. The clutter created by all those interfaces you’re forced to create solely for mocking is enough reason to give up on testing… and that’s just the beginning. So yes, if you’re working in C# (and I think it applies to Java too) it is totally OK not to write unit tests: on the contrary, you’ve gotta have a damn good reason to aim for 100% coverage on a C# project. And by damn good I mean the customer is paying for it because, honestly, maintaining those tests is gonna be a huge waste of time, and you’d better make sure you can bill that time to someone.

Things change big time when talking about a dynamic language, like ruby. Testing in ruby is so easy that it’s a waste not to do it. The tools are also much better than their .NET counterparts (think RSpec). You still have problems, especially during refactoring, but personally I didn’t feel like throwing my test suite away each time I had to break something. Is it OK not to write unit tests in ruby? That depends on your situation, but surely the cost of testing is much lower.
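To give an idea of how little ceremony is involved, here is a minimal sketch (the Cart and FakePricer classes are invented for illustration): any object with the right method works as a stand-in, with no interface declarations required.

```ruby
# Hypothetical example: no IPricer interface has to exist before a
# collaborator can be swapped out -- duck typing does the job.
class Cart
  def initialize(pricer)
    @pricer = pricer
  end

  def total(items)
    items.sum { |item| @pricer.price_of(item) }
  end
end

# A hand-rolled test double: anything responding to price_of works.
class FakePricer
  def price_of(_item)
    10
  end
end

Cart.new(FakePricer.new).total([:apple, :pear])  # => 20
```

In C# the same substitution would require extracting an interface from the pricer and referencing it everywhere; here the fake is three lines and no production code changes.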

Because that’s the point, and that’s what made me think about the title in the first place: testing has a cost, and it’s much higher than what the so-called agile mentors often promise… maybe because the “implementation details” – and the deadlines involved – are usually someone else’s to deal with. Telling someone that the cost of testing is always lower than the cost of the problems caused by not testing is lying: it varies greatly according to the situation.

So of course it’s OK not to write unit tests. I wonder how we got to the point where somebody has to argue for that. I feel like agile has acquired this silver bullet status, and we know how much management loves those… let’s stop this before bad things happen – the one who has to maintain the next enterprise behemoth with 2k+ tests that break at every gust of wind might be you.

Unit testing and API design

I have encountered the following situation more than once while working on freightrain: suppose you have a feature in mind, like being able to declare regions (which are other viewmodels, constructed dynamically and plugged into the parent viewmodel’s view accordingly) inside your viewmodel just like this:

class MainViewModel < FreightViewModel

  region :list #:viewmodel defaults to :list
  region :detail, :viewmodel => :customer_detail

end

This has been implemented using a couple of dirty tricks and, of course, it’s all properly tested. The thing is that the tests were written after the implementation, in a completely separate coding session.
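For the curious, a class macro like region can be built with very little machinery. The following is only a guess at the shape of it – the real freightrain implementation is different, and the regions accessor is invented:

```ruby
# Hypothetical sketch of a `region` class macro; not the actual
# freightrain code. Declarations are merely recorded here -- the real
# library would also construct the child viewmodel and plug its view
# into the parent's view.
class FreightViewModel
  def self.regions
    @regions ||= {}
  end

  def self.region(name, options = {})
    regions[name] = options.fetch(:viewmodel, name)
  end
end

class MainViewModel < FreightViewModel
  region :list                                # :viewmodel defaults to :list
  region :detail, :viewmodel => :customer_detail
end

MainViewModel.regions
# => { :list => :list, :detail => :customer_detail }
```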

Mind that I am not a very big fan of test-driven development: more often than not, especially when the customer is neither trained nor willing to acknowledge the extra effort and the improved quality that come with it, the costs of a test-first approach outweigh the benefits it generates. In this case, though, things are different: quality is crucial and I am more than happy to spend time making things better.

Still, I wasn’t really comfortable with writing the tests before the actual code. I think that’s because I had a very clear design in mind: testing first would just have made the code more complicated than it needed to be. On the downside, the tests written for that piece of code look nothing like the kind of tests you usually get when doing things TDD style.

What I think I learned from the experience is:

  • While TDD generally helps your design, there are some cases where it gets you to a second best. If you are extremely sure that your design idea is sound, then you should set testing aside, do some cowboy coding and, after you’re satisfied with the results, look back and test everything. The obvious trap is to ditch testing indefinitely: don’t do that.
  • Application design is very different from API design and you have to act (and test) accordingly. Testing the how doesn’t feel that wrong at this level.
  • Sometimes it’s easier to solve a problem with an if or two than to rearrange things: it’s a bad idea, and you will see it by the time you write the tests. While ifs are tolerable at the application level, they’re a real pain when working at a lower level, and the better the surrounding design, the more they hurt. Don’t ignore the warnings.

Horizontal vs. vertical complexity

If I were to choose a principle that summarizes what’s behind good design, it would absolutely be KISS: as the wise once said, nobody is really up to the challenge of being a programmer. Simplicity often means humbleness, which in turn shows that the author has already learned his lesson.
That said, complexity is unavoidable. Code itself is complexity: no lines of code at all is always the most elegant solution but, zen value aside, it accomplishes little. We have to deal with some amount of complexity no matter what, so I think it is better to know who your enemy is than to bury your head in the sand (even if that’s really agile ;-)).
Let’s take a look at the following snippet:
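Since the snippet itself is missing here, take this as a plausible stand-in consistent with the discussion that follows – class and method names are assumptions:

```ruby
# Reconstructed stand-in for the missing snippet; names are assumed.
class Engine
  def ignite
    @ignited = true
  end

  def ignited?
    !!@ignited
  end
end

class Car
  def initialize(engine)
    @engine = engine
  end

  # At this point the car moves regardless of the engine's state.
  def move
    "vroom"
  end
end
```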

Now the car must be modified: it shouldn’t be able to move if the engine is not ignited. What should we do to implement this change in behavior?
Horizontal complexity

An if in the right place and voilà, the job is done. It’s the kind of change that TDD (and, by proxy, YAGNI) often leads to: the simplest thing that can possibly work, which is also very often the best thing you can do. It’s horizontal because it doesn’t add layers to your design: an atomic change is most likely to involve just one method or, at most, one class. It’s almost harmless when taken in small quantities but, when misused, can turn really, really bad.
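In code, the horizontal fix is one guard clause (Car and Engine are assumed names for the classes discussed above):

```ruby
# Horizontal fix: a single guard clause inside the existing method.
# Car and Engine are assumed names; everything stays in one class.
class Engine
  def ignite
    @ignited = true
  end

  def ignited?
    !!@ignited
  end
end

class Car
  def initialize(engine)
    @engine = engine
  end

  def move
    return nil unless @engine.ignited?  # the if in the right place
    "vroom"
  end
end
```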
I believe that the real problem with ifs (and conditional statements as a whole) is that they’re too easy. Providing such an easy way to fork the world in two at the language level is like putting RPGs for sale at the grocery store – no matter your level of trust in people’s reasoning, somebody’s gonna get hurt… heck, there’s a whole movement out there trying to put them down.
Vertical complexity

Two new classes and constructor logic: that’s the price you pay so you don’t have to put one of those nasty ifs into your pristine method. It also hides some of the code away from sight: looking at the method alone is no longer sufficient to understand its behavior. It’s vertical because the change in your code is likely to be small in each single part but distributed across various layers.
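A sketch of the same change done vertically (all names invented for illustration): the conditional disappears from Car, and the variation moves into two small engine classes chosen at construction time.

```ruby
# Vertical fix: two new classes and constructor logic instead of an if.
# All names here are invented.
class StoppedEngine
  def propel
    nil  # an un-ignited engine moves nothing
  end
end

class RunningEngine
  def propel
    "vroom"
  end
end

class Car
  def initialize(engine)
    @engine = engine
  end

  # No conditional: behavior depends on which engine was injected --
  # which is also why reading this method alone no longer tells the
  # whole story.
  def move
    @engine.propel
  end
end
```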
If you write one if too many you’re shooting yourself in the foot; if you do this the wrong way you’re blowing your whole leg off.
Overengineering is cheap and often mistaken for quality. The damage done, which often presents itself in the form of Yet Another Crappy Framework destined for internal use, is massive if measures are not taken early, so I think it’s a really good idea to stop coding when you don’t know what you’re doing, like when:

  • You just finished reading a certain book which was totally sweet and you can’t wait to see Memento in action. On the invoices module.
  • You had a genius idea and you absolutely must put it in your code so everybody knows how freakin’ smart you are.
  • You start to wonder how cool it would be to have all that nasty business logic in an XML file so the customer can edit it himself.

In the end…
Which one is better? That depends on you and on the situation: if the code you’re writing will not be seen by human eyes after it is deployed, then I think design and quality are not that important. If you know from the start that your code will be maintained (which is pretty much always), it might be a good idea to add some structure (just some) even if the task to accomplish is simple. But if you have no clue, please be a good boy, stop coding that CommonMethods class and ask for guidance :-)

Unit testing with Ruby (part 1)

Not having the compiler on your side makes life difficult. Example:

C#

public int Multiply(int firstFactor, int secondFactor)
{
    return firstFactor * secondFactor;
}

Ruby

def multiply(first_factor, second_factor)
  first_factor * second_factor
end

What these methods are supposed to do is pretty straightforward. What they actually do, maybe not so much. What if you were to test both methods? What (and how many) tests would you write to make sure these methods behave as you expect?

Let’s take a look at the C# method. It takes two parameters (both int, so value types) and returns an integer. Relying on cyclomatic complexity to determine how many tests must be written for a given method is often incorrect, but let’s assume that it’s okay for this case, so one test that verifies the result for a given input might be enough.

What about the Ruby method? It takes two parameters and returns something. And the two parameters might be pretty much anything. How about testing this one? There are no ifs, so we might say that one test is still fine, or you might want to check the parameters for correctness, or maybe make sure that whoever calls your method does it the way it’s supposed to be done. Either option (except the first but seriously, you’re not going for that one) requires extra effort in comparison to the C# approach.
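A sketch of what that extra effort might look like: the happy-path test, plus an assumed guard-clause variant (strict_multiply is invented here) implementing the parameter-checking option mentioned above.

```ruby
# The method under test, as shown earlier.
def multiply(first_factor, second_factor)
  first_factor * second_factor
end

# The single happy-path check the C# version would also get:
multiply(3, 4)     # => 12

# ...but the parameters might be pretty much anything:
multiply("ab", 2)  # => "abab" -- no compiler is there to complain

# One option: check the parameters yourself. Extra code, and at least
# one extra test for the failure path.
def strict_multiply(first_factor, second_factor)
  unless first_factor.is_a?(Numeric) && second_factor.is_a?(Numeric)
    raise ArgumentError, "numeric arguments expected"
  end
  first_factor * second_factor
end
```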

About mocking:

C#

public Controller(IView view, IModel model)
{
      //some stuff happening here
}

And Ruby

def initialize(view, model)
      #some stuff happening here
end

Mocking view and model in C# requires something along the lines of IView view = repo.CreateMock<IView>();. This relies on the IView interface, and you’ve got nothing like that in Ruby. You just mock methods instead. Mocha syntax:

def test_refresh_always_calls_refresh_on_view
  view = mock()
  view.expects(:refresh)
  controller = Controller.new(view, stub())
  controller.refresh
end

No need for VerifyAllExpectations() or the like: the test fails automatically if refresh is not invoked on the view object.

A similar problem arises when using IoC containers: you’ve got no interface to resolve. More about this later.

My 2c about the SOLID debate

As you might know, Jeff Atwood and Joel Spolsky, during one of their Stack Overflow podcasts, argued against the SOLID principles (calling them “idiotic”), going as far as saying “quality does not matter that much”. This triggered a blog reaction from Uncle Bob himself, which then led to another podcast and, well, you can keep up with the story yourself from this point.
Of course I’m no match for these giants: I’ve got neither the right nor the intention to criticize their points of view. Still, I couldn’t really understand: why are they (both sides) so mad at methodology?

Let me explain. It seems that only a small share of the so-called programmers can actually program. But there’s more: even among this elite, the difference between the best and the worst is of orders of magnitude. And worse: you can’t really learn it.

So, what the hell? There’s a huge amount of software out there that needs to be written: who’s gonna do that? Just the elite of the elite? Wait, here’s the solution: Methodology! You don’t need the gift, just blindly and faithfully follow the Instructions and everything’s fine! What a breakthrough!

And I’m serious about that. I’ve seen it in action: it costs – a LOT – in terms of productivity, but you can actually make people who have no clue about programming build quality software. Take a Methodology and enforce it brutally: there’s no “maybe I’m doing this my way” or “let’s just patch it up for the next release”. Everything must be done by the book. It works. And I think it’s not a bad thing – unless, of course, Jeff and Joel are right and quality really doesn’t matter: but I like to think that’s not true, even knowing that I’m probably wrong.