Programming challenges – War!

The book Programming Challenges is one I see recommended quite often – so, I decided to give it a try. The first problem I encountered involves replicating a children’s card game called War, which I’d honestly never heard of before. The problem is stated as follows:

A standard 52-card deck is dealt to two players (1 and 2) such that each player has 26 cards.
Players do not look at their cards, but keep them in a packet face down.
The object of the game is to win all the cards. Both players play by turning their top cards face up and putting them on the table.
Whoever turned the higher card takes both cards and adds them (face down) to the bottom of their packet. Cards rank as usual from high to low: A, K, Q, J, T, 9, 8, 7, 6, 5, 4, 3, 2.
Suits are ignored.
Both players then turn up their next card and repeat.
The game continues until one player wins by taking all the cards.
When the face up cards are of equal rank there is a war.
These cards stay on the table as both players play the next card of their pile face down and then another card face up.
Whoever has the higher of the new face up cards wins the war, and adds all six cards to the bottom of his or her packet.
If the new face up cards are equal as well, the war continues: each player puts another card face down and one face up.
The war goes on like this as long as the face up cards continue to be equal.
As soon as they are different, the player with the higher face up card wins all the cards on the table.
If someone runs out of cards in the middle of a war, the other player automatically wins.
The cards are added to the back of the winner’s hand in the exact order they were dealt, specifically 1’s first card, 2’s first card, 1’s second card, etc.

Nothing complex so far. So, I scroll down to see the solution and I’m greeted by this snippet:

#define NCARDS 52 /* number of cards */
#define NSUITS 4  /* number of suits */

char values[] = "23456789TJQKA";
char suits[] = "cdhs";

int rank_card(char value, char suit)
{
    int i,j; /* counters */

    for (i=0; i<(NCARDS/NSUITS); i++)
        if (values[i]==value)
            for (j=0; j<NSUITS; j++)
                if (suits[j]==suit)
                    return( i*NSUITS + j );

    printf("Warning: bad input value=%d, suit=%d\n",value,suit);
}

Ugly! 5 (FIVE) levels of nesting! I understand that it’s C we’re talking about here but that seems a bit more complex than it should be. I stopped right there and rolled my own solution in Ruby (yeah, I’ve got nothing better to do tonight). This is the result:

Let’s define some constants first:

A = 14
K = 13
Q = 12
J = 11
T = 10

CLUBS    = :c
DIAMONDS = :d
HEARTS   = :h
SPADES   = :s
SUITS    = [CLUBS, DIAMONDS, HEARTS, SPADES]


We’ll have to deal with cards – so let’s define a class for them

class Card

  attr_reader :rank

  def initialize(rank, suit)
    @rank = rank
    @suit = suit # unused
  end
end

Modelling the card like this (with a rank and a suit) makes little sense, but the problem specification suggests doing so (as does the C snippet above), so be it – cards could just have been integers.

Cards will have to be divided into two decks, from which we’ll have to draw and to which we’ll have to add conquered cards. Let’s make a class for them as well. Its methods are quite self explanatory.

class Deck

  def initialize(cards)
    @cards = cards
  end

  def draw
    return @cards.pop
  end

  def collect(stack)
    while(!stack.empty?) do
      @cards.insert(0, stack.shift)
    end
  end

  def empty?
    return !@cards.any?
  end
end


Now let’s generate the two starting decks (this is the equivalent of the C snippet above. It’s not straightforward, but I find it way more readable written this way.)

def create_decks
  all_cards = (2..A).to_a.product(SUITS).map { |x| Card.new(*x) }.shuffle
  half = all_cards.count / 2
  return [all_cards[0..(half - 1)], all_cards[half..-1]].map { |x| Deck.new(x) }
end

And now the game logic. If one of the two players can’t draw, he or she loses. If the two players draw cards with different ranks, the winning player is the one who drew the card with the highest rank. If the ranks are equal, then we get into a war. Let’s wrap this piece of logic like so:

def determine_outcome(card1, card2)
  return @deck2 unless card1 # if player 1 can't draw, player 2 wins
  return @deck1 unless card2 # and vice versa
  return war if card1.rank == card2.rank
  return card1.rank > card2.rank ? @deck1 : @deck2
end

Where do card1 and card2 come from? We drew them from the respective player’s deck. The cards go on the table (here represented by the @stack), and the winner takes all the cards after resolution.

def combat
  @stack << @deck1.draw
  @stack << @deck2.draw
  winner = determine_outcome(@stack[-2], @stack[-1])
  winner.collect(@stack) # the winner takes everything on the table
  return winner
end

Note that, since a war might have happened, there might be more than just the two cards added here in the @stack.
Let’s now see how #war is implemented:

def war
  @stack << @deck1.draw # face down
  @stack << @deck1.draw # face up
  @stack << @deck2.draw # face down
  @stack << @deck2.draw # face up
  return determine_outcome(@stack[-3], @stack[-1])
end

We add four cards in total to the stack: first, the first player draws a card, face down. This card is not used at all in the game, except for being “pillaged” by the winning player. Then the first player draws another card, this one face up – this is the one that will be used for the comparison. The second player does the same. Then we pass the topmost card and the third card from the top of the @stack (the table) to the comparison function – those are the cards played face up.

The game loop goes like this:

def play
  turns = 0
  while [@deck1, @deck2].all? { |x| !x.empty? }
    winner = combat
    turns += 1
  end
  print "TURNS : #{turns}\n"
  print "WINNER : player#{winner == @deck1 ? 1 : 2}\n"
end

So, we count the turns passed and we keep drawing until one of the decks is empty. The winner is the last player winning a “fight”.

You can find the complete listing as a gist here. Note that the program will sometimes go into an infinite loop – that’s not a bug: it’s actually likely that the game as stated never terminates, with the two players drawing forever and no winner ever emerging!
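For reference, the pieces above can be wired together into one self-contained, runnable sketch. The Game class, its two-array constructor and the Struct-based Card are my wiring assumptions here; the actual gist differs:

```ruby
# Condensed, self-contained wiring of the methods shown above.
# Game, its constructor and the Struct-based Card are assumptions.
Card = Struct.new(:rank, :suit)

class Deck
  def initialize(cards)
    @cards = cards
  end

  def draw
    @cards.pop
  end

  def collect(stack)
    @cards.insert(0, stack.shift) until stack.empty?
  end

  def empty?
    !@cards.any?
  end
end

class Game
  attr_reader :deck1, :deck2

  def initialize(cards1, cards2)
    @deck1 = Deck.new(cards1)
    @deck2 = Deck.new(cards2)
    @stack = []
  end

  def determine_outcome(card1, card2)
    return @deck2 unless card1 # player 1 out of cards: player 2 wins
    return @deck1 unless card2 # and vice versa
    return war if card1.rank == card2.rank
    card1.rank > card2.rank ? @deck1 : @deck2
  end

  def war
    @stack << @deck1.draw # face down
    @stack << @deck1.draw # face up
    @stack << @deck2.draw # face down
    @stack << @deck2.draw # face up
    determine_outcome(@stack[-3], @stack[-1])
  end

  def combat
    @stack << @deck1.draw
    @stack << @deck2.draw
    winner = determine_outcome(@stack[-2], @stack[-1])
    winner.collect(@stack)
    winner
  end

  # Returns the winning deck and the number of turns played.
  def play
    turns = 0
    winner = nil
    while [@deck1, @deck2].all? { |d| !d.empty? }
      winner = combat
      turns += 1
    end
    [winner, turns]
  end
end
```

A rigged two-card game (an ace against a two) finishes in one turn with player 1 winning, which makes the flow easy to step through.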

Avoid attachment deletion on destroy with Paperclip

In case you don’t know, Paperclip is a nice gem that integrates with ActiveRecord and takes care of saving binary data – like an image, a video or a PDF – as if it were an attribute of the record: basically, you call save on the entity that you declared the attachment on, and the binary data will be persisted along with your database record. You can save the binary data directly on the server, send it somewhere else via HTTP or FTP, save it on Amazon’s S3… many more options are available via external plugins.

So, you call save and the attachment is persisted; you call destroy and the attachment gets deleted. What if you don’t want to delete the attachment when destroy is called on the owner? As it turns out, there’s a little option (:preserve_files), false by default, that tells Paperclip to leave the attachments alone when the parent entity gets destroyed. If you have an attachment declared like this:

  has_attached_file :attachment,
                    :styles => { :thumb => "100x100!" }

All you need to do to avoid this attachment’s deletion on destroy is to add the :preserve_files option:

  has_attached_file :attachment,
                    :styles => { :thumb => "100x100!" },
                    :preserve_files => true

Hope this helps – took me a while to find out.

Unit testing and API design

I have encountered the following situation more than once while working on freightrain. Suppose you have a feature in mind, like being able to declare regions (which are other viewmodels, constructed dynamically and plugged into the parent viewmodel’s view accordingly) inside your viewmodel, just like this:

class MainViewModel < FreightViewModel

  region :list # :viewmodel defaults to :list
  region :detail, :viewmodel => :customer_detail

end


This has been implemented using a couple of dirty tricks and, of course, it’s all properly tested. The thing is that the tests have been written after the implementation, in a completely separated coding session.
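I won’t reproduce the tricks here, but the general shape of such a class macro is simple enough to sketch. The module and method names below are my own invention, not freightrain’s:

```ruby
# Minimal sketch of a `region`-style class macro; names are invented
# and the real freightrain implementation differs.
module HasRegions
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    # Per-class registry of declared regions.
    def regions
      @regions ||= {}
    end

    # region :detail, :viewmodel => :customer_detail
    def region(name, options = {})
      regions[name] = options.fetch(:viewmodel, name)
    end
  end
end

class MainViewModel
  include HasRegions

  region :list # :viewmodel defaults to :list
  region :detail, :viewmodel => :customer_detail
end
```

The framework can then walk MainViewModel.regions at construction time and build each child viewmodel dynamically.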

Mind that I am not a big fan of test-driven development: more often than not, especially when the customer is neither trained nor willing to acknowledge the extra effort and the improved quality that come with it, the costs of a test-first approach outweigh the benefits it generates. In this case, though, things are different: quality is crucial, and I am more than happy to spend time making things better.

Still, I wasn’t really comfortable writing the tests before the actual code. I think that’s because I had a very clear design in mind: testing first would just have made the code more complicated than it needed to be. On the downside, the tests written for that piece of code look nothing like the kind of tests you usually get when doing things TDD style.

What I think I learned from the experience is:

  • While TDD generally helps your design, there are some cases where it gets you to a second best. If you are extremely sure that your design idea is sound, then you should abandon testing, do some cowboy coding and, after you’re satisfied with the results, look back and test everything. The obvious trap is to ditch testing indefinitely: don’t do that.
  • Application design is very different from API design, and you have to act (and test) accordingly. Testing the how doesn’t feel that wrong at this level.
  • Sometimes it’s easier to solve a problem with an if or two than to rearrange things: it’s a bad idea, and you will see it by the time you test. While ifs are tolerable at the application level, they’re a real pain when working at a lower level, and the better the surrounding design, the more they hurt. Don’t ignore the warnings.

Custom table name in datamapper

I didn’t know how to do this and it took me a while to find out, so I’m posting it here. Let’s take a look at the following class:

class Author
  include DataMapper::Resource

  property :id      , Serial, :key => true
  property :surname , String, :length => 100
  property :initials, String
end

And here is the same class, changed to be persisted to a table called my_table_name (assuming, of course, that the repository in use is :default):

class Author
  include DataMapper::Resource

  storage_names[:default] = "my_table_name"

  property :id      , Serial, :key => true
  property :surname , String, :length => 100
  property :initials, String
end

The storage_names class method is defined in DataMapper::Model; you can find the documentation here. There is also a storage_name= method, but it doesn’t work… just use storage_names[repository] =
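If you’re curious about the mechanics, storage_names is essentially a per-class hash exposed through a class method. Here is a dependency-free sketch of the pattern – all names are assumed, and this is not DataMapper’s actual code:

```ruby
# Dependency-free sketch of the storage_names idea; not DataMapper's code.
class Model
  # Lazily-built hash mapping repository name to table name,
  # defaulting to a naive pluralization of the class name.
  def self.storage_names
    @storage_names ||= Hash.new { |h, repo| h[repo] = name.downcase + "s" }
  end
end

class Author < Model
  storage_names[:default] = "my_table_name"
end

class Book < Model
end
```

Asking Author for its :default storage name yields the override ("my_table_name"), while Book falls back to the naive convention ("books").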

Unit testing with ruby (part 3)

Consider a dummy controller scenario: a controller method whose job is to fetch data from an injected @service and push it to an injected @view. While testing such a method is certainly possible, given that both @view and @service are injected at some point before the callback is fired, the tests are not going to be lean.

Mock objects are a necessary evil: when there’s no output to check and no clean way to access the modified state, you can at least check that the correct calls are made on your mocks. Sure, it looks better than no testing at all, but the price to pay is pretty high:

  • Verifying expectations means testing the how and not the what – and the what is what you really care about.
  • The tests become much less valuable, since they are almost useless as documentation: it takes much less time to go straight into the code than to understand what’s going on by reading such a test.
  • The majority of the test code will be about setting up the environment. In the example, the only two lines we care about are the one where the expectation is set and the last one, the actual method call on the controller. All the remaining code is just there to allow the method to execute.
  • Because you’re testing the how, mocking basically means duplicating your under-test logic. To add insult to injury, it also makes your tests extremely frail: even a single change in your logic will break lots of tests, simply because their setups no longer work.

When mocking becomes a source of too much hassle, it’s often useful to stop testing and reconsider the code under test: there’s a good chance something’s wrong with the underlying design – most likely too much work is being done by a single actor, usually the controller (or presenter, or viewmodel, you name it). For instance, assuming that the view mechanics are a given and cannot be modified, the example code would be a little better refactored so that the controller delegates the real work elsewhere.
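The refactoring I have in mind looks roughly like this – a hypothetical example, every name invented, in which the controller only delegates:

```ruby
# Hypothetical illustration: the controller just hands data from the
# service to the view, so its methods stay trivially short.
class ReportController
  def initialize(view, report_service)
    @view = view
    @service = report_service
  end

  def refresh
    @view.show(@service.latest_reports)
  end
end
```

Now the interesting logic lives (and gets tested) in the service object, and the controller method is pure delegation – barely worth a test at all.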

Controller methods should be three lines long at most, with no ifs. No kidding! I also drop testing completely on controllers if I can get methods that simple: such tests add no value and are a pain to keep green, especially in the first stages of development.
In the end I think it’s important to remember that mocking (and unit testing, and agile, and whatever else) is just a tool, and the moment it gets in your way you should carefully reconsider what the benefits are: methodology is always out there looking for a chance to strike, don’t let your guard down :-)

Unit testing with Ruby (part 2)

In Ruby, you can mock class methods directly – no interfaces, no wrapper objects needed.

Hum. Static’s bad, don’t do static, mkay? Well, not necessarily. The fact that you can test class methods like this makes them much more usable than they are in C#, where using static all over the place sends your design to the dumpster pretty quickly. Still, the use of class methods usually covers for underlying design flaws, so it’s technical debt and you should evaluate what is convenient to do case by case.
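mocha does the heavy lifting for you, but the underlying trick is plain Ruby: a class method can be swapped out and restored at runtime. A sketch of the mechanism (Clock is an invented example class):

```ruby
# Swapping a class method at runtime, the way a mocking library would.
# Clock is an invented example class.
class Clock
  def self.now
    Time.now
  end
end

# Keep a handle on the original implementation.
original = Clock.method(:now)

# Replace the class method for the duration of a "test"...
Clock.define_singleton_method(:now) { Time.at(0) }
frozen = Clock.now

# ...then restore the original implementation.
Clock.define_singleton_method(:now, original)
```

After the swap, Clock.now returns the canned Time.at(0); after the restore, it goes back to the real clock.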

The mocking here is powered by mocha, a mocking framework for Ruby. It has a JMock-like syntax and provides all the usual tools.

mock() creates a mock object. You can set expectations on a mock object with object.expects(:method_name), and stubs with object.stubs(:method_name).

Expectations are verified at the end of the test: if you’ve set an expectation for :method_name and that method isn’t called on the mock object before the end of the test, that test will fail (no need for VerifyAllExpectations or such). Stubs, instead, do not fail the test if not called.
So, for example, given two tests for a method that calls both object.first and object.second – one test that merely stubs :second and one that expects it – removing the object.second call from the method will make only the expecting test fail.

As seen before, you can set expectations and stubs not just on mocks but on any object, even class objects. Abusing this feature – say, creating a regular object only to call object.expects(:method) on it – is probably not wise, but as shown in the first snippet it allows testing of otherwise untestable situations.

You can force constraints on expectations using with(param_value) and you can return values with returns(value). For example, the following code:

mock_object = mock()
mock_object.expects(:get_capital).with('sweden').returns('stockholm')

sets an expectation on mock_object for the method get_capital; the expectation is satisfied only if get_capital is called with the string 'sweden' and, called that way, it returns the string 'stockholm'. Of course you don’t need both clauses to be present together: mock_object.expects(:get_capital).with('sweden') and mock_object.expects(:get_capital).returns('stockholm') work just as expected. If you don’t set a return value for a given expectation/stub, it will return nil.

stub_everything() returns a mock object that stubs every possible method call. The return value will always be nil.

More on mocha can be found on its documentation page.

Is C better than Ruby?

Yesterday I stumbled onto this through Hacker News and well, maybe the guy hasn’t really grasped the whole client-server paradigm thing, but here he really made a point:

We’re engineers, and we make machines work. You use the best tool for the job to do that. You don’t choose some pretty language [Ruby] that “brings back the joy of coding” or equivalent hippy shit.

And more:

C is the dominant language right now and probably for the distant future. It’s the lingua franca of computing. Nearly every server application worth a damn is written in C.

Well, the guy here maybe needs to chill out a bit, but I think he’s right: C is here to stay. Planning to use anything else for a large project where performance is crucial is insane (Java and C#, you heard me :-)). C is a really powerful and valuable tool, and every developer should have at least some knowledge of it: I started out with C64 BASIC, then Assembly for a long, long time, then lots of C, after that Java, VB, C# and then Ruby. I know people like _why are promoting Ruby as a learning language, but honestly I can’t really imagine someone with no knowledge of C writing good Ruby.

But probably in the 70′s somebody made similar statements, just replacing C with FORTRAN. Right now, C is the language. What about in 10 years? I’m really, really surprised every time I read such statements: they make me feel like history never happened. No wonder Berlusconi is gonna win the elections with 70% of the votes next week.

And even if C is the best all-purpose tool we have now, what’s the point in using it when the situation would allow a less performant but much more pleasant tool instead? Is there some kind of pride to be gained in being miserable at your job that I’m not seeing? Ruby performs like crap, but it’s a pleasure to work with. And 99% of the time, nobody cares about performance.

All these things have been said countless times by people far more authoritative than I will ever be, but it seems there’s still a need to state them clearly…

Unit testing with Ruby (part 1)

Not having the compiler on your side makes life difficult. Example:


public int Multiply(int firstFactor, int secondFactor)
{
    return firstFactor * secondFactor;
}


def multiply(first_factor, second_factor)
  return first_factor * second_factor
end

What these methods are supposed to do is pretty straightforward. What they actually do, maybe not so. What if you were to test both methods? What tests, and how many, would you write to make sure these methods behave as you expect?

Let’s take a look at the C# method. It takes two parameters (both int, so value types) and returns an integer. Relying on cyclomatic complexity to determine how many tests must be written for a given method is often incorrect, but let’s assume it’s okay in this case, so one test that verifies the result for a given input might be enough.

What about the Ruby method? It takes two parameters and returns something. And the two parameters might be pretty much anything. How about testing this one? There are no ifs, so we might say that one test is still fine; or you might want to check the parameters for correctness; or maybe make sure that whoever calls your method does it the way it’s supposed to be done. Every option (except the first, but seriously, you’re not going for that one) requires extra effort compared to the C# approach.
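To make the “pretty much anything” point concrete: the same Ruby method happily accepts whatever implements *, where the C# compiler would have rejected every one of these calls except the first at compile time:

```ruby
def multiply(first_factor, second_factor)
  return first_factor * second_factor
end

multiply(3, 4)      #=> 12
multiply("ab", 3)   #=> "ababab"
multiply([1, 2], 2) #=> [1, 2, 1, 2]
```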

About mocking:


public Controller(IView view, IModel model)
{
    // some stuff happening here
}

And Ruby

def initialize(view, model)
  # some stuff happening here
end

Mocking view and model in C# requires something along the lines of IView view = repo.CreateMock&lt;IView&gt;(). This relies on the IView interface, and you’ve got nothing like that in Ruby: you just mock methods instead. Mocha syntax:

def test_refresh_always_callRefreshOnView
  view = mock()
  view.expects(:refresh)
  controller = Controller.new(view, stub())
  controller.refresh
end

No need for VerifyAllExpectations() or such: this fails automatically if refresh is not invoked on the view object.

A similar problem arises when using IoC containers: you’ve got no interface to resolve. More about this later.

Interface designing with Ruby and GTK+

As I said some posts ago, there is no support for GUI building with Ruby inside NetBeans or any other Ruby IDE (with the exception of Komodo). That means you’ll have to rely on external tools, namely Qt Designer or the Glade interface designer (for GTK+): they are plentiful in features, really easy to use, and documentation is widely available, so it will be no problem for you to create good GUIs with them in no time. The only downside of using one of these tools instead of building your GUI with your favourite IDE is that you’ll have to integrate the externally built interface with the code you’re writing. Let’s see how, using Glade and GTK+.

After creating a new Glade project some windows will pop up. Among them will be the palette window and the properties window: the first one allows you to create new widgets, so click on the upper-left button to create a window widget, then add a button to the newly created window by selecting the button widget and left-clicking on the window body.

Palette window

Now it’s time to look at the properties window. Selecting the button you created will make the properties window show all its properties. Select the Signals tab, then click on the ellipsis on the lower left of the window: that will show a dialog containing all the button’s available signals. Double-click on clicked, then click Add in the signals window.

Signals window

Now save the project and close the Glade designer. You will see that Glade created two files in the folder you chose: the one we care about is the .glade file. Now open a terminal at the folder location and type:

ruby-glade-create-template > yourrubyfile.rb

Where yourrubyfile is a filename of your choice. This will create a new yourrubyfile.rb that looks like this:

#!/usr/bin/env ruby
# This file is generated by ruby-glade-create-template 1.1.4.
require 'libglade2'

class YourrubyfileGlade
  include GetText

  attr :glade

  def initialize(path_or_data, root = nil, domain = nil, localedir = nil, flag = GladeXML::FILE)
    bindtextdomain(domain, localedir, nil, "UTF-8")
    @glade = GladeXML.new(path_or_data, root, domain, localedir, flag) { |handler| method(handler) }
  end

  def on_button1_clicked(widget)
    puts "on_button1_clicked() is not implemented yet."
  end
end

# Main program
if __FILE__ == $0
  # Set values as your own application.
  PROG_PATH = "yourrubyfile.glade"
  PROG_NAME = "YOUR_APPLICATION_NAME"
  YourrubyfileGlade.new(PROG_PATH, nil, PROG_NAME)
  Gtk.main
end

Running the file will display the window you created and clicking on the button will execute whatever logic you’ve put inside the on_button1_clicked method.

Getting started with DataMapper

The most famous and widespread Ruby ORM is undoubtedly ActiveRecord, due to the fact that it comes bundled with Ruby on Rails. But, as it often happens, that doesn’t mean it’s the best choice: DataMapper has a number of additional features (lazy loading, strategic eager loading – no need to check for possible n + 1 selects problems), implements the identity map pattern and it’s WAY faster than its counterpart.
The first thing you need to do is install the related gem, so open up a command prompt, log in as root and type:

gem install data_mapper

After that, you will be able to use the DataMapper module inside your Ruby application with just:

require 'rubygems'
require 'dm-core'

If you have a MySQL instance available, you can connect to it using:

DataMapper.setup(:default, 'mysql://user:password@hostname/dbname')

For those of you acquainted with (N)Hibernate, this is maybe the big point: there’s no need to write mapping files (or attributes, or whatever). All you need to leverage DataMapper is to include DataMapper::Resource in your class.

class Customer
  include DataMapper::Resource

  property :id, Integer, :serial => true
  property :name, String
  property :surname, String

  has n, :bills
  belongs_to :customergroup
end

Convention over configuration: unless you explicitly declare otherwise, tables in the database take the plural form of your business entity’s name (Customer -> customers) and foreign key columns are in the format parententityname_id: in this case it will be customergroup_id.
DataMapper implements the active record pattern, so each entity is responsible for its own persistence:

my_customer = Customer.new
my_customer.name = "John"
my_customer.surname = "Smith"
my_customer.save

Data retrieval is done via class methods, like this:

my_customers = Customer.all
my_customers = Customer.get(4) #gets the customer with id = 4
my_customers = Customer.all(:name => "John", :surname => "Smith") # SELECT * FROM customers WHERE name = "John" AND surname = "Smith"

More on DataMapper’s site. A more detailed quickstart is available here.