Programming challenges – War!

The book Programming Challenges is one I see recommended quite often – so, I decided to give it a try. The first problem I encountered involves replicating a children’s card game called War, which I’d honestly never heard of before. The problem is stated as follows:

A standard 52-card deck is dealt to two players (1 and 2) such that each player has 26 cards.
Players do not look at their cards, but keep them in a packet face down.
The object of the game is to win all the cards. Both players play by turning their top cards face up and putting them on the table.
Whoever turned the higher card takes both cards and adds them (face down) to the bottom of their packet. Cards rank as usual from high to low: A, K, Q, J, T, 9, 8, 7, 6, 5, 4, 3, 2.
Suits are ignored.
Both players then turn up their next card and repeat.
The game continues until one player wins by taking all the cards.
When the face up cards are of equal rank there is a war.
These cards stay on the table as both players play the next card of their pile face down and then another card face up.
Whoever has the higher of the new face up cards wins the war, and adds all six cards to the bottom of his or her packet.
If the new face up cards are equal as well, the war continues: each player puts another card face down and one face up.
The war goes on like this as long as the face up cards continue to be equal.
As soon as they are different, the player with the higher face up card wins all the cards on the table.
If someone runs out of cards in the middle of a war, the other player automatically wins.
The cards are added to the back of the winner’s hand in the exact order they were dealt, specifically 1’s first card, 2’s first card, 1’s second card, etc.

Nothing complex so far. So, I scroll down to see the solution and I’m welcomed by this snippet:

#define NCARDS 52 /* number of cards */
#define NSUITS 4 /* number of suits */

char values[] = "23456789TJQKA";
char suits[] = "cdhs";

int rank_card(char value, char suit)
{
    int i,j; /* counters */
    for (i=0; i<(NCARDS/NSUITS); i++)
        if (values[i]==value)
            for (j=0; j<NSUITS; j++)
                if (suits[j]==suit)
                    return( i*NSUITS + j );
    printf("Warning: bad input value=%d, suit=%d\n",value,suit);
}

Ugly! 5 (FIVE) levels of nesting! I understand that it’s C we’re talking about here but that seems a bit more complex than it should be. I stopped right there and rolled my own solution in Ruby (yeah, I’ve got nothing better to do tonight). This is the result:

Let’s define some constants first:

A = 14

SPADES = :s
DIAMONDS = :d
HEARTS = :h
CLUBS = :c

SUITS = [SPADES, DIAMONDS, HEARTS, CLUBS]

We’ll have to deal with cards – so let’s define a class for them:

class Card

  attr_reader :rank
  
  def initialize(rank, suit)
    @rank = rank
    @suit = suit #unused
  end
    
end

Modelling the card like this (with a rank and a suit) makes little sense, but the problem specification suggests doing so (as does the C snippet above), so be it – cards could just as well have been integers.

Cards will have to be divided into two decks, from which we’ll have to draw and to which we’ll have to add conquered cards. Let’s make a class for them as well. Its methods are quite self-explanatory.

class Deck

  def initialize(cards)
    @cards = cards
  end
  
  def draw
    return @cards.pop
  end
  
  def collect(stack)
    while(!stack.empty?) do
      @cards.insert(0, stack.shift)
    end
  end
  
  def empty?
    return !@cards.any?
  end

end

Now let’s generate the two starting decks (this is the equivalent of the C snippet above – it’s not straightforward, but I find it way more readable written this way):

def create_decks
  all_cards = (2..A).to_a.product(SUITS).map { |x| Card.new(*x) }.shuffle
  half = all_cards.count / 2
  return [all_cards[0...half], all_cards[half..-1]].map { |x| Deck.new(x) }
end

And now the game logic. If one of the two players can’t draw, he or she loses. If the two players draw cards with different ranks, the winner is the one who drew the card with the higher rank. If the ranks are equal, we get into a war. Let’s wrap this piece of logic like so:

def determine_outcome(card1, card2)
  return @deck2 unless card1 #if player 1 can't draw player 2 wins 
  return @deck1 unless card2 #and vice versa
  return war if(card1.rank == card2.rank) 
  return card1.rank > card2.rank ? @deck1 : @deck2
end

Where do card1 and card2 come from? We drew them from the respective player’s deck. The cards go on the table (here represented by the @stack), and the winner takes all the cards after resolution.

def combat
  @stack << @deck1.draw
  @stack << @deck2.draw	
  winner = determine_outcome(@stack[-2], @stack[-1])	
  winner.collect(@stack)	
  return winner
end

Note that, since a war might have happened, there might be more than just the two cards added to the @stack here.
Let’s now see how #war is implemented:

def war
  @stack << @deck1.draw #facedown
  @stack << @deck1.draw
  @stack << @deck2.draw #facedown
  @stack << @deck2.draw
  return determine_outcome(@stack[-3], @stack[-1])
end

We add four cards in total to the stack: first, the first player draws a card, face down. This card is not used at all in the game, except for being “pillaged” by the winning player. Then the first player draws another card, this one face up – this is the one that will be used for the comparison. The second player does the same. Then we pass the topmost card and the third card from the top of the @stack (the table) to the comparison function – those are the cards played face up.
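Going by the code above, this is how the table stack looks right after a single round of war (the last card pushed sits at the end of the array):

# @stack[-6], @stack[-5] -> the two tied face-up cards from the initial draw
# @stack[-4] -> player 1's face-down card
# @stack[-3] -> player 1's face-up card   (compared)
# @stack[-2] -> player 2's face-down card
# @stack[-1] -> player 2's face-up card   (compared)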

The game loop goes like this:

def play
  turns = 0
  while [@deck1, @deck2].all? { |x| !x.empty? }
    winner = combat	  
    turns += 1
  end
  print "TURNS : #{turns}\n"
  print "WINNER : player#{winner == @deck1 ? 1 : 2}\n"
end

So, we count the turns passed and we keep drawing until one of the decks is empty. The winner is the last player to win a “fight”.
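One thing the snippets above don’t show is where @deck1, @deck2 and @stack come from. A minimal wrapper tying everything together could look like this (the Game class and its name are my own sketch – check the gist for the real thing):

class Game

  def initialize
    @deck1, @deck2 = create_decks
    @stack = []
  end

  # create_decks, determine_outcome, combat, war and play
  # are defined here, exactly as shown above

end

Game.new.play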

You can find the complete listing as a gist here. Note that the program will sometimes go into an infinite loop – that’s not a bug: it’s actually likely that the game itself, as stated, never ends, with the two players drawing forever without ever producing a winner!

On technical debt (now with chickens!)

Technical debt, meant as the damage done to a codebase by hastily implementing a feature (and wrecking the codebase’s design in the process), seems to be a foreign concept to some managers/customers. Maybe they know and just don’t want to listen, I’m not sure. Anyway, I thought of a little story that I might use the next time I have to explain the pricing of some new feature:

A farmer has three chickens. Each chicken makes one egg per day.
The farmer is in business with a local grocer. The grocer buys two eggs per day from the farmer, so he can sell them at his shop.
All is fine with the world until the grocer shows up at the farmer’s place:


Grocer: Hey there. I'd like to have some chicken meat today.
Farmer: Meat? That wasn't in the agreement.
Grocer: I know. But I really need meat. It's for a Business-to-Stomach Enterprise Poultry-as-a-Service Platform I'm planning.
Farmer: What?
Grocer: Important stuff. Can you give me some?
Farmer: Well, that's not so easy - I'd have to hatch eggs and wait for the chicks to grow... I think it'll take a month or so.
Grocer: A month? That's way too much... I was thinking more along the lines of right now.
Farmer: Nature has its times, you'll have to wait a little bit.
Grocer: Hm, why don't you just kill one of the chickens? That way, I'll have my meat and you can still produce two eggs per day. You don't need more, right?
Farmer: Well, I don't think it's a good idea. That would put me in a tight spot in the case something happens to one of the other chickens.
Grocer: Come on, nothing's going to happen... and I really, really need that meat! Can you do that?
Farmer: Okay, I guess I can...

And so, the farmer picks his cleaver up and sends one of the birds to the Creator. The grocer gets his meat and returns to his shop.
One week later, the grocer pays the farmer another visit:


Grocer: Hey there!
Farmer: Hey, what's up?
Grocer: Listen - the meat was great. Actually, it was so great and it sold so well that now we absolutely need at least another chicken. By tomorrow.
Farmer: That's not gonna happen. If I give you another chicken then I won't be able to deliver the two daily eggs that are in our agreement.
Grocer: Oh please come on! Customers want meat, and I've already promised it for tomorrow...
Farmer: No, I can't do that. I'll not be able to honor the contract if I do, do you understand? Eggs will not be guaranteed if I do that.
Grocer: But I really, really, really need the meat! By tomorrow! Otherwise customers will get mad, the earth will shatter and the world as we know it will end! Give me one of the chickens, now!
Farmer: Well, if you want it that bad, take it! Again, eggs are not guaranteed from now on, understood?
Grocer: Yea sure, got it. But you're a smart guy, I guess you'll figure out how to make it work anyway. Bye!

And the grocer leaves for his shop again.
The day after:


Grocer: Hello. What's wrong with the eggs?
Farmer: What do you mean?
Grocer: The eggs. Actually, the egg. There's just one. What happened?
Farmer: What happened? I had three chickens. You took two. Now there's just one. One chicken, one egg. I thought I was being clear about that.
Grocer: But that wasn't in our agreement! The contract states it right here - you owe me two eggs per day! Now what will I say to my customers?
Farmer: Well, I was very clear about it. Nothing I can do about that.
Grocer: Okay, okay, sheesh, forget it. Talking about something else... it would be nice to have some meat. Can I have some?

So, don’t be the farmer – refuse to irreparably damage your codebase for the needs of the moment and, if you’re forced to do so, don’t take responsibility for the wreckage. And don’t be the grocer – don’t ask for the impossible, and own the consequences of your decisions.

P.S.: no chickens were harmed during the writing of this blog post.

Avoid attachment deletion on destroy with Paperclip

In case you don’t know, Paperclip is a nice gem that integrates with ActiveRecord and takes care of saving binary data – like an image, a video or a PDF – as if it were an attribute of the record: basically, you call save on the entity you declared the attachment on, and the binary data is persisted along with your database record. You can save the binary data directly on the server, send it somewhere else via HTTP or FTP, save it on Amazon’s S3… and a lot more options are available via external plugins.

So, you call save and the attachment is persisted; you call destroy and the attachment gets deleted. What if you don’t want to delete the attachment when destroy is called on the owner? As it turns out, there’s a little option (:preserve_files, false by default) that tells Paperclip to leave the attachments alone when the parent entity gets destroyed. If you have an attachment declared like this:

  has_attached_file :attachment,
                    :styles => { :thumb => "100x100!" }

All you need to do to avoid this attachment’s deletion on destroy is to add the preserve_files option:

  has_attached_file :attachment,
                    :styles => { :thumb => "100x100!" },
                    :preserve_files => true
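
To see the effect, here’s a minimal sketch (the Document model and the file name are placeholders I made up): after the record is destroyed, the uploaded file is still where Paperclip put it.

doc = Document.create(:attachment => File.open("report.pdf"))
path = doc.attachment.path   # where Paperclip stored the file

doc.destroy                  # the record is gone...
File.exist?(path)            # => true - the file has been preserved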

Hope this helps – took me a while to find out.

Smeed’s Law for programming – an explanation

I came across an interesting blog post a while ago – it was about Smeed’s law and how it relates to software.

Basically, the blog’s authors said that the number of bugs in a system grows sublinearly with the size of the system itself. There’s no proof of this, but it’s the same conclusion I’ve reached with the experience I’ve accumulated so far. This is my explanation for the phenomenon:

Say you have a small software system – think of a prototype, or a shell script you made to take care of some trivial task. It is very likely that most, if not all, of its parts are not set in stone: if you ever need to add a new feature or change the behavior of an existing one, you will likely have to modify a high percentage of the existing code.

Now think about a bigger system – it might be the prototype from the previous example, now grown into a complete product. You need to add a feature to that system. If you’ve done things the right way, it’s very unlikely that you will have to modify big parts of the system: chances are, your software’s internals will have grown some sort of API, internal DSL, call it what you want, so all you need to do is either extend it or integrate with it. Also, with time, the bugs in your system’s core will likely have been found and eradicated.

Supposing that if you don’t touch it, it won’t break, the part of your system prone to bugs will be the “surface” (2πx), which grows more slowly than the total area (πx²).
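
Just to put rough numbers on the circle analogy (plain arithmetic, nothing more), the bug-prone share of the system shrinks as it grows:

# surface / area = 2πx / πx² = 2 / x
[1, 10, 100].each do |x|
  puts "x = #{x}: surface/area = #{2.0 / x}"
end
# x = 1: surface/area = 2.0
# x = 10: surface/area = 0.2
# x = 100: surface/area = 0.02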

Why you should stay away from Appcelerator’s Titanium

Let’s face it – programming on mobile devices is hard: everything’s much more complicated to accomplish than it is on the web or the desktop and, since the platform is fairly new, tools and frameworks are scarce and poorly refined. Plus, if you need your application to run on more than one target OS (iOS, Android, Blackberry) you’re pretty much forced to write a different application for each one of them.

So, enter Appcelerator’s Titanium – what Titanium aims to accomplish is to solve most of these problems, by hiding all the gory implementation details behind a simple and well-defined javascript API. On top of that, it will (more or less) compile your javascript to a different language (objective-c for iOS, Java for Android and soon Blackberry) so you can deploy your application to multiple targets with (supposedly) no changes to your code. And it’s free! Support and other goodies come for a price but the library itself is free for commercial use.

What’s the catch then? On the surface, it’s all good – you pop up Titanium Studio (a modified version of Aptana), write some trivial javascript and you’ve got a working, native app. Great, it took me a day or two to do this in native objective-c – with Titanium, it’s a couple of hours! So you dive head first and build more and more complex apps – animations, database connection, networking, geolocation, it’s got it all!

The more complex your applications become, the more often you encounter weird glitches – why is the navbar’s title not centered? Why does the navigation controller sometimes not pop the last view? Why does the focus event get called so often, for no reason, only on the device? But you know, Titanium’s still so much better than going back to native development, so you deal with all its quirks and carry on. Until bad stuff starts to happen.

Your app starts to crash, seemingly at random. The reason: memory starvation. You see, although Titanium is supposed to take care of memory management for you, it doesn’t always work correctly. Most of the time, especially if your application is simple and doesn’t use many images and animations, the memory leaks Titanium generates over time will be small enough that you won’t even realise your application is leaking at all. Problem is, when your application reaches a certain level of complexity, those leaks become evident – in the form of sudden, unavoidable crashes.

There are some hacks, proposed by Titanium’s community, that somewhat help to free some memory, but they don’t always work and, even when they do, they don’t release all the memory they’re supposed to. Titanium doesn’t let you change an object’s reference count directly (probably because that would break some internals of theirs), so you have no way to dispose of objects yourself.

This leaves you stuck with a frail, buggy application that is ready to blow up at every click… and there’s nothing you can do about it except for a complete rewrite in objective-c or java.

Are you planning to develop mobile apps with Appcelerator’s Titanium? Think twice. It’s tempting, but you really, really don’t want to find yourself stuck the way I did. Unless it’s for small, simple applications, or they somehow manage to solve all their memory issues, I strongly suggest you stay away from Titanium.

What’s so special about Apple products?

Some days ago I read on Reddit about John Resig’s setup:

Redditor : What is your development setup? (Software)

John Resig : I use OS X on an iMac with an extra monitor. For coding I use VIM in a terminal and have a screen session open to IRC in another terminal window. I then have a plethora of browsers open (Firefox, Chrome, Safari, Opera – a VMWare with IEs) for testing. That’s pretty much all that I use and need to use (I have a very similar set up on my Macbook Pro, as well).

So, John Resig actually develops on a Mac. Also, down the same thread, another redditor points out that 95% of the audience at the web conferences he goes to owns a MacBook or a MacBook Pro.

Well, for the last few months I’ve been developing on a Mac too, and here are some of the things I’ve had to deal with:

  • There is NO del key : the Macbook/Macbook Pro has no del key. That leads to an array of Hm, now delete / ack, there’s no delete / rightarrowrightarrowrightarrow / backspacebackspacebackspace, which is incredibly annoying because it completely kills the flow. Even worse, you unlearn to use the del key so when you switch back to a normal machine the tendency is to keep using the right arrow/backspace combo or (god forbid) the mouse.
  • Apple key instead of CTRL : on Macs, the function normally served by the CTRL key is instead handled by the Apple (command?) key, which is located pretty much where the Win key would be on a Windows keyboard. This is annoying because it forces you to unlearn a convention you’re used to (for no apparent good reason), but the worst part is that the behaviour is NOT consistent from application to application, meaning that in some applications (example : nano) the CTRL key behaves as usual and the Apple key does nothing (or does something completely unrelated to what you want to do).
  • You can’t cut a file in Finder : it seems that there is no way to cut a file and paste it somewhere else in Finder. Even if there were, there’s no keyboard shortcut for it (ctrl-x and apple-x do nothing) and there’s no mention of a cut option in the context menu. Also, since there’s no DEL key, if you want to delete a file you have to use the context menu option to do that.
  • Git is very slow : git commit takes an eternity to complete, for no apparent reason. A commit that takes a second or two on my old laptop running Kubuntu, or on a Windows machine, takes 30-60 seconds on a Mac. git push is also noticeably slower, though not as much as git commit.
    Since it takes an eternity to commit my work, I’ve started to commit less frequently. That led to a big loss of work on a couple of occasions.
  • Limited customization: I really don’t like the dock (that panel with all the application icons) and I wanted to recreate something similar to what I use on Kubuntu – a bar on the left with all the program launchers, plus a taskbar. Turns out I can’t have a taskbar at all, and the only thing I can do with the dock is choose which side of the screen it sits on.

On top of all this, my impression is that Apple machines are very slow compared to PCs with similar specs: a colleague of mine needed to clone a remote git repository, so I asked him to open a terminal – it took a good 30 seconds for the terminal to start. On my Kubuntu laptop (which has specs similar to my colleague’s MacBook) I can open a dozen in five seconds or so. Also, applications sometimes freeze for a few seconds, again for no apparent reason, and you have to wait until they come back up.

All in all, my experience with macs so far has been far from positive: I guess that if you’re a “normal” user you wouldn’t care one bit about all the things I pointed out but, as a programmer, I find all of this extremely annoying.

But anyway, since I have to work on iPhone/iPad applications, I thought I could well pay 100-200 € more for a Mac machine and then dual boot Linux – that way, I could work on my mobile projects and still have Linux on the same machine for everything else. I’ve wanted to change my laptop for a while now and this is what I had in mind, so I went to the online Apple Store and configured a MacBook Pro with similar specs. And I was once again surprised at the outcome:

The ACER AS594, my original choice – € 749.99

A MacBook Pro with similar hardware – € 2.149,00

That’s almost 3 times the price. John Resig thinks it’s worth it. 95% of web developers think it’s worth it. Do you? If so, would you please tell me why? Because, in all honesty, I don’t understand what’s so special about Apple products.

How to: get Titanium WebView’s content height

I’ve been working with Appcelerator Titanium and, compared to the hell that is native iOS application development, I’ve been quite satisfied so far… except for the occasional bug/glitch that forces me to write some ugly workarounds.

A notable example is when you need to get the content height of a WebView. Let’s say you have two elements inside a single ScrollView: one is a View of any kind with a given height (for example, an ImageView) and the other is a WebView. You want to scroll both the ImageView and the WebView at the same time, so that when you slide a finger vertically on the device, the ImageView scrolls up and you see more of the WebView‘s content. Problem is, there is no contentHeight property on the WebView control, and you need that height to set the WebView‘s height property and keep its own scrollbar from being active: if you don’t set a height on the WebView it will display a vertical scrollbar for its own content, preventing the scroll on the external ScrollView.

Fact is, there is actually a way to get the WebView‘s content height: the WebView has an evalJS() method which allows you to execute arbitrary JavaScript inside the WebView (as you would inside a browser). The trick is to ask the WebView for its own document.height and take that as the content’s height, as follows:

 
actualHeight = webView.evalJS("document.height;");
 

The only catch is that you will get the real content height only after the WebView has finished rendering. If you are rendering from an external URL you can use the load event, which gets fired when the page has finished rendering; otherwise you’ll have to set a timer, like this:

 
myTimer = setInterval(function () {
    actualHeight = webView.evalJS("document.height;");
    if (actualHeight > 1) {
        webView.height = actualHeight;
        scrollView.contentHeight = imageView.height + webView.height;
    }
}, 1000);

and then dispose of the timer with clearInterval(myTimer). Hope this helps!

ruby-hackernews – an API to access HN’s site

I was surprised to find out that there’s no HTTP API for Hacker News. So, I wrote a simple ruby API (using Mechanize) to access the site: you can find more details about it here. It works with UseTheSource too; you just have to set the base url beforehand with ConfigurationService.base_url = "http://news.usethesource.com".

I hope somebody will find this useful!

It’s OK not to write unit tests – of course it is

A couple of days ago I stumbled upon this post. Now, I won’t discuss the post itself (which I think is well structured and surely worth a read) but just its title, that states: it’s OK not to write unit tests.

I’ve been working in C# lately: unit testing in a static language is hell. The clutter created by all those interfaces you’re forced to create solely for mocking is enough reason to give up on testing… and that’s just the beginning. So yes, if you’re working in C# (and I think it applies to Java too) it is totally OK not to write unit tests: on the contrary, you’ve gotta have a damn good reason to aim for 100% coverage on a C# project. And by damn good I mean the customer is paying for it because, honestly, maintaining those tests is gonna be a huge waste of time, and you’d better make sure you can bill that time to someone.

Things change big time when talking about a dynamic language like ruby. Testing in ruby is so easy that it’s a waste not to do it. The tools are also much better than their .NET counterparts (think RSpec). You still have problems, especially during refactoring, but personally I didn’t feel like throwing my test suite away every time I had to break something. Is it OK not to write unit tests in ruby? That depends on your situation, but the cost of testing is certainly much lower.
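
To give an idea of how little ceremony is involved, here’s a minimal, self-contained RSpec sketch (the class under test is made up for the example) – no interfaces created just for mocking, no extra tooling, just a file you can run with plain ruby:

require "rspec/autorun"

class Stack
  def initialize
    @items = []
  end

  def push(item)
    @items << item
  end

  def pop
    @items.pop
  end
end

describe Stack do
  it "pops the last pushed item" do
    stack = Stack.new
    stack.push(1)
    stack.push(2)
    expect(stack.pop).to eq(2)
  end
end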

Because that’s the point, and that’s what made me think about the title in the first place: testing has a cost, and it’s much higher than what the so-called agile mentors often promise… maybe because the “implementation details” – and the deadlines involved – are usually left for someone else to deal with. Telling someone that the cost of testing is always lower than the cost of the problems caused by not testing is lying: it varies greatly with the situation.

So of course it’s OK not to write unit tests. I wonder how we got to the point where somebody has to argue that. I feel like agile has acquired this silver-bullet status, and we know how much management loves those… let’s stop this before bad things happen – the one who has to maintain the next enterprise behemoth with 2k+ tests that break at every gust of wind might be you.

Windsor custom lifestyles

Windsor provides you with a pretty limited array of options when it comes to lifestyles:

  • Singleton: each time you resolve, you get the same instance. This is, for some reason, the default lifestyle
  • Transient: each time you resolve, you get a new instance. This is what I would have wanted as default :-)
  • PerThread: it keeps one instance per thread, so you’ll always get the same instance as long as you call resolve from the same thread.
  • Pooled: the documentation is not very clear about this but, from the source code, it seems that a pool with a given capacity is created and, on each resolve, one of the objects from the pool is returned. I didn’t check how the object to return is chosen.
  • There’s also an undocumented PerWebRequest lifestyle, and I assume it does just what it says on the tin.

Let’s say that, as in my case, you need a container behavior a little different from any of these: you want the container to return a different instance based on a parameter used during the type’s activation. Example:

 
IDictionary argumentsOne = new Hashtable();
argumentsOne.Add("entity", apple);
_container.Resolve<IViewModel>(argumentsOne); // a new instance 
IDictionary argumentsTwo = new Hashtable();
argumentsTwo.Add("entity", pear);
_container.Resolve<IViewModel>(argumentsTwo); // another new instance
IDictionary argumentsThree = new Hashtable();
argumentsThree.Add("entity", apple);
_container.Resolve<IViewModel>(argumentsThree); // the same instance as when called with argumentsOne

So, to get this kind of behavior from your container you need to implement your own CustomLifestyle. The easy way to do it is to subclass AbstractLifestyleManager, which provides you with some methods you can override. The interesting ones are:

  • Resolve which gets called when you try to resolve the type
  • Release which gets called when you try to release the type
  • Dispose which works as usual
  • Init, a callback invoked when your CustomLifestyle gets constructed

Obviously, Resolve should return the instance of the type, be it from your cache or from calling base.Resolve, which should get you a new one. Release should delete the right element from the cache so the next time Resolve is called a fresh instance is returned.

Here follows a sample implementation for the above scenario.

 
public class PerEntityLifestyleManager : AbstractLifestyleManager
{
    // maps the "entity" parameter used at resolution time to the resolved instance
    private IDictionary<object, object> _instances = new Dictionary<object, object>();

    public override object Resolve(Castle.MicroKernel.CreationContext context)
    {
        object param = context.AdditionalParameters["entity"];
        if (!_instances.ContainsKey(param))
        {
            _instances.Add(param, base.Resolve(context));
        }
        return _instances[param];
    }

    public override bool Release(object instance)
    {
        // the cache is keyed by parameter, so look the instance up among the values
        foreach (var pair in _instances)
        {
            if (ReferenceEquals(pair.Value, instance))
            {
                _instances.Remove(pair.Key);
                return true;
            }
        }
        return false;
    }
}

You should then apply your lifestyle, as an attribute, to the types you want resolved with it, like this:

 
[CustomLifestyle(typeof(PerEntityLifestyleManager))]
public class MyClass : IMyInterface
{
   
}