Design space

Yesterday I read this article, from Derek Sivers’ blog. I encourage you to read it as well, but here’s the summary:

  • We should move logic more into the database, because “Classes, models, and methods (OOP) are an unnecessary complication.”
  • Data structure doesn’t change often, but code (logic) does. So let’s move the logic closer to the data structure
  • “PL/pgSQL is not the most beautiful language, but the simplicity of having everything in the database is worth it.”
  • We can do that by using triggers, stored procedures and the likes. We then call them directly from the application layer (there follows an example of a REST api in sinatra).
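To make that last point concrete, here is a minimal sketch of what the "call the database directly" style looks like from Ruby. Note that this is my illustration, not code from the article: FakeConnection is a stand-in for a real connection object (a real app would use something like the pg gem's PG::Connection#exec_params), and save_person is an imaginary stored procedure.

```ruby
# Hypothetical sketch: the application layer is a thin shim that forwards
# straight to a stored procedure. FakeConnection records calls instead of
# talking to a real database; save_person is an imaginary procedure.
class FakeConnection
  attr_reader :calls

  def initialize
    @calls = []
  end

  def exec_params(sql, params)
    @calls << [sql, params]   # record the call instead of hitting a DB
    [{ "id" => "1" }]         # canned result row
  end
end

# The whole "endpoint": no models, no classes - just pass the data through.
def create_person(db, name, email)
  db.exec_params("SELECT save_person($1, $2)", [name, email]).first
end

db  = FakeConnection.new
row = create_person(db, "Ada", "ada@example.com")
```

All the validation and business logic would live inside save_person itself; the Ruby layer only moves bytes.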

At first I thought it was a long-winded joke, but the article ended before the expected “haha jk”. I actually had to scroll down quite a bit to find the first skeptical commenter.

Now, let’s clarify one thing: his approach makes sense – under very specific conditions. Those conditions, loosely, are that the application is mostly data-centric (data in, data out, CRUD and a little validation), and that most of the work relates to how the data is presented.

Fact is, we have had the perfect solution for this scenario for years: it’s called Microsoft Access.

So, why aren’t all applications out there written in MS Access, or one of its clones?

Enter: design space

So, what is the problem with having all the logic in the database?

Anyone who has actually worked on a big application, with hundreds of triggers and stored procedures, knows perfectly well what the problem is. It is slightly more difficult to articulate it in clear words – I myself could not put a finger on a definition for a long time.

When you decide to store all logic in the database, you have decided to fight in a very tight space. Every time something unexpected pops up, you have to work it out inside the cramped code room you’ve created for yourself.

The advantage is that everything is within reach – add, delete or change a trigger here and a stored procedure there, and you can alter a great deal with very little code.

The disadvantage is that you’re effectively coding yourself into a corner. Moving is impossible except through very complicated contortions. Every time you do, you risk breaking something you didn’t want to touch at all. Side effects galore.

And the more you do this, the more space you take away from yourself – the room becomes more and more cramped with each trigger and each stored procedure you add. It effectively turns into the same OOP-bound complexity Sivers mentions – without the room needed to work the complexity out.

The reason OOP (or really, anything that tries to handle this kind of complexity in a different way) exists is to give us programmers space. Yes, making data flow through different layers means more work. Yes, you have to change lots of things every time something deep in the database changes.

It’s a tradeoff – what you get back from it is all the joints, the clean points of connection, the interfaces you can program against. If you do things right, you can make yourself all the room you need to solve a specific problem. Yes, maintaining those joints is work. It doesn’t mean that it’s automatically better to get rid of them.

So, if you know for a fact you do not need the design space, you can save yourself some time and some effort by storing your logic in the database or, even better, using some kind of MS Access clone – since there is not much logic for you to write anyway, it’s better to have something pre-made do the heavy lifting.

But at the same time, if you don’t need the design space, it is probably the case that your project does not need ad-hoc software and you are much better off buying something existing.

Which strategy to choose should be a conscious decision, made for each organization and for each project inside it. If you’re a senior programmer, that is the kind of decision you should be able to make. That the community seems to have absolutely no clue about seemingly simple topics like this one is, honestly, a bit scary.


What do League of Legends and working in IT have in common?

So, I now live in Norway and my sister now lives in Belgium. That makes it harder to keep in contact. One way to mitigate that is, of course, to find a common activity – and that’s how I started playing League of Legends. The game is played 5v5, on a fixed map. For someone who is used to seeing patterns everywhere, the similarities with life and work in IT are evident:

Skill is paramount

So you’re a successful professional. You’re smart, you’re experienced, your certifications are plenty. Still Mateusz from Gdansk, 14, who supposedly had repeated intercourse with your mother last night and is somehow well informed about your father’s sexual preferences, is better than you. By a lot. He (literally) runs circles around you while you’re there, hugging your tower, having to choose between letting him outfarm you to oblivion or feeding like there’s no tomorrow.

You are a successful professional, you’re smart, you’re- doesn’t matter: you’re getting pwnd, and that’s the end of it.

And that’s because he’s better than you. Yes, a 14-year-old is better than you. Why is irrelevant – yes, he probably plays LoL all day – it does not matter. He is still better than you. Acknowledge that, try to learn from the little bugger, and do not overextend. He’s going to wreck you, exactly like he said.

Respect skill. Learn to recognize and acknowledge skill properly – skilled people might not be the people you would want them to be, or people you like. Don’t let irrelevant factors influence your judgement – or you’re going to pay dearly for that mistake. If you judge people on your team by any other metric, you’re going to fail. Skilled people make or break a project – forget that and you’re going to have a bad time.

Hire for skill, manage towards maximizing potential. Everything else is a second thought.

With skill taken for granted, teamwork is king

A very convenient mechanic of LoL PvP (if you’re an occasional player) is that the game analyzes your performance and, based on that, the matching engine does its best to put you up against people at your skill level. That makes encounters with smurfs like the one mentioned above a somewhat rare occurrence, and offers a much more pleasant gaming experience for n00bz like me.

The first interesting thing you notice once you’re matched against people at your skill level is how much of a difference teamwork makes: teams in which people call MIAs, ping and ward consistently, and coordinate towards dragon, Baron and major objectives are much more successful than teams that do not. Try to stop the enemy team from taking inner mid 5v2, with your other three scattered across the map with no aim and absolutely no intention of heeding your desperate help mid plz, and you’ll know what I’m talking about.

Dying in a 2v5 fight with 3 assists is good team play. Ganging up 3v1 on an enemy and then spamming the chat with ks – no you ks is not. The former are people you want on your team. The latter probably have a brilliant career in management ahead of them.

Truman said:

It’s amazing what you can accomplish when you do not care who gets the credit

That’s the definition of teamwork – the magic that happens when nobody cares who takes the credit. No, it’s not a measure of how much middle management likes you (though that would certainly help your career more than being a real team player). And no, it’s not the thing that shields you from your individual responsibilities. It’s a much less rewarding experience than the people in charge would like you to think – especially when it comes to your personal interest. So, if you are seriously up for being a true team player, make sure your organization really values and rewards team players.

Again, this is all assuming the skill is actually there. Otherwise you’re all wasting time and money. No matter how coordinated you are, a diamond player smurfing on the enemy team will make mincemeat of all of you – single-handedly. So, better to try and get some of those on your team first. After you’ve managed that, you can worry about how to make the team work together better. The order is important here.

If you can, carry

The convenient mechanic I mentioned earlier is not flawless. You will most likely be matched with people in the same league as you but, even inside the same league, skill varies wildly. As a consequence, there’s a chance you might find yourself being the best player on your team.

As the best player on the team you have an implicit responsibility beyond winning your lane: you have to make sure the advantage you build by outplaying your lane opponent is somehow shared with the other, weaker members of your team. You might be the best player on SR, you might be winning mid 5-0, but if in the meantime your teammates lose bot and top, feed like crazy and generally throw the game, you’re not going to be able to face the extremely fed enemy carries 3v1 or even 2v1 late game.

So, to avoid that situation, change lane when your opponent is dead and try to be as helpful as possible – that’s what wins games. Losing some cs to keep outer bot/top up is worth it.

(Again, that’s assuming you’re actually on the same order of magnitude as the other players when it comes to skill. If you’re smurfing, winning the game single-handedly is faster than trying to help the helpless. Skill rules, always.)

Letting your coworkers struggle for days on a problem that would take you an hour or so is ultimately not good for you – yes, even if that’s not your job.

If you’re the carry: roam a lot, help where it’s needed, and be generally available – even if it costs you some time of your own. It’s worth it.

If you can’t, let others carry you

Same situation, but opposite role – you’re the weak link. You call Ryze mid, you find a Cassiopeia on the other side and, on top of the bad matchup, she’s better than you… gg? Well, not if you keep your ego in check. Hug your tower, farm what you can from there, and for Christ’s sake do not feed. You don’t have to win the game all by yourself: there are four other players on your team.

You’re in a bad spot, sure. You don’t have to throw the game because of it. Acknowledge that you’re outplayed, play safe, and entrust your teammates with the game. If you try to punch above your weight, you’re going to hurt the team (and your collective chances at winning) much more than if you play passively.

If you find yourself struggling with a design and can’t get out of it no matter how hard you try, call for help. If your “brilliant ideas” routinely end up as unmaintainable bundles of spaghetti code, call for help. There are other people on your team, working with you, and they might be better than you at that task even if you’re more experienced, or more senior, or (god forbid) appointed as the architect by management’s decree. lol im diamond smurf give me mid doesn’t usually end well, especially if you’re not really a “diamond smurf”. Keep your ego in check – skill should decide hierarchy, and the most skilled people should make the calls. That’s for the well-being of the whole organization.

Play the meta

So you’re playing a decent Ryze mid, 6-2 so far with a good gold advantage, Tear and RoA fully stacked, boots and Frozen Heart completed. The enemy Garen struggles a lot when you burst him and has escaped the last gank, but just barely. Time to buy Archangel’s and go full offensive, as usual in mid game – your Tear is stacked and the resulting Seraph’s gives you enough burst to blow up the entire enemy team from your fountain, plus the shield for when things go south. It’s just too good to even consider the alternatives.

Garen goes MIA for 5 mins or so, and the next time you see him you jump in for the kill… and he mauls you. WTF?

You try again, and again, and you go 6-5, from 6-2 in a matter of minutes. What happened?

Well, what happened is that Garen saw it coming and bought tons of MR. So your burst now barely tickles him. Too bad you didn’t even think about it and blindly followed your usual item progression – if you had bought a Void Staff instead of Archangel’s you would still have had the upper hand, thanks to that sweet, sweet magic penetration. So your team loses, and you have only your zealotry and your inability to read the environment to blame.

Different situations call for different strategies, different tools and different mindsets. There’s no silver bullet, no matter how much something might look like one. Learn that and use it to your advantage: having experience with lots of different tools, and being able to be effective in many coding styles, makes you extremely valuable to an organization.

Programming challenges – War!

The book Programming Challenges is one I see recommended quite often – so I decided to give it a try. The first problem I encountered involves replicating a children’s card game called War, which I’d honestly never heard of before. The problem is stated as follows:

A standard 52-card deck is dealt to two players (1 and 2) such that each player has 26 cards.
Players do not look at their cards, but keep them in a packet face down.
The object of the game is to win all the cards. Both players play by turning their top cards face up and putting them on the table.
Whoever turned the higher card takes both cards and adds them (face down) to the bottom of their packet. Cards rank as usual from high to low: A, K, Q, J, T, 9, 8, 7, 6, 5, 4, 3, 2.
Suits are ignored.
Both players then turn up their next card and repeat.
The game continues until one player wins by taking all the cards.
When the face up cards are of equal rank there is a war.
These cards stay on the table as both players play the next card of their pile face down and then another card face up.
Whoever has the higher of the new face up cards wins the war, and adds all six cards to the bottom of his or her packet.
If the new face up cards are equal as well, the war continues: each player puts another card face down and one face up.
The war goes on like this as long as the face up cards continue to be equal.
As soon as they are different, the player with the higher face up card wins all the cards on the table.
If someone runs out of cards in the middle of a war, the other player automatically wins.
The cards are added to the back of the winner’s hand in the exact order they were dealt, specifically 1’s first card, 2’s first card, 1’s second card, etc.

Nothing complex so far. So, I scroll down to see the solution and I’m welcomed by this snippet:

#define NCARDS 52 /* number of cards */
#define NSUITS 4  /* number of suits */

char values[] = "23456789TJQKA";
char suits[] = "cdhs";

int rank_card(char value, char suit)
{
    int i,j; /* counters */
    for (i=0; i<(NCARDS/NSUITS); i++)
        if (values[i]==value)
            for (j=0; j<NSUITS; j++)
                if (suits[j]==suit)
                    return( i*NSUITS + j );
    printf("Warning: bad input value=%d, suit=%d\n",value,suit);
}

Ugly! 5 (FIVE) levels of nesting! I understand that it’s C we’re talking about here, but that seems more complex than it should be. I stopped right there and rolled my own solution in Ruby (yeah, I’ve got nothing better to do tonight). This is the result:

Let’s define some constants first:

A = 14

CLUBS    = :c
DIAMONDS = :d
HEARTS   = :h
SPADES   = :s
SUITS    = [CLUBS, DIAMONDS, HEARTS, SPADES]


We’ll have to deal with cards – so let’s define a class for them

class Card
  attr_reader :rank

  def initialize(rank, suit)
    @rank = rank
    @suit = suit # unused
  end
end

Modelling the card like this (with a rank and a suit) makes little sense, but the problem specification suggests doing so (as does the C snippet above), so that’s it – cards could just have been integers.

Cards will have to be divided into two decks, from which we’ll have to draw and to which we’ll have to add conquered cards. Let’s make a class for them as well. Its methods are quite self explanatory.

class Deck
  def initialize(cards)
    @cards = cards
  end

  def draw
    return @cards.pop
  end

  def collect(stack)
    while !stack.empty? do
      @cards.insert(0, stack.shift)
    end
  end

  def empty?
    return !@cards.any?
  end
end


Now let’s generate the two starting decks (this is the equivalent of the C snippet above. It’s not straightforward, but I find it way more readable written this way.)

def create_decks
  all_cards = (2..A).to_a.product(SUITS).map { |x| Card.new(*x) }.shuffle
  return [all_cards[0..(all_cards.count/2 - 1)], all_cards[(all_cards.count/2)..-1]].map { |x| Deck.new(x) }
end

And now the game logic. If one of the two players can’t draw, he or she loses. If the two players draw cards with different ranks, the winning player is the one who drew the card with the highest rank. If the ranks are equal, then we get into a war. Let’s wrap this piece of logic like so:

def determine_outcome(card1, card2)
  return @deck2 unless card1 # if player 1 can't draw, player 2 wins
  return @deck1 unless card2 # and vice versa
  return war if card1.rank == card2.rank
  return card1.rank > card2.rank ? @deck1 : @deck2
end

Where do card1 and card2 come from? We drew them from the respective player’s deck. The cards go on the table (here represented by the @stack), and the winner takes all the cards after resolution.

def combat
  @stack << @deck1.draw
  @stack << @deck2.draw
  winner = determine_outcome(@stack[-2], @stack[-1])
  winner.collect(@stack) # the winner takes everything on the table
  return winner
end

Note that, since a war might have happened, there might be more than just the two cards added here in the @stack.
Let’s now see how #war is implemented:

def war
  @stack << @deck1.draw # face down
  @stack << @deck1.draw
  @stack << @deck2.draw # face down
  @stack << @deck2.draw
  return determine_outcome(@stack[-3], @stack[-1])
end

We add four cards in total to the stack: first, the first player draws a card, face down. This card is not used in the game at all, except for being “pillaged” by the winning player. Then the first player draws another card, this one face up – this is the one used for the comparison. The second player does the same. Then we pass the topmost card and the third card from the top of the @stack (the table) to the comparison function – those are the cards played face up.

The game loop goes like this:

def play
  turns = 0
  winner = nil
  while [@deck1, @deck2].all? { |x| !x.empty? }
    winner = combat
    turns += 1
  end
  print "TURNS : #{turns}\n"
  print "WINNER : player#{winner == @deck1 ? 1 : 2}\n"
end

So, we count the turns passed and we keep drawing until one of the decks is empty. The winner is the last player winning a “fight”.

You can find the complete listing as a gist here. Note that the program will sometimes go into an infinite loop – that’s not a bug: it’s actually likely that the game as stated is indecisive, and the two players keep drawing forever without a winner!
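For the curious, here is a self-contained sketch that ties the pieces above together into one runnable whole. The WarGame wrapper class, the plain [rank, suit] arrays, and the turn cap (to escape the endless games just mentioned) are my additions, not part of the original gist:

```ruby
# Self-contained sketch of the game above. The WarGame wrapper and the
# turn cap are assumptions; cards are plain [rank, suit] pairs.
class WarGame
  RANKS = (2..14).to_a
  SUITS = [:c, :d, :h, :s]

  def initialize
    cards  = RANKS.product(SUITS).shuffle
    half   = cards.size / 2
    @deck1 = cards[0...half]
    @deck2 = cards[half..-1]
    @stack = []
  end

  def play(max_turns = 10_000)
    turns  = 0
    winner = nil
    while !@deck1.empty? && !@deck2.empty? && turns < max_turns
      winner = combat
      turns += 1
    end
    { turns: turns, winner: winner }
  end

  private

  def combat
    @stack << @deck1.shift
    @stack << @deck2.shift
    winner = determine_outcome(@stack[-2], @stack[-1])
    winner.push(*@stack.compact) # the winner takes the table
    @stack = []
    winner
  end

  def determine_outcome(card1, card2)
    return @deck2 unless card1            # player 1 ran out of cards
    return @deck1 unless card2            # player 2 ran out of cards
    return war if card1[0] == card2[0]    # equal ranks: war!
    card1[0] > card2[0] ? @deck1 : @deck2
  end

  def war
    @stack << @deck1.shift # face down
    @stack << @deck1.shift
    @stack << @deck2.shift # face down
    @stack << @deck2.shift
    determine_outcome(@stack[-3], @stack[-1])
  end
end
```

Running WarGame.new.play a few times shows most deals finish well under the cap; the cap merely guards against the non-terminating games.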

On technical debt (now with chickens!)

Technical debt – meant as the damage done to a codebase by hastily implementing a feature (and wrecking the codebase’s design in the process) – seems to be a foreign concept to some managers and customers. Maybe they know and just don’t want to listen, I’m not sure. Anyway, I thought of a little story that I might use the next time I have to explain the pricing of some new feature:

A farmer has three chickens. Each chicken makes one egg per day.
The farmer is in business with a local grocer. The grocer buys two eggs per day from the farmer, so he can sell them at his shop.
All is fine with the world until the grocer shows up at the farmer’s place:

Grocer: Hey there. I'd like to have some chicken meat today.
Farmer: Meat? That wasn't in the agreement.
Grocer: I know. But I really need meat. It's for a Business-to-Stomach Enterprise Poultry-as-a-Service Platform I'm planning.
Farmer: What?
Grocer: Important stuff. Can you give me some?
Farmer: Well, that's not so easy - I'd have to hatch eggs and wait for the chicks to grow... I think it'll take a month or so.
Grocer: A month? That's way too much... I was thinking more on the likes of right now.
Farmer: Nature has its times, you'll have to wait a little bit.
Grocer: Hm, why don't you just kill one of the chickens? That way, I'll have my meat and you can still produce two eggs per day. You don't need more, right?
Farmer: Well, I don't think it's a good idea. That would put me in a tight spot in the case something happens to one of the other chickens.
Grocer: Come on, nothing's going to happen... and I really, really need that meat! Can you do that?
Farmer: Okay, I guess I can...

And so, the farmer picks his cleaver up and sends one of the birds to the Creator. The grocer gets his meat and returns to his shop.
One week later, the grocer pays the farmer another visit:

Grocer: Hey there!
Farmer: Hey, what's up?
Grocer: Listen - the meat was great. Actually, it was so great and it sold so well that now we absolutely need at least another chicken. By tomorrow.
Farmer: That's not gonna happen. If I give you another chicken then I'll not be able to deliver the two daily eggs that were in the pacts.
Grocer: Oh please come on! Customers want meat, and I've already promised it for tomorrow...
Farmer: No, I can't do that. I'll not be able to honor the contract if I do, do you understand? Eggs will not be guaranteed if I do that.
Grocer: But I really, really, really need the meat! By tomorrow! Otherwise customers will get mad, the earth will shatter and the world as we know it will end! Give me one of the chickens, now!
Farmer: Well, if you want it that bad, take it! Again, eggs are not guaranteed from now on, understood?
Grocer: Yea sure, got it. But you're a smart guy, I guess you'll figure out how to make it work anyway. Bye!

And the grocer leaves for his shop again.
The day after:

Grocer: Hello. What's wrong with the eggs?
Farmer: What do you mean?
Grocer: The eggs. Actually, the egg. There's just one. What happened?
Farmer: What happened? I had three chickens. You took two. Now there's just one. One chicken, one egg. I thought I was being clear about that.
Grocer: But that wasn't in the pacts! The contract states it right here - you owe me two eggs per day! Now what will I say to my customers?
Farmer: Well, I was very clear about it. Nothing I can do about that.
Grocer: Okay, okay, sheesh, forget it. Talking about something else... it would be nice to have some meat. Can I have some?

So, don’t be the farmer – refuse to irreparably damage your codebase for the needs of the moment and, if you’re forced to do so, don’t take responsibility for the wreckage. And don’t be the grocer – don’t ask for the impossible, and own the responsibility for your decisions.

P.S.: no chickens were harmed during the writing of this blog post.

Avoid attachment deletion on destroy with Paperclip

In case you don’t know, Paperclip is a nice gem that integrates with ActiveRecord and takes care of saving binary data – like an image, a video or a PDF – as if it were an attribute of the record: basically, you call save on the entity that you declared the attachment on and the binary data will be persisted along with your database record. You can save the binary data directly on the server, send it somewhere else via HTTP or FTP, save it on Amazon’s S3… many more options are available via external plugins.

So, you call save and the attachment is persisted; you call destroy and the attachment gets deleted. What if you don’t want to delete the attachment when destroy is called on the owner? As it turns out, there’s a little option (:preserve_files), which defaults to false, that tells Paperclip to leave the attachments alone when the parent entity gets destroyed. If you have an attachment declared like this:

  has_attached_file :attachment,
                    :styles => { :thumb => "100x100!" }

All you need to do to avoid this attachment’s deletion on destroy is to add the preserve_files option:

  has_attached_file :attachment,
                    :styles => { :thumb => "100x100!" },
                    :preserve_files => true

Hope this helps – took me a while to find out.

Smeed’s Law for programming – an explanation

A while ago I came across an interesting blog post – it was about Smeed’s law and how it relates to software.

Basically, the author’s claim is that the number of bugs in a system grows sublinearly with the size of the system itself. There’s no proof for this, but it matches the conclusion I’ve reached from my own experience so far. This is my explanation for the phenomenon:

Say you have a small software system – for example, a prototype, or a shell script you made to take care of trivial stuff. It is very likely that most, if not all, of its parts are not set in stone: if you ever need to add a feature or change the behavior of an existing one, you will likely have to modify a high percentage of the existing code.


Now think about a bigger system – say, the prototype from the previous example, now grown into a complete product. You need to add a feature to that system. If you’ve done things the right way, it’s very unlikely that you will have to modify big parts of the system: chances are, your software’s internals will have grown some sort of API, internal DSL, call it what you want, so all you need to do is extend it or integrate with it. Also, with time, the bugs in your system’s core will likely be found and eradicated.


Supposing that if you don’t touch it, it won’t break, the part of your system prone to bugs will be the “surface” (2πx), which grows slower than the total area (πx²).
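The circle analogy can be sanity-checked numerically: the circumference-to-area ratio is 2πx / πx² = 2/x, so as the radius grows, the bug-prone “surface” becomes a smaller and smaller fraction of the whole system. A quick sketch (my illustration, not from the original post):

```ruby
# Circumference-to-area ratio for a growing "system radius" x.
# The ratio is 2/x, so it shrinks as the system grows.
ratios = [1, 10, 100].map do |x|
  surface = 2 * Math::PI * x   # bug-prone "surface"
  area    = Math::PI * x**2    # the whole system
  (surface / area).round(3)
end
# ratios == [2.0, 0.2, 0.02]
```

In other words, the bigger the system, the smaller the share of it you touch (and can break) for any given change – which is one plausible reading of the sublinear bug growth.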

Why you should stay away from Appcelerator’s Titanium

Let’s face it – programming on mobile devices is hard: everything is much more complicated to accomplish than it is on the web or the desktop and, since the platform is fairly new, tools and frameworks are scarce and poorly refined. Plus, if you need your application to run on more than one target OS (iOS, Android, BlackBerry) you’re pretty much forced to write a different application for each one of them.

So, enter Appcelerator’s Titanium – Titanium aims to solve most of these problems by hiding all the gory implementation details behind a simple, well-defined JavaScript API. On top of that, it will (more or less) compile your JavaScript to a different language (Objective-C for iOS, Java for Android and soon BlackBerry) so you can deploy your application to multiple targets with (supposedly) no changes to your code. And it’s free! Support and other goodies come at a price, but the library itself is free for commercial use.

What’s the catch then? On the surface, it’s all good – you pop open Titanium Studio (a modified version of Aptana), write some trivial JavaScript and you’ve got a working, native app. Great: what took me a day or two in native Objective-C takes a couple of hours with Titanium! So you dive in head first and build more and more complex apps – animations, database connections, networking, geolocation, it’s got it all!

The more complex your applications become, the more often you encounter weird glitches – why is the navbar’s title not centered? Why does the navigation controller sometimes fail to pop the last view? Why does the focus event fire so often, for no reason, and only on the device? But you know, Titanium’s still so much better than going back to native development, so you deal with its quirks and carry on. Until bad stuff starts to happen.

Your app starts to crash, seemingly at random. The reason: memory starvation. You see, although Titanium is supposed to take care of memory management for you, it doesn’t always work correctly. Most of the time, especially if your application is simple and doesn’t use many images and animations, the memory leaks Titanium generates will be small enough that you won’t even realise your application is leaking. The problem is, when your application reaches a certain level of complexity, those leaks become evident – in the form of sudden, unavoidable crashes.

There are some hacks proposed by Titanium’s community that somewhat help to free some memory, but they don’t always work and, even when they do, they don’t release all the memory they’re supposed to. Titanium doesn’t let you change an object’s reference count directly (probably because that would break some of its internals), so you have no way to dispose of objects yourself.

This leaves you stuck with a frail, buggy application that is ready to blow up at every click… and there’s nothing you can do about it except for a complete rewrite in objective-c or java.

Are you planning to develop mobile apps with Appcelerator’s Titanium? Think twice. It’s tempting, but you really, really don’t want to find yourself stuck the way I did. Unless you’re building small, simple applications, or they somehow manage to solve all their memory issues, I strongly suggest you stay away from Titanium.

What’s so special about Apple products?

Some days ago I read on Reddit about John Resig’s setup:

Redditor : What is your development setup? (Software)

John Resig : I use OS X on an iMac with an extra monitor. For coding I use VIM in a terminal and have a screen session open to IRC in another terminal window. I then have a plethora of browsers open (Firefox, Chrome, Safari, Opera – a VMWare with IEs) for testing. That’s pretty much all that I use and need to use (I have a very similar set up on my Macbook Pro, as well).

So, John Resig actually develops on a Mac. Further down the same thread, another redditor points out that 95% of the audience at the web conferences he attends owns a MacBook or a MacBook Pro.

Well, for the last few months I’ve been developing on a Mac too, and here are some of the things I’ve had to deal with:

  • There is NO del key: the MacBook/MacBook Pro has no del key. That leads to an array of Hm, now delete / ack, there’s no delete / rightarrowrightarrowrightarrow / backspacebackspacebackspace, which is incredibly annoying because it completely kills the flow. Even worse, you unlearn the del key, so when you switch back to a normal machine the tendency is to keep using the right-arrow/backspace combo or (god forbid) the mouse.
  • Apple key instead of CTRL: on Macs, the function normally accomplished by the CTRL key is done by the Apple (command) key, which is located pretty much where the Win key would be on a Windows keyboard. This is annoying because it forces you to unlearn a convention you’re used to (for no apparent good reason), but the worst part is that the behaviour is NOT consistent from application to application: in some applications (example: nano) the CTRL key behaves as usual and the Apple key does nothing (or does something completely unrelated to what you want).
  • You can’t cut a file in Finder: it seems there is no way to cut a file and paste it somewhere else in Finder. There’s no keyboard shortcut for it (ctrl-x and apple-x do nothing) and no mention of a cut option in the context menu. Also, since there’s no del key, if you want to delete a file you have to use the context menu option.
  • Git is very slow: git commit takes an eternity to complete, for no apparent reason. A commit that takes a second or two on my old laptop running Kubuntu, or on a Windows machine, will take 30-60 seconds on a Mac. git push is also noticeably slower, though not as much as git commit.
    Since it takes an eternity to commit my work, I’ve started to commit less frequently. That led to a big loss of work on a couple of occasions.
  • Limited customization : I really don’t like the dock (the panel with all the application icons) and I wanted to recreate something similar to my Kubuntu setup: a bar on the left with all the program launchers, plus a separate taskbar. It turns out I can’t have a taskbar at all, and the only thing I can do with the dock is choose which side of the screen it sits on.

On top of all this, my impression is that Apple machines are very slow compared to PCs with similar specs: a colleague of mine needed to clone a remote git repository, so I asked him to open a terminal – it took a good 30 seconds for the terminal to start. On my Kubuntu laptop (which has specs similar to my colleague’s MacBook) I can open a dozen in five seconds or so. Also, applications sometimes freeze for several seconds, again for no apparent good reason, and you have to wait until they come back.

All in all, my experience with macs so far has been far from positive: I guess that if you’re a “normal” user you wouldn’t care one bit about all the things I pointed out but, as a programmer, I find all of this extremely annoying.

But anyway, since I have to work on iPhone/iPad applications, I figured I could pay €100-200 more for a Mac and dual boot Linux – that way I could work on my mobile projects and still have Linux on the same machine for everything else. I’ve wanted to replace my laptop for a while now and this is what I had in mind, so I went to the online Apple Store and configured a MacBook Pro with similar specs. And I was once again surprised at the outcome:

The ACER AS594, my original choice – € 749.99

A MacBook Pro with similar hardware – € 2,149.00

That’s almost three times the price. John Resig thinks it’s worth it. 95% of web developers think it’s worth it. Do you? If so, would you please tell me why? Because, in all honesty, I don’t understand what’s so special about Apple products.

How to: get Titanium WebView’s content height

I’ve been working with Appcelerator Titanium and, compared to the hell that is native iOS application development, I’ve been quite satisfied so far… except for the occasional bug/glitch that forces me to write some ugly workarounds.

A notable example is when you need the content height of a WebView. Say you have two elements inside a single ScrollView: a View of some kind with a given height (an ImageView, for example) and a WebView. You want to scroll the ImageView and the WebView together, so that when you slide a finger up the screen the ImageView scrolls away and more of the WebView‘s content comes into view. The problem is that the WebView control has no contentHeight property, and you need that height to set the WebView‘s height property so its native scrollbar stays inactive: if you don’t set a height on the WebView, it will display its own vertical scrollbar for its content, which prevents the outer ScrollView from scrolling.

Fact is, there actually is a way to get the WebView‘s content height: the WebView has an evalJS() method which lets you execute arbitrary JavaScript inside the WebView (as you would inside a browser). The trick is to ask the WebView for its own document.height and use that as the content height, as follows:

actualHeight = webView.evalJS("document.height;");

The only catch is that you get the real content height only after the WebView has finished rendering. If you are loading an external URL you can use the load() event, which fires when the page has finished rendering; otherwise you’ll have to set a timer, like this:

myTimer = setInterval(function () {
    // evalJS returns a string, so convert it to a number first
    actualHeight = parseInt(webView.evalJS("document.height;"), 10);
    if (actualHeight > 1) {
        webView.height = actualHeight;
        scrollView.contentHeight = imageView.height + webView.height;
    }
}, 100); // poll every 100 ms until the content reports its height

and then dispose of the timer with clearInterval(myTimer) once the height has been applied (note that it’s clearInterval(myTimer), not myTimer.clearInterval()). Hope this helps!
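As an aside, the height bookkeeping itself is plain arithmetic, so it can be factored into a small helper that works the same whether you trigger it from the load event or from a timer. A minimal sketch – computeHeights is a hypothetical name of my own, not a Titanium API, and the commented-out listener assumes the webView, imageView and scrollView objects from the example above:

```javascript
// Hypothetical helper: pure JavaScript, no Titanium dependency.
// Given the WebView's measured content height and the ImageView's fixed
// height, compute the heights to apply to the WebView and the ScrollView.
function computeHeights(webContentHeight, imageHeight) {
    return {
        webViewHeight: webContentHeight,
        scrollContentHeight: webContentHeight + imageHeight
    };
}

// Inside Titanium you would call it from the load listener (sketch only):
//
// webView.addEventListener('load', function () {
//     var h = computeHeights(
//         parseInt(webView.evalJS("document.height;"), 10),
//         imageView.height
//     );
//     webView.height = h.webViewHeight;
//     scrollView.contentHeight = h.scrollContentHeight;
// });
```

Keeping the arithmetic out of the event handler makes it trivial to sanity-check the numbers outside the simulator.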