Post Project Review

September 18, 2008

Another project finished, so it’s time to give some thought to what worked well, and what didn’t work so well. In a change from my usual rants, this post is more observation and analysis. But as usual it is still full of opinion. After all, as Charles McCabe said, “Any clod can have the facts…”

Firstly, and most importantly, was the project a success?

Well, on the negative side it did go substantially over budget and schedule. But on the positive side, it was completed (just) in time for the client’s hard deployment date, the client is very happy with the result, and has paid all the bills. So on balance I must chalk it up as a success.

So, what worked well? In the interests of brevity, I will only mention two points:

1) The architecture was right.

Having an appropriate architecture was good for both parties. For me, it was possible to meet the performance requirements, and to accommodate the (inevitable) changes to requirements that happened along the way, with relatively little pain. It also made it possible to adjust the fees and schedules with reasonable certainty.

For the client, it meant they always felt they were going to get exactly what they wanted, and with only incremental changes in cost and schedule.

Getting the architecture right was a direct consequence of spending an appropriate amount of time on up-front design.

2) The choice of OS, programming languages, database server, and revision control system all worked out well.

Some of these choices were not in line with mainstream industry norms, and the client was initially a little surprised when they received the proposal. But with sound reasoning to support the choices, it was possible to sell the client on them.

The lesson here is not to be afraid to do something different, if you have the analysis to back yourself up.

OK, enough self-congratulation. What did not work so well? Ah, so many things, but again I will mention only two:

1) Starting down the slippery slope…

Early on, I mocked up a certain piece of functionality. This was done in a somewhat tedious way, but it worked well enough initially and allowed development to proceed in parallel. I reasoned I would replace the mock-up later.

Unfortunately, when the time came, I felt it would take too long to completely rewrite the mock-up, and I felt I could implement the remaining functionality more quickly with the method I already had.

Yes, it felt like a compromise at the time, but I did not have the courage to do this properly. As you can guess, once I was a certain way down this slippery path, I could not turn back. For the rest of the project, I always regretted having to work on this part of the functionality, and every time I did, I felt I was pouring effort down the drain.

This functionality really would have benefited from some automatic code generation. The lesson: do not begrudge the short-term effort of writing tools. They will pay off in long-term benefits.
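To make the idea concrete, here is a minimal sketch of the sort of tool I mean. The actual project's language and the functionality in question are not described here, so everything below (the field table, the accessor template) is invented purely for illustration: a tiny generator that emits the repetitive code from a single table, so that a change to the table regenerates every variant instead of requiring tedious hand edits.

```python
# Illustrative only: generate repetitive accessor boilerplate from a table,
# rather than hand-writing (and hand-maintaining) each near-identical variant.
# The field names and types are made up for the example.
FIELDS = [
    ("name", "str"),
    ("quantity", "int"),
    ("price", "float"),
]

TEMPLATE = '''def get_{field}(record):
    """Return the '{field}' field, checked as {ftype}."""
    value = record["{field}"]
    if not isinstance(value, {ftype}):
        raise TypeError("{field} must be {ftype}")
    return value
'''

def generate(fields=FIELDS):
    """Emit one accessor per field. In a build, this output would be
    written to a source file, so editing FIELDS regenerates everything."""
    return "\n".join(TEMPLATE.format(field=f, ftype=t) for f, t in fields)

if __name__ == "__main__":
    print(generate())
```

A tool like this takes an hour to write; hand-maintaining the tenth copy-pasted variant costs far more than that over a project's life.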

2) Not enough tests!

While a significant amount of test code was written, it was not nearly enough. In particular, regression testing was not thorough enough. This meant a lot of manual testing was involved when making revisions.

The solution, I think, is to change the balance at the costing stage of the job: put more emphasis on the tests and less on the application code itself. Writing more tests really does reduce development time, and having properly budgeted for developing tests, I would not feel tempted to skimp on this area once implementation is under way.
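As a sketch of the kind of regression suite I mean (the project's actual language and domain are not given here, so the `parse_price` function below is a made-up stand-in for any piece of application logic): every time a bug is fixed, the failing input gets pinned down in a test, so a later revision cannot silently reintroduce it, and the manual re-checking disappears.

```python
# Hypothetical example using Python's stdlib unittest. parse_price stands in
# for any application routine; the "comma grouping" case represents a bug
# that was fixed once and must never come back.
import unittest

def parse_price(text):
    """Parse a price string like '$1,234.50' into integer cents."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    return round(float(cleaned) * 100)

class PriceRegressionTests(unittest.TestCase):
    # One test per fixed bug: the suite is cheap to run on every revision,
    # replacing the manual testing the post complains about.
    def test_plain_price(self):
        self.assertEqual(parse_price("12.00"), 1200)

    def test_comma_grouping_regression(self):
        # This input once failed; keep it pinned forever.
        self.assertEqual(parse_price("$1,234.50"), 123450)
```

Run with `python -m unittest` as part of every build, the suite makes each revision a few seconds of machine time instead of an afternoon of clicking through screens.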

And so, to the final conclusion: as usual, I was too optimistic! One gets enthusiastic when the project offers some interesting challenges and the chance to work with interesting tools, and it is too easy for this to colour the estimate and schedule. But then possibly that is why I am still doing this software development business, when I should probably just be consulting instead!