Post Project Review

September 18, 2008

Another project finished, so it’s time to give some thought to what worked well, and what didn’t work so well. In a change from my usual rants, this post is more observation and analysis. But as usual it is still full of opinion. After all, as Charles McCabe said, “Any clod can have the facts…”

Firstly, and most importantly, was the project a success?

Well, on the negative side it did go substantially over budget and schedule. But on the positive side, it was completed (just) in time for the client’s hard deployment date, the client is very happy with the result, and has paid all the bills. So on balance I must chalk it up as a success.

So, what worked well? In the interests of brevity, I will mention only two points:

1) The architecture was right.

Having an appropriate architecture was good for both parties. For me, it was possible to meet the performance requirements, and to accommodate the (inevitable) changes to requirements that happened along the way, with relatively little pain. It also made it possible to adjust the fees and schedules with reasonable certainty.

For the client, it meant they always felt they were going to get exactly what they wanted, and with only incremental changes in cost and schedule.

Getting the architecture right was a direct consequence of time spent doing an appropriate amount of up-front design.

2) The choice of OS, programming languages, database server, and revision control system all worked out well.

Some of these choices were not in line with mainstream industry norms, and the client was initially a little surprised when they received the proposal. But with sound reasoning to support the choices, it was possible to sell the client on them.

The lesson here is not to be afraid to do something different, if you have the analysis to back yourself up.

OK, enough self-congratulation. What did not work so well? Ah, so many things, but again I will mention only two:

1) Starting down the slippery slope…

Early on, I mocked up a certain piece of functionality. This was done in a somewhat tedious way, but it worked OK initially and allowed development to proceed in parallel. I reasoned I would replace the mock-up later.

Unfortunately, when the time came, I felt it would take too long to completely rewrite the mock-up, and I felt I could implement the remaining functionality more quickly with the method I already had.

Yes, it felt like a compromise at the time, but I did not have the courage to do it properly. As you can guess, once I was a certain way down this slippery path, I could not turn back. For the rest of the project, I always regretted having to work on this part of the functionality, and every time I did, I felt I was pouring effort down the drain.

This functionality really would have benefited from some automatic code generation. The lesson is not to begrudge the short-term effort of writing tools; they will pay off in long-term benefits.
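The kind of tool meant here can be tiny. Below is a minimal sketch in Python (the field names and template are hypothetical, not from the actual project): given a list of field names, it emits the boilerplate accessors that would otherwise have been written, tediously, by hand.

```python
# Hypothetical example of a small code-generation tool: emit one
# accessor function per field, instead of hand-writing each one.

FIELDS = ["voltage", "current", "temperature"]

TEMPLATE = '''def get_{name}(record):
    """Return the {name} field of a record dict."""
    return record["{name}"]
'''

def generate_accessors(fields):
    """Emit one accessor function per field, as a single source string."""
    return "\n".join(TEMPLATE.format(name=f) for f in fields)

if __name__ == "__main__":
    # Write the generated module out; a makefile rule would run this
    # whenever the field list changes.
    print(generate_accessors(FIELDS))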

2) Not enough tests!

While a significant amount of test code was written, it was not nearly enough. In particular, regression testing was not thorough enough. This meant a lot of manual testing was involved when making revisions.

The solution, I think, is to change the balance at the costing stage of the job: put more emphasis on the tests and less on the application code itself. Writing more tests really does reduce development time, and with test development properly budgeted for, there would be no temptation to skimp on this area once implementation is under way.

And so, to the final conclusion: as usual, I was too optimistic! One gets enthusiastic when the project offers some interesting challenges and the chance to work with interesting tools, and it is too easy for this to colour the estimate and schedule. But then possibly that is why I am still doing this software development business, when I should probably just be consulting instead!


Computing in molasses

August 20, 2008

Way back in the ’80s I had a Commodore Amiga 1000 as a home computer. While there was much that was amazing about that machine, it was tedious to write code on — with one 800 Kbyte floppy disk and a multi-pass compiler and linker, the edit-compile-run cycle took what seemed like ages. It was like wading in molasses.

Meanwhile, at work I used DEC MicroVAXes running Ultrix and Sun workstations running SunOS, and they were very sprightly. Editing large files with emacs was comfortable, compiling C code was quick, and shell and awk scripts ran fast enough that you didn’t even have to write C unless you had a really big data set to process.

In fact, it was almost more comfortable to dial in to work at 2400 baud and use a terminal session on my MicroVAX than to use the Amiga for developing C code.

This was not the Amiga’s fault. With the limited hardware, it couldn’t be any faster. Add hard disks, more RAM, floating point hardware, and it would have been as responsive as a Sun workstation, but it also would have cost as much.

Things changed around 1990 when I bought a 40 MHz AMD 386 based PC with a 120 Mbyte hard disk. Initially running Coherent, then Linux, I finally had a home computer that was as comfortable as my work computer. With an add-on IIT floating point coprocessor, my 386 system at home was entirely as responsive as the SPARCstation IPC on my desk at work.

And as hardware improved with time, things got even better both at home and at work. With ample computing power, you only got slowed down on serious problems. Most of the time it felt like you were jogging comfortably through the problems, and this happy situation continued until recently.

But now, again, I feel the molasses oozing around my knees, slowing me down and resisting my stride.

How can it be, that with GHz multi-core processors, Gigabytes of RAM, hundreds of Gigabytes of disk, and megabit per second interfaces, the computing experience is anything but instantaneous?

Recently, I have had to make relatively intensive (I mean all day long) use of some software that runs only on Microsoft Windows. This is really unpleasant.

The most unpleasant thing is the poor responsiveness of the system, in particular the irregular time lag between clicking on something and seeing activity occur. Sometimes things take half a second, sometimes five seconds. Sometimes, after a five-second wait, you think maybe the first click wasn’t seen, so you click again, only to find that it was seen after all; the lag was just unusually long, and now you have two instances open.

It is not the hardware, or the system configuration. The problem is the completely shoddy software (from operating system up to application), and in particular, the way it is structured.

The problem sets are much larger today than they were 20 years ago, and I know we use interpreted languages with bigger runtimes, and garbage collection, etc, etc. That is not the problem. Modern hardware is more than up to the task. I am still impressed that when I run a large program written in Python, it completes in a matter of seconds. And then I can rewrite one module in C, wrap it with SWIG, and get something that runs in milliseconds. This hardware is FAST.

The problem is the software architecture. If you insist on making a GUI application, then you must design the software architecture so that the user interface is responsive, even when something time consuming like network access or serious computation is involved.

If I click on some button to start an action, and a dialog must pop up to ask for some input, then it should pop up quickly every time. I won’t mind if the action sometimes takes 20 seconds and sometimes 25 seconds, but it will break my stride if I have to wait some varying amount of time for the dialog to pop up. This problem is common in Windows software — sometimes you click the File menu and even that takes several seconds to appear.

On Microsoft Windows, most software is like this. On Linux, Firefox and OpenOffice are like this. I find these programs quite unpleasant to use.
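The structure being argued for can be sketched in a few lines. In this hypothetical Python example (using a plain thread and queue rather than any particular GUI toolkit, and a made-up `slow_job`), the click handler returns immediately and the slow work happens on a worker thread, so the interface stays responsive however long the job takes.

```python
# Sketch: the UI thread never blocks on slow work. Slow jobs go to a
# worker thread; a real GUI event loop would poll the result queue with
# a timer and update the display when results arrive.
import queue
import threading
import time

results = queue.Queue()

def slow_job(n):
    """Stand-in for network access or serious computation."""
    time.sleep(0.1)            # simulate the 20-second operation
    results.put(("done", n))

def on_button_click(n):
    """UI handler: hand the work off and return immediately."""
    threading.Thread(target=slow_job, args=(n,), daemon=True).start()

# In a real GUI the event loop keeps running here; for this sketch we
# just wait for the result directly.
on_button_click(42)
status, value = results.get(timeout=5)
print(status, value)
```

The dialog can then pop up instantly every time, because nothing on the UI thread ever waits on the network or the computation.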

Meanwhile emacs, which people used to deride for bloat — Eight Megs and Constantly Swapping — well, eight megs is basically cache memory now. Sure, emacs has bloated somewhat over time, but it is still a pleasure to use.

What’s wrong with GUIs

June 16, 2008

GUIs are so pervasive in today’s computing and development environment that some people have never seriously used an alternative method of interacting with a computer.

Yet I see so many places where the GUI is inappropriate and just plain counterproductive.

When developing a system, I do everything I can in an emacs shell.

Apart from the advantage of the powerful editing and searching facilities available, the biggest advantage this gives is in capturing a time ordered history of stimulus and response.

While experimenting and/or learning with a system under development, you are going to try a bunch of commands. Having history there means you can search back through results for things of note, snarf a chunk of output and filter it to hide the chaff, or sort it, or post-process it into a formatted report, or plot it in a graphic, or whatever.

This is especially powerful for working with dynamic languages, where you interact with your system as you are extending it. Once you’ve achieved something significant, you can simply cut and paste the command sequence into a permanent script, or a makefile, or whatever.

And not least, you also have text you can paste into a buffer for documentation. You may need to provide information to co-developers or customers, or you may need to do the same or a similar thing in a week’s time. In any case, you can just look at the transcript to see the detail of how it was done.

I’ll often issue the odd ‘date’ command during the interactive process, just to give some documentation on how long something took to run or how long it took me to work something out.

To do the same things with a GUI, you would need a lot of screen captures, or you need to write some inane sequence like

select menu blah, in the dialog check options blah and blah, then press button blah, then select menu blah…

The problem with that is the danger of missing a step or getting something out of order. So to do it properly, you really need to video-capture the process.

I can’t stand videos when a text document would suffice. You see more and more of this now, on everything from news sites to on-line tutorials. You go to a web site, and even if the item is technical, there is a good chance that the link is for a video of a presentation, complete with annoying ums and aahs and errs… It’s beyond me why anyone would prefer this to text.

And the plain audio transcript is just as bad. Sure, it doesn’t waste as much bandwidth to download, but it still wastes time to listen to. That might be fine for entertainment, but it’s a waste of time for documentation.

Now, please note that I didn’t say I don’t like graphics, and I’m not saying that a high-res display and mouse aren’t a boost to productivity. For sure, graphical presentation can be very helpful in visualizing data or processes.

The problem I have is with the current obsession with using a graphical metaphor everywhere, even for processes that are far more efficiently handled with simple text.