Post-Project Review

September 18, 2008

Another project finished, so it’s time to give some thought to what worked well, and what didn’t work so well. In a change from my usual rants, this post is more observation and analysis. But as usual it is still full of opinion. After all, as Charles McCabe said, “Any clod can have the facts…”

Firstly, and most importantly, was the project a success?

Well, on the negative side it did go substantially over budget and schedule. But on the positive side, it was completed (just) in time for the client’s hard deployment date, the client is very happy with the result, and has paid all the bills. So on balance I must chalk it up as a success.

So, what worked well? In the interests of brevity, I will only mention 2 points:

1) The architecture was right.

Having an appropriate architecture was good for both parties. For me, it was possible to meet the performance requirements, and to accommodate the (inevitable) changes to requirements that happened along the way, with relatively little pain. It also made it possible to adjust the fees and schedules with reasonable certainty.

For the client, it meant they always felt they were going to get exactly what they wanted, and with only incremental changes in cost and schedule.

Getting the architecture right was a direct consequence of the time spent doing an appropriate amount of up-front design.

2) The choice of OS, programming languages, database server, and revision control system all worked out well.

Some of these choices were not in line with the mainstream industry norm, and the client was initially a little surprised when they received the proposal. But with sound reasoning to support the choices, it was possible to sell the client on them.

The lesson here is not to be afraid to do something different, if you have the analysis to back yourself up.

OK, enough self-congratulation. What did not work so well? Ah, so many things, but again I will mention only two:

1) Starting down the slippery slope…

Early on, I mocked up a certain piece of functionality. This was done in a somewhat tedious way, but it worked OK initially and allowed development to proceed in parallel. I reasoned I would replace the mock-up later.

Unfortunately, when the time came, I felt it would take too long to completely rewrite the mock-up, and I felt I could implement the remaining functionality more quickly with the method I already had.

Yes, it felt like a compromise at the time, but I did not have the courage to do it properly. You can guess what happened: once I was a certain way down this slippery path, I could not turn back. For the rest of the project, I always regretted having to work on this part of the functionality, and every time I did, I felt I was pouring effort down the drain.

This functionality really would have benefited from some automatic code generation. The lesson is not to begrudge the short-term effort of writing tools. They pay off in long-term benefits.
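To make the lesson concrete, here is the flavour of tool I have in mind. This is a throwaway sketch in Python, with invented field names and formats rather than anything from the actual project: given a small table describing the data, it spits out the repetitive pack/unpack boilerplate, instead of me typing variation after variation by hand.

    #!/usr/bin/env python
    # Hypothetical sketch of a small code-generation tool: given a table of
    # message fields, emit the repetitive pack/unpack boilerplate rather
    # than writing it out by hand. The fields below are invented.

    FIELDS = [
        ("sequence",  "I"),   # unsigned 32-bit
        ("timestamp", "d"),   # double
        ("status",    "H"),   # unsigned 16-bit
    ]

    def generate(fields):
        names = ", ".join(name for name, _ in fields)
        fmt = "<" + "".join(code for _, code in fields)
        return "\n".join([
            "import struct",
            "",
            "FMT = %r" % fmt,
            "",
            "def pack(%s):" % names,
            "    return struct.pack(FMT, %s)" % names,
            "",
            "def unpack(buf):",
            "    return struct.unpack(FMT, buf)",
            "",
        ])

    if __name__ == "__main__":
        # Redirect stdout into e.g. message_codec.py as a build step.
        print(generate(FIELDS))

A small effort like this, driven from a makefile, and the tedious part of the job stops being a standing source of regret.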

2) Not enough tests!

While a significant amount of test code was written, it was not nearly enough. In particular, regression testing was not thorough enough. This meant a lot of manual testing was involved when making revisions.

The solution, I think, is to change the balance at the costing stage of the job: put more emphasis on the tests and less on the application code itself. Writing more tests really does reduce development time, and with test development properly budgeted for, I won’t feel the temptation to skimp on it once implementation is under way.
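For the avoidance of doubt, by tests I mean nothing more exotic than the following sort of thing, using Python’s standard unittest module. The parse_record function here is a made-up stand-in, not code from the project; the point is that behaviour known to be good gets pinned down once, so a later revision can be checked by running one command rather than by an afternoon of clicking.

    import unittest

    def parse_record(line):
        # Made-up stand-in for real application code: split
        # "name=value;name=value" into a dict.
        pairs = (item.split("=", 1) for item in line.split(";") if item)
        return dict((name.strip(), value.strip()) for name, value in pairs)

    class ParseRecordRegression(unittest.TestCase):
        def test_normal_line(self):
            self.assertEqual(parse_record("id=42;state=ok"),
                             {"id": "42", "state": "ok"})

        def test_trailing_separator(self):
            # The sort of bug that gets fixed mid-project; keep the test
            # so it can never quietly come back.
            self.assertEqual(parse_record("id=42;"), {"id": "42"})

    if __name__ == "__main__":
        unittest.main()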

And so, to the final conclusion: as usual, I was too optimistic! One gets enthusiastic when the project offers some interesting challenges and the chance to work with interesting tools, and it is too easy for this to colour the estimate and schedule. But then possibly that is why I am still doing this software development business, when I should probably just be consulting instead!


Computing in molasses

August 20, 2008

Way back in the ’80s I had a Commodore Amiga 1000 as a home computer. While there was much that was amazing about that machine, it was tedious to write code on: with one 800 KB floppy disk and a multi-pass compiler and linker, it took what seemed like ages to edit-compile-run. It was like wading in molasses.

Meanwhile, at work I used DEC MicroVAXes running Ultrix, and Sun workstations running SunOS, and they were very sprightly. Editing large files with emacs was comfortable, compiling C code was quick, and shell and awk scripts ran fast enough that you didn’t even have to write C unless you had a really big data set to process.

In fact, it was almost more comfortable to dial in to work at 2400 baud and use a terminal session on my MicroVAX than to use the Amiga for developing C code.

This was not the Amiga’s fault. With the limited hardware, it couldn’t be any faster. Add hard disks, more RAM, floating point hardware, and it would have been as responsive as a Sun workstation, but it also would have cost as much.

Things changed around 1990 when I bought a 40 MHz AMD 386-based PC with a 120 MB hard disk. Initially running Coherent, then Linux, I finally had a home computer that was as comfortable as my work computer. With an add-on IIT floating-point coprocessor, my 386 system at home was entirely as responsive as the SPARCstation IPC on my desk at work.

And as hardware improved with time, things got even better both at home and at work. With ample computing power, you only got slowed down on serious problems. Most of the time it felt like you were jogging comfortably through the problems, and this happy situation continued until recently.

But now, again, I feel the molasses oozing around my knees, slowing me down and resisting my stride.

How can it be, that with GHz multi-core processors, Gigabytes of RAM, hundreds of Gigabytes of disk, and megabit per second interfaces, the computing experience is anything but instantaneous?

Recently, I have had to make relatively intensive (I mean all day long) use of some software that runs only on Microsoft Windows. This is really unpleasant.

The most unpleasant thing is the poor responsiveness of the system, in particular the irregular time lag between clicking on something and activity occurring. Sometimes things take 1/2 second, sometimes 5 seconds. Sometimes, after a 5 second wait, you think maybe the first click wasn’t seen, so you click again, only to find the first action was actually seen, it’s just that the lag was unusually long, so now you have 2 instances open.

It is not the hardware, or the system configuration. The problem is the completely shoddy software (from operating system up to application), and in particular, the way it is structured.

The problem sets are much larger today than they were 20 years ago, and I know we use interpreted languages with bigger runtimes, and garbage collection, etc, etc. That is not the problem. Modern hardware is more than up to the task. I am still impressed when I run a large program written in python that it runs in a matter of seconds. And then I can rewrite one module in C, wrap it with swig, and get something that runs in milliseconds. This hardware is FAST.

The problem is the software architecture. If you insist on making a GUI application, then you must design the software architecture so that the user interface is responsive, even when something time consuming like network access or serious computation is involved.

If I click on some button to start an action, and a dialog must pop up to ask for some input, then it should pop up quickly every time. I won’t mind if the action sometimes takes 20 seconds or sometimes 25 seconds, but it will break my stride if I have to wait some varying amount of time for the dialog to pop up. This problem is rife in Windows software: sometimes you click the File menu and even that takes several seconds to come up.

On Microsoft Windows, most software is like this. On Linux, Firefox and OpenOffice are like this too. I find these programs quite unpleasant to use.
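The structure that avoids this is nothing exotic. Here is a minimal sketch in Python, using tkinter and a worker thread; the five-second sleep is a stand-in for the network access or computation, and a real program would add error handling. The click handler updates the display and returns immediately, the slow work runs off the UI thread, and the result comes back through a queue that the UI polls.

    import queue
    import threading
    import time
    import tkinter as tk

    def slow_operation():
        time.sleep(5)        # stand-in for network access or computation
        return "finished"

    class App:
        def __init__(self, root):
            self.root = root
            self.results = queue.Queue()
            self.button = tk.Button(root, text="Start", command=self.start)
            self.button.pack()
            self.label = tk.Label(root, text="idle")
            self.label.pack()
            self.poll()      # the UI thread only ever does quick work

        def start(self):
            # Returns immediately: the user sees a response right away,
            # however long the real work takes.
            self.label.config(text="working...")
            threading.Thread(target=self.worker, daemon=True).start()

        def worker(self):
            self.results.put(slow_operation())

        def poll(self):
            try:
                self.label.config(text=self.results.get_nowait())
            except queue.Empty:
                pass
            self.root.after(100, self.poll)   # check again in 100 ms

    if __name__ == "__main__":
        root = tk.Tk()
        app = App(root)
        root.mainloop()

None of this is hard; it just has to be designed in from the start rather than bolted on afterwards.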

Meanwhile emacs, which people used to deride for bloat — Eight Megs and Constantly Swapping — well, eight megs is basically cache memory now. Sure, emacs has bloated somewhat over time, but it is still a pleasure to use.


What’s wrong with GUIs

June 16, 2008

GUIs are so pervasive in today’s computing and development environment that some people have never seriously used an alternative method of interacting with a computer.

Yet I see so many places where the GUI is inappropriate and just plain counterproductive.

When developing a system, I do everything I can in an emacs shell.

Apart from the advantage of the powerful editing and searching facilities available, the biggest advantage this gives is in capturing a time ordered history of stimulus and response.

While experimenting and/or learning with a system under development, you are going to try a bunch of commands. Having history there means you can search back through results for things of note, snarf a chunk of output and filter it to hide the chaff, or sort it, or post-process it into a formatted report, or plot it in a graphic, or whatever.

This is especially powerful for working with dynamic languages, where you interact with your system as you are extending it. Once you’ve achieved something significant, you can simply cut and paste the command sequence into a permanent script, or a makefile, or whatever.

And not least, you also have text you can paste into a buffer for documentation. You may need to provide information to co-developers, or customers. Or you may need to do the same or similar thing in a week’s time. In any case, you can just look at the transcript doc to see the details of how it was done.

I’ll often issue the odd 'date' command during the interactive process, just to give some documentation on how long something took to run, or how long it took me to work something out.

To do the same things with a GUI, you would need a lot of screen captures, or you need to write some inane sequence like

select menu blah, in the dialog check options blah and blah, then press button blah, then select menu blah…

Although the problem with that is the danger of missing a step or getting something out of order. So to do it properly, you really need to video capture the process.

I can’t stand videos when a text document would suffice. You see more and more of this now on everything from news sites to on-line tutorials. You go to a web site, and even if the item is technical, there is a good chance that the link is for a video of a presentation, complete with annoying ums and aahs and errs… It’s beyond me why anyone would prefer this to a text.

And the plain audio transcript is just as bad. Sure, it doesn’t waste as much bandwidth to download, but it still wastes time to listen to. That might be fine for entertainment, but it’s a waste of time for documentation.

Now, please note that I didn’t say I don’t like graphics, and I’m not saying that a high res display + mouse isn’t a boost to productivity. For sure, graphical presentation can be very helpful in visualizing data or processes.

The problem I have is with the current obsession with using a graphical metaphor everywhere, which means for processes which are far more efficiently handled with simple text.


I have had enough of procedural imperative programming languages

September 7, 2007

I have had enough of procedural imperative programming languages, and the retarded computer architectures that foster them!

How can you write x = x + 1; without feeling like an idiot?

The current insane state of affairs is all based on the upside-down idea of computers invented by Johnny von Neumann. It’s all about the CPU (and please be sure to genuflect reverently as you read CPU), which modifies the state of some memory (spit!). For special obfuscation, please find the rules for how said state is to be modified or evolved somewhere in the same memory.

To draw the hierarchy out explicitly, it is:

CPU --> Memory

It’s oh so easy in hindsight to cast aspersions on the genius of the man (and to be accurate, there were other contributors to the concept), but this idea is truly misguided. Like all good mind crippling paradigms, it leads directly to a number of BIG problems, the solution to which is naturally MORE of the same mind crippling paradigm.

I’ll just mention 3 here:

  1. The only way to know the result of a program is to run the program. It’s no coincidence that this seems to be the preferred programming paradigm of the borderline (or fully) autistic, as this seems to be the way they think anyway.
  2. You end up with all the state moving through a “von Neumann bottleneck” between the CPU and the memory.
  3. Because state changes with time, you need to explicitly manage the time order of processing. This means parallelism is hard, and the natural way to speed things up is to speed up the CPU and the memory interface.

Of course, to a certain class of hardware guy this is the Right Thing. Like the engine freaks and their modified cars – you know, heaps of power or torque, but actually undriveable in a real-world scenario – it’s all about the grunt of the CPU.

After all, when your program is a) do this tiny step, b) then do that tiny step, c) then do the next tiny step, etc… it all comes down to how fast you can “do a tiny step”. That means a processing beast, and of course, to feed the beast and keep it stoked you need a Hard Core memory interface. Just look at the memory interface of a modern computer. Level after level of caches, prefetchers, burst fills, speculative execution – it’s a hardware guy’s wet dream.

And please don’t mention Harvard architecture. I mean, one memory interface? Not enough! Several memory interfaces? Too many! Exactly TWO memory interfaces? Just right! (this insightful concept of computer engineering first espoused by one Goldilocks, I believe).

OK, enough griping. What is my alternative?

Well, for a change, think about the computer the other way up. With this world view, it’s all about the memory. That is, RAM, or disk, or paper – whatever.

Think about it: with a GUI, you already look directly at a big chunk of the computer’s memory on a visual display, and “see” the resulting state of a computation. Or on disk, you have the results of a computation in a file. So the memory is the most important thing.

Now, you need to modify the state of the memory. Think in terms of data flow: Memory contains data. Data flows out of the memory, is acted upon, and results (data) flow back into the memory.

That leaves the last question: How is data acted upon? That would be by a processor. It could be a CPU, but it could also be a piece of dedicated or reconfigurable hardware (for example, an MPEG decoder). Whatever it is, think of it as a *function*.

See, this hierarchy is the exact opposite of the von Neumann one:

Memory --> Data --> Processors

This is not good for the CPU designer’s sense of self importance – he’s relegated to just one of potentially many catch bags on the side of the memory. But it is a good way to solve the big issues, viz:

  1. The final result of a computation will be the evaluation of functions. If the functions f and g are correct, you don’t need to run the program to know the result g(f(x)) will be correct.
  2. Instead of one, super-duper interface to one mega CPU, you can have many interfaces to many functional processors, and those processors can be optimized for the function they perform.
  3. The functions (processors) have no state, just result = function(input). Time is not relevant. The result of the function does not change with time. Of course it takes time to produce the result, but that can be considered a quantum, a tick. Start a computation, *tick*, result is available. So now you can do parallelism, with a few simple primitives like composition, split, join, rendezvous, etc. (a small sketch follows this list)
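To illustrate the statelessness point (in throwaway Python, purely for convenience; the languages I actually want for this appear a little further down): f and g below keep no state, so the correctness of g(f(x)) follows from the correctness of f and g, and independent evaluations can be farmed out in parallel with no care at all about time ordering.

    from concurrent.futures import ProcessPoolExecutor

    def f(x):
        return x * x          # no state: same input, same output, every time

    def g(y):
        return y + 1

    def gf(x):
        return g(f(x))        # composition: correct because f and g are correct

    if __name__ == "__main__":
        inputs = range(8)
        with ProcessPoolExecutor() as pool:   # a split/join in miniature
            results = list(pool.map(gf, inputs))
        print(results)        # [1, 2, 5, 10, 17, 26, 37, 50]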

So now I think you can guess what my wet dream computer architecture looks like:

Memory.

Many interface pipelines to dynamically configured processing elements, configured with a hardware description language. Could be Verilog, could be VHDL, could be something more radical (for example, check out Bluespec). Whatever. But please, no more x86 brain-damage CPUs.

Computations programmed with functional programming languages. For the really crusty, it can be Lisp. Or Scheme. Or Haskell. Recently, I like the look of Clean. Whatever. But please, no more C, or C++, or Java brain damage. Not even Python.

Ok, I’m ready for objections from the whining Asperger’s sufferers who will undoubtedly assure me that a “really useful engine” (attributed to one Thomas the Tank, I believe) can’t work that way.


Dell Optiplex 745 USFF – small and dumb!

August 2, 2007

Some time back I picked up a Dell Optiplex 745 USFF with Pentium D960 processor cheap, with the intention of using it as my desktop machine. Colleagues tried to tell me this was misguided, but who listens to others? Turns out, it was a mistake for more than one reason. If you care, here is the tale of woe…

Firstly, the Pentium D960 processor. Yes, this 3.6 GHz sucker is as slow as you’ve heard. Floating-point-wise, it is no faster than a 3 GHz Pentium 4 HT, and is almost beaten by a 1.5 GHz Pentium M. It’s pretty obvious why the Core 2 architecture was evolved from the Pentium M and the Pentium 4 has been left for dead.

OK, slow processor, but what else? Well, secondly, the DVD drive is only 8x speed. This may have something to do with the fact that it is installed in a removable bay. Nice. I can see that removability being real handy. Meanwhile, writing a 3 GB DVD for backup takes about 15 minutes, which is a pain.

But look at the case – it’s so nice and compact – surely the convenience of the small footprint on the desktop makes up for these (minor!) foibles? Well, that might have been so, except precisely because the case is so compact, the fan must be tiny, and yet it still needs to move a lot of air to keep that Pentium D madman cool. The upshot: you guessed it – noisy.

Alright, so there was some bitching and griping when all this became apparent, but eventually one moves on, and therefore starts to install Slackware. Ah, Slackware Linux, a bizarre blend of BSD goodness and SysV brain damage. But that is another rant. Slackware 11 was current, and this went on without too much drama. Configuration – no problem, until it came time to configure the video card.

You see, this thing has an Intel 965Q chipset, which is pretty recent, and wasn’t recognized by xorg 6. The plain frame buffer driver worked OK, but would not do the 1680×1050 resolution required by the widescreen monitor. There followed much fiddling of configurations, downloading and compiling of xorg 7 modules, more downloading of Intel driver modules, more configuration, copious and vigorous cursing, but finally there was a 1680×1050 image on the monitor. Only it was crap.

Now, this was a real eye-opener. The DVI output was 1680×1050@60Hz, and the info menu on the monitor confirmed that this was indeed the case, and also said that this was its preferred mode, and yet the picture was fuzzy.

Well, in disgust, Slackware was wiped, and XP was installed from the recovery disk. After the umpteen mandatory reboots, the monitor showed a pristine Windows XP desktop at 1680×1050@60Hz, and it was STILL FUZZY!

Ah, but it is using the default graphics driver… A bit of a rummage around in the packaging, and I located the Intel drivers disk. Installed that, and bingo – crystal clear 1680×1050@60Hz.

After installing Cygwin/X, it makes an OK X terminal. A bit noisy though…


It begins

May 3, 2007

OK, so another one announces to the world that he starts blogging, as if it is some momentous event.

Aren’t we all such special and precious individuals in this enlightened age?

