I have had enough of procedural imperative programming languages, and the braindead computer architectures that foster them!
How can you write x = x + 1; without feeling like an idiot?
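For contrast, here is what the same increment looks like when you treat it as a function instead of a mutation. A minimal Haskell sketch; the names are mine:

```haskell
-- A pure alternative to "x = x + 1": no destructive update, just a
-- function from old value to new value.
increment :: Int -> Int
increment x = x + 1

-- Applying it yields a new value; the old binding is untouched:
example :: (Int, Int)
example = let x = 41 in (x, increment x)  -- (41, 42)
```

No variable changes underneath you; "x" still means what it meant one line earlier.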
The current insane state of affairs is all based on the upside down idea of computers invented by Johnny von Neumann. It’s all about the CPU (and please be sure to genuflect reverently as you read CPU), which modifies the state of some memory (spit!). For special obfuscation, please find the rules for how said state is to be modified or evolved somewhere in the same memory.
To draw the hierarchy out explicitly, it is:
CPU -> Memory
It’s oh so easy in hindsight to cast aspersions on the genius of the man (and to be accurate, there were other contributors to the concept), but this idea is truly misguided. Like all good mind-crippling paradigms, it leads directly to a number of BIG problems, the solution to which is naturally MORE of the same mind-crippling paradigm.
I’ll just mention 3 here:
- The only way to know the result of a program is to run the program. With step-by-step mutation, there is no way to reason about the outcome short of executing every tiny step.
- You end up with all the state moving through a “von Neumann Bottleneck” between the CPU and the memory.
- Because state changes with time, you need to explicitly manage the time order of processing. This means parallelism is hard, and the natural way to speed things up is to speed up the CPU and the memory interface.
Of course, to a certain class of hardware guy this is the Right Thing. Like the engine freaks and their modified cars – you know, heaps of power or torque, but actually undriveable in a real-world scenario – it’s all about the grunt of the CPU.
After all, when your program is a) do this tiny step, b) then do that tiny step, c) then do the next tiny step, etc… it all comes down to how fast can you “do a tiny step”. That means a processing beast, and of course, to feed the beast and keep it stoked you need a Hard Core memory interface. Just look at the memory interface of a modern computer. Level after level of caches, prefetchers, burst fills, speculative execution – it’s a hardware guy’s wet dream.
And please don’t mention Harvard architecture. I mean, one memory interface? Not enough! Several memory interfaces? Too many! Exactly TWO memory interfaces? Just right! (this insightful concept of computer engineering first espoused by one Goldilocks, I believe).
OK, enough griping. What is my alternative?
Well, for a change, think about the computer the other way up. With this world view, it’s all about the memory. That is, RAM, or disk, or paper – whatever.
Think about it: with a GUI, you already look directly at a big chunk of the computer’s memory on a visual display, and “see” the resulting state of a computation. Or on disk, you have the results of a computation in a file. So the memory is the most important thing.
Now, you need to modify the state of the memory. Think in terms of data flow: Memory contains data. Data flows out of the memory, is acted upon, and results (data) flow back into the memory.
That leaves the last question: How is data acted upon? That would be by a processor. It could be a CPU, but it could also be a piece of dedicated, or reconfigurable hardware (for example, an mpeg decoder). Whatever it is, think of it as a *function*.
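To make the picture concrete, here is a toy Haskell sketch of that dataflow. The “memory”, the key names, and the decode function are all invented for illustration:

```haskell
-- Memory is just a store of named values (a toy model).
type Memory = [(String, Int)]

-- A "processor" is a pure function over data. Behind this interface it
-- could equally be a CPU, or a dedicated hardware block like a decoder.
decode :: Int -> Int
decode = (* 2)

-- Data flows out of memory, is acted upon, and the result flows back in.
step :: Memory -> Memory
step mem = case lookup "input" mem of
  Just v  -> ("output", decode v) : mem
  Nothing -> mem
```

The memory sits at the center; the processor is just a function plugged into it.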
See, this hierarchy is the exact opposite of the von Neumann one:
Memory -> Data -> Processors
This is not good for the CPU designer’s sense of self-importance – he’s relegated to just one of potentially many attachments hanging off the side of the memory. But it is a good way to solve the big issues, viz:
- The final result of a computation will be the evaluation of functions. If the functions f and g are correct, you don’t need to run the program to know the result g(f(x)) will be correct.
- Instead of one, super-duper interface to one mega CPU, you can have many interfaces to many functional processors, and those processors can be optimized for the function they perform.
- The functions (processors) have no state, just result = function(input). Time is not relevant. The result of the function does not change with time. Of course it takes time to produce the result, but that can be considered a quantum, a tick. Start a computation, *tick*, result is available. So now you can do parallelism, with a few simple primitives like composition, split, join, rendezvous, etc.
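The first and last points can be sketched in a few lines of Haskell. The functions f and g here are made up; the point is what composition buys you when processors are stateless:

```haskell
-- Two stateless "processors"; invented for illustration.
f :: Int -> Int
f = (+ 1)

g :: Int -> Int
g = (* 3)

-- Composition: if f and g are correct, g (f x) is correct by
-- construction. Time is not part of the meaning, only of the evaluation.
pipeline :: Int -> Int
pipeline = g . f

-- A crude "split/join": independent inputs to independent processors.
-- With no shared state, nothing forces these to evaluate in any order,
-- so they could run in parallel.
splitJoin :: Int -> (Int, Int)
splitJoin x = (f x, g x)
```

Composition, split, and join are ordinary function plumbing here; no time-ordering of memory writes ever enters the picture.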
So now I think you can guess what my wet dream computer architecture looks like:
Many interface pipelines to dynamically configured processing elements. Configured with a hardware description language. Could be Verilog, could be VHDL, could be something more radical (for example, check out Bluespec). Whatever. But please, no more brain-damaged x86 CPUs.
Computations programmed with functional programming languages. For the really crusty, it can be Lisp. Or Scheme. Or Haskell. Recently, I like the look of Clean. Whatever. But please, no more C, or C++, or Java brain damage. Not even Python.
Ok, I’m ready for objections from the naysayers who will undoubtedly assure me that a “really useful engine” (attributed to one Thomas the Tank Engine, I believe) can’t work that way.