We Are All Damned

"Grady Booch, a Fellow at IBM, and one of the deepest minds on software development in the history of the field" – Gary Marcus

Someone thinks we are all stupid

Dear oh dear - if this is one of the deepest minds we have, we really are in trouble.

Before we start making claims about what will happen, how about we look at our limitations. Four pieces, for one: we can only handle four pieces of information in our conscious mind at once, so if our conscious mind is telling us something about something complex, it is probably wrong. Looking back at all the waves of AI - Prolog, Expert Systems, Intelligent Agents - a few moments' thought should have told people that each wave was not soundly based. Artificial Neural Networks: how could something so limited - a directed resistor network - ever have been compared with a self-extensible active network that can appear to be undirected? Going back a little further, the Turing Machine. If you use an inconsistency, you can prove anything. If you add a state indicator (a light) to Turing's machine so the two states are visible, the proof collapses (the proof was created to order, so that is some excuse for its foolishness).

The only way we will achieve AGI is by building a machine which is too complex for us to understand, but that is OK - we handle facets of complexity quite well. People are spending millions of man-hours on generative methods that require hundreds of billions of parameters (Google quotes 540 billion for its PaLM), but the major problems we face (Climate Change) are new and rapidly evolving, making data-driven methods with no intelligence - ML, DL, LLMs - useless against them.

Well, what might the machine look like? It will probably have nodes, operators and links, with states and values able to propagate in any direction, and the ability to extend itself. We could call it Active Structure, because the structure changes its shape as it works on a problem - it changes its connections, or the directions in which information flows.

How do we get it off the ground? By loading a hundred thousand definitions from a human-readable dictionary. Yes, a human-readable dictionary is pretty terrible - atrocious circularity ("mend -> repair -> fix -> mend"), and no real attempt to describe emotional states ("angry -> annoyed -> angry") - but we have to start somewhere (at least some of it is quite good). Many words have multiple meanings - "set" has 72, "run" has 82, "on" has 77 - and many words have figurative meanings - a barnacle, a bulldozer. We can't allow this level of uncertainty to infect the machine, so the machine has to establish the right meaning for each word in its context - in the beginning, it will require human help. Humans also clump things unconsciously - "he put the money from the bank on the table", or "an imaginary flat surface of unlimited extent".
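The "propagatable in any direction" idea can be sketched as a single constraint node. This is a toy illustration only, not the author's Active Structure implementation; the class and method names (PlusNode, set) are invented for the example, here in Python.

```python
# Hypothetical sketch: one operator node linking a, b and c so that a + b = c.
# Values flow in whichever direction the known values allow - the direction
# of information flow is not fixed in advance, unlike a feed-forward network.

from typing import Dict, Optional

class PlusNode:
    """Bidirectional constraint a + b = c: setting any two values fires the third."""

    def __init__(self) -> None:
        self.values: Dict[str, Optional[float]] = {"a": None, "b": None, "c": None}

    def set(self, name: str, value: float) -> None:
        self.values[name] = value
        self._propagate()

    def _propagate(self) -> None:
        a, b, c = self.values["a"], self.values["b"], self.values["c"]
        if a is not None and b is not None and c is None:
            self.values["c"] = a + b      # forward: a, b -> c
        elif a is not None and c is not None and b is None:
            self.values["b"] = c - a      # backward: a, c -> b
        elif b is not None and c is not None and a is None:
            self.values["a"] = c - b      # backward: b, c -> a

node = PlusNode()
node.set("c", 10.0)   # set the "output" first...
node.set("a", 4.0)    # ...and the structure flows backward to find b
print(node.values["b"])   # 6.0
```

A real structure of this kind would chain many such nodes, with each newly determined value triggering further propagation through its links; the sketch shows only the local rule at one node.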

We do all this unconsciously - we don't know we are doing it, so we give it no value. We have to emulate something which is too complex for us to understand in order to reach human-level performance. But we want to surpass human-level performance - the Four Pieces Limit is an absurdly low ceiling, one the machine won't have.

Mr Booch's outburst arms the naysayers - the only way we can counter that is by trashing Mr Booch's legacy - UML.

A line from Wikipedia describing UML talks about a coder looking at a system diagram and writing a piece of low-level code to provide a function. The systems we need to build do not lend themselves to such a simplistic approach - see Lane Following, and think about the complexity of a system that can do what a human does easily. UML was an attempt to clean up the mess that had gone before; it was not an attempt to look toward the future building of complex systems.

Why do we need to make a machine "understand" a natural language? Because humans can describe a complex problem more completely in natural language than in any other way. Forcing them to use a maths-based systems tool like UML degrades what they can easily describe in their natural language.

