Hallucinations? No, Mistakes

 

It is said that LLMs have hallucinations. But a hallucination requires an inner world that is out of kilter, and an LLM lacks the complexity to have one. It is looking for a word pattern without knowing what the words mean – a cheap take on intelligence, hoping that something useful will emerge without having to learn the complexities of language. Language isn’t just words, but also phrases, whose meanings long ago left the situations that caused them to arise. “A bridge too far” – trying to conquer one bridge too many in the Second World War. “A walk in the park” – pleasant and easy to do. Phrases borrowed from other languages – “ab externo”, “fait accompli”. Legal phrases – “sub judice”, “accessory after the fact”. Science phrases – “abyssal plain”, “polar mass”, “atmospheric river”. Medical phrases – “end-of-life”, “CAT scan”.
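To make the point concrete, here is a minimal sketch (our own illustration, not the Orion system, and the phrase lexicon and glosses are just examples from the list above) of why phrases have to be matched as whole units before any word-level processing – a greedy longest-match over a small phrase lexicon:

```python
# Illustrative sketch only: a tiny phrase lexicon and a greedy longest-match
# tokenizer. The point is that "a bridge too far" is one unit of meaning,
# not four independent words.
PHRASES = {
    ("a", "bridge", "too", "far"): "an overreach - one objective too many",
    ("a", "walk", "in", "the", "park"): "pleasant and easy to do",
    ("fait", "accompli"): "a thing already done and not reversible",
    ("accessory", "after", "the", "fact"): "one who assists after a crime",
    ("atmospheric", "river"): "a narrow corridor of concentrated water vapour",
}
MAX_LEN = max(len(p) for p in PHRASES)

def tokenize_with_phrases(text: str) -> list[str]:
    """Return tokens, with known phrases collapsed into single units."""
    words = text.lower().split()
    out, i = [], 0
    while i < len(words):
        # Try the longest candidate phrase first, then shorter ones.
        for span in range(min(MAX_LEN, len(words) - i), 1, -1):
            candidate = tuple(words[i:i + span])
            if candidate in PHRASES:
                out.append(" ".join(candidate))   # one unit, one meaning
                i += span
                break
        else:
            out.append(words[i])                  # ordinary single word
            i += 1
    return out

print(tokenize_with_phrases("Taking that hill was a bridge too far"))
# ['taking', 'that', 'hill', 'was', 'a bridge too far']
```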

An LLM can provide the unendearing stupidity of a chatbot, so, as long as we stick to really shallow things using simple words (no phrases, thank you), it will sort of work. But humans need help with really complex things – legislation, specifications, plans, even psychological analysis. We need to move from the toy end of the spectrum to a machine that demonstrates intelligence – something that “understands” what humans say because it knows all the meanings of all the words.

Big talk. What have you got to back it up?

Complexity. Or playing with lots of money. 

Robodebt – 2 billion in reparations – add in malign intent.

MRH-90 – 4 billion wasted – add in a little bit of hubris.

Boeing 737 MAX MCAS – people tell lies – 346 deaths and a 20 billion hit to reputation.

F-35 (US) – hundreds of billions in overruns – the old hands mumbled “Jack of all trades, master of none”. What would they know? As it turned out, quite a lot.

Humans don’t handle complexity well – they have a built-in very low limit (and a built-in blindness to that limit).

Spectrum of Complexity

We currently have a vocabulary of 45,000 words and 10,000 wordgroups (we built a logic and maths system 40 years ago – the Semantic AI system is built on its bones). We don’t set much store by words being close together – in legislation or a specification they may be hundreds of pages apart, but it is still essential that the relation between them is known to the system.
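As a rough illustration of the long-range point (our own toy sketch, not how the Orion system is built; the terms and page numbers are invented), relations can be stored against the terms themselves, so the distance between mentions in the document plays no part in finding them:

```python
# Toy sketch (not the Orion system): relations keyed by term, not by position.
# A definition on page 3 and a constraint on page 487 resolve to the same node.
from collections import defaultdict

class RelationStore:
    def __init__(self):
        self.relations = defaultdict(set)   # term -> {(relation, other_term, page)}

    def add(self, term: str, relation: str, other: str, page: int):
        # The page is recorded only for traceability; it plays no part in lookup.
        self.relations[term].add((relation, other, page))

    def related(self, term: str):
        return self.relations[term]

store = RelationStore()
store.add("approved program", "defined_as", "a program listed in Schedule 2", page=3)
store.add("approved program", "constrained_by", "section 114 reporting duties", page=487)

for rel in store.related("approved program"):
    print(rel)
```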

