Metaphorically Speaking
Humans most need help when the task they are dealing with is new, and its complexity exceeds their (very low) input limit. Some examples:
Money Laundering – poorly written legislation set a limit below which transactions are not checked ($10,000), so feed a hundred envelopes through a deposit ATM and a million dollars is laundered in ten minutes. Automated (labour-saving) banking processes, no intelligence. This is legislation attempting to thwart a nimble, intelligent adversary, and failing badly.
Robodebt – turning legislation into a program – hundreds of thousands of people traumatised, 2 suicides, $1.7 billion in reparations. Lies, skulduggery, stacking of the board reviewing decisions. Corruption, or looking the other way, all the way down – a few people risking their careers by speaking out about the government “stealing” from the poorest people in society.
Horizon (UK) – turning a specification into a program – thousands traumatised, 13 suicides, 70 wrongly incarcerated, £1.7 billion in reparations. Rank incompetence, covered over with lies and skulduggery. Killing people to try to save a reputation.
Boeing 737 MAX MCAS – deliberate lying to the regulator, resulting in 346 deaths and a $20 billion hit to reputation – shoddy engineering, shoddy civics, an overflowing order book, no consequences. The worst crime was trying to put the blame on “third world pilots”, when a ticking time bomb had been added, without their knowledge, to a plane they were familiar with.
Lockheed Martin F-35 – an average design (slow, with a slow climb rate) saved by a maybe-better missile. Many cascading mistakes – stealth paint that washes off in rain, Navy undercarriage too weak to land on carriers with a full fuel load. Component commonality was initially estimated at 75%; the tight aircraft design delivered 24% – a huge cost overrun (hundreds of billions), a decade of delays and no hope of a fifty-year life (a new design commissioned from a different manufacturer after five years).
What could AI have done to avoid these disasters?
We need to emphasise that the examples stem mostly from skulduggery (either deliberate lies from the get-go, or trying to cover up incompetence) rather than overwhelming complexity. Lying adds a complexity which is usually ignored, but projects in the tens or hundreds of billions are rife with it (including lying to oneself that it will be all right in the end).
We will be comparing LLMs with Semantic AI for their suitability for complex problems. The simplest solution for the first three examples is to turn the text into working models, so the layperson can see what the legislation or specification actually does, but LLMs don’t allow that.
LLMs
This is not fertile ground for LLMs to prosper. They were invented for Search Engines, and can quickly find patterns of words. If you wanted to find something about “dog parks”, earlier Search Engines would look for “dog” and then for “park”, resulting in many irrelevant hits. Now the Search Engine looks for “dog park”, or even “segregated dog park”. But the Search Engine has no idea what the words mean, so it can’t make use of synonyms. “A park for dogs and their owners, split into different areas for different size dogs” sounds synonymous with “a segregated dog park”, but only if the thing doing the searching knows what the words mean. A Search Engine hasn’t the time for such analysis. One might ask why it took twenty years to twig to a better way.
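To make the contrast concrete, here is a minimal sketch in Python (the tiny “documents” and the query are invented for illustration): word-by-word matching drags in irrelevant hits, phrase matching narrows them, and neither recognises the paraphrase, because no meaning is attached to the words.

# Illustrative only: three tiny "documents" standing in for indexed pages.
docs = [
    "best dog breeds for apartment living, near a car park",
    "the council opened a segregated dog park on Elm Street",
    "a park for dogs and their owners, split into areas for different size dogs",
]

query = "segregated dog park"

# Early approach: match on individual words - many irrelevant hits.
word_hits = [d for d in docs if any(w in d.split() for w in query.split())]

# Later approach: match on the phrase - fewer hits, but the paraphrase in the
# third document is missed, because no meaning is attached to the words.
phrase_hits = [d for d in docs if query in d]

print(word_hits)    # picks up the "car park" document as well
print(phrase_hits)  # only the literal "segregated dog park" document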
Synonyms can also be structural, so

To him, animals are more important than people

can be rearranged into

Animals are more important than people to him
LLMs are based on a simple idea – patterns of words, yes; meaning avoided like the plague – but the simplicity of their structure prevents their use where meanings are important, which is almost everywhere. They can be pushed a little further than the infuriating crudity of chatbots, but not much further, because the meaning of the text is ignored. Their errors are called “hallucinations”, but they don’t have an imagination, so the errors are just mistakes from misapplication.
Training of LLMs
There is talk of training LLMs, but that does not mean training that has anything to do with meaning. One form of training is to find a likely piece of text and send it off to another tool, suited to the intended task. This is broadly unreliable – there is nothing that can put the right word in the right pigeonhole.
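As a hedged illustration of the pigeonhole problem (the routing table and categories below are invented, not taken from any particular system), routing on surface keywords misfiles text as soon as a word is used figuratively:

# Invented example of keyword routing to a downstream tool.
ROUTES = {
    "calendar": ["date", "meeting", "schedule"],
    "finance":  ["account", "deposit", "balance"],
}

def route(text):
    # Pick the first route whose keyword appears in the text.
    for tool, keywords in ROUTES.items():
        if any(k in text.lower() for k in keywords):
            return tool
    return "unknown"

print(route("Please schedule the project review"))    # calendar - plausible
print(route("On balance, things went rather badly"))  # finance - an idiom, wrong pigeonhole
print(route("She couldn't account for the delay"))    # finance - figurative use, wrong again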
Figurative Use
A reasonably generalist vocabulary would run to about 50,000 words, but along with that are about 10,000 phrases and figures of speech. Some examples:
A bridge too far
A fly in the ointment
A whale of a time
Beat around the bush
Fly under the radar
Get on someone’s nerves
Go along to get along
Hand-to-mouth
Have other fish to fry
Hoist by one’s own petard
Look the other way
Without a reasonable knowledge of idiom and metaphor, much technical text and general reporting would be unintelligible. This is a fundamental problem for LLMs – they have no idea what the fly is doing in the ointment, why the fish need frying, or why someone would be beating around a bush, and no idea how to go about finding out what is meant. They are suitable only for the shallowest analysis of text – finding and copying a school homework essay, say.
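To show what is missing, here is a minimal sketch (the tiny idiom table is invented for illustration) of treating figures of speech as units with meanings, rather than as runs of literal words – the sort of thing a word-pattern machine never does:

# Illustrative sketch: figures of speech handled as units with meanings.
IDIOMS = {
    "a fly in the ointment": "a small problem spoiling something otherwise good",
    "have other fish to fry": "have more important things to do",
    "beat around the bush": "avoid saying something directly",
}

def gloss(text):
    # Replace any known figure of speech with its meaning before further analysis.
    out = text.lower()
    for phrase, meaning in IDIOMS.items():
        out = out.replace(phrase, meaning)
    return out

print(gloss("The audit was fine, but there is a fly in the ointment"))
# -> "the audit was fine, but there is a small problem spoiling something otherwise good"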
I am willing to believe that LLMs aren’t going to do the hard
stuff. But what about something else – say Neurosymbolics?
What is the argument for converting something described in English into a much inferior language that most people do not use to describe a complex problem? Neurosymbolics is a mishmash of technologies – a mixture of directed resistor networks, a clumsy and largely unworkable logical structure which puts the logic remote from the operational structure, and bits of English (a language which folds predicate, existential and temporal logic into its own structure – “he wanted her to know the truth about John before she signed the agreement” – how do you separate the logic from the rest of the text?). Neurosymbolics is the latest, and hopefully last, effort to avoid the work of making a machine “understand” English. As I sit here giving Semantic AI some understanding of the time continuum, why would you use anything else? Or if you did, dreams of AGI are put off another twenty years, this time with no excuse about limited memory – the only excuse possible is lack of imagination. The earliest programmers, in the fifties, dreamt of making computers read English – it has been possible for thirty years.
Neurosymbolics is not going to conquer the world in the way that English has, and it’s not going to help with another pressing need humanity has – collaboration among specialists. Yes, significant computer resources are required to emulate a human without the human’s conscious input limit, but when billions of dollars are regularly wasted (or hundreds of billions if we try hard), the need is obviously there.
Collaboration
If a project
is large and important, it will incorporate the work of many specialties. Let’s
take Domestic Violence. There would be lawyers, police, politicians,
psychologists, psychiatrists, trauma specialists, culture specialists, Complex
Systems engineers, and the general public. These people all talk different
dialects of technical English. The machine’s job is to be familiar with those
different dialects and meld a complete picture, comprehensible to all the
parties. That doesn't mean some simplified version that doesn't satisfy anyone - it means using heavily freighted words for the area specialist, and a necessarily long but simplified version for others.
Semantic AI
English isn’t the problem-solving language – instead, it provides a stream of words that allows the structure in the speaker’s head to be duplicated in the listener’s head.
Semantic AI takes each word or phrase in the text and turns it into either an object or an operator in an undirected, self-extensible network, where predicate, existential and temporal logic is diffused through the network. Yes, it needs a large vocabulary and lots of metaphorical phrases, but if it is to interact with humans, it has to speak their lingo, not force them to crush the problem description into some cramped Mickey Mouse language, cooked up by someone who wouldn’t understand the complexity of the problem anyway.
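As a very rough sketch of the idea (the class names and the toy network below are my own illustration, not the Orion implementation), each word or phrase becomes a node – an object or an operator – and the operators carry their piece of the logic locally, spread through an undirected network that can grow as new text arrives:

# Illustrative sketch only - not the Orion implementation.
class Node:
    def __init__(self, label, kind):
        self.label = label    # the word or phrase the node came from
        self.kind = kind      # "object" or "operator"
        self.links = set()    # undirected connections to other nodes

def connect(a, b):
    # Undirected: each node records the other, so the network
    # extends itself as new nodes are added.
    a.links.add(b)
    b.links.add(a)

# "he wanted her to know the truth about John before she signed the agreement"
he     = Node("he", "object")
her    = Node("her", "object")
wanted = Node("wanted", "operator")                    # carries the predicate logic
know   = Node("know the truth about John", "operator")
before = Node("before", "operator")                    # carries the temporal logic
signed = Node("signed the agreement", "operator")

connect(he, wanted); connect(wanted, her); connect(wanted, know)
connect(know, before); connect(before, signed); connect(signed, her)

# The logic is not held in a separate layer: it lives on the operator
# nodes, diffused through the same network that holds the objects.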
Orion Semantic AI