Topsy-Turvy
Recent events have demonstrated how inappropriate a Large Language Model (LLM) is for providing intelligence on a rapidly evolving position.
The USA is rapidly changing its positions on Ukraine and Israel/Palestine, with Europe scampering to keep up or trying to slow the rate of change.
Without commenting on the new positions being taken, it should be obvious how inappropriate an LLM approach is for dynamic situations. Looking on the internet for the most popular meaning is no use when radical change can occur within an hour. The only approach with a prayer of keeping up is a semantic one, and even then, words are being redefined as you read this.
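As an illustration of what keeping up might involve, here is a minimal sketch of a lexicon in which each sense of a word carries a timestamp, so the sense currently in force wins rather than the most popular one. All names and entries here are hypothetical, not a description of any existing system:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Sense:
    definition: str
    effective_from: datetime  # when this reading became current

# Hypothetical lexicon: each word keeps its full history of senses.
lexicon: dict[str, list[Sense]] = {
    "ceasefire": [
        Sense("cessation of hostilities", datetime(2000, 1, 1)),
        # A later entry supersedes the earlier one without erasing it.
        Sense("pause conditional on stated terms", datetime(2025, 3, 1)),
    ],
}

def current_sense(word: str, at: datetime) -> str:
    """Return the sense in force at a given moment, not the most popular one."""
    senses = [s for s in lexicon.get(word, []) if s.effective_from <= at]
    if not senses:
        raise KeyError(f"no sense of {word!r} recorded before {at}")
    return max(senses, key=lambda s: s.effective_from).definition

print(current_sense("ceasefire", datetime.now()))
```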
LLMs offer no intelligence to the understanding of text; instead they work as a useful electronic index to text that changes slowly with time. Search engines used to look up key words as separate words, until someone realised that searching for “dog” and “park” separately was silly, and searching for “dog park”, a word pattern, was much more efficient. No intelligence was added to the result, but a lot of irrelevant hits could be avoided, together with the message that a million hits had been found, the sort of result that makes a search engine useless.
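A minimal sketch of that difference, using hypothetical documents and plain Python rather than any real search engine, showing how matching the words separately lets an irrelevant hit through that the phrase pattern discards:

```python
docs = [
    "the dog park on the corner is busy",
    "she walked the dog and then drove to the car park",  # both words, wrong meaning
]

def keyword_match(doc: str, words: list[str]) -> bool:
    """Old style: every word present somewhere, in any order."""
    tokens = doc.split()
    return all(w in tokens for w in words)

def phrase_match(doc: str, phrase: str) -> bool:
    """Pattern style: the words must occur together, in order."""
    return phrase in doc

for doc in docs:
    print(keyword_match(doc, ["dog", "park"]), phrase_match(doc, "dog park"))
# First doc: True True. Second doc: True False, so the phrase filter
# avoids the irrelevant hit that separate keywords let through.
```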
With an LLM, the problem remains that it doesn’t understand the meaning of a single word of English, so it can’t synthesise. Another problem (and there are lots) is that for many English word combinations, the meaning is not easily derived from the components: he wants in on the deal, track down, beat up, get to the bottom of, critical thinking. In total, about ten thousand word combinations need to be known, as well as a vocabulary heading for fifty thousand words (and meanings heading for a hundred thousand, with some words, mostly simple words, having seventy meanings).
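One way a semantic system can cope is to treat such combinations as lexical units in their own right. Here is a sketch of that idea, with a tiny hypothetical lexicon standing in for the ten thousand or so combinations mentioned above, matching the longest known combination before falling back to single words:

```python
# Hypothetical fragment of a multiword-expression lexicon.
mwe_lexicon = {
    ("track", "down"): "locate after a search",
    ("beat", "up"): "assault",
    ("get", "to", "the", "bottom", "of"): "discover the real cause of",
}
MAX_LEN = max(len(k) for k in mwe_lexicon)

def segment(tokens: list[str]) -> list[tuple[str, ...]]:
    """Greedy longest-match: prefer the longest known combination at each point."""
    out, i = [], 0
    while i < len(tokens):
        for n in range(min(MAX_LEN, len(tokens) - i), 0, -1):
            chunk = tuple(tokens[i:i + n])
            if n == 1 or chunk in mwe_lexicon:
                out.append(chunk)
                i += n
                break
    return out

print(segment("we must get to the bottom of this".split()))
# [('we',), ('must',), ('get', 'to', 'the', 'bottom', 'of'), ('this',)]
```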
We can’t wait for full AGI; we need to use a semantic approach to make up for our severe limitations: the Four Pieces Limit. A human is very flexible while the text is limited, but that flexibility falls away as a piece of text increases in size and complexity. Ten pages are easy, though still prone to mistakes from inattention, or from where the reader wandered off to get a cup of coffee; a hundred pages are getting quite hard; a thousand pages are liable to be full of mistakes, as the text crosses the boundaries of several specialties and no one understands it in toto.
Why isn’t this obvious?
Reading complex text is a complex task, one we have handed over to our Unconscious Minds, the only thing we have that can handle complexity. That means we don’t think about the problem consciously (because we can’t), and we then make the schoolboy error of assuming that, because we didn’t think about it consciously, we didn’t think about it at all, so it must be trivial and can be ignored.
What about technical language?
Technical language has nowhere near the range of a natural language like English. If we get the semantics right, we can easily mix in mathematical symbols.
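As a small illustration of that claim, with hypothetical lexicon entries, a semantic system can treat a mathematical symbol as just another lexicon entry with a fixed meaning, which is exactly what makes mixing them in easy:

```python
# Hypothetical entries: symbols get definitions exactly as words do,
# and typically far fewer senses each than ordinary English words.
semantic_lexicon = {
    ">": "is greater than",
    "=": "is equal to",
    "pressure": "force per unit area",
}

def gloss(sentence: str) -> list[tuple[str, str]]:
    """Pair each token, word or symbol alike, with its recorded sense."""
    return [(t, semantic_lexicon.get(t, "?")) for t in sentence.split()]

print(gloss("pressure > 100"))
# [('pressure', 'force per unit area'), ('>', 'is greater than'), ('100', '?')]
```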