Advice to Xi

Gary Marcus (professor emeritus of psychology and neural science at NYU, and LLM-basher-in-chief) has suggested advising President Xi to cut the ground out from under the current American AI boom by doing it cheaper and better (à la DeepSeek). This contradicts Napoleon’s advice: “If your adversary is making a mistake, don’t interrupt him”. And certainly, using LLMs for anything to do with intelligence is a huge mistake.

There was an interesting article in the NYT (https://www.nytimes.com/2025/10/11/business/china-electric-grid.html) about a million-volt DC transmission line running 3,000 km across China. It was almost embarrassing how impressed the American reporter was. When you have engineers at the top level of government willing to back you with the state’s resources, something stronger is called for (almost Manhattan Project-like). So, what is stronger? Semantic AI.

What is Semantic AI? A form of AI that uses a natural language (such as English or Mandarin) to communicate with a machine, where the machine also uses that same language for its internal workings, rather than turning what it is told into mathematical logic or some Mickey Mouse language (NeuroSymbolics, INSA) that throws away whatever it can’t understand. The machine has to understand humans very well to understand their problems. A recent news item: a psychologist approved someone accused of domestic violence to return to the family home, and two days later the domestic partner was dead. Humans don’t understand humans very well.

How does Semantic AI compare with LLMs? It doesn’t. LLMs were developed by Google as a way of improving the response of a search engine, which would otherwise come back uselessly with a million hits. You create a “prompt”, which might include “dog park” or “clinical depression” or longer phrases that will conjure up the desired text from the internet. At no stage does the LLM have to know what the words mean, either in the prompt or the targeted text, and with English, that is a bit fatal. Many words have a dozen or more meanings (up to 80). To make it worse, English uses a lot of figurative language (“a walk in the park”, “he raised the bar”, “keep up with the Joneses”).

Where an LLM knows the meaning of no word, Semantic AI knows every meaning of every word, including figurative meanings, and can work out the appropriate meaning for a particular word in its context. English text also has a lot of elided words. “A chess set” and “a movie set” seem straightforward, but “a movie set in Hawaii” is not “a movie set that is located in Hawaii”; it is “a movie that has Hawaii as its locale”. Keeping all these pieces in play is a lot of hard work on the part of our Unconscious Mind, and without something that emulates its abilities in a machine, the notion that the machine can read and understand English is fanciful.
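To make the word-sense problem concrete, here is a minimal sketch (in Python) of context-driven sense selection for “set”. Everything in it, the sense inventory, the cue words, the disambiguate function, is a hypothetical illustration, not part of any actual Semantic AI system; real coverage would mean every sense of every word, plus figurative and elided readings.

```python
# Toy word-sense disambiguation: a hypothetical sense inventory for "set",
# scored by overlap between each sense's cue words and the surrounding text.
# Illustrative only; a real semantic system would carry full sense inventories
# plus figurative and elided readings.

SENSES = {
    "set": [
        {"gloss": "a matched collection of pieces",
         "cues": {"chess", "pieces", "board", "collection"}},
        {"gloss": "a place where a movie is filmed",
         "cues": {"movie", "film", "studio", "built", "backlot"}},
        {"gloss": "located / having as its locale (elided participle)",
         "cues": {"in", "story", "locale", "takes", "place"}},
    ],
}

def disambiguate(word: str, context: str) -> str:
    """Pick the sense whose cue words overlap most with the context."""
    context_words = set(context.lower().split())
    best = max(SENSES[word], key=lambda s: len(s["cues"] & context_words))
    return best["gloss"]

print(disambiguate("set", "a chess set with carved pieces"))
print(disambiguate("set", "a movie set built on the studio backlot"))
print(disambiguate("set", "a movie set in Hawaii, a story whose locale is the islands"))
```

Even this toy shows the shape of the problem: the right reading of “set” is not in the word itself but in what surrounds it, and a machine that skips this step has not actually read the sentence.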

But Isn’t That Going To Be Slow?

Yes, but compared to what? Consider a new piece of legislation: someone commented it would take a person 40 hours to read it, but very little would stick. Semantic AI can read it faster, and everything sticks, and you end up with a working model of the text “in the machine’s head”. The machine can then assess it, together with the things it links to, for correctness, coherence, and consistency, and non-experts can see exactly what it does (Robodebt and the UK’s Horizon scandal would never have happened).
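As a sketch of what machine-checkable consistency could look like, here is a toy check that flags clauses obliging what another clause forbids. The clause triples and section numbers are invented for illustration (the income-averaging pair echoes the Robodebt failure); a genuine working model of a statute would be vastly richer than actor/modality/action tuples.

```python
# Toy consistency check over legislative clauses reduced to
# (section, actor, modality, action) tuples. Invented examples; the point is
# that conflicts become mechanically findable and visible to non-experts.

from itertools import combinations

clauses = [
    ("s12", "agency", "must",     "notify the debtor before raising a debt"),
    ("s47", "agency", "may",      "average yearly income across fortnights"),
    ("s48", "agency", "must not", "average yearly income across fortnights"),
]

# Pairs of modalities that clash when applied to the same actor and action.
CONFLICTS = {("must", "must not"), ("may", "must not")}

def find_conflicts(clauses):
    """Yield pairs of section labels whose modalities clash on one action."""
    for a, b in combinations(clauses, 2):
        same_target = a[1] == b[1] and a[3] == b[3]
        pair = (a[2], b[2])
        if same_target and (pair in CONFLICTS or pair[::-1] in CONFLICTS):
            yield a[0], b[0]

for left, right in find_conflicts(clauses):
    print(f"conflict: {left} contradicts {right}")  # -> conflict: s47 contradicts s48
```

A model that holds every clause, and everything each clause links to, can surface this kind of contradiction before the legislation does damage, rather than years afterwards.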

But We Can Do This

We can do it, to a very limited extent. Our Conscious Mind can handle no more than four variables at any one time. If something is more complex than that, we have no choice but to simplify it (unconsciously). If we are close to a solution, we are very good at finding it; if we are not close enough, we have to wait, sometimes thousands of years, before we find it (a hang glider is a good example: it was buildable 5,000 years ago).

It Only Does Legislation?

No, it handles any complex, meaningful text: specifications, plans, strategies, tactics, how to repair a damaged system, or how to refashion a system for an unexpected contingency.

Hasn’t This Been Tried Already?

Yes. Cyc began in 1984, amusingly enough to head off a threat from Japan’s “Fifth Generation” project. They gathered linguistic and AI experts and proceeded to attack the problem for thirty years: ontologies, machine learning, and so on.

Why Did Cyc Fail?

Its foundational principles destined it to fail. It would take statements in English and convert them into mathematical logic, based on “common sense”. Common sense is a fickle friend: if you are going to a new, hazardous environment like Mars, common sense is definitely not your friend; it will kill you in seconds. Back on Earth, if you are doing something new, like a million-volt transmission line, a new aircraft, or a new medical treatment, then common sense can get you into a world of pain. “It worked before” doesn’t mean it will work this time.

I don’t have the details, but it is a fair bet that Cyc wasn’t handling multiple meanings of words, figurative meanings, or elided words; it was translating complex English into a much inferior form.

Cyc was proud of its “common sense” approach to knowledge. Common sense doesn’t work for people (“a blind rage”), and it doesn’t work for something like Quantum Entanglement.

If they had stuck to English, they might have got somewhere, but they didn’t. Trying hard over many years when you don’t understand the problem is not good enough.

I can’t say whether Semantic AI would be easier to do in English or Mandarin, but Sun Tzu managed penetrating insights into the human psyche thousands of years ago, when the future speakers of English could only manage grunts (and Mandarin must be more than capable of rising to new concepts like Quantum Entanglement, as Chinese scientists have demonstrated by implementing an application of it).

The Next Five Years

The US will spend trillions of dollars on the death throes of LLMs for AI (effectively putting a modern-day version of the Encyclopedia Britannica in every pot) before going back to ANNs and Mickey Mouse languages. The failure of Cyc has convinced them that semantic methods are too hard, even though millions of people learn to speak English each year. A revealing quote from a respected person in AI: “I love programming deep neural nets”. AI is about making a machine self-sufficient in an unpredictable environment, not about someone having an ego trip setting resistor values. If one understands even a little of how real neural nets work, it should be obvious that ANNs lead nowhere.

So what advice is President Xi likely to receive?

“We should be creating a form of AI that can be used on our most complex problems (economic, social, technological, military, logistical), where our people are unable to comprehend the full extent of the problem and can only manage partial solutions. Doing so is not a weakness but a strength: an admission that we will build tools to make ourselves stronger. The US will spend a few more years playing with LLMs, then go back to playing with their ANNs, and wonder why we are so far in front: a good example of intelligent direction against selling snake oil in the free market.”
