Posts

Featured

Getting AI Back on Track

  “Altman admits scaling isn’t the answer” – how long do we have to wait until Altman admits LLMs aren’t the answer? What this shows is that a large percentage of people working in Computer Science have no understanding of what they are doing when reading or writing. How can that be, when they can write programs? The symbols used in programs have a single meaning, so there is no confusion. But a specification is different – it uses words, and many words have multiple meanings. How do humans handle this? We have very limited bandwidth, so most of the parsing is done unconsciously, leaving enough bandwidth to understand the meaning of the message. This is how people can be fooled into thinking an LLM is “reading” the text, when it is doing no such thing. An example: a high jump has a bar, over which competitors have to jump. “Raising the bar” means increasing its height, increasing the difficulty of jumping over it. “Raising the bar” can also be used as a figurative allusion, meaning something has become harder...

Latest Posts

AI Regulation

Why Does Language Do As Well As It Does?

Where Do We Go From Here?

A Working Model

We Don't Need Another Mountain

Reading, Analysing and Activating Complex Text

Too Big to Fail?

Is Mental Therapy a Life-Critical Task?

Consulting Using Semantic AI