“It thinks like an expert” - New York Times

 An article in the New York Times about Generative AI and Medicine by Daniela J. Lamas

The article is full of quotes like “The potential is dazzling”. This one is troubling – generative A.I., “in which a computer can actually create new content in the style of a human”. No, it doesn’t create new content – the pieces of text it uses were written by humans. What it can do is cobble together pieces of human-generated text to give the illusion that it has created new text. The act of “cobbling together” is problematic, as the system has no idea what the words mean, only that the new piece of text contains the words it is looking for from the prompt.
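The “cobbling together” described above can be sketched with a toy bigram chain – a vastly simplified stand-in, built on a tiny hypothetical corpus, that recombines word pairs written by a human without any notion of what they mean:

```python
import random

# Tiny hypothetical corpus of human-written sentences (illustrative only).
corpus = (
    "the patient was given the drug . "
    "the drug was given to the patient . "
    "the patient recovered quickly ."
).split()

# Build a bigram table: for each word, the words that followed it.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def cobble(start, length, seed=0):
    """Recombine word pairs seen in the corpus, with no model of meaning."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return " ".join(out)

print(cobble("the", 8))
```

Every adjacent word pair in the output occurred somewhere in the human-written corpus; the program has merely rearranged them, which is the illusion of novelty at issue here.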


The problem with that is that words can have many meanings – a chess set, a set of tennis, a movie set, a movie set in Hawaii, the rain set in, he is set in his ways. The word “set” has seventy meanings, spread across a noun, a verb, a past participle, and an adjective, plus a flood of collocations – “set off”, “set out”, “set up”, etc. Generative AI doesn’t concern itself with parts of speech, different meanings, or collocations – only with whether it can find the word.
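A minimal sketch of the point about “set” – using a hypothetical mini-lexicon of a few senses, not a real dictionary – shows why bare word matching cannot separate one sense from another:

```python
# Hypothetical mini-lexicon: a handful of the many senses of "set",
# each tied to a part of speech. A real dictionary lists dozens.
senses_of_set = {
    ("set", "noun"): ["a chess set", "a set of tennis", "a movie set"],
    ("set", "verb"): ["the rain set in", "set off", "set up"],
    ("set", "adjective"): ["he is set in his ways"],
}

def keyword_match(query, phrases):
    """Naive retrieval: a phrase 'matches' if it contains the query word."""
    return [p for p in phrases if query in p.split()]

# Every phrase contains "set", so matching on the word alone cannot
# tell a noun sense from a verb sense from an adjective sense.
all_phrases = [p for ps in senses_of_set.values() for p in ps]
print(keyword_match("set", all_phrases))
```

All seven phrases match, because the matcher sees only the token, not the part of speech or the meaning behind it.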

“This is not the same as looking up a set of symptoms on Google; instead, these programs have the ability to synthesize data and “think” much like an expert.”

These programs think like a very unusual expert – one who does not know the meaning of a single word. This comes from a medical specialist, who certainly understands that the patient is pivotal – with comorbidities, each patient becomes a distinct case, where clashes among the drugs of different treatments are common and must be carefully analysed. Presuming to use general information cobbled together from different bits of text is of little use in such a situation.
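The careful analysis the author has in mind can be sketched as a per-patient, pairwise check – here with invented drug names and a made-up interaction table, NOT medical data – which is exactly the kind of patient-specific work that generic text assembled from unrelated sources cannot do:

```python
from itertools import combinations

# Hypothetical interaction table -- illustrative only, NOT medical data.
known_clashes = {
    frozenset(["drug_a", "drug_b"]),
    frozenset(["drug_c", "drug_d"]),
}

def clashes(prescriptions):
    """Check every pair of one patient's drugs against the table."""
    return [set(pair) for pair in combinations(prescriptions, 2)
            if frozenset(pair) in known_clashes]

print(clashes(["drug_a", "drug_b", "drug_e"]))
```

The check is grounded in a structured table and the specific patient's prescriptions; a system that only knows which words co-occur in text has neither.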

Why do intelligent people do this? We have been through similar waves of enthusiasm before – Expert Systems, Intelligent Agents, Machine Learning, Artificial Neural Networks, Deep Learning – don’t bother to think about how it works, just go with the label. There is an AI coming that will do what Ms Lamas wants – it is called AGI (Artificial General Intelligence), and it understands the part of speech (POS) and meaning of every word. Assembling a fifty-thousand-word vocabulary that responds to all the nuances of English takes time – a human brain takes twenty years to do it. By its nature, getting a machine to become proficient in a natural language is also a single-mind problem – having someone who is expert on nouns and someone else who is expert on verbs isn’t going to work, because figuring out whether a word is being used as a noun or a verb is part of the problem. At the moment, there is huge enthusiasm for Generative AI, which neatly sidesteps all that tedious work, with people losing their heads over the fad.

“To date, we have not integrated generative A.I. into our work in the intensive care unit. But it seems clear that we inevitably will.”

This is an area requiring immediate regulation, so that uninformed and unthinking people do not make horrendous, potentially fatal, mistakes.

It is disappointing that the New York Times is giving space to such stuff.

 

