Lies


We had been hoping not to have to use Active Structure in the rather shallow theatrics of a courtroom, reserving it for deeper problems like making sure Robodebt never happens again.

But there is another sort of problem which, while shallow, does not allow us to wait for it to unfold, the consequences being too grave. The Boeing 737 MAX MCAS is an excellent example.

It involves lying. A machine is never going to be very good at telling whether a person is lying, particularly when there may be no face-to-face interaction.

The lies for MCAS began with the realisation that moving the engines forward and upward from under the wing (the new engines are bigger and won’t fit in the old position) gives the aircraft a tendency to pitch nose-up. This effect is corrected by adding a sensor to detect the unwanted pitch-up and bring the aircraft back to level flight, using a motor to drive the horizontal stabiliser trim (another disastrous decision – no limit on how much trim could be applied to eliminate the effect – sloppy thinking everywhere). The problem with the automatic correction is that admitting to it would mean changing the Flight Manual and spending $271,000 to retrain the pilots, making the aircraft uneconomic. So nothing is said, and the Flight Manual is not changed. We have to assume acquiescence up the chain of command. So far, no disastrous consequences.
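
The sloppiness of “no limit” is easy to make concrete. Below is a minimal sketch in Python, with purely illustrative numbers rather than Boeing’s actual control law, contrasting a trim command with no authority limit against one clamped to a fixed bound.

# Minimal sketch – illustrative numbers only, not Boeing's control law.
# A faulty sensor keeps demanding nose-down trim; without an authority
# limit the trim runs away, with a limit it is capped.

def apply_trim(cumulative_trim, command, limit=None):
    """Add a trim command to the running total, optionally clamped to a limit."""
    cumulative_trim += command
    if limit is not None:
        cumulative_trim = max(-limit, min(limit, cumulative_trim))
    return cumulative_trim

faulty_sensor_commands = [-2.5] * 10    # repeated nose-down demands from one bad sensor

unbounded = 0.0
bounded = 0.0
for cmd in faulty_sensor_commands:
    unbounded = apply_trim(unbounded, cmd)           # no limit: trim keeps winding on
    bounded = apply_trim(bounded, cmd, limit=2.5)    # limited authority: capped

print(unbounded)   # -25.0 units of nose-down trim
print(bounded)     # -2.5, held at the authority limit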

The entire flight control system of a commercial airliner has triple redundancy – that is, instead of relying on a single angle-of-attack sensor, three should have been used, so the failure of one, or even two, sensors can be tolerated without loss of control. It was decided not to give the sensor redundancy, which meant the system could not be built by a company familiar with commercial airliner controls – too many questions would be asked. This was the fateful decision, and it resulted in the deaths of 346 innocent people.
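
To show what redundancy buys, here is another minimal sketch in Python, with made-up readings, of the simplest form of triple-redundant voting – taking the median of three sensors so that one wildly wrong reading is outvoted (tolerating a second failure needs further cross-monitoring on top of this).

import statistics

def voted_reading(sensors):
    """Median of three readings – a single failed sensor cannot drag it away."""
    return statistics.median(sensors)

healthy = [4.9, 5.1, 5.0]        # three angle-of-attack readings in rough agreement
one_failed = [4.9, 5.1, 74.5]    # one sensor has failed high

print(voted_reading(healthy))      # 5.0
print(voted_reading(one_failed))   # 5.1 – the failed sensor is outvoted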

So how can a machine that can’t tell if people are lying stop them from doing so? By making the risk of exposure too great. It has all the paperwork and can see all the connections, including those recently added.
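
As a toy illustration only – not the Active Structure implementation – the kind of cross-check involved can be sketched in a few lines: compare each recently added design change against an assumed certification rule and against the Flight Manual, and flag anything that does not line up.

# Toy illustration (not the Active Structure implementation): cross-checking
# recent design changes against an assumed redundancy rule and the Flight Manual.

design_changes = [
    {"item": "MCAS pitch-correction system", "flight_critical": True,
     "sensor_count": 1, "in_flight_manual": False},
]

REQUIRED_REDUNDANCY = 3   # assumed rule for flight-critical sensors

for change in design_changes:
    if change["flight_critical"]:
        if change["sensor_count"] < REQUIRED_REDUNDANCY:
            print(f"BREACH: {change['item']} lacks required sensor redundancy")
        if not change["in_flight_manual"]:
            print(f"BREACH: {change['item']} is not disclosed in the Flight Manual")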

(Note – all is not well with the regulator. The FAA is chronically understaffed and often appoints members of the plane-maker’s own staff as temporary FAA inspectors. You could have the farcical situation where the machine is reporting a serious breach of the regulations to the very person causing it.)

Open versus Closed Problems

Most current AI works on closed problems, with Generative AI the classic example. Find a piece of text that embodies what you want to say, and with a little topping and tailing, the job is done. If you want it to cobble together pieces of text from different sources, the results are not so good, as it has no idea what the words mean (some words, like “set” or “run”, have sixty or eighty meanings).
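
The scale of that ambiguity is easy to check. A quick sketch, assuming NLTK and its WordNet data are installed (pip install nltk, then nltk.download("wordnet")), counts the distinct senses WordNet records for each word.

from nltk.corpus import wordnet as wn

for word in ("set", "run"):
    senses = wn.synsets(word)
    print(word, len(senses))   # dozens of distinct noun and verb senses for each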

Why Not Use Meaning?

It is hard to do. The situation the text describes has not fully resolved itself, and you may want to catch it before it does, so that a disaster does not occur. You need something dynamic.

You may not be able to tell whether people are lying, but you can tell from their actions and inactions (and a lot of machine work) what is afoot.

Active Structure

We can use the English words directly, because there is massive structure behind the words in the system, so they behave just like English words do for a person reading them.
