Red Lines
Comments on "Make AI safe or make safe AI?", an article supporting the Red Lines initiative by Stuart Russell, Professor of Computer Science, University of California, Berkeley.

The declaration associated with the global AI Safety Summit held at Bletchley Park, signed by 28 countries, "affirm[ed] the need for the safe development of AI" and warned of "serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models."

The article is directed at LLMs, which have shown appalling error rates. The only regulation that makes sense for them is "Never to be used for Life-Critical Applications". This would allow them to be seen as harmless toys and escape regulation, while their use in Search Engines continues. AGI will be different.

Despite this, AI developers continue to approach safety the wrong way. For example, in a recent interview in the Fina...