You’ll probably use AI today without even realising it. In fact, you might have already done so if you’re reading this newsletter on an iPhone: Apple’s FaceID technology that unlocks phones is powered by AI. It’s far too complicated for me to really explain. Luckily, CNet has a handy explanation: “The phone lights up your face, fires out 30,000 invisible infrared dots that highlight your features and create a rough pattern, takes pictures of those dots with the infrared camera and then decides whether the picture looks like you.” The chance of fooling it is one in a million, which Apple says is a vast improvement on one in 50,000 for fooling the fingerprint lock used on previous models.

All those “smart” devices in your house – like thermostats or Alexa – are driven by AI. So is Google search, the suggested finishes to sentences when you type emails and, of course, ChatGPT.

Unstoppable ‘frontier AI’

So far, not so scary. But the scary stuff could be not too far off, the experts warn, as the technology behind AI is developing at a staggering pace. The most serious dangers are posed by “frontier AI”, which is defined as “highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety”. What this actually means, Dan says, is the technology being used to create dangerous things that humans are unable to pull the plug on. “There is a fear about so-called ‘God-like’ AI. These are systems that avoid human control, and would be able to replicate themselves and potentially make decisions at the expense of human interests.”

Examples of potentially dangerous frontier AI technology include designing new biochemical weapons or creating highly sophisticated cyber-attacks. In the immediate term, there are also concerns that AI image and text generators could mass-produce disinformation that could be used to disrupt elections.
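For the curious, the face-matching step CNet describes earlier – capture a pattern, compare it with the stored one, decide whether it “looks like you” – can be sketched in miniature. This is a hypothetical toy illustration only, not Apple’s actual algorithm: real FaceID runs a neural network over a 30,000-point infrared depth map, whereas this sketch just compares two short feature vectors with a similarity threshold.

```python
import math

def cosine_similarity(a, b):
    # Measure how alike two feature vectors are (1.0 = identical direction).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def unlocks(enrolled, captured, threshold=0.95):
    # The threshold trades convenience against the false-accept rate:
    # set it too low and strangers get in, too high and the owner is
    # rejected. Apple quotes roughly a one-in-a-million false-accept
    # chance for FaceID, versus one in 50,000 for the older
    # fingerprint lock.
    return cosine_similarity(enrolled, captured) >= threshold

# Illustrative made-up feature vectors:
enrolled = [0.9, 0.1, 0.4, 0.7]        # owner's stored template
same_face = [0.88, 0.12, 0.41, 0.69]   # a near-identical fresh capture
stranger = [0.1, 0.9, 0.2, 0.3]        # a very different pattern

print(unlocks(enrolled, same_face))    # True
print(unlocks(enrolled, stranger))     # False
```

The one-in-a-million figure corresponds to how rarely a random other face lands above the threshold.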
‘Now I am become Death, the destroyer of worlds’

The people worrying most about the threat of AI – and telling us and our governments to worry about it – are the very people who helped create it. “There are big parallels with J Robert Oppenheimer,” says Dan. “He’s called the father of the atomic bomb for his role in creating it, but went on to campaign against its use for decades.”

In this case, it’s two computer scientists – Prof Yoshua Bengio and Dr Geoffrey Hinton – who helped create AI, for which they won the prestigious Turing prize in 2019 and earned the “godfathers of AI” nickname. (There is a third godfather, Yann LeCun, but more of him later.) Bengio and Hinton were among 350 leading AI executives, researchers and engineers who released a one-sentence statement warning that: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”

That warning was preceded by an open letter from the Future of Life Institute, signed by the likes of Elon Musk and Apple co-founder Steve Wozniak, calling for a six-month pause in giant AI experiments. Those interventions kicked off the public and political debate that led to Sunak convening the AI summit, expected to be attended by the US vice-president, Kamala Harris, and the French president, Emmanuel Macron, alongside the bosses of the world’s leading technology firms, as well as top scientists.

More regulations on sandwich shops than AI companies