It’s almost impossible to avoid ‘The Great AI Debate’. Open a news website, turn on the radio or scroll through social media and there’s another article. It might be about the incredible benefits AI can bring, how it can perform lifesaving tasks and change society, or considerations around trust and ethics, or the dangers of deepfakes and fake news. Politicians, meanwhile, talk about guardrails to contain existential risks.
One thing seems consistent: a lot of hype and not enough fact.
However, one story that is undoubtedly important is the open letter signed by some of the most respected figures in AI, warning that ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’ Whilst this has grabbed the headlines and made some people sit up, the bigger question is ‘how?’
The EU is discussing a draft regulatory framework, while the US is looking at how to implement a risk-based framework across multiple agencies. Such approaches suffer from legislating for technology and applications that do not currently exist and may not even have been conceived. And unless a global approach is taken, rogue states or groups may refine their capabilities and, potentially, jump ahead. Other planet-wide solutions have been suggested, such as a United Nations-type body along the lines of the International Atomic Energy Agency (IAEA) for nuclear weapons. But there’s no consensus, and no plan of action in train.
So, back to my question: is regulation needed or workable? I think the answer to the first is ‘yes’, but the jury is out on the second.
Frameworks such as GDPR work because they are relatively easy to define and police. The IAEA can – with consent – visit a site or inspect a suspicious building. But given that the IT community is struggling to define AI as it is now, let alone what it could become, legislation seems premature, and it carries the added danger of suppressing innovation whilst failing to catch the people who aim to deceive. I’d suggest this issue must be addressed via the UN, but through an agency that is agile, able to move quickly, and populated by experts who can work with innovators rather than against them.
What do you think?