What do we want? AI regulation! When do we want it? Erm…

Jon Bance, Chief Innovation & Technology Officer at Leading Resolutions, looks beyond the hype and asks if regulation of artificial intelligence is necessary or even workable.
25th July 2023

In the new Mission: Impossible movie, the world is threatened by a powerful entity that has developed from artificial intelligence. A main character describes it as “a self-aware, self-learning truth-eating digital parasite infesting all of cyberspace” before wryly adding: “well, it was bound to happen sooner or later”. Wired describes it as “the perfect AI panic movie”. Bloomberg said that with AI, Hollywood has found the perfect villain. Of course, it’s only the latest in a long line of villainous AIs. You probably remember the Terminator movies, with the AI-based defence system Skynet becoming self-aware and waging war on humanity. And earlier still, the rogue computer HAL 9000 in 2001: A Space Odyssey. Hollywood hokum aside, the serious question is: how concerned should we be about AI and the threats it poses to society? Outside the multiplexes, it’s nigh on impossible to avoid The Great AI Debate.

Hype and hysteria

As our CEO Pete Smyth posted recently, you only need to open a news website, turn on the radio or scroll through social media and there’s another article or comment. It might be about the amazing benefits AI can bring – how it can perform lifesaving tasks, say – or alarms raised about trust, ethics and the dangers of deepfakes. There are real concerns that chatbots like ChatGPT could “supercharge” the production of online misinformation: fake news. A business contact recently told me she suspected a new hire, who turned out to be a disaster, had used AI to generate the ideal application letter and résumé for the role. Once fired, he had apparently done the same thing for his next job. Politicians, meanwhile, talk about providing “guard rails” to contain the existential risks we face. There’s an awful lot of smoke and mirrors, and hyperbole, but not enough fact.

That said, one recent development carried significantly more weight: an open letter signed by many of the most respected figures in AI. It warned that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”. (Interesting to note that previous threats in Mission: Impossible movies revolved around… pandemics and nuclear war.) With so much being misunderstood, misreported and frankly unknown about AI, we should sit up and take note when leaders in the tech profession have concerns about the very technology they’re building.

How exactly can we mitigate this “risk of extinction” now the AI genie is out of the bottle? My view is that this does need regulation, and fast – not to confound or halt development, but to agree openly on the necessary guardrails. I also fear, though, that like climate targets it will end up as an aspiration rather than a commitment signed up to by all.

So what’s actually happening? The situation is far from clear. This is hardly surprising: regulating AI now is like trying to hit a fast-moving target.

AI rules in the EU, USA and China

The EU has been discussing a draft regulatory framework for some time and looks to be ahead of the game in delivering a first-of-its-kind AI Act: the draft has been described as the world’s strictest set of AI rules, and may yet be amended. Are they going too far, too fast? In June, 150 of Europe’s biggest companies (including Airbus, Siemens, Renault and Heineken) wrote to the European Commission warning that the law could harm the bloc’s economy if it prevented businesses from freely using AI technology. They said the new rules risked damaging competitiveness while failing to deal with the real challenges. The worst of both worlds, in other words.

While the EU seems to be jumping ahead, is the USA now playing catch-up? On 21st July, major tech players and leading lights in AI – Amazon, Google, Meta, Microsoft and OpenAI among them – announced new safeguards. The Guardian reported that the White House had “secured voluntary commitments from seven US companies meant to ensure their AI products are safe before they release them.” The measures include watermarking AI-generated content to make it easier to identify, third-party testing to find dangerous flaws, investment in new cyber security measures, and prioritising research on AI’s societal risks.

Also in July, China announced its interim “rules” for generative AI to manage the country’s booming industry. Notably, Beijing said regulators would seek to “support” development of the technology while ensuring security. Analysts were quick to point out that the measures announced were significantly less onerous than those in an earlier draft.

In such a fast-developing field, these approaches will all suffer, to varying degrees, from trying to legislate for technology and applications that don’t currently exist and may not even have been conceived yet. And unless we take a global approach, the reality is that rogue states or groups may refine their capabilities faster and jump further ahead. Mission: Impossible’s “Entity” looks less far-fetched by the day. Planet-wide solutions are already being suggested, such as a United Nations-style body modelled on the International Atomic Energy Agency (IAEA), which oversees nuclear technology. But there’s no consensus.

Too much too early – or too little too late?

Is regulation desirable? Yes. Would it be workable? Well, there’s the rub. Like most things in life, it all depends on how we go about it. With the IT industry, businesses and governments all struggling to define AI as it is today, let alone what it might become tomorrow, legislation does seem a little premature. And as that letter to the European Commission suggested, legislation that is too early or too stringent risks suppressing innovation and competitiveness whilst doing little to restrain those with darker motives. Would they even take any notice of regulations?

Like our CEO, I think this issue will most likely be addressed via the UN. But it needs an agency that is agile and staffed by acknowledged experts – people who understand the innovators and can work with them, mitigating risks without putting up unnecessary barriers.

What is clear is that we need more education, discussion and informed debate on the opportunities, benefits and risks of AI. As part of that ongoing debate, I’m hosting a live webinar, Q&A and demonstration in conversation with Alistair Park and Brani Angelov of our digital consulting and delivery partner Estafet. Focusing on generative AI, it’s a live event at noon on Thursday 3rd August 2023, and you can register here. I hope you can join us.

The Author

Jon Bance

Chief Innovation & Technology Officer