The Least Risky AI Strategy Is a Bold One by James Arroyo, Director

Pausing our current technological progress would only help the world’s most privileged.

Artificial intelligence (AI) technology offers plenty to worry about before we even reach the oft-cited risk that one day we might construct intelligent machines that could turn against us. It's true that we could do a great deal of harm with AI, writes Ditchley's Director James Arroyo in Foreign Policy magazine, but equally, we have had no trouble creating mayhem without it.

Most of the near- and mid-term risks of AI hinge on malicious human actions. In a 2021 Stanford University study on the most pressing dangers of AI, researchers wrote, “The technology can be co-opted by criminals, rogue states, ideological extremists, or simply special interest groups, to manipulate people for economic gain or political advantage.”

This understanding of the risks of AI also helps us better appreciate its possibilities. Only the most privileged could regard pausing at our current state of technological development as an attractive option. AI may soon enter a feedback loop in which knowledge is created faster than at any point in history. Biotechnology is a prime candidate for such advances. Other long-promised inventions are already arriving: autonomous taxis, for example, are now live and operating in Phoenix and San Francisco and are about to begin freeway trials. Quantum computing is also making progress and could do for AI what adding nitrous oxide to gasoline does for a hot rod.

To read the whole article, click here.