Perhaps we should ban AI to protect humanity as they did in “Dune”

Opinion | Banning AI saved humanity in ‘Dune.’ So why can’t this work for us?

By Megan McArdle, Columnist

May 11, 2023 at 6:00 a.m. EDT

From time to time, after reading about a new talent artificial intelligence has acquired — this week it is the ability to take orders at Wendy’s — I make the same joke on Twitter. “Butlerian Jihad: Now more than ever.”

Readers of the “Dune” books will recognize my reference to the sweeping backstory of Frank Herbert’s universe, in which a long-ago war destroyed all the intelligent machines and established a new commandment: “Thou shalt not make a machine in the likeness of a human mind.”

The joke brings smiles from fellow nerds — but the thing is, I’m not really sure I’m joking.

Last week, Geoffrey Hinton, a pioneer of AI research, quit his job at Google so he could air his own concerns about the accelerating speed of his field. “The idea that this stuff could actually get smarter than people — a few people believed that,” he told the New York Times. “But most people thought it was way off … I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

The risks seem obvious. What if some mad scientist or endangered dictator uses super-intelligent machines to create a superweapon? What if smart machines do so themselves, having decided they’d be better off without people around clogging up the planet?

For that matter, what if they decide to keep us around? A techno-futurist might dream of “fully automated luxury communism,” but even Karl Marx envisioned a tomorrow with plenty to keep us busy, where a man might “hunt in the morning, fish in the afternoon, rear cattle in the evening and criticise after dinner.”

How does that work if the machines are better at all those things than we are? What if they outdo us even at poetry and music? What will be left to poor humans except our opposable thumbs and deaths of despair?


These fears seem plausible. Yet I’m not sure there’s much use in rehearsing them, because for all that I enjoyed “Dune,” I don’t think a Butlerian Jihad has much chance of working in real life.

In his interview with the Times, Hinton suggests that the best hope for humanity is for leading researchers to come together and game out ways to make AI safe: “I don’t think they should scale this up more until they have understood whether they can control it.” Similarly, a recent open letter from Elon Musk and thousands of other tech leaders and researchers called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

But try to imagine getting this done. Like climate change or the nuclear arms race, AI is a collective action problem: Individually, it’s rational to invest in advanced AI, even if collectively it’s suicide. And just as with nuclear weapons and climate change, we have no multinational body capable of fully solving the problem.

This is not to say global cooperation is impossible; nations working together have expanded global trade, stabilized the global financial system and made real progress on problems from polio eradication to the depletion of the ozone layer. On the other hand, despite strenuous efforts, we have failed to halt the global drug trade, lower world greenhouse gas emissions or stop nuclear proliferation. There comes a point when individual or national self-interest overwhelms the regulatory capacity of individual governments and multilateral institutions.

AI, I’d argue, is one of those cases, precisely because it is so easy to imagine the awesome powers it could confer. Businesses scrambling for profits and countries scrambling for geopolitical advantage will not be able to resist the potential — at least as a way to keep on an even footing with less scrupulous rivals. And even if countries could reach a global accord to stifle such research, there would be no credible way to enforce it. We haven’t even enforced limits on greenhouse gas emissions, and a coal plant is much harder to hide than a computer bank.

So however much we might want to raise the Butlerian sword, in the end we won’t dare; we will enter the AI race rather than risk losing it. And while that’s an uncomfortable conclusion, it is not quite a counsel of despair. The enormous uncertainty around AI’s potential includes plenty of upside as well as downside.

Sure, it’s terrifying to imagine AI crafting a superweapon to polish off humanity, but isn’t it at least as plausible to imagine it inventing cures for cancer or diabetes? We might fear that people will wither in the shadow of superior machines, but why couldn’t the machines equally well make us better — by acting as tutors for the young or assistants for adults? Why couldn’t they offer companionship for the elderly or, better yet, make more time for us to provide it?

If this sounds optimistic, well, it is. But guarded optimism seems better than giving in to despondency or continuing to hunt for a pause button that isn’t there.

Megan McArdle is a Washington Post columnist and the author of “The Up Side of Down: Why Failing Well Is the Key to Success.”
