
‘This is civilization-threatening’: Here’s why AI poses an existential risk

By Hiawatha Bray, Boston Globe, Updated May 31

This was the week artificial intelligence got real scary, as hundreds of scientists and academics in the United States, Asia, and Europe issued an open letter warning that unchecked AI could kill us all.

The message on Tuesday was brief and blunt: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Many of those who signed this stark warning actually know what they’re talking about. Over 100 of them work in the AI industry, at companies including OpenAI, Anthropic, Google, and Microsoft. These companies expect to earn billions from AI. Yet here were their top people, admitting that their own products terrify them.

Maybe the scariest thing about the letter is what it doesn’t say. There’s no clear plan to make AI systems safe, because even the people who create them aren’t sure how to do that.

The letter also doesn’t describe how AI could actually pose such a terrible danger. It’s just software running in data centers. How much harm can it really do?

Conversations with Boston-area scholars who signed the letter suggest that the threat of total annihilation might be overblown — might — but the unchecked use of AI offers plenty of opportunities for social chaos.

Daniel Dennett, professor emeritus of philosophy at Tufts University, fears that if AI isn’t quickly brought under control, “it’s going to destroy the epistemological world we live in.”

In case you skipped Plato, epistemology is about how people know what’s true and what isn’t. Dennett said that AIs are getting so good at emulating human faces and voices, for example, that it could become impossible to trust any communication that’s not conducted face to face.

That family member on the phone asking for an emergency loan might be an AI-simulated voice created by identity thieves. Even if it’s a video call, the screen could display a digital avatar that looks and moves just like him. These AI scams already happen. Last year they cost Americans $11 million, according to the Federal Trade Commission. But the toll is bound to grow, because the latest AI tools are so easy to use.

The only hope, Dennett said, is mandating digital watermarks on all AI-generated files to prove they were created by a machine, and imposing severe criminal penalties for noncompliance.

“People that use this stuff without watermarks are committing a crime,” Dennett said. “This isn’t fun. This isn’t cool. This is civilization-threatening.”

Ken Olum, a research professor of physics and astronomy at Tufts, warns that AIs could do terrible things because they lack a conscience.

“All AIs we’re able to build are psychopaths,” Olum said. They don’t know right from wrong. If such a system is ordered to achieve a particular goal, it won’t be too concerned about how it gets there, he said. A world inhabited by millions of amoral AIs could easily descend into chaos.

Say you give a super-intelligent AI control over your finances. “Your budget is a million dollars,” said Olum. “You want it to make you $100 million. Maybe I’m thinking it’s going to invest wisely, but maybe it thinks it’s going to rob a bank.”

The AI might identify flaws in a bank’s security systems, then quietly siphon off enough money from other accounts to reach your goal. It might not even tell you what it’s doing. If the AI hurts others in the process, it won’t care.

But if they’re ever to be safe, AI systems must be made to care, said Olum. “They need to have some sort of moral sense,” he said. “If we don’t know how to give a functioning moral compass to an AI, then we’d better stop building AIs that we can’t control.”

James Miller, professor of economics at Smith College, is especially pessimistic. “I think it’s more than likely that it’s going to kill everybody … sometime in the next 20 years,” he said.

The reason is that AI systems can constantly learn from the world around them, and even upgrade their software without human assistance. As they keep getting smarter, said Miller, “you could imagine something as above us in intelligence as we are to chimpanzees.”

Humanity’s only hope, said Miller, is to make AIs that care about their creators. He compared the process to child-rearing. “Your goal is, when the kid’s 20, he doesn’t want to hurt you,” said Miller. “He wants to protect you.”

That means changing the diet of data used to train an AI system. “We’re currently training it on the Internet, where it’s basically exposed to positive and negative things,” said Harvard University astronomy professor Avi Loeb. Instead, said Loeb, training data must be purged of bad values.

“When we have kids and we send them to school, we don’t let them read just any books. They might read ‘Mein Kampf,’” Loeb said. “In the same way, we should train the AI systems on materials we want them to abide by.” He also favors legal liability for AI makers who fail to build the right values into their systems.

Just one problem: Nobody’s sure what values to teach.

Last week, OpenAI announced that it will spend $1 million to fund 10 groups of independent researchers who will draft standards for socially responsible AI systems. It might be a good first step, said Olum, but it’s not nearly enough.

“If it’s a billion dollars to build AIs and a million dollars to make them safe,” he said, “that could be the wrong balance of resources.”

