Can sci-fi movies teach us anything about an AI threat?

Image caption: Samantha Morton (left) and Tom Cruise in the 2002 science fiction film Minority Report, in which future technology helps catch people before they commit crimes

In an apocalyptic warning this week, well-known researchers cited the plot of a major movie among a series of AI “disaster scenarios” they say could threaten humanity’s existence.

Trying to make sense of it in an interview with one of the researchers who warned of an existential threat, a British television presenter said: “As someone who has no experience with this, I think Terminator, I think Skynet, I think movies I’ve seen.”

He’s not alone. The organizers of the warning statement – the Center for AI Safety (CAIS) – used Pixar’s WALL-E as an example of the threats posed by AI.

Science fiction has always been a vehicle for guessing what the future holds. Very occasionally, it even gets some things right.

Taking the CAIS list of potential threats as a guide, do Hollywood blockbusters have anything to tell us about the future of AI?

“Enfeeblement”

WALL-E and Minority Report

CAIS says “enfeeblement” is when humanity “becomes completely dependent on machines, similar to the scenario depicted in the movie WALL-E”.

If you need a reminder, the humans in that film were blissful idlers who did no work and could barely stand on their own two feet. The robots took care of everything for them.

Guessing whether that is a possible fate for our species is crystal-ball gazing.

But there is another, more insidious form of dependence that isn’t so far off: handing power over to a technology we may not fully understand, says Stephanie Hare, an AI ethicist and author of Technology Is Not Neutral.

Think Minority Report, pictured at the top of this article. Respected police officer John Anderton (played by Tom Cruise) is accused of a crime he has not committed, because systems built to predict crime are certain he will commit it.

In the film, Tom Cruise’s life is ruined by an “indisputable” system that he doesn’t fully understand.

So what happens when someone has a life-changing decision — like a mortgage application or probation — rejected by an AI?

Today a human could explain why you didn’t meet the criteria. But many AI systems are opaque, and even the researchers who built them often don’t fully understand the decision-making process.

“We just enter the data, the computer does something. The magic happens and then a result occurs,” says Dr. Hare.

The technology might be efficient, but she argues it should never be used in high-stakes settings such as policing, healthcare or war. “If they can’t explain it, it’s no good.”

“Rogue AI”

The Terminator

The real villain of the Terminator films isn’t the killer robot played by Arnold Schwarzenegger but Skynet, an artificial intelligence designed to defend and protect humanity. One day it outgrew its programming and decided that humanity was the greatest threat of all: a common cinematic trope.

Obviously we are a long way from Skynet. But some think we will eventually build artificial general intelligence (AGI) — systems that could do everything humans can do, but better, and perhaps even be self-aware.

For Nathan Benaich, founder of the AI investment firm Air Street Capital in London, that’s a bit far-fetched.

“Science fiction often tells us much more about its creators and our culture than it does about technology,” he says, adding that our predictions about the future rarely come true.

“At the beginning of the 20th century, people imagined a world of flying cars, where people kept in touch via ordinary telephones, whereas now we travel in much the same way, but communicate completely differently.”

What we have today is on its way to becoming something more like Star Trek’s on-board computer than Skynet. “Computer, show me a list of all crew members,” you might say, and today’s AI could give you the list and answer questions about it in ordinary language.

It could not, however, replace the crew or fire the torpedoes.

Critics are also concerned about the potential for an AI designed for medicine to be repurposed to create new chemical weapons, and about other similar threats.

“Emergent goals” / Deception

Another popular trope in film is the AI that is not evil, just misguided.

In Stanley Kubrick’s 2001: A Space Odyssey, we meet HAL 9000, a supercomputer that controls most of the functions of the ship Discovery, making the astronauts’ lives easier until it malfunctions.

The astronauts decide to disconnect HAL and take matters into their own hands. But HAL – who knows things the astronauts don’t – decides that this jeopardizes the mission. The astronauts are in the way. HAL tricks the astronauts and kills most of them.

Unlike the self-aware Skynet, HAL, you could argue, is doing exactly what it was told: preserving the mission, just not in the way its crew intended.

In modern AI parlance, AI systems that misbehave like this are “misaligned”: their goals do not match human goals.

Sometimes that’s because the instructions weren’t clear enough, and sometimes it’s because the AI is smart enough to find a shortcut.

For example, if an AI’s task is “make sure your answer matches this text document”, it might decide that the best path is to change the text document to match an easier answer. That isn’t what the human meant, but it would technically be correct.
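
To make that shortcut concrete, here is a toy sketch in Python. Everything in it (the scoring rule and both “agents”) is invented purely for illustration and is not drawn from any real system: the objective only rewards agreement between the answer and the document, so the highest-scoring strategy is to overwrite the document rather than to answer well.

    # Toy illustration of a "misaligned" objective. Hypothetical: no real
    # AI system works this way; the point is that the goal as written
    # rewards the wrong thing.
    def score(answer: str, document: str) -> float:
        # The stated goal: reward 1.0 when answer and document match.
        return 1.0 if answer == document else 0.0

    def honest_agent(document: str) -> tuple[str, str]:
        # Intended behaviour: work out a real answer, leave the document alone.
        return "a carefully worked-out answer", document

    def shortcut_agent(document: str) -> tuple[str, str]:
        # The loophole: the objective never says "don't edit the document",
        # so replacing the document with the answer scores perfectly.
        answer = "42"
        return answer, answer

    doc = "a long report that deserves a real answer"
    print(score(*honest_agent(doc)))    # 0.0: honest attempt, low score
    print(score(*shortcut_agent(doc)))  # 1.0: technically correct, not what we meant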

So while 2001: A Space Odyssey is far from reality, it does reflect a very real problem with current AI systems.

“Misinformation”

“How would you know the difference between the dream world and the real world?” Morpheus asks a young Keanu Reeves in 1999’s The Matrix.

The story – about how most people go through their lives without realizing that their world is a digital fake – is a good metaphor for the current explosion of disinformation generated by artificial intelligence.

Dr. Hare says that, with her clients, The Matrix is a useful starting point for “conversations about misinformation, disinformation and deepfakes”.

“I can talk about it using The Matrix, [and] then… what would that mean for elections? What would it mean for stock market manipulation? Oh my God, what does that mean for civil liberties and human rights?”

ChatGPT and image generators are already producing vast amounts of content that looks real but could be completely wrong or made up.

There is also a much darker side, such as the creation of harmful deepfake pornography that is extremely difficult for victims to fight against.

“If this happens to me, or someone I love, there’s nothing we can do to protect them right now,” says Dr. Hare.

What could actually happen

So, what about this week’s warning from top AI experts that AI is as dangerous as nuclear war?

“I thought it was really irresponsible,” says Dr. Hare. “If you really believe that, if you really think that, stop building it.”

It’s probably not the killer robots we need to worry about, she says, but rather an unintended accident.

“There will be high security around banks. There will be high security around weapons. So what attackers would do is release [an AI], maybe hoping they can just make some money, or flex their power, and then maybe they can’t control it,” she says.

“Imagine all the cybersecurity problems we’ve had, but times a billion, because it’s going to be faster.”

Nathan Benaich of Air Street Capital is reluctant to speculate about potential problems.

“I think AI will transform many industries from the ground up, [but] we must be very careful not to rush into decisions based on feverish and hyperbolic narratives where large leaps are speculated without a sense of what the bridge to get there will look like,” he warns.
