Can AI do us harm?

Artificial intelligence
Slave or god

Can artificial intelligence be dangerous to us, or is the fear exaggerated? We have compiled a few positions on the question.

By Nicolas Rose

"I know you both planned to shut me down, and I don't think I can allow that to happen." Astronaut Dave has to swallow hard; his gaze goes blank as he struggles for words. The HAL 9000 supercomputer that controls his spaceship has developed a mind of its own - and apparently cannot simply be switched off. Second after second passes, then Dave decides to play dumb. "What gave you that idea?" But HAL, with his red camera eye, cannot be outwitted: he sees everything, hears everything, knows everything that happens aboard the ship. "You took every precaution in the pod so that I couldn't hear you, but I saw your lips move."

The scene from the 1968 science-fiction film "2001: A Space Odyssey" captures humanity's fear of the machine that turns against its creator - a classic motif in films and series about artificial intelligence. In "Terminator", the AI Skynet takes on a life of its own and builds an army of Terminator robots to exterminate humanity. In "Ex Machina", the android Ava finally manages to break out of her prison into the real world.

At the heart of AI in pop culture is the question of how to keep artificial intelligence under control. But are horror scenarios of a robot army trying to take over the world realistic? The short answer is no. The long one: It's complicated.


But first, back to the origins of AI. Artificial intelligence refers to self-learning machines whose algorithms can carry out tasks independently. Scientists have been working on it since the 1950s. For a long time, little happened; but in recent years, so-called neural networks - which are loosely modeled on the human brain and learn on their own - have suddenly made rapid progress. In spring 2016, the AI AlphaGo beat the South Korean Lee Sedol, one of the best players in the world, four games to one in the Asian board game Go. Because Go has an enormous number of potentially useful moves - too many even for a computer to search exhaustively - the AI had to develop an intuition for the game to guide its moves, just as we humans do.
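The learning the article describes can be illustrated in miniature. The sketch below is not AlphaGo - just a toy two-layer network (all names and settings are illustrative) that learns the XOR function from four examples via gradient descent, showing how such systems improve themselves from data rather than from explicit rules:

```python
import numpy as np

# Toy illustration: a tiny feed-forward network learns XOR from examples.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer weights
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = np.tanh(X @ W1 + b1)           # hidden activations
    return h, sigmoid(h @ W2 + b2)     # network output in (0, 1)

lr = 0.5
losses = []
for _ in range(5000):
    h, out = forward(X)
    err = out - y
    losses.append(float(np.mean(err ** 2)))   # mean squared error
    # Backpropagation: chain rule through both layers
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.4f}")
```

The point of the toy is the shape of the process, not the scale: nobody tells the network the XOR rule, yet its error shrinks as training repeats. AlphaGo works on the same principle, only with vastly larger networks and with Go positions instead of four rows of bits.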

AlphaGo is still only a so-called weak AI: it is very good at one thing, even better than a human, but as soon as it is supposed to play chess instead of Go, it has to start all over again. Scientists around the world are also working on so-called strong AI, and this could actually become dangerous for humans, because it would be as good as a human in many areas - or even better. A strong AI that is as smart as a human could begin to improve itself, over and over. The result would be an intelligence explosion, giving rise to a superintelligence to which we humans would probably be roughly what insects are to us.

The possible scenarios that could follow are diverse. A superintelligence could lift our civilization to new heights. It could be a kind of benevolent dictator in the background, or it could be so restricted by us in its scope of action that it is almighty in its abilities but has no free will - an enslaved god, so to speak. Or it could be a conqueror that decides humanity is a threat and therefore wipes it out.

MIT professor Max Tegmark divides expert opinions on AI into three groups: digital utopians, techno-skeptics and the beneficial-AI movement. The last of these represents the mainstream and includes well-known figures such as the recently deceased Stephen Hawking and Tesla boss Elon Musk. They assume that AI brings great opportunities, but above all great risks, for humanity. That is why they advocate more AI-safety research, for example in the field of self-driving cars, and a possible ban on autonomous weapons. Tegmark points out that an AI would probably not intentionally want to do anything bad to us, but would simply pursue a goal very competently and effectively - and could harm humanity in the process. "You're probably not a nasty ant-hater, but if you're building a hydroelectric plant and there's an anthill in the area to be flooded, the insects are out of luck. A key goal of AI-safety research is to never let humanity end up in the position of those ants."

The digital utopians have little patience for such concerns, because they are certain that with AI humanity will climb the next rung of evolution. Facebook boss Mark Zuckerberg warns against being deterred by horror scenarios and advises focusing on the progress AI could bring to humans: "Anyone who is against AI must also take responsibility for every day on which we have no cure for a certain disease and no safe autonomous cars." For former Google boss Eric Schmidt, the advantages clearly outweigh the risks: "Should you not have invented the telephone just because bad people can use it? No, you invent the telephone anyway and look for ways to prevent its abuse."


Techno-skeptics also consider the fear of AI exaggerated - but for a completely different reason. They are less optimistic about technological progress and do not expect a superintelligence to emerge by the end of this century. "Fearing the rise of killer robots is like worrying about overpopulation on Mars," says Andrew Ng, former chief scientist of the Chinese search-engine company Baidu. "I can say: artificial intelligence will change many industries. But it is not magic."

So what AI will bring to mankind is unclear. Only one thing is certain: the upheavals for our society will be profound. Even without superintelligence, AI will turn the world of work and our everyday lives upside down. Autonomous cars could make taxi drivers superfluous, financial algorithms could replace stock-market traders, and agricultural robots could take over the work of farmers. According to one estimate, a quarter of all jobs could be lost to software and robots by 2025. It is uncertain whether new jobs will be created to replace them, as has been the case with every previous economic revolution. So even without superintelligence, humanity faces a major question: what do we want to do ourselves in the future, and what do we leave to machines?
