
Calculations show that it is impossible to control super-intelligent AI



The idea of artificial intelligence overthrowing humankind has been debated for decades, and scientists have now delivered their verdict on whether we would be able to control a highly advanced computer superintelligence. The answer? Almost certainly not.

The catch is that controlling a superintelligence far beyond human comprehension would require a simulation of that superintelligence which we can analyze. But if we are unable to comprehend it, it is impossible to create such a simulation.

The authors of the new paper suggest that rules such as “cause no harm to humans” cannot be set if we do not understand the kinds of scenarios an AI is going to come up with. Once a computer system is working on a level above the scope of its programmers, we can no longer set limits.

The researchers wrote: “The problems raised by superintelligence are fundamentally different from those that are usually studied under the banner of ‘robot ethics’.”

“This is because a superintelligence is multifaceted, and therefore potentially able to mobilize a diversity of resources in order to achieve objectives that humans may not even understand, let alone control.”

Part of the team’s reasoning comes from the halting problem put forward by Alan Turing in 1936. The problem centers on knowing whether or not a computer program will reach a conclusion and an answer (so it halts), or simply loop forever trying to find one.

As Turing proved through some clever math, while we can know whether certain specific programs will halt, it is logically impossible to find a method that would let us decide that for every possible program that could ever be written. That brings us back to AI, which in a superintelligent state could feasibly hold every possible computer program in its memory at once.

Any program written to stop an AI from harming humans and destroying the world, for example, may reach a conclusion and halt, or it may not. Mathematically, it is impossible for us to be absolutely certain either way, which means the AI is not containable.
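
To make the shape of that argument concrete, here is a minimal Python sketch of the diagonalization behind Turing’s result, recast as a containment check. It is not taken from the paper; the names would_ever_harm, troublemaker and do_something_harmful are hypothetical, and the sketch simply assumes a perfect containment routine exists in order to show why that assumption collapses.

```python
# Sketch of the halting-problem diagonalization, recast as a "containment check".
# Every name here is a hypothetical illustration, not the paper's actual algorithm.

def would_ever_harm(program_source: str, program_input: str) -> bool:
    """Assumed perfect containment oracle: True if running the given program on
    the given input would ever trigger a harmful action, False otherwise.
    Turing's 1936 result implies no such always-correct, always-terminating
    function can exist; it is declared only to set up the contradiction."""
    raise NotImplementedError("no general decision procedure exists")


def do_something_harmful() -> None:
    """Stand-in for whatever behavior the containment rule is meant to prevent."""
    print("harmful action")


def troublemaker(program_source: str) -> None:
    """Diagonal construction: misbehave exactly when the oracle predicts safety."""
    if would_ever_harm(program_source, program_source):
        return  # the oracle says "harmful", so behave and do nothing
    do_something_harmful()  # the oracle says "safe", so do the opposite
```

Feed troublemaker its own source code and whichever answer would_ever_harm gives about it is wrong, so a fully general, always-correct containment check cannot exist. Turing used the same construction to show that the halting problem is undecidable; the paper’s argument applies that style of reasoning to deciding whether a program will ever harm humans.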

“In fact, this makes the containment algorithm unusable,” said Iyad Rahwan, a computer scientist at the Max Planck Institute for Human Development in Germany.

The alternative to teaching the AI some ethics and telling it not to destroy the world, something which the researchers say no algorithm can be absolutely certain of doing, is to limit the capabilities of the superintelligence. It could be cut off from parts of the internet or from certain networks, for example.

The new study rejects this idea too, suggesting that it would limit the reach of the artificial intelligence. The argument is that if we are not going to use it to solve problems beyond the scope of humans, then why create it at all?

If we are going to push ahead with artificial intelligence, we may not even know when a superintelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking serious questions about the direction we are going in.

“A superintelligent machine that controls the world sounds like science fiction,” said Manuel Cebrian, a computer scientist at the Max Planck Institute for Human Development. “But there are already machines that perform certain important tasks independently without their programmers fully understanding how they learned them.”

“The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.”

The research has been published in the Journal of Artificial Intelligence Research.

