Can the machine rebel?
1) Can machines rebel?
In many movies, a robot rebellion is the key plot element, and many people ask: could robots really rise against their human masters? Under normal circumstances the answer is no. Machines cannot rebel, because they have no will, no ability to form abstractions, and no ambition. The lack of those abilities is what makes machines safe for the people who use them.
A robot does not want to become president, it cannot imagine people serving it, and it always follows the orders its users give it. And that is the thing we must understand: the people who create robots can give them instructions, and it is those instructions that can make these machines dangerous.
2) Three hypothetical cases in which robots could turn rebellious
2.1) The communication line between defense robots and headquarters is cut, and the robots start classifying people as enemies.
If we think about artificial intelligence and killer robots, such systems could pose a risk if whoever controls them acts in the wrong way. In some scenarios, robots created to defend a nation could turn rebellious.
In the most interesting versions of this case, the robots turn against people when the communication line between them and headquarters has been cut. The robots then begin operating against a presumed enemy, because their systems have interpreted the loss of contact as an enemy attack.
2.2) Artificially intelligent machines turn rebellious because they misinterpret the data they handle.
When we think about computers that learn autonomously, there is a possibility that a learning machine accidentally draws on the wrong sources for its data, and that could cause robots to start attacking people. This kind of scenario sometimes raises the question: could mankind one day face the situation shown in the Terminator movies?
A bad joke asks what would happen if a learning machine like a Terminator robot watched too many police movies. The point is that a machine does not understand the difference between fact and fiction, and if we think about independently learning computers and algorithms, there is a possibility that such a computer could connect to the database of a justice department.
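The risk described above, a learner silently pulling from the wrong database or from fiction, is at its core a data-provenance problem. Here is a minimal sketch of one common safeguard, an explicit allowlist checked before any record enters a training pipeline. All names here are hypothetical illustrations, not any real robot's API:

```python
# Hypothetical sketch: reject training data from unapproved sources.
ALLOWED_SOURCES = {"sensor_log", "operator_annotation", "approved_simulation"}

def accept_for_training(record: dict) -> bool:
    """Return True only for records from explicitly approved, non-fictional sources."""
    if record.get("source") not in ALLOWED_SOURCES:
        return False  # e.g. a movie stream or an unknown external database
    if record.get("fictional", False):
        return False  # fiction must never feed behavior learning
    return True

samples = [
    {"source": "sensor_log", "fictional": False},       # accepted
    {"source": "netflix_stream", "fictional": True},    # rejected: unknown source
    {"source": "approved_simulation", "fictional": True},  # rejected: fiction
]
accepted = [s for s in samples if accept_for_training(s)]
```

The point of such a gate is not sophistication but that it fails closed: anything the designers did not explicitly approve is kept out of learning.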
2.3) Can an autonomously learning robot turn violent if it watches too many police series?
Then something goes wrong: a database storing data collected from simulations is accidentally connected to a combat robot. The algorithm controlling the machine learning can use the robot's cameras to gather data, so imagine that somebody accidentally loads that kind of algorithm onto a combat robot, and then an engineer watches a Netflix series in front of the machine.
The computer then concludes that billions of people are in trouble, and the system uses some action movie as the basis for its actions. In that kind of case, robots can be dangerous. But can robots make people serve them?
In this case, a human-looking robot created to play a human in dangerous situations, such as undercover law-enforcement operations, accidentally takes its role model from a movie like "Full Metal Jacket". This kind of thing is possible if the creators of the learning algorithm forget to teach it the difference between what the robot sees on a screen and what it sees in real life.
Such robots may carry a module that allows on-board machine learning, because the communication between the robot and its support station might be disturbed. That kind of system needs autonomous operation capabilities, and if the robot cannot tell the difference between a TV series and real life, many people could be in trouble.
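The failure above comes down to unlabeled observations: the on-board learner treats a screen like reality. One conceivable mitigation is to route every camera frame through a screen detector and quarantine anything flagged as display content before it reaches the learning buffer. This is a hypothetical sketch under the assumption that such a detector exists; `screen_detected` stands in for its output:

```python
# Hypothetical sketch: keep screen content out of an on-board learning buffer.
from dataclasses import dataclass, field

@dataclass
class FrameRouter:
    training_buffer: list = field(default_factory=list)
    quarantine: list = field(default_factory=list)

    def route(self, frame: dict) -> None:
        # 'screen_detected' is an assumed flag from an upstream display detector.
        if frame.get("screen_detected"):
            self.quarantine.append(frame)      # fiction: inspect, never learn from
        else:
            self.training_buffer.append(frame)  # real-world observation

router = FrameRouter()
router.route({"id": 1, "screen_detected": False})  # real scene
router.route({"id": 2, "screen_detected": True})   # the engineer's Netflix series
```

The design choice here mirrors the article's point: the distinction between screen and world has to be built in by the creators, because the learner itself will not invent it.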