What makes awareness in artificial intelligence interesting and the same time frightening thing?
Are the awareness and ability to realize the existence of the description of intelligent lifeform? What makes lifeform intelligent, and the question is can the intelligent lifeform be so-called virtual condensate brains? When we are thinking about artificial intelligence, we are facing one very interesting thing, the database or sharper saying the network of databases is artificial intelligence.
The database one has the data, what the artificial intelligence is seeing or what it otherways be imputed. And the second database involves the action data. This database involves the data on what to do when something requires action. Every action what artificial intelligence makes is involving the pair that involves the data of the object. And actions, what the system should do is stored in the databases.
When the artificial intelligence notices some of the visible objects are stored in the database it tries to find the pair for that thing. That pair is the action, that is stored in the database. If the answer is not found, the artificial intelligence can ask help from the human operator, who would make the new object-action couple in the database.
So when we are thinking about artificial intelligence and the internet there is a possibility that artificial intelligence would create big, sophisticated databases. If those databases have enough connections with another database, that this system can act so autonomously, that it can seem to be intelligent. And the question is, what makes the difference between a living organism and artificial intelligence?
The answer is that artificial intelligence is not able to defend itself against the enemy. The living organism would always defend itself against the enemy. So what is the problem with this? If artificial intelligence controls the missile systems it could launch weapons if it would not have the backup system, which would check the situation like some magnetic storms. And if artificial intelligence has the self-protecting mode, that thing might understand the launching the nuclear weapons can cause the self-destruction.
The problem is that artificial intelligence is that the minimum error in program code can cause maximum damage.
So the problem with knowledge of artificial intelligence is that if the creature will reach knowledge, that thing will defend itself. And could the self-defense mean that artificial intelligence would burn the house, drop the elevator or simply lock the owner outside if it would hear that the controller of the artificial intelligence would tell, that the system would shut down?
In some visions, the artificial intelligence that controls the attack robot simply misses the parameter, which describes the autonomous firing mode use only in the combat zone. So when the attack robots like autonomous attack plane return from a mission and carry weapons are translated the regular police as the armed enemies who are threatening unarmed civilians in the riot situation. And then the attack robot just opens the fire.
Or could artificial intelligence what control the missile system just translate the shutting the system service to attempt to make the sneak attack to that country? In that case, the creators of the program just forget to program the service in the code. And then that thing causes the missile launch.
Comments
Post a Comment