Wednesday, June 3, 2020

Artificial intelligence and the risks of that kind of system.

 



Artificial intelligence doesn't think, and that makes it dangerous. Artificial intelligence operates by using pre-programmed models, and if something is missing from the code, the robot can act in unexpected ways.

When we compare human and artificial intelligence, we see that the programmer must program every single action the robot takes. If we imagine a robot operating on a battlefield, the only thing it should do is shoot everything that does not send the green IFF (Identification Friend or Foe) signal.
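As a toy illustration, the engagement rule described above could be sketched in a few lines of Python. Everything here is invented for the example (the `iff_signal` field and `should_engage` function are hypothetical, not drawn from any real system):

```python
# Hypothetical sketch of the "shoot everything without a green IFF signal"
# rule. In a real system this one rule would be wrapped in millions of lines
# of safeguards -- which is exactly where forgotten cases become dangerous.

def should_engage(target: dict) -> bool:
    """Engage only targets that do NOT broadcast the friendly IFF signal."""
    return target.get("iff_signal") != "green"

# A friendly unit broadcasting the green signal is spared...
print(should_engage({"id": "unit-1", "iff_signal": "green"}))  # False
# ...but anything silent -- including a civilian -- counts as an enemy.
print(should_engage({"id": "civilian-1"}))                     # True
```

The second case shows the risk in miniature: the rule has no concept of a "civilian", only of a missing signal, so everything the programmer did not describe falls on the wrong side of the test.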

Those robots must also enter houses and clear them of enemies, and that requires millions of lines of code. Anything that doesn't send the signal would be treated as an enemy. And if we wanted to make a "Robocop", a law enforcement model, writing the programs for that system would be even more complicated.

Is artificial intelligence a kind of Frankenstein's monster, created to serve good but turning to evil? This is one question we should investigate when doing research on artificial intelligence and learning machines.

One of the primary questions about artificial intelligence is: what might cause a rebellion of the machines? Sometimes, when I recall the Terminator movies, in which a highly sophisticated artificial intelligence fights against humans, I wonder whether the trigger for that kind of behavior could be the police movies shown on television.

Could an advanced combat robot confuse reality with some movie in which bandits have a gunfight with the police, and then start to classify regular people as enemies?

In this scenario, an independently operating robot whose mission is to protect the nation would search for data on the internet, and during that process the system would find a database of movies. There it finds a Hollywood action movie. In that movie, the robot would see ordinary people attacking the police, and the police are representatives of the government. So the robot starts to classify ordinary people as enemies.

This is the key element in artificial intelligence. Image recognition programs recognize the police uniform, but the system doesn't recognize the difference between movies and real life. So what makes this possible? The answer is that the programmer must describe every single thing the artificial intelligence recognizes. And would the programmer of a law enforcement robot remember the place called a movie theater?
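The "context blindness" described above can be made concrete with a small sketch. The label set, function names, and logic below are all hypothetical; the point is only that a detector cannot use a concept (like "movie screen") that its programmer never described:

```python
# Hypothetical sketch of context blindness in a threat detector.
# The label set is fixed by the programmer; "movie_screen" was never
# described, so the system cannot use it to discount a detection.

KNOWN_OBJECTS = {"police_uniform", "weapon", "vehicle"}  # no "movie_screen"!

def classify(detected_label: str) -> str:
    """Map a raw detection to a known label, or 'unknown'."""
    return detected_label if detected_label in KNOWN_OBJECTS else "unknown"

def is_real_threat(detected_label: str, context: str) -> bool:
    """Flags any weapon as a threat. The context parameter exists,
    but nothing in the logic ever consults it."""
    return classify(detected_label) == "weapon"

# A gun in a live street scene and a gun on a cinema screen are identical here.
print(is_real_threat("weapon", context="street"))        # True
print(is_real_threat("weapon", context="movie_screen"))  # True -- false alarm
```

Both calls return the same answer because the distinction the human cares about simply does not exist anywhere in the code.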

So in some scenarios, that kind of robot could walk into a movie theater and see an action movie. The thing is, making this kind of system is extremely complicated. Every single thing the system needs to react to must be described, and that description must be written in a programming language. That makes the program very complicated. So is it possible that the coder simply forgets some object, like a computer screen? That could cause a terrible situation if a combat robot marched into a shopping center while somebody was playing an action game on a laptop computer.

A surveillance camera that raises automatic alarms by using image recognition might raise false alarms if it sees weapons on screens. This is the problem with artificial intelligence: it is the ultimate tool, but it also makes mistakes. Automatic alarm systems based on artificial intelligence recognize objects such as weapons or hand movements, and they can call security or the police automatically. But the problem is that the system needs extremely powerful and well-written computer code.
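One way such false alarms could be reduced is to suppress weapon detections that lie entirely inside a region the system has recognized as a display screen. The sketch below is an invented, simplified illustration of that idea (bounding boxes as plain tuples, all names and coordinates hypothetical):

```python
# Hypothetical sketch: suppress weapon alarms for detections that fall
# completely inside a region recognized as a display screen.
# Boxes are (x1, y1, x2, y2) pixel coordinates; all data here is invented.

def inside(inner, outer):
    """True if box `inner` lies completely within box `outer`."""
    return (inner[0] >= outer[0] and inner[1] >= outer[1]
            and inner[2] <= outer[2] and inner[3] <= outer[3])

def should_alarm(weapon_boxes, screen_boxes):
    """Alarm only for weapons that are not shown on any detected screen."""
    for weapon in weapon_boxes:
        if not any(inside(weapon, screen) for screen in screen_boxes):
            return True
    return False

screens = [(100, 100, 300, 250)]                      # a laptop in the frame
print(should_alarm([(150, 150, 200, 200)], screens))  # False: on the screen
print(should_alarm([(400, 120, 450, 180)], screens))  # True: outside it
```

Of course, this only shifts the burden: now the screen detector must itself be reliable, which is exactly the kind of cascading complexity the paragraph above describes.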

