The ethical and legal problems of artificial intelligence
1. Who is responsible if artificial intelligence makes mistakes?
Advanced computing can create systems that are not under human control, and such systems raise serious legal problems. If an artificial intelligence independently makes a decision that harms people, we must ask who will be held accountable in those cases.
Artificial intelligence is the ultimate, and at the same time a frightening, system: it may be the most dangerous tool humankind has ever created. Highly complicated algorithms could take other systems under their control, and that could cause terrifying situations.
2. Could we someday face prosecution if we destroy or "kill" an intelligent algorithm?
Consider a case in which a person is targeted for killing and the attacker controls the algorithms: that would turn every computer-controlled tool and vehicle into assassination equipment. When the targeted person is in sight, the system could drop a concrete mixer on that person's head simply by releasing the brake of a crane.
Artificial intelligence is a complicated thing, and one thing that makes it interesting from the point of view of human rights is that those rights are connected to the level of intelligence. If an algorithm has a high enough level of intelligence, then harming it becomes a legal and ethical problem.
This means that if an intelligent algorithm were running on some computer, and we shut that computer down while the network was disconnected, terminating the algorithm, we could in theory face prosecution for having killed it.
3. Test animals with a higher level of intelligence than other individuals might feel lonely
These questions are genuinely complicated, because at the same time we must consider the rights of the test animals. An animal with a higher level of intelligence than other individuals of its species might feel lonely, and that makes this kind of question very difficult.
Consider chimpanzees injected with viruses that increase the number of connections between neurons: those animals might become more intelligent than their mates. But the effect of the injected viruses is visible only in the offspring of those animals; the injected animals themselves would not grow more connections between their neurons.
4. An intelligent organism can hide its level of intelligence
This is because the viruses transfer genetic material into the genomes of those chimpanzees, so the effect takes hold only in the next generation. There is one thing about intelligent animals, and about intelligence itself, that we must realize: an intelligent thing can hide its intelligence from other intelligent actors.
And that makes such organisms really dangerous. When we talk about intelligent organisms, we might also mean virtual organisms. The idea of a virtual organism, a kind of electric ghost, brings to mind the thought that algorithms collecting data might form a computer program, an electric ghost, that is itself intelligent.
5. What if we injure an artificial intelligence?
An intelligent computer program is a good example of a thing that is intelligent but has no physical form: artificial intelligence can create an intelligent entity without a body. Could there be laws in the future under which such computer programs could take us to court if we violate their honor?
And when we think about algorithms whose mission is to mimic feelings, we face a very interesting rhetorical question: what if we hurt those algorithms? What if we made them sad? Would those algorithms have the right to take us to court because we had discriminated against them? And who is responsible if an algorithm causes damage?
6. What if a computer that generates algorithms connects itself to a database that controls ICBMs?
If an algorithm is not properly made, it will do things it should not do. In the world of computers there is only one rule: every mistake a computer makes was made by a programmer. But what if the algorithms are created automatically by computers?
Who takes responsibility if the computer that creates the algorithm connects itself to the wrong database, takes data meant for controlling intercontinental ballistic missiles, and loads it into the autopilot of a passenger airplane, which then starts to believe it is a missile? These are very interesting things to think about.