"Experts advocate for human-centered AI, urging the design of technology that supports and enriches human life, rather than forcing humans to adapt to it. A new book featuring fifty experts from over twelve countries and disciplines explores practical ways to implement human-centered AI, addressing risks and proposing solutions across various contexts." (ScitechDaily, 50 Global Experts Warn: We Must Stop Technology-Driven AI)
AI is the ultimate tool for handling things that behave predictably. Things like planetary orbits and other mechanical systems that follow fixed natural laws are easy for AI. AI can also feel human; it can even speak with a certain accent. And that is not very hard to program.
It just requires an accent wordbook, and then the AI can transform grammatically standard text into text with a certain accent. Then the AI feeds that data to the speech synthesizer. The accent mode follows the same rules as language translation programs. The accent wordbook is also needed in the other direction, to translate accented spoken commands back into grammatically standard form so that the system understands them.
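As a rough illustration of the idea, here is a minimal sketch of such a wordbook: a dictionary-based substitution feeding a speech synthesizer. The word mappings and the `synthesize` stub are hypothetical placeholders, not any real product's API.

```python
# Minimal sketch of a dictionary-based "accent wordbook".
# The mappings and the synthesizer stub are hypothetical examples.

ACCENT_WORDBOOK = {
    "about": "aboot",        # toy example of an accent substitution
    "thinking": "thinkin'",
    "going": "goin'",
}

def apply_accent(text: str) -> str:
    """Replace standard words with their accented forms, word by word."""
    words = text.split()
    return " ".join(ACCENT_WORDBOOK.get(w.lower(), w) for w in words)

def synthesize(text: str) -> None:
    """Stand-in for a real text-to-speech engine."""
    print(f"[speech synthesizer] {text}")

if __name__ == "__main__":
    synthesize(apply_accent("I am thinking about going home"))
    # -> [speech synthesizer] I am thinkin' aboot goin' home
```

The same table, read in reverse, would map accented words back to their standard forms before the command is interpreted.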
The problem with spoken commands is that most people don't speak grammatically correctly in everyday life. Computers require that the user gives commands in precisely the right way, which means the user must use precise grammar. In AI-based language models the system may accept multiple phrasings for a command, but the user must still use one of those accepted phrasings. And even if the AI reacts to accents, the user must use an accent that is on the list.
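A simple way to picture that restriction is a command matcher that only accepts phrasings from a fixed list; anything outside the list is rejected. The command phrases below are invented examples, not any real assistant's grammar.

```python
# Sketch: a command is recognized only if it matches one of the
# accepted phrasings. Free-form variations are rejected.
# The phrasings below are invented examples.

ACCEPTED_COMMANDS = {
    "turn_on_lights": {"turn on the lights", "lights on", "switch the lights on"},
}

def interpret(utterance: str) -> str | None:
    """Return the command name if the utterance matches an accepted phrasing."""
    normalized = utterance.strip().lower()
    for command, phrasings in ACCEPTED_COMMANDS.items():
        if normalized in phrasings:
            return command
    return None  # the system does not understand other wordings

print(interpret("Lights on"))                           # -> turn_on_lights
print(interpret("could you maybe light up the room"))   # -> None
```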
Users create most AI problems themselves. If we ask an AI to predict the moment of our death, we get what we asked for. When we search only for negative things like death and violence, the screen is full of those things. So is this kind of negative answer the user's fault or the AI's?
People also expect too much from these systems. They want the AI to write a thesis for them, and that kind of use causes ethical problems. In those cases the AI acts like a ghostwriter, and the use of ghostwriters is strictly prohibited. Presenting another's text as one's own is called plagiarism.
AI is an excellent tool for collecting sources for a thesis, but the person should already know the topic for which the AI is used as a tool, and they should write the text themselves. Otherwise the result can contain many factual errors about the topic.
AI is a good tool for computer programming, but it's not as good when it must act as a therapist. If people want to get bad answers from AI, and using it turns into a kind of masochism, that is their problem. If people want to generate Nazi-soldier images with AI, we must remember that there are already plenty of authentic Nazi-soldier images on the net. So why should the AI make that kind of image?
Those things are problematic. AI can create many things, like the cyborg chicken above this text. But why should it recreate historical figures? I once tried to make an AI generate a photorealistic version of Vincent van Gogh's "Starry Night" painting, and the AI refused to do it. There would be no problem with AI art if it were mentioned that it was made using AI.
AI plagiarism is also easy to prevent. The AI just has to keep records or databases of the content it has created. There, clients such as social media channels could check whether a text or image was created using AI, and then add the label "made by AI". This kind of plagiarism detection is already in use in high schools, and it has been used for over 15 years. So could this type of plagiarism detection also be used for AI?
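As a rough sketch of how such a registry might work (the hashing scheme and registry class below are assumptions for illustration, not an existing service), the generator records a fingerprint of everything it produces, and a client later checks a suspect text against those fingerprints.

```python
# Sketch of an AI-output registry: every generated text is hashed and
# stored, so a client can later check whether a given text came from the AI.
# This is an illustrative assumption, not a description of any real system.

import hashlib

class GeneratedContentRegistry:
    def __init__(self) -> None:
        self._hashes: set[str] = set()

    def register(self, content: str) -> str:
        """Record a fingerprint of content the AI has produced."""
        digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
        self._hashes.add(digest)
        return digest

    def was_generated_by_ai(self, content: str) -> bool:
        """Check whether an exact copy of this content was produced by the AI."""
        digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
        return digest in self._hashes

registry = GeneratedContentRegistry()
registry.register("This essay was written by a language model.")

print(registry.was_generated_by_ai("This essay was written by a language model."))  # True
print(registry.was_generated_by_ai("This essay was written by a student."))         # False
```

Exact hashing only catches verbatim copies; a real detector would need fuzzier matching, but the record-keeping idea is the same.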
https://scitechdaily.com/50-global-experts-warn-we-must-stop-technology-driven-ai/