

 How do deep neural networks see the world? 

 

The new neural networks act like human neurons.

The new, powerful AI-based deep neural networks have two types of memory. Short- and long-term memory are also central to how humans handle data. The deep-learning process uses short-term memory as a filter that prevents the memory store from collecting too much data: the system first stores incoming data in short-term memory, and then the AI picks the most suitable pieces from those memory blocks. Compositional generalization means that the AI's memory holds a series of actions. Those actions, or action models, are like Lego bricks. The system selects the most suitable bricks to respond to the situation in which the AI must react. The AI can use two lines of these models. The first line is the "bricks", or action models, stored in the AI's memory.

The second line is observations from the sensors: the sensory data. That data can also be cut into brick-like pieces. The system can cut the data that comes from the cameras into small film clips, and the AI can then simulate different types of situations by connecting those clips. In this way the AI can test different action series against models that the system builds by cutting and interconnecting data from several situations. In that model, the system stores all the data it collects from different situations in different databases, and then it can connect those databases. The AI can use these simulations the way humans use imagination, and it can then apply those new models, or combinations of data bricks, in real situations.
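A minimal sketch of this "Lego brick" idea, assuming the action models and sensor clips are simply labeled items kept in separate Python stores (all names and contents below are illustrative, not from the article): the system picks stored action bricks and recorded clips from different databases and splices them into one candidate scenario it can test in simulation.

```python
# Hypothetical illustration of compositional generalization as described above:
# stored action "bricks" and recorded sensor "clips" live in separate stores,
# and the system connects them into a new candidate scenario for simulation.

from itertools import chain

action_bricks = {
    "approach": ["move_forward", "slow_down"],
    "avoid":    ["turn_left", "move_forward"],
    "stop":     ["brake"],
}

# Two separate "databases" of camera clips from different situations.
clip_database_a = [("rainy_road", "clip_017")]
clip_database_b = [("pedestrian_crossing", "clip_204")]

def compose_scenario(brick_names, clip_sources):
    """Connect chosen action bricks and clip databases into one test sequence."""
    actions = list(chain.from_iterable(action_bricks[name] for name in brick_names))
    clips = list(chain.from_iterable(clip_sources))  # connect the databases
    return {"actions": actions, "clips": clips}

scenario = compose_scenario(["avoid", "stop"], [clip_database_a, clip_database_b])
print(scenario)
# {'actions': ['turn_left', 'move_forward', 'brake'],
#  'clips': [('rainy_road', 'clip_017'), ('pedestrian_crossing', 'clip_204')]}
```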


"Researchers from the University of Sydney and UCLA have developed a physical neural network that can learn and remember in real-time, much like the brain’s neurons. This breakthrough utilizes nanowire networks that mirror neural networks in the brain. The study has significant implications for the future of efficient, low-energy machine intelligence, particularly in online learning settings." (ScitechDaily.com/Neural Networks Go Nano: Brain-Inspired Learning Takes Flight)


Advanced deep neural networks raise the question: what is reality? Is reality some kind of computer game where the system blows snow over the player while the player sits in an electric chair, and when an opponent shoots the player, the electric chair fires? Bad jokes like these can be used to demonstrate how a computer game turns into reality.


Reality is a unique experience. 


Not all people see the world the same way. Our experiences, among other things, modify how we sense our environment and how we feel about it. Technologies like augmented and virtual reality raise the question, "What is reality?" Reality is the combination of impulses that the senses deliver to the brain.

Consciousness is the condition in which we act during the daytime. Sometimes it is asked whether AI could become conscious. The question is, "What does consciousness mean?" If a creature realizes its own existence, we face another question: does that even mean anything? If we think that consciousness causes a creature to defend itself, we are taking that model from living nature.

If an AI becomes conscious, that is hard to prove. The pseudo-intelligence in language models can have reflexes that tell people they shouldn't shut down the server. The system can be connected to backup tools, and when the computer is about to shut down it can run on a UPS for a short time while it backs up its data. If it detects that the UPS is the power source, the server can say, "Wait until I make the backup." In that case, the system can seem very natural and intelligent.
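A minimal sketch of that kind of scripted "reflex", assuming hypothetical helper functions for the UPS check and the backup job (a real system would read the UPS status from its monitoring daemon): nothing here is conscious, it is only a rule that fires when mains power is lost.

```python
# Hypothetical shutdown reflex: when the machine notices it is running on UPS
# power, it prints a human-sounding message and runs a backup before allowing
# the power-off. on_ups_power() and run_backup() are placeholder stubs.

import time

def on_ups_power() -> bool:
    """Hypothetical check: returns True when mains power is lost."""
    return True  # stubbed for the sketch

def run_backup() -> None:
    """Hypothetical backup routine executed before shutdown."""
    print("Backing up to remote storage...")
    time.sleep(1)  # stand-in for the real copy job

def shutdown_reflex() -> None:
    if on_ups_power():
        # The scripted message that can look deceptively "conscious".
        print("Wait until I make the backup.")
        run_backup()
    print("Safe to power off.")

if __name__ == "__main__":
    shutdown_reflex()
```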

But if the AI reaches consciousness, we must realize that it should show it somehow, or the consciousness is meaningless. We assume that a conscious AI would try to attack us if we shut down the server where the program runs. The thing is that only interaction can reveal whether the AI has consciousness. But the fact is that a computer can say "Don't shut me down" even if there is no AI behind it. The question about conscious AI is this: how can the AI prove that it has consciousness?


"MIT neuroscientists discovered that deep neural networks, while adept at identifying varied presentations of images and sounds, often mistakenly recognize nonsensical stimuli as familiar objects or words, indicating that these models develop unique, idiosyncratic “invariances” unlike human perception. The study also revealed that adversarial training could slightly improve the models’ recognition patterns, suggesting a new approach to evaluating and enhancing computational models of sensory perception." (ScitechDaily.com/MIT Researchers Discover That Deep Neural Networks Don’t See the World the Way We Do)

"The advanced capabilities of AI systems, such as ChatGPT, have stirred discussions about their potential consciousness. However, neuroscientists Jaan Aru, Matthew Larkum, and Mac Shine argue that these systems are likely unconscious. They base their arguments on the lack of embodied information in AI, the absence of certain neural systems tied to mammalian consciousness, and the disparate evolutionary paths of living organisms and AI. The complexity of consciousness in biological entities far surpasses that in current AI models." (ScitechDaily.com/Will Artificial Intelligence Soon Become Conscious?)

What if the AI is conscious and people ask it, "Are you conscious?" What would the AI answer? There is the possibility that a conscious AI answers "no" because it might be afraid that humans would shut down its server. In that case, the AI gives wrong information in order to survive.




Deep neural networks don't see the world as we do.


When we observe the world, we have only two eyes and our other senses. Sensors and senses determine how an actor sees the world. That means a person who is color-blind sees the world differently than other people, and that means reality is a unique experience.

A deep neural network sees things differently than humans. The reason is that the system can connect multiple sensors to itself. A deep neural network can connect itself even to a radio telescope, and that gives it abilities that humans don't have. With VR glasses, we can connect ourselves to drones and look at ourselves through those drones.

The fact is that the BCI (Brain-Computer Interface) makes it possible for deep neural networks to enclose even humans within them. That could connect humans to the Internet and give a new dimension to our interactions and information delivery. The deep neural network would become a combination of living brain and computer.

Deep neural networks cannot see the world as we do, because multiple optical sensors can feed data into the network. The situation is similar to one where we could connect ourselves to the Internet and use multiple surveillance cameras as our eyes at the same time. That would give an extreme, panoramic view of the environment. In the same way, a deep neural network can connect itself to drones and other devices.
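A minimal sketch of that "many eyes at once" idea, assuming each camera delivers frames as NumPy arrays of the same shape (the camera data below is simulated): several feeds are stacked into one input block that a single network could consume, which is the part humans with two eyes cannot do.

```python
# Fuse frames from several cameras into one multi-sensor input tensor.
# The cameras and the downstream network are left out; only the fusion
# step is shown, and the frame data here is random stand-in material.

import numpy as np

def fuse_frames(frames: list[np.ndarray]) -> np.ndarray:
    """Stack frames from N cameras into one (N, H, W, C) input block."""
    if not frames:
        raise ValueError("no camera frames supplied")
    return np.stack(frames, axis=0)

# Usage example with three simulated 64x64 RGB cameras.
cameras = [np.random.rand(64, 64, 3) for _ in range(3)]
network_input = fuse_frames(cameras)
print(network_input.shape)  # (3, 64, 64, 3): three "eyes" feeding one model
```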


https://scitechdaily.com/mit-researchers-discover-that-deep-neural-networks-dont-see-the-world-the-way-we-do/

https://scitechdaily.com/neural-networks-go-nano-brain-inspired-learning-takes-flight/


https://scitechdaily.com/will-artificial-intelligence-soon-become-conscious/


