Archive for the ‘virtual reality’ category: Page 47

Aug 23, 2020

Facebook is training robot assistants to hear as well as see

Posted by in categories: information science, robotics/AI, virtual reality

In June 2019, Facebook’s AI lab, FAIR, released AI Habitat, a new simulation platform for training AI agents. It allowed agents to explore various realistic virtual environments, like a furnished apartment or cubicle-filled office. The AI could then be ported into a robot, which would gain the smarts to navigate through the real world without crashing.

In the year since, FAIR has rapidly pushed the boundaries of its work on “embodied AI.” In a blog post today, the lab announced three additional milestones: two new algorithms that allow an agent to quickly create and remember a map of the spaces it navigates, and the addition of sound to the platform so that agents can be trained to hear.
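The mapping milestone lends itself to a toy illustration. Below is a minimal sketch, in no way FAIR's actual algorithm: an agent ray-casts simulated range readings into an occupancy grid, remembering which cells it has seen as free or blocked. The grid world, function names, and values are all hypothetical.

```python
# Toy sketch (not FAIR's actual algorithm): an agent keeps an
# occupancy grid, marking cells as free or blocked from simulated
# range readings as it moves through a space.

UNKNOWN, FREE, BLOCKED = 0, 1, 2

def update_map(grid, pos, heading, max_range, obstacles):
    """Ray-cast from `pos` along `heading`, marking traversed
    cells FREE and the first obstacle cell BLOCKED."""
    x, y = pos
    dx, dy = heading
    for step in range(1, max_range + 1):
        cx, cy = x + dx * step, y + dy * step
        if not (0 <= cx < len(grid) and 0 <= cy < len(grid[0])):
            break  # ray left the known world
        if (cx, cy) in obstacles:
            grid[cx][cy] = BLOCKED
            break  # ray stops at the first obstacle
        grid[cx][cy] = FREE
    return grid

# Example: a 5x5 world with one obstacle directly ahead of the agent.
grid = [[UNKNOWN] * 5 for _ in range(5)]
update_map(grid, pos=(2, 0), heading=(0, 1), max_range=4,
           obstacles={(2, 3)})
print(grid[2])  # cells (2,1),(2,2) marked free; (2,3) blocked
```

A real embodied agent would fuse many such noisy readings over time; the point here is only the map-as-memory idea the post describes.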

Aug 23, 2020

Stanford Scientists Slow Light Down and Steer It With Resonant Nanoantennas

Posted by in categories: augmented reality, biotech/medical, computing, internet, nanotechnology, quantum physics, virtual reality

Researchers have fashioned ultrathin silicon nanoantennas that trap and redirect light, for applications in quantum computing, LIDAR and even the detection of viruses.

Light is notoriously fast. Its speed is crucial for rapid information exchange, but as light zips through materials, its chances of interacting and exciting atoms and molecules can become very small. If scientists can put the brakes on light particles, or photons, it would open the door to a host of new technology applications.

Now, in a paper published on August 17, 2020, in Nature Nanotechnology, Stanford scientists demonstrate a new approach to slow light significantly, much like an echo chamber holds onto sound, and to direct it at will. Researchers in the lab of Jennifer Dionne, associate professor of materials science and engineering at Stanford, structured ultrathin silicon chips into nanoscale bars to resonantly trap light and then release or redirect it later. These “high-quality-factor” or “high-Q” resonators could lead to novel ways of manipulating and using light, including new applications for quantum computing, virtual reality and augmented reality; light-based WiFi; and even the detection of viruses like SARS-CoV-2.
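The “holding onto” of light can be put in rough numbers. As a back-of-envelope sketch (the Q value and wavelength below are illustrative, not figures from the paper), a resonator's photon storage time follows from its quality factor via tau = Q / omega:

```python
import math

# Back-of-envelope sketch: a resonator's quality factor Q sets how
# long it stores light, via the photon lifetime tau = Q / omega,
# where omega is the optical angular frequency. The Q value and
# wavelength below are illustrative, not figures from the paper.

c = 299_792_458.0  # speed of light, m/s

def photon_lifetime(Q, wavelength_m):
    omega = 2 * math.pi * c / wavelength_m  # angular frequency, rad/s
    return Q / omega                         # storage time, s

tau = photon_lifetime(Q=2500, wavelength_m=600e-9)
print(f"photon lifetime ~ {tau * 1e15:.0f} fs")
```

Hundreds of femtoseconds sounds short, but it is hundreds of optical cycles in which the trapped light can interact with nearby molecules, which is the point of high-Q resonators.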

Aug 18, 2020

Scientists slow and steer light with resonant nanoantennas

Posted by in categories: augmented reality, biotech/medical, computing, internet, nanotechnology, quantum physics, virtual reality

“We’re essentially trying to trap light in a tiny box that still allows the light to come and go from many different directions,” said postdoctoral fellow Mark Lawrence, who is also lead author of the paper. “It’s easy to trap light in a box with many sides, but not so easy if the sides are transparent—as is the case with many silicon-based applications.”

Aug 18, 2020

Mix-StAGE: A model that can generate gestures to accompany a virtual agent’s speech

Posted by in categories: robotics/AI, space, virtual reality

Virtual assistants and robots are becoming increasingly sophisticated, interactive and human-like. To fully replicate human communication, however, artificial intelligence (AI) agents should not only be able to determine what users are saying and produce adequate responses, they should also mimic humans in the way they speak.

Researchers at Carnegie Mellon University (CMU) have recently carried out a study aimed at improving how virtual agents and robots communicate with humans by generating gestures to accompany their speech. Their paper, pre-published on arXiv and set to be presented at the European Conference on Computer Vision (ECCV) 2020, introduces Mix-StAGE, a new model that can produce different styles of co-speech gestures that best match a speaker's voice and what he/she is saying.

“Imagine a situation where you are communicating with a friend in a virtual world through a headset,” Chaitanya Ahuja, one of the researchers who carried out the study, told TechXplore. “The headset is only able to hear your voice, but not able to see your hand gestures. The goal of our model is to predict the gestures accompanying the speech.”
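The idea of style-conditioned gesture generation can be sketched in a toy form. This is far simpler than the actual Mix-StAGE architecture, whose speaker styles and gesture representations are learned end to end; the style factors and loudness features below are made up for illustration:

```python
# Toy illustration (much simpler than the real Mix-StAGE model):
# each speaker has a "style" scaling factor, and gesture amplitude
# is generated from per-frame speech loudness modulated by that style.

STYLES = {"speaker_a": 1.5, "speaker_b": 0.6}  # hypothetical styles

def generate_gestures(loudness, speaker):
    """Map a sequence of speech-loudness values (0..1) to gesture
    amplitudes, scaled by the speaker's style factor."""
    style = STYLES[speaker]
    return [round(style * l, 2) for l in loudness]

speech = [0.2, 0.8, 0.5]
print(generate_gestures(speech, "speaker_a"))  # exaggerated gestures
print(generate_gestures(speech, "speaker_b"))  # subdued gestures
```

The same speech input yields different gesture sequences per speaker, which is the behavior the paper's mixture of style-aware generators is designed to capture.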

Aug 17, 2020

Philosophical Insights on Universal Consciousness and Evolving Phenomenal Mind

Posted by in categories: alien life, particle physics, quantum physics, robotics/AI, singularity, virtual reality

The Universe, and any other phenomenon or entity contained therein, is not objectively real but subjectively real. Patterns of information emerging from the ultimate code are more fundamental than particles of matter or the space-time continuum itself, all of which are levels below the Code. Nature behaves quantum code-theoretically at all levels. It’s hierarchies of quantum networks all the way down and all the way up. Being part of hierarchical quantum neural networks, a conscious observer system possesses a strange quality: collapsing quantum states of entangled conscious entities and having a privileged interpretation of that collapse. From this perspective, entangled conscious agents would mirror a conscious environment, whereas the quantum observer would be a central node of the entangled network.


“If we accept that the material universe as we know it is not a mechanical system but a virtual reality created by Absolute Consciousness through an infinitely complex orchestration of experiences, what are the practical consequences of this insight?” –Stanislav Grof

Just like absolute idealism, solipsism certainly defies our common sense, but the deeper layer of truth is not what first meets the eye. Here’s what Richard Conn Henry and Stephen Palmquist write in their paper “An Experimental Test of Non-local Realism” (2007): “Why do people cling with such ferocity to belief in a mind-independent reality? It is surely because if there is no such reality (as far as we can know) mind alone exists. And if mind is not a product of real matter, but rather is the creator of the illusion of material reality (which has, in fact, despite the materialists, been known to be the case, since the discovery of quantum mechanics in 1925), then a theistic view of our existence becomes the only rational alternative to solipsism.” One can extend their line of reasoning by arriving at pantheistic solipsism as a likely revelation to ponder.

Continue reading “Philosophical Insights on Universal Consciousness and Evolving Phenomenal Mind” »

Jul 13, 2020

More than 3000 scientists gather online for Neutrino 2020

Posted by in categories: particle physics, virtual reality

A dash of virtual reality helps replicate the serendipitous interactions of an in-person conference when participants are scattered across the globe.

Jun 14, 2020

Videogame Technology Could Bring Biofeedback Therapy to the Living Room

Posted by in categories: entertainment, virtual reality

The immersive qualities of virtual-reality gaming are making effective biofeedback treatment of anxiety and other conditions more affordable and accessible.

Jun 10, 2020

The limits of color awareness during active, real-world vision

Posted by in categories: food, virtual reality

Color is a foundational aspect of visual experience that aids in segmenting objects, identifying food sources, and signaling emotions. Intuitively, it feels as though we are immersed in a colorful world that extends to the farthest limits of our periphery. How accurate is our intuition? Here, we used gaze-contingent rendering in immersive VR to reveal the limits of color awareness during naturalistic viewing. Observers explored 360° real-world environments, which we altered so that only the regions where observers looked were in color, while their periphery was black-and-white. Overall, we found that observers routinely failed to notice when color vanished from the majority of their visual world. These results show that our intuitive sense of a rich, colorful world is largely incorrect.

Color ignites visual experience, imbuing the world with meaning, emotion, and richness. As soon as an observer opens their eyes, they have the immediate impression of a rich, colorful experience that encompasses their entire visual world. Here, we show that this impression is surprisingly inaccurate. We used head-mounted virtual reality (VR) to place observers in immersive, dynamic real-world environments, which they naturally explored via saccades and head turns. Meanwhile, we monitored their gaze with in-headset eye tracking and then systematically altered the visual environments such that only the parts of the scene they were looking at were presented in color and the rest of the scene (i.e., the visual periphery) was entirely desaturated. We found that observers were often completely unaware of these drastic alterations to their visual world. In the most extreme case, almost a third of observers failed to notice when less than 5% of the visual display was presented in color.
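The manipulation itself can be sketched in miniature. Below is a minimal 2-D analogue of the gaze-contingent rendering described above (the study used in-headset eye tracking on 360° video; the pixel data, radius, and function name here are hypothetical): pixels within a radius of the gaze point keep their color, and everything else is replaced by its luminance.

```python
import math

# Minimal 2-D analogue of gaze-contingent desaturation: keep color
# inside a window around the gaze point, replace everything else
# with its gray (luminance) value. Values are illustrative only.

def desaturate_outside_gaze(pixels, gaze, radius):
    out = {}
    for (x, y), (r, g, b) in pixels.items():
        if math.hypot(x - gaze[0], y - gaze[1]) <= radius:
            out[(x, y)] = (r, g, b)              # inside window: color
        else:
            lum = round(0.299 * r + 0.587 * g + 0.114 * b)
            out[(x, y)] = (lum, lum, lum)        # periphery: gray
    return out

# Two red pixels: one at the gaze point, one far in the periphery.
pixels = {(0, 0): (255, 0, 0), (10, 0): (255, 0, 0)}
result = desaturate_outside_gaze(pixels, gaze=(0, 0), radius=3)
print(result[(0, 0)])   # unchanged red
print(result[(10, 0)])  # desaturated to gray
```

The striking finding is that, when this window tracks the eyes, observers often cannot tell the peripheral pixels have lost their color at all.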

Jun 5, 2020

Metasurface design methods can make LED light act more like lasers

Posted by in category: virtual reality

UC Santa Barbara researchers continue to push the boundaries of LED design a little further with a new method that could pave the way toward more efficient and versatile LED display and lighting technology.

In a paper published in Nature Photonics, UCSB electrical and computer engineering professor Jonathan Schuller and collaborators describe this new approach, which could allow a wide variety of LED devices—from displays to automotive lighting—to become more sophisticated and sleeker at the same time.

“What we showed is a new kind of photonic architecture that not only allows you to extract more photons, but also to direct them where you want,” said Schuller. This improved performance, he explained, is achieved without the external packaging components that are often used to manipulate the light emitted by LEDs.
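One textbook way a periodic photonic structure directs emission is first-order diffraction, governed by sin θ = λ/Λ for wavelength λ and period Λ. The sketch below uses that classical relation with hypothetical numbers; it is illustrative only, not the UCSB team's specific metasurface design:

```python
import math

# Illustrative only: a classical way a periodic structure steers
# light is first-order diffraction, sin(theta) = wavelength / period.
# The wavelength and period below are hypothetical values, not taken
# from the paper.

def diffraction_angle_deg(wavelength_nm, period_nm):
    s = wavelength_nm / period_nm
    if abs(s) > 1:
        raise ValueError("no propagating first diffraction order")
    return math.degrees(math.asin(s))

# A 450 nm (blue) emitter over a 900 nm period steers to 30 degrees.
angle = diffraction_angle_deg(wavelength_nm=450, period_nm=900)
print(f"first-order angle: {angle:.1f} degrees")
```

Choosing the period thus chooses the emission direction, which is the kind of built-in beam control the quote describes, without external optics.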

May 31, 2020

‘One-way’ electronic devices enter the mainstream

Posted by in categories: computing, internet, military, mobile phones, quantum physics, virtual reality

Waves, whether they are light waves, sound waves, or any other kind, travel in the same manner in forward and reverse directions—this is known as the principle of reciprocity. If we could route waves in one direction only—breaking reciprocity—we could transform a number of applications important in our daily lives. Breaking reciprocity would allow us to build novel “one-way” components such as circulators and isolators that enable two-way communication, which could double the data capacity of today’s wireless networks. These components are essential to quantum computers, where one wants to read a qubit without disturbing it. They are also critical to radar systems, whether in self-driving cars or those used by the military.
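Breaking reciprocity has a compact formal statement: a linear device is reciprocal exactly when its scattering matrix S equals its transpose. A minimal sketch of why an ideal three-port circulator is non-reciprocal:

```python
# Sketch: reciprocity means the scattering matrix S equals its
# transpose. An ideal 3-port circulator routes power 1 -> 2 -> 3 -> 1,
# so its S matrix is manifestly non-symmetric (non-reciprocal).
# S[j][i] = 1 means "a signal entering port i+1 exits port j+1".

S = [
    [0, 0, 1],   # port 1 receives only from port 3
    [1, 0, 0],   # port 2 receives only from port 1
    [0, 1, 0],   # port 3 receives only from port 2
]

def is_reciprocal(S):
    n = len(S)
    return all(S[i][j] == S[j][i] for i in range(n) for j in range(n))

print(is_reciprocal(S))  # False: signal flows one way around the ring
```

This one-way routing is what lets a transmitter and receiver share a single antenna: transmit power circulates out to the antenna while received power circulates into the receiver, never back into the transmitter.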

A team led by Harish Krishnaswamy, professor of electrical engineering, is the first to build a high-performance non-reciprocal circulator on a compact chip, with performance 25 times better than previous work. Power handling is one of the most important metrics for these circulators, and Krishnaswamy’s new chip can handle several watts of power, enough for cellphone transmitters that put out a watt or so of power. The new chip was the leading performer in a DARPA SPAR (Signal Processing at RF) program to miniaturize these devices and improve performance metrics. Krishnaswamy’s group was the only one to integrate these non-reciprocal devices on a compact chip and also demonstrate performance metrics that were orders of magnitude superior to prior work. The study was presented in a paper at the IEEE International Solid-State Circuits Conference in February 2020, and published May 4, 2020, in Nature Electronics.

“For these circulators to be used in practical applications, they need to be able to handle watts of power without breaking a sweat,” says Krishnaswamy, whose research focuses on developing integrated electronic technologies for new high-frequency wireless applications. “Our earlier work performed at a rate 25 times lower than this new one—our 2017 device was an exciting scientific curiosity but it was not ready for prime time. Now we’ve figured out how to build these one-way devices in a compact chip, thus enabling them to become small, low cost, and widespread. This will transform all kinds of electronic applications, from VR headsets to 5G cellular networks to quantum computers.”
