
Archive for the ‘augmented reality’ category: Page 19

Sep 25, 2022

Guy Who Invented the Word “Metaverse” Building His Own Metaverse

Posted by in categories: augmented reality, bitcoin, robotics/AI

The science fiction icon who coined and popularized the term “metaverse” is pausing his literary career to build his own.

As revealed by Wired, “Snow Crash” author and cyberpunk pioneer Neal Stephenson is working with a crypto entrepreneur to create an open metaverse platform that will, its creators hope, be a more decentralized alternative to Big Tech metaverses like those run by Fortnite and Facebook.

“It’s like Neal is coming down out of the mountains like Gandalf, to restore the metaverse to an open, decentralized, and creative order,” said robotics and augmented reality entrepreneur Rony Abovitz, who is also acting as a strategic advisor to Lamina1, the company Stephenson is cofounding with Bitcoin Foundation head Peter Vessenes.

Sep 23, 2022

Metaverse is the Doom of Engineering, Thanks to its Tactless Architecture

Posted by in categories: augmented reality, blockchains, climatology, education, robotics/AI, virtual reality

Those venturing into the architecture of the metaverse have already asked themselves this question: what becomes of design in a playful environment where all formal dreams are possible, where determining aspects of architecture such as solar orientation, ventilation, and climate are no longer necessary, and where, to Louis Kahn’s despair, there is no longer a dynamic of light and shadow, just an open and infinite field? The metaverse is an extension, or as some call it a combination, of several powerful technologies: augmented reality, virtual reality, mixed reality, artificial intelligence, blockchain, and a 3D world.

These technologies are still being researched. However, the metaverse seems poised to make a significant difference in the education domain, and its ability to connect students across the world on a single metaverse platform may bring positive change. But the metaverse is not only about remote learning; it is much more than that.

Architecture emerged on the construction site, at a time when there was no drawing, only experimentation. Over time, thanks to Brunelleschi and the Florence dome in the 15th century, we witnessed the first detachment from masonry: a social division of labor from which liberal art and mechanical art emerged. This detachment generated different challenges and placed architecture on an oneiric plane, tied to paper. In other words, we no longer build structures; we design them. Now, six centuries later, it looks like we are getting ready to take another step away from the construction site, abruptly distancing ourselves from engineering and construction.

Sep 14, 2022

US military set to get first delivery from $22 billion Microsoft HoloLens deal

Posted by in categories: augmented reality, military

Microsoft’s augmented reality headset, the HoloLens, has been in the works for years now, but it’s been a while since we’ve heard any news. We were seeing demos of it way back in 2015, but Microsoft has been pretty quiet about the tech in recent years when it comes to a consumer release.

What we’ve heard tons about is Microsoft’s deal to supply the United States Army with HoloLens tech. We first got wind of the deal back in 2018, with talk of a $480 million contract to help “increase lethality” in combat missions. It wasn’t until 2021 that Microsoft officially signed a much pricier $22 billion contract with the Army for a military-grade HoloLens supply.

Sep 5, 2022

Apple Researchers Develop NeuMan: A Novel Computer Vision Framework that can Generate Neural Human Radiance Field from a Single Video

Posted by in categories: augmented reality, computing, mapping, neuroscience

The development of Neural Radiance Fields (NeRF) greatly enhanced the quality of novel view synthesis. NeRF was first proposed as a way to reconstruct a static scene from a series of posed photographs, but it has been swiftly extended to dynamic and uncalibrated scenes. With the help of sizable controlled datasets, recent works additionally concentrate on animating these human radiance field models, broadening the application domain of radiance-field-based modeling to provide augmented reality experiences. In this study, the researchers focus on the case where just one video is given. They aim to reconstruct the human and static scene models and enable novel pose rendering of the person without the need for pricey multi-camera setups or manual annotations.
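To make the core idea concrete: a NeRF represents a scene as a function from a 3D point to density and color, and renders a pixel by compositing that function along a camera ray. Below is a minimal sketch of that rendering step only; the analytic Gaussian “field” and all its parameters are invented stand-ins for a trained MLP, not the paper’s actual model.

```python
import numpy as np

# Minimal sketch of NeRF-style volume rendering along a single camera ray,
# the core operation behind reconstructing a scene from posed photographs.
# A real NeRF replaces `field` with a trained MLP; the Gaussian density blob
# and constant color below are purely illustrative.
def field(points: np.ndarray):
    """Toy radiance field: per-sample density and color."""
    density = 10.0 * np.exp(-np.sum(points**2, axis=-1) / 0.1)  # blob at origin
    color = np.tile([1.0, 0.0, 0.0], (len(points), 1))          # constant red
    return density, color

origin = np.array([0.0, 0.0, -2.0])
direction = np.array([0.0, 0.0, 1.0])          # unit-length ray direction
t = np.linspace(0.0, 4.0, 128)                 # sample depths along the ray
samples = origin + t[:, None] * direction

density, color = field(samples)
delta = np.diff(t, append=t[-1])               # spacing between samples
alpha = 1.0 - np.exp(-density * delta)         # opacity of each ray segment
transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
pixel = np.sum((transmittance * alpha)[:, None] * color, axis=0)
print("rendered pixel:", pixel)                # alpha-composited RGB
```

Training a NeRF amounts to adjusting the field so that pixels rendered this way match the input photographs from their known camera poses.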

Neural Actor can create novel human poses, but it needs several videos. Even with the most recent improvements in NeRF techniques, this is far from a simple task: the NeRF models must be trained using many cameras, constant lighting and exposure, transparent backgrounds, and precise human geometry. HyperNeRF cannot be controlled by human poses; instead, it creates a dynamic scene from a single video. ST-NeRF uses many cameras to reconstruct each person with a time-dependent NeRF model, although editing is limited to changing the bounding box. HumanNeRF creates a human model from a single video with carefully annotated masks; however, it does not demonstrate generalization to novel poses.

With a model trained on a single video, Vid2Actor can produce new human poses, but it cannot model the surroundings. The researchers solve these issues by proposing NeuMan, a framework that can render novel human poses and novel viewpoints while reconstructing both the person and the scene from a single in-the-wild video. From the video of a moving camera, they first estimate the camera poses, the sparse scene model, the depth maps, the human pose, the human shape, and the human masks; these estimates then supervise the training of separate human and scene NeRF models, enabling high-quality pose-driven rendering.
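The pipeline described above is, at a high level, preprocessing followed by two separate reconstructions. The sketch below captures only that structure; every function and name in it is a hypothetical placeholder, not the NeuMan authors’ actual code or API.

```python
# Hedged sketch of the decomposition described above: a single in-the-wild
# video is preprocessed into per-frame estimates, which then supervise two
# separate radiance fields, one for the static scene and one for the human.
# All names here are hypothetical placeholders, not the NeuMan codebase.

def preprocess(frames):
    """Stand-ins for the per-frame estimates listed above."""
    n = len(frames)
    return {
        "camera_poses": [f"pose_{i}" for i in range(n)],   # e.g. structure-from-motion
        "sparse_scene": "point_cloud",                     # sparse 3D reconstruction
        "depth_maps":   [f"depth_{i}" for i in range(n)],  # monocular depth estimates
        "human_masks":  [f"mask_{i}" for i in range(n)],   # person segmentation
        "human_poses":  [f"body_{i}" for i in range(n)],   # fitted body model per frame
    }

def train_nerf(name, supervision):
    """Placeholder for fitting one radiance field to its supervision signals."""
    return f"{name} NeRF trained on {len(supervision)} frames"

frames = [f"frame_{i}" for i in range(3)]                  # a short monocular video
signals = preprocess(frames)
scene_model = train_nerf("scene", signals["depth_maps"])   # static background
human_model = train_nerf("human", signals["human_poses"])  # pose-conditioned human
print(scene_model)
print(human_model)
```

Keeping the human and scene in separate models is what lets the system re-pose the person and move the camera independently of the reconstructed background.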

Sep 2, 2022

No VR or AR: A new pocket-size eyeglass will be just big screen experience in your eyes

Posted by in categories: augmented reality, computing, education, mobile phones, virtual reality

You’ll need to wait till 2023 to get them, though.

Lenovo has unveiled its T1 Glasses at its Tech Life 2022 event, promising to put a full HD video-watching experience right inside your pocket, according to a company press release.

Mobile computing devices have exploded in the past few years as gaming has become more intense, and various video streaming platforms have gathered steam. The computing power of smartphones and tablets has increased manifold. Whether you want to ambush other people in an online shooting game or sit back and watch a documentary in high-definition, a device in your pocket can help you do that with ease.

Continue reading “No VR or AR: A new pocket-size eyeglass will be just big screen experience in your eyes” »

Sep 1, 2022

Will AR Smart Glasses Replace Smartphones and Become our Personal Buddy Bots?

Posted by in categories: augmented reality, mobile phones, robotics/AI

When Steve Jobs unveiled the iPhone in 2007, no one understood at the time how disruptive that device would be to existing technology. Now with rumors of Apple launching their augmented reality (AR) smart glasses products next year, people are speculating about how disruptive this technology will be.

Since the iPhone is one of Apple’s primary revenue streams, the company may be cautious about releasing a product that could encroach on its own turf. However, as we’ll suggest below, it may not be an either/or situation for users.

Aug 31, 2022

Augmented Reality & Not Needing Physical Objects — Mark Zuckerberg & Joe Rogan

Posted by in categories: augmented reality, virtual reality

https://www.youtube.com/watch?v=Tgp_0FvKyyg

At the moment I think Meta VR gets laughed at, but this is a good explanation.


Clip from The Joe Rogan Experience #1863 with Mark Zuckerberg.
August 25th 2022

Continue reading “Augmented Reality & Not Needing Physical Objects — Mark Zuckerberg & Joe Rogan” »

Aug 28, 2022

A Case Study For The Industry: LG Investing In Metaverse

Posted by in categories: augmented reality, business, transportation, virtual reality

As the world increasingly embraces Web3, corporations are turning to metaverse applications to stay ahead of the curve. According to Verified Market Research, the metaverse market was valued at USD 27.21 billion in 2020 and is anticipated to reach USD 824.53 billion by 2030, expanding at a CAGR of 39.1 percent from 2022 to 2030. This growth is driven by the increasing demand for AR/VR content and gaming and the need for more realistic and interactive training simulations.
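For a sense of what such a projection implies, compound annual growth relates the endpoints by end = start × (1 + CAGR)^years. A minimal sketch follows; the report’s exact base year and base value behind the 39.1 percent figure are not stated here, so the period used is an assumption.

```python
# Compound annual growth rate implied by two endpoint values.
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

# USD billions, per the figures quoted above.
implied = cagr(27.21, 824.53, 2030 - 2020)
print(f"implied CAGR over 2020-2030: {implied:.1%}")  # ~40.7%
# The report's 39.1% covers 2022-2030, i.e. a different (unstated) base value.
```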

These startups show proof of concept with a working product and a clear value proposition for businesses and consumers.


Launch a corporate accelerator: Another way to increase your exposure to the Metaverse is to launch a corporate accelerator. This will give you access to a broader range of startups and help you build a more diverse portfolio. In addition, it will allow you to offer mentorship and resources to the startups you invest in.

Continue reading “A Case Study For The Industry: LG Investing In Metaverse” »

Aug 25, 2022

Deep Dive: Why 3D reconstruction may be the next tech disruptor

Posted by in categories: augmented reality, robotics/AI, virtual reality

Artificial intelligence (AI) systems must understand visual scenes in three dimensions to interpret the world around us. For that reason, images play an essential role in computer vision, significantly affecting quality and performance. Unlike widely available 2D data, 3D data is rich in scale and geometry information, giving machines a better understanding of their environment.

Data-driven 3D modeling, or 3D reconstruction, is a growing computer vision domain increasingly in demand from industries including augmented reality (AR) and virtual reality (VR). Rapid advances in implicit neural representation are also opening up exciting new possibilities for virtual reality experiences.
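To make “implicit neural representation” concrete: instead of storing a mesh or voxel grid, the shape is a function that maps any 3D point to a value whose zero level set is the surface. The sketch below uses an analytic sphere as a stand-in for the trained network; everything in it is illustrative.

```python
import numpy as np

# An implicit 3D representation: a function whose zero level set is the
# surface. Neural methods learn this function with an MLP; a sphere's signed
# distance function stands in here for illustration.
def sdf_sphere(points: np.ndarray, radius: float = 1.0) -> np.ndarray:
    """Signed distance to a sphere at the origin: negative inside, positive outside."""
    return np.linalg.norm(points, axis=-1) - radius

# Query the representation on a grid, as a reconstruction pipeline would
# before extracting an explicit mesh (e.g. with marching cubes).
axis = np.linspace(-1.5, 1.5, 32)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
d = sdf_sphere(grid.reshape(-1, 3)).reshape(32, 32, 32)
print("voxels within 0.05 of the surface:", int((np.abs(d) < 0.05).sum()))
```

Because the representation is a continuous function rather than a fixed grid, it can be queried at any resolution, which is part of what makes it attractive for AR and VR rendering.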

Aug 9, 2022

How image features influence reaction times

Posted by in categories: augmented reality, biotech/medical, neuroscience, virtual reality

It’s an everyday scenario: you’re driving down the highway when out of the corner of your eye you spot a car merging into your lane without signaling. How fast can your eyes react to that visual stimulus? Would it make a difference if the offending car were blue instead of green? And if the color green shortened that split-second period between the initial appearance of the stimulus and the eye’s movement toward it (a movement known to scientists as a saccade), could drivers benefit from an augmented reality overlay that made every merging vehicle green?

Qi Sun, a joint professor in Tandon’s Department of Computer Science and Engineering and the Center for Urban Science and Progress (CUSP), is collaborating with neuroscientists to find out.

He and his Ph.D. student Budmonde Duinkharjav, along with colleagues from Princeton, the University of North Carolina, and NVIDIA Research, recently authored the paper “Image Features Influence Reaction Time: A Learned Probabilistic Perceptual Model for Saccade Latency,” presenting a model that can be used to predict temporal gaze behavior, particularly saccadic latency, as a function of the statistics of a displayed image. Inspired by neuroscience, the model could ultimately have great implications for telemedicine, e-sports, and any other arena in which AR and VR are leveraged.
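As a toy illustration of what a probabilistic perceptual model of this kind outputs, the sketch below treats saccadic latency as a distribution conditioned on an image feature (here, contrast). The log-normal form and all numbers are invented for illustration; the paper learns its model from gaze data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy probabilistic latency model: latency is a distribution conditioned on an
# image feature, not a single number. Form and parameters are illustrative only.
def latency_ms(contrast: float, n: int = 10_000) -> np.ndarray:
    typical = 250.0 - 60.0 * contrast     # assume higher contrast -> faster saccades
    return rng.lognormal(mean=np.log(typical), sigma=0.15, size=n)

for c in (0.2, 0.8):
    s = latency_ms(c)
    print(f"contrast {c}: median {np.median(s):.0f} ms, "
          f"95th percentile {np.percentile(s, 95):.0f} ms")
```

Predicting the full distribution rather than a mean is what makes such a model useful for, say, bounding worst-case reaction times in an AR driving overlay.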
