AIR (Augmented Intelligent Reality): when AI & AR come together

“Extended reality is dead. Now it’s time for artificial intelligence.” How many times have we heard this statement in recent months? At Onirix, however, we want to offer a different point of view: the possibility of joining both worlds to create new solutions that help companies and people.

Artificial Intelligence (AI) is not just ChatGPT or generative AI. At its core, it is the emulation of different aspects of human intelligence, from perception and recognition to reaction, planning, and problem solving.

In that sense, what will happen when AI wants to communicate with us? What will the communication interface be between our intelligent digital world and human beings? Without a doubt, augmented reality (AR) will play a fundamental role in everything related to visual information about the environment.

This is how we have conceived the concept of AIR: Augmented Intelligent Reality.

A glimpse into the future

The term AIR is the result of several use cases in which we have seen that Augmented Reality (AR) and Artificial Intelligence (AI) not only can coexist in harmony, but are destined to do so over the next few years.

In this way, we believe that the concept of augmented reality as the eyes and canvas of artificial intelligence will gradually take hold.

To see this for ourselves, let’s take an innovative device that, by all indications, will shape the future of the industry: the Ray-Ban Meta smart glasses. Although these glasses are still only “smart” through their camera and audio, they have a clearly defined roadmap that includes adding augmented reality through displays.


On the one hand, through multimodal AI, these glasses can already identify what they are seeing, such as a type of rock, an island on the horizon, clothing, or signs written in a given language. In other words, the glasses’ cameras are the eyes of Meta’s AI (but only when we ask them to “look”).

With this, the glasses augment our reality in return by offering us information: what kind of rock it is, which island is in front of us, or even a translation of a sign into our native language. In other words, the glasses become the AI’s canvas.

For now, the glasses respond to us in the form of audio. But sooner than we think (watch out for Meta Connect in September), we will be talking about three-dimensional images, tags, videos, avatars, filters over reality… and we will be facing a new augmented reality powered by artificial intelligence.

This is how Meta is showing the future of its AR glasses:


In this future, the major limitation of multimodal AI will be that, although it achieves a global understanding of the environment, it will not take into account the specific considerations of each space. In other words, if we want an AI that understands our business, we will have to feed it specifically for that purpose, so that it can assimilate all the available information and help us interact with it through intelligent, even predictive, augmented reality experiences.
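As a minimal sketch of what “feeding an AI specifically for a business” can look like in practice, the snippet below retrieves the company passages most relevant to a question before handing them to a model. Every name here (`documents`, `score`, `retrieve`) is hypothetical and not part of any Onirix or vendor API; a production system would use embeddings and a vector store rather than word overlap.

```python
# Illustrative sketch: ground an AI assistant on business-specific
# documents by retrieving the most relevant passages for a question.

def score(passage: str, question: str) -> int:
    """Count how many of the question's words appear in the passage."""
    passage_words = set(passage.lower().split())
    return sum(1 for w in question.lower().split() if w in passage_words)

def retrieve(documents: list[str], question: str, k: int = 1) -> list[str]:
    """Return the k passages most relevant to the question."""
    return sorted(documents, key=lambda d: score(d, question), reverse=True)[:k]

# Hypothetical company knowledge, e.g. extracted from machine manuals.
documents = [
    "Press M3 requires lubrication of the main bearing every 500 hours.",
    "The paint booth filters must be replaced monthly.",
    "Emergency stops are located at both ends of conveyor line 2.",
]

context = retrieve(documents, "How often does press M3 need lubrication?")
print(context[0])
```

The retrieved passage would then be supplied as context to the AI so its answers, and the AR content it drives, reflect that specific space rather than generic world knowledge.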

And this is where companies like Onirix come in, accompanying businesses through this transition: with combined knowledge of AI and AR, we can help generate use cases with real impact, whether for internal use by workers (imagine a factory) or for interaction with the end consumer (imagine a store).

AR and AI here and now

But let’s go back to the present for a moment after taking a look at a (possible) future. How can augmented reality and artificial intelligence coexist today?

Over the last few months, we have witnessed a series of announcements and launches that have brought AI closer to our everyday reality, such as the announcement of Gemini (multimodal AI) and Project Astra by Google, the launch of Llama 3 by Meta, the release of OpenAI’s GPT-4o, and the recent presentation of Apple Intelligence.

On the other hand, in the field of extended reality, the AR glasses we were talking about may not exist yet, but we do have virtual and mixed reality headsets, such as Microsoft HoloLens, Magic Leap, Pico, Meta Quest, and Apple Vision Pro.

These headsets, which still have considerable room for improvement in both ergonomics and cost, already showcase experiences that combine AI with mixed reality, another name for augmented reality mediated by a headset that interprets the space around it.

For example, there are experiences developed for Microsoft HoloLens and other headsets that allow an industrial company to optimize training or maintenance processes through spatial understanding.


Another approach to this type of mixed reality experience is to use truly widespread and accessible devices, so that projects can scale and the learning curve is reduced in any company.

We are talking about smartphones and tablets: devices that most users already have and that enable augmented reality experiences not only from mobile applications, but also from the phone’s own browser.

This is what is known as Web AR (web-based augmented reality), and it is where Onirix has brought the fusion of augmented reality and artificial intelligence.

Onirix AIR: Smart Spaces

Let’s go back to the Industry use case. Imagine an AI powered by the mapping of several machine rooms, which also receives real-time information from IoT (Internet of Things) sensors and has knowledge of the full set of user manuals for each of the machines.

With this data as a basis, the AI could generate customized training programs for each worker on the fly, suggest maintenance tasks, or warn of imminent risks. In other words, we would have Smart Spaces before us: the result of combining cutting-edge AR technology with AI. And the same could apply to a tourist destination or a store.
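As a purely hypothetical sketch of the kind of logic a Smart Space could run (the machine names, metrics, thresholds, and `check_readings` helper below are invented for illustration, not Onirix’s actual implementation), live IoT readings can be cross-checked against limits taken from the machines’ manuals to raise alerts that an AR layer would then anchor onto each machine:

```python
# Hypothetical Smart Spaces sketch: compare real-time IoT sensor
# readings against per-machine limits taken from user manuals, and
# produce alerts that an AR experience could display in place.

# Limits per machine, as they might be extracted from each manual.
MANUAL_LIMITS = {
    "press_m3": {"temperature_c": 80.0, "vibration_mm_s": 7.0},
    "lathe_t1": {"temperature_c": 65.0, "vibration_mm_s": 4.5},
}

def check_readings(machine: str, readings: dict[str, float]) -> list[str]:
    """Return an alert message for every reading above its manual limit."""
    alerts = []
    for metric, value in readings.items():
        limit = MANUAL_LIMITS.get(machine, {}).get(metric)
        if limit is not None and value > limit:
            alerts.append(f"{machine}: {metric} = {value} exceeds limit {limit}")
    return alerts

# Simulated real-time readings from IoT sensors on one machine.
for alert in check_readings("press_m3", {"temperature_c": 91.5, "vibration_mm_s": 3.2}):
    print(alert)
```

In a real deployment this rule-based check would be one input among many; the point is simply that space-specific data (here, manual limits plus live sensors) is what turns a generic AI into one that understands a particular factory floor.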

This is exactly what we have achieved with the launch of Spatial AR.

Thanks to this new technology, any company can now scan a space by recording a simple video, view a virtual mesh of that space in Onirix’s online studio, easily build augmented reality experiences on that mesh and, finally, visualize the projects from any mobile device, iOS or Android. These projects can already include an AI layer that extends their possibilities.


This type of experience has applications beyond the industrial sector. Here you can see it applied to retail, with an example of real-time wine recommendation using spatial recognition, artificial intelligence, and augmented reality:

At Onirix, we believe this is just the beginning. That’s why we are promoting all kinds of new experiences that link the worlds of AR and AI, not only through spatial understanding but also, for example, through AI-powered avatars.

And over the coming months, we will bring you new initiatives that seek to further foster collaboration between these disciplines under the concept of ONIRIX AIR (Augmented Intelligent Reality): a fusion of AR and AI at the service of businesses.

Are you interested in knowing everything we have in store? Follow us on LinkedIn, where we will soon announce the new features that will be part of ONIRIX AIR.

Growth Manager at Onirix (LinkedIn)