The beta phase for vision-based AR has started
Vision-based AR is the most advanced type of Augmented Reality (AR) available in Onirix, and it enables advanced scenarios such as indoor navigation. It relies on scene recognition rather than a marker image: to understand the environment, it combines a variety of technologies, including SLAM (Simultaneous Localization and Mapping).
To design a vision-based AR experience, a 3D scan of the location is created first. Onirix Studio then pre-processes the scan, after which AR content can be positioned inside the scene with very high precision.
Until now, the scan required to create this type of experience could only be performed with Google Tango devices, which include a depth sensor, using a dedicated constructor app. Over the last months, we have ported this technology to ARKit/ARCore, making it available on a much wider range of devices. In addition, we have integrated both access to vision-based experiences and the functionality of the constructor app into the standard Onirix app. As a result, a single application is all that is needed to access image-based, surface-based, and vision-based experiences.