What Does Apple’s New ARKit3 Bring?
(Potential Applications in Technical Training or Service?)
Since the debut of ARKit in 2017, Apple has been one of the main industry leaders providing developers worldwide with augmented reality tools. It reflects Apple's commitment to consistently push the envelope and advance next-generation technology that changes how humans interact with the digital world.
How has ARKit improved from previous versions?
ARKit had a significant impact on the industry as one of the first developer platforms for building AR apps on iOS devices. It made it relatively easy for developers to jump in and create a simple AR experience. The toolkit had features that let creators place augmented digital content into the physical world and experience what it is like to interact with it. That was a big step for many first-time users.
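As a rough illustration of that early workflow, a minimal world-tracking session in Swift might look like the sketch below. The view controller, the sceneView outlet, and the tap handler are hypothetical scaffolding, not taken from any Apple sample.

```swift
import UIKit
import ARKit

// Hypothetical view controller: sceneView is assumed to be an ARSCNView
// wired up in a storyboard or xib.
class SimpleARViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.addGestureRecognizer(
            UITapGestureRecognizer(target: self, action: #selector(handleTap)))
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Start a world-tracking session that looks for horizontal surfaces.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        sceneView.session.run(configuration)
    }

    // Drop an anchor half a meter in front of the camera on tap; an
    // ARSCNViewDelegate would then attach visible geometry to that anchor.
    @objc private func handleTap() {
        guard let camera = sceneView.session.currentFrame?.camera else { return }
        var translation = matrix_identity_float4x4
        translation.columns.3.z = -0.5
        let anchor = ARAnchor(transform: simd_mul(camera.transform, translation))
        sceneView.session.add(anchor: anchor)
    }
}
```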
Then in 2018, Apple released the second version, ARKit 2. It added features such as persistent tracking, image/object recognition, and shared experiences. Put simply, ARKit 2 enhanced the AR experience and made its performance more consistent and shareable. Apple has been experimenting with various use cases in different industries. Whether for consumer or enterprise use, ARKit 2 aimed to fill the gaps, yielding a better user experience and wider applications.
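The persistence and sharing that ARKit 2 introduced are built on ARWorldMap. A hedged sketch of that save-and-restore workflow follows; the file URL handling is illustrative only.

```swift
import ARKit

// Capture the current world map so the AR experience can be persisted,
// or sent to another device for a shared session (ARKit 2, iOS 12+).
func saveWorldMap(from session: ARSession, to url: URL) {
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else {
            print("World map unavailable: \(error?.localizedDescription ?? "unknown error")")
            return
        }
        do {
            let data = try NSKeyedArchiver.archivedData(withRootObject: map,
                                                        requiringSecureCoding: true)
            try data.write(to: url)
        } catch {
            print("Failed to save world map: \(error)")
        }
    }
}

// Relocalize against a previously saved map so anchors reappear in place.
func restoreSession(on session: ARSession, from url: URL) throws {
    let data = try Data(contentsOf: url)
    guard let map = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                           from: data) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```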
In June 2019, Apple unveiled ARKit 3. Developers can now build a collaborative AR scene in which multiple people are present next to digital content augmented into a physical environment. With better tracking of the human body and its motion, augmented digital objects appear much more natural and realistic as they coexist with people in the same camera view. The new technologies have obvious benefits in consumer applications such as shopping, gaming, and other entertainment experiences. It is a step forward in enhancing the immersive effect of AR.
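A rough sketch of how a collaborative session might be set up is shown below. The broadcastToPeers helper is a placeholder for whatever transport an app actually uses (MultipeerConnectivity, for example); it is not part of ARKit.

```swift
import ARKit

// Minimal coordinator for an ARKit 3 collaborative session: each device
// shares its mapping data so several users see the same anchors in one room.
class CollaborationCoordinator: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        session.delegate = self
        let configuration = ARWorldTrackingConfiguration()
        configuration.isCollaborationEnabled = true  // requires iOS 13 / ARKit 3
        session.run(configuration)
    }

    // ARKit periodically hands back collaboration data to forward to peers.
    func session(_ session: ARSession,
                 didOutputCollaborationData data: ARSession.CollaborationData) {
        if let encoded = try? NSKeyedArchiver.archivedData(withRootObject: data,
                                                           requiringSecureCoding: true) {
            broadcastToPeers(encoded)  // hypothetical helper for your network layer
        }
    }

    // Data received from a peer is fed back into the local session.
    func receive(_ encoded: Data) {
        if let data = try? NSKeyedUnarchiver.unarchivedObject(
            ofClass: ARSession.CollaborationData.self, from: encoded) {
            session.update(with: data)
        }
    }

    private func broadcastToPeers(_ data: Data) { /* transport-specific */ }
}
```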
What about enterprise applications of these new features?
The new features of ARKit 3 are most useful for training applications in situations where multiple users share the AR scene. Users can watch and learn from others who are present in the same scene with the augmented objects placed there. In the example Apple provided, two people stand around a table with a virtual machine placed between them. The camera view shows that a person's body naturally occludes the machine when the camera is behind that person. Detection of multiple faces and of body motion will also be helpful when several people are in the scene.
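For illustration, the occlusion and motion behaviors described above are switched on through configuration options; a minimal sketch follows, with support checks reflecting ARKit 3's hardware requirements.

```swift
import ARKit

// Turn on ARKit 3 people occlusion so a trainee standing between the camera
// and a virtual machine naturally blocks it, as in Apple's table demo.
func enablePeopleOcclusion(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    // Depth-aware person segmentation needs recent hardware, so opt in
    // only when the device supports it.
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    }
    session.run(configuration)
}

// For full body motion, a separate body-tracking configuration surfaces an
// ARBodyAnchor whose skeleton joints could drive training analytics.
func startBodyTracking(on session: ARSession) {
    guard ARBodyTrackingConfiguration.isSupported else { return }
    session.run(ARBodyTrackingConfiguration())
}
```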
For example, if you were to run a training program showing 3D AR models of a machine augmented in a manufacturing facility, or of a medical device augmented in a hospital, having people walk in and out of the camera view would not interfere with the immersive, realistic presence of the 3D AR models.
Some additional improvements in ARKit 3 include detection of up to 100 images at a time, more robust 3D object detection, and faster plane detection in the environment using machine learning.
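Image and plane detection might be configured along these lines; the "TrainingMarkers" resource group name is a placeholder for whatever AR resource group an app defines in its asset catalog.

```swift
import ARKit

// Configure image and plane detection. ARKit 3 raises the number of
// reference images that can be detected in a session; plane detection
// itself needs no extra setup beyond the option below.
func startImageDetection(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal, .vertical]

    if let markers = ARReferenceImage.referenceImages(inGroupNamed: "TrainingMarkers",
                                                      bundle: nil) {
        configuration.detectionImages = markers
        // Only a handful of images are tracked live at once; the rest are detected.
        configuration.maximumNumberOfTrackedImages = 4
    }
    session.run(configuration)
}
```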
It is exciting to see Apple continue to support the industry with its technology platform. Some of these supporting technologies will certainly be useful for developing applications for technical training and service of physical products.
To discuss how AR technology can help your organization’s technical training and service applications, or to experience a real demonstration yourself, please contact info@distat.co.