In the last two decades the internet has profoundly transformed how information is collected and accessed. Entire generations have been reaping the benefits of instant access to a vast knowledge base at any given moment.
However, the way information is stored and delivered has remained largely the same all this time – pages on flat screens. We are so used to web-based interactions like scrolling and swiping from one page to another that most of the time we don't even think about them.
But when someone is trying to understand something with spatial depth and structural complexity, the limitations of this flat 2-D delivery become apparent. Humans are not always good at translating 2-D information into a 3-D structure in their heads. It demands a great deal of imagination, logical thinking, and focus. Mentally converting flat 2-D information for use in a 3-D world is hard.
Augmented Reality closes that mental gap.
By directly overlaying information onto physical 3-D objects, AR removes the need for that translation. We grasp the information much faster, and learning becomes intuitive.
If we look a little closer at how the human brain processes information, we will understand why it is much easier to learn something with visualization and AR.
Our brain receives information through five senses: sight, sound, smell, taste, and touch.
But we receive and process information at different speeds through these senses. Sight is the dominant one – the fastest and most heavily used. It is estimated that 80% to 90% of the information humans take in is visual. The remainder comes from sound, smell, taste, and touch.
However, just seeing something is not enough.
There is a mental gap, often called cognitive distance, between where information is presented and where it is applied. Our brain sometimes has to work very hard to travel this distance to 'get' certain things. When we connect the name of an animal to a picture of that animal, we learn what that animal is called. When we relate a recipe we saw on a cooking show to the real ingredients and tools in our kitchen, we learn how to make the same dish.
The challenge is that our brain has to retain the information while traversing this cognitive distance. If we forget the information, partially or completely, before reaching the point where it is applied, we will not learn effectively.
To truly learn, the information needs to be delivered directly to the object we want to learn about, so that the mental distance to 'walk' from the information to the object is as short as possible. Under such conditions, our brain makes the least effort to relate the information, because everything is already in context.
This is where Augmented Reality can provide the most value.
Because AR delivers information right where the subjects are, the information is received truly in context.
This delivery method significantly reduces the cognitive distance and makes it much easier for the brain to connect the dots because it does not have to work hard to memorize and translate information from place to place.
For example, directions in a navigation app are drawn as lines on a digital map on your smart device. While driving, your eyes must switch between the road and the screen, and your brain must translate what is shown on the screen into what is in front of you. If the directions are instead projected onto the road through the car's front windshield, your eyes never need to leave the road and your brain no longer has to do the translation.
That’s how AR can transform visualized learning into a new model of delivering information in context and on-site.
To get a more in-depth understanding of the examples mentioned in this post, or to experience a real demo yourself, please contact info@distat.co.