AR (Augmented Reality)
Augmented Reality is the integration of digital information with the user’s environment in real time. It uses the existing environment and overlays new information on top of it.
It is an enhanced version of reality in which live direct or indirect views of physical, real-world environments are augmented with superimposed, computer-generated imagery, enhancing one’s current perception of reality.
The word augmented derives from augment, which means to add or enhance something. In the case of Augmented Reality, graphics, sounds, and touch feedback are added to our natural world to create an enhanced user experience.
Types of Augmented Reality
1. Marker Based Augmented Reality
Marker-based augmented reality (also called image recognition) uses a camera and some type of visual marker, such as a QR/2D code, to produce a result only when the marker is sensed by a reader. Marker-based applications use the device’s camera to distinguish a marker from any other real-world object. Distinct but simple patterns (such as a QR code) are used as markers because they are easily recognized and do not require much processing power to read. The marker’s position and orientation are also calculated, and some type of content and/or information is then overlaid on the marker.
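The detect-then-pose idea above can be sketched in a toy form: scan a camera frame (here just a 2D grid of 0/1 pixels) for a known binary marker pattern and report its position and orientation. The 3x3 pattern and function names are illustrative assumptions, not a real AR SDK; real systems use computer-vision libraries and full 6-DoF pose estimation.

```python
# Toy marker detection: find a known binary pattern in a frame and
# report where it is and how it is rotated (0/90/180/270 degrees).

MARKER = [            # illustrative, rotationally asymmetric pattern
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
]

def rotate(pattern):
    """Rotate a square pattern 90 degrees clockwise."""
    n = len(pattern)
    return [[pattern[n - 1 - c][r] for c in range(n)] for r in range(n)]

def find_marker(frame):
    """Scan the frame for MARKER in any of its 4 rotations.
    Returns (row, col, degrees) of the top-left corner, or None."""
    n = len(MARKER)
    candidates, pat = [], MARKER
    for deg in (0, 90, 180, 270):
        candidates.append((deg, pat))
        pat = rotate(pat)
    for r in range(len(frame) - n + 1):
        for c in range(len(frame[0]) - n + 1):
            window = [row[c:c + n] for row in frame[r:r + n]]
            for deg, p in candidates:
                if window == p:
                    return (r, c, deg)
    return None
```

Because the pattern is asymmetric, each rotation is distinct, which is exactly why real markers (like QR codes) are designed to be unambiguous.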
2. Markerless Augmented Reality
As one of the most widely implemented applications of augmented reality, markerless (also called location-based, position-based, or GPS) augmented reality uses GPS, a digital compass, a velocity meter, or an accelerometer embedded in the device to provide data based on your location. A strong force behind markerless augmented reality technology is the wide availability of smartphones and the location-detection features they provide. It is most commonly used for mapping directions, finding nearby businesses, and other location-centric mobile applications.
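The core calculation behind such location-centric features can be sketched with standard great-circle math: given the device’s GPS fix, compute the distance and compass bearing to a point of interest so the app can decide where to draw its label. The formulas are the usual haversine and initial-bearing expressions; the coordinates in the test are arbitrary.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, meters

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in meters."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from point 1 to point 2 (0 = north,
    90 = east), in degrees 0-360."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    x = math.sin(dl) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(x, y)) + 360) % 360
```

Comparing the bearing against the digital compass heading tells the app whether the point of interest is currently in the camera’s field of view.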
3. Projection Based Augmented Reality
Projection based augmented reality works by projecting artificial light onto real-world surfaces. Projection based applications allow for human interaction by sending light onto a real-world surface and then sensing the human interaction (i.e., touch) with that projected light. Detecting the user’s interaction is done by differentiating between an expected (or known) projection and the altered projection caused by the user’s interaction. Another interesting application of projection based augmented reality uses laser plasma technology to project a three-dimensional (3D) interactive hologram into mid-air.
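The expected-versus-altered comparison described above can be sketched as a simple frame diff: cells where the observed camera image differs from the projection the system expects are reported as touches. Grids of 0/1 “brightness” values stand in for real camera frames; this is an assumption-laden illustration, not a production touch-sensing pipeline.

```python
def detect_touches(expected, observed, threshold=0):
    """Return (row, col) cells where the observed projection differs
    from the expected one, e.g. where a finger occludes the light."""
    touches = []
    for r, (erow, orow) in enumerate(zip(expected, observed)):
        for c, (e, o) in enumerate(zip(erow, orow)):
            if abs(e - o) > threshold:
                touches.append((r, c))
    return touches
```

A real system would add a noise threshold and camera-to-projector calibration, but the principle is the same differencing step.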
4. Superimposition Based Augmented Reality
Superimposition based augmented reality either partially or fully replaces the original view of an object with a newly augmented view of that same object. Object recognition plays a vital role here, because the application cannot replace the original view with an augmented one if it cannot determine what the object is. A strong consumer-facing example can be found in the IKEA augmented reality furniture catalogue: by downloading an app and scanning selected pages in the printed or digital catalogue, users can place virtual IKEA furniture in their own homes with the help of augmented reality.
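The recognize-before-replace dependency can be made concrete with a trivial sketch: the app only swaps in an augmented view when recognition succeeds, and otherwise falls back to the unmodified camera view. The “feature signature” strings and model names are made up for illustration; real systems use computer-vision models for the recognition step.

```python
KNOWN_OBJECTS = {                  # signature -> object name (illustrative)
    "sig-catalogue-p12": "armchair",
    "sig-catalogue-p13": "bookshelf",
}

AUGMENTED_VIEWS = {                # object name -> overlay to render
    "armchair": "armchair_3d_model",
    "bookshelf": "bookshelf_3d_model",
}

def augmented_view(signature):
    """Replace the original view only if the object is recognized;
    otherwise keep the unmodified camera view."""
    name = KNOWN_OBJECTS.get(signature)
    if name is None:
        return "original_view"     # recognition failed: no replacement
    return AUGMENTED_VIEWS[name]
```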
Key Components of AR Devices
1. Sensors and Cameras
Sensors are usually on the outside of the augmented reality device; they gather a user’s real-world interactions and communicate them for processing and interpretation. Cameras, also located on the outside of the device, visually scan the surroundings to collect data about the area. The device takes this information, which often determines where surrounding physical objects are located, and formulates a digital model to determine appropriate output. In the case of the Microsoft HoloLens, specific cameras perform specific duties, such as depth sensing. The depth-sensing cameras work in tandem with two “environment understanding cameras” on each side of the device. Another common type of camera is a standard several-megapixel camera (similar to the ones used in smartphones) that records pictures, videos, and sometimes information to assist with augmentation.
2. Projection
While “Projection Based Augmented Reality” is a category in itself, here we are specifically referring to a miniature projector, often found in a forward- and outward-facing position on wearable augmented reality headsets. The projector can essentially turn any surface into an interactive environment. As mentioned above, the information taken in by the cameras examining the surrounding world is processed and then projected onto a surface in front of the user, which could be a wrist, a wall, or even another person. The use of projection in augmented reality devices means that screen real estate will eventually become a less important component. In the future, you may not need an iPad to play an online game of chess because you will be able to play it on the tabletop in front of you.
3. Processing
Augmented reality devices are basically mini-supercomputers packed into tiny wearable devices. They require significant processing power and utilize many of the same components our smartphones do, including a CPU, a GPU, flash memory, RAM, a Bluetooth/Wi-Fi microchip, a global positioning system (GPS) microchip, and more. Advanced augmented reality devices, such as the Microsoft HoloLens, utilize an accelerometer (to measure the speed at which your head is moving), a gyroscope (to measure the tilt and orientation of your head), and a magnetometer (to function as a compass and determine which direction your head is pointing) to provide a truly immersive experience.
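How those sensor readings become head-tracking numbers can be sketched with back-of-the-envelope math: a magnetometer’s horizontal field components give a compass heading, and an accelerometer’s gravity vector gives a pitch angle. The axis conventions here are an assumption; real devices calibrate each sensor and fuse the readings (e.g. with a complementary or Kalman filter).

```python
import math

def heading_deg(mag_x, mag_y):
    """Compass heading from horizontal magnetometer axes
    (0 = magnetic north, 90 = east), assuming a level device."""
    return (math.degrees(math.atan2(mag_y, mag_x)) + 360) % 360

def pitch_deg(acc_x, acc_y, acc_z):
    """Pitch (forward tilt) from the accelerometer's gravity reading."""
    return math.degrees(math.atan2(-acc_x, math.hypot(acc_y, acc_z)))
```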
4. Reflection
Mirrors are used in augmented reality devices to assist with the way your eye views the virtual image. Some devices have “an array of many small curved mirrors,” while others have a simple double-sided mirror, with one surface reflecting incoming light to a side-mounted camera and the other reflecting light from a side-mounted display to the user’s eye. In the Microsoft HoloLens, the “mirrors” are see-through holographic lenses (Microsoft refers to them as waveguides) that use an optical projection system to beam holograms into your eyes. A so-called light engine emits light toward two separate lenses (one for each eye), each consisting of three layers of glass corresponding to the three primary colors (blue, green, red). The light hits those layers and enters the eye at specific angles, intensities, and colors, producing a final holistic image on the eye’s retina. Regardless of method, all of these reflection paths share the same objective: to align the image with the user’s eye.
How Is Augmented Reality Controlled?
Augmented reality devices are often controlled either by a touch pad or by voice commands. The touch pad is usually somewhere on the device that is easily reachable; it works by sensing the pressure changes that occur when a user taps or swipes a specific spot. Voice commands work much the way they do on our smartphones: a tiny microphone on the device picks up your voice, and a microprocessor interprets the commands. Voice commands, such as those on the Google Glass augmented reality device, are pre-programmed from a list of commands you can use. On Google Glass, nearly all of them start with “OK, Glass,” which alerts your glasses that a command is soon to follow. For example, “OK, Glass, take a picture” sends a command to the device to snap a photo of whatever you’re looking at.
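The wake-phrase-plus-fixed-command pattern described above can be sketched as a small dispatcher: an utterance is ignored unless it starts with the wake phrase, and only commands from a pre-programmed list are accepted. The command list and handler return values are illustrative, not Google Glass’s actual API.

```python
WAKE_PHRASE = "ok, glass"

COMMANDS = {                       # pre-programmed command list (illustrative)
    "take a picture": lambda: "photo captured",
    "record a video": lambda: "recording started",
    "get directions": lambda: "navigation opened",
}

def handle_utterance(utterance):
    """Run a recognized command, or ignore the utterance entirely."""
    text = utterance.strip().lower()
    if not text.startswith(WAKE_PHRASE):
        return None                # no wake phrase: do nothing
    command = text[len(WAKE_PHRASE):].strip(" ,")
    handler = COMMANDS.get(command)
    return handler() if handler else "unknown command"
```

Keeping the grammar this small is what lets a low-power wearable recognize commands reliably without full open-vocabulary speech understanding.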
“Simply put, we believe augmented reality is going to change the way we use technology forever. We’re already seeing things that will transform the way you work, play, connect and learn.” —Tim Cook