The long-awaited mixed reality headset poised to introduce ‘spatial computing’ will soon be here. The Apple Vision Pro launches on February 2, 2024, and will feature seamless switching between augmented and virtual reality.
Apple Vision Pro Introduces Spatial Computing
The Apple Vision Pro is poised to be a game-changing innovation as the first device to bring spatial computing to consumers. This mixed-reality headset can switch seamlessly between augmented reality and virtual reality using a dial on the headset.
“The era of spatial computing has arrived,” said Apple CEO Tim Cook. “Apple Vision Pro is the most advanced consumer electronics device ever created. Its revolutionary and magical user interface will redefine how we connect, create, and explore.”
A quick primer on terms: Augmented reality (AR) refers to the overlaying of digital content onto the real world. Virtual reality (VR) creates a fully immersive digital environment that replaces the user’s physical reality. Mixed reality (MR) is a blend of AR and VR.
While Apple is not the first company in this space, it is poised to be the first to change how everyone interacts with computers. As it did with the iMac, iPod, and iPhone, the company is once again leading the way to a new frontier.
What Is Spatial Computing?
Spatial computing is a new technology. The simplest explanation is that spatial computing enables computers to blend with the physical world in a natural manner.
From the user’s perspective, spatial computing devices display the real world and embed virtual objects into the scene in a way that appears three-dimensional.
The term “spatial computing” first arose in a 2003 paper by researcher Simon Greenwold. Spatial computing starts with the idea that the human brain thrives in a three-dimensional environment, something the 2D screens of desktop computers and smartphones lack. Spatial technology aims to bring digital content into a landscape that aligns with human cognitive abilities.
Spatial computing combines the physical and digital worlds. A mixture of computer vision, artificial intelligence (AI), and extended reality are synthesized together to combine virtual experiences with the physical world.
What makes spatial computing unique is how it allows users to interact with computers in a more immersive and seamless way. Further, interactions and actions are optimized between people, machines, objects, and their environments.
Under the Hood: How Spatial Computing Works
Devices such as the Apple Vision Pro rely on several technologies to combine the digital and physical worlds. Cameras, LiDAR, and other sensors capture visual information about the environment, including the position, orientation, and movement of objects. Computer vision processes this data, and sensor fusion then combines the readings from multiple sensors into a single accurate, comprehensive view of the environment. Next, spatial mapping builds a 3D model of the environment, allowing precise placement and manipulation of digital content. Eye tracking monitors the user’s gaze, while motion sensors and handheld controllers let the user manipulate virtual objects. Speech recognition adds another layer of control, which comes in handy when controllers or hand gestures aren’t an option.
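To make the sensor fusion step above concrete, here is a minimal, purely illustrative sketch (not Apple’s implementation; all function names and numbers are invented for this example) of one classic fusion technique: combining noisy estimates of the same quantity from different sensors, weighting each estimate inversely to its uncertainty.

```python
def fuse_estimates(estimates):
    """Inverse-variance weighted fusion of (value, variance) pairs.

    Each sensor reports a measurement plus how noisy it is; the fused
    result leans toward the more precise sensor and is more certain
    than any single input.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused = sum(w * val for w, (val, _) in zip(weights, estimates)) / total
    fused_var = 1.0 / total  # fused uncertainty shrinks as sensors agree
    return fused, fused_var

# Hypothetical depth readings (in meters) for the same point in a room:
camera_reading = (2.10, 0.04)  # camera estimate: less precise
lidar_reading = (2.00, 0.01)   # LiDAR estimate: more precise

depth, variance = fuse_estimates([camera_reading, lidar_reading])
# The fused depth lands closer to the LiDAR value, with lower variance
# than either sensor alone.
```

Real headsets fuse many more signals (inertial data, multiple cameras) with far more sophisticated filters, but the principle is the same: trust each sensor in proportion to its reliability.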
One of the great strengths of spatial computing is its ability to understand the depth of the environment. Users can place and manipulate virtual objects in a way that corresponds to the physical world. This allows for both realistic and natural interactions with virtual objects.
For example, a virtual object could be placed on a desk or table. The user could move that object around any way they wish, even hide it behind other objects. Virtual objects can be manipulated in ways that mirror real-world actions.
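The hide-behind-other-objects behavior described above comes down to a per-pixel depth comparison. The following is a simplified sketch (the depth values and names are hypothetical, not from any real renderer): a virtual object is hidden wherever a real surface in the sensor-captured depth map sits closer to the viewer.

```python
def is_occluded(virtual_depth, real_depth):
    """A virtual point is hidden when a real surface is closer to the eye."""
    return real_depth < virtual_depth

# Hypothetical per-pixel depths in meters from the environment's depth map:
real_depth_map = [1.5, 0.8, 3.0]  # e.g. desk edge, coffee mug, back wall
virtual_depth = 1.2               # virtual object placed 1.2 m from the viewer

# For each pixel, show the virtual object only if nothing real is in front of it.
visible = [not is_occluded(virtual_depth, d) for d in real_depth_map]
# The mug at 0.8 m hides the object; the desk edge and wall do not.
```

This is why spatial mapping matters: without an accurate 3D model of the room, the renderer has no real depths to compare against, and virtual objects float unconvincingly in front of everything.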
Apple Vision Pro Overcomes the Challenges of Spatial Computing to Deliver a Consumer-Priced Product
Until the introduction of the Apple Vision Pro, bringing spatial computing to consumers meant overcoming some difficult obstacles.
The first hurdle is that spatial computing requires sophisticated, advanced hardware and software systems, and developing such equipment demands specialized expertise across a variety of technologies. The second is cost: systems that combine components like AI, AR, VR, MR, and IoT have historically been very expensive for consumers.
But with the Apple Vision Pro, Apple has overcome these challenges to introduce spatial computing technology to consumers at a price point that isn’t out of reach. In fact, Apple was able to achieve a price point that’s at the same level as some of its high-end Apple Pro products.
Apple Vision Pro: Preorders, Price, and Sales Locations
The Apple Vision Pro will be available for preorder beginning on Friday, January 19 at 5 am Pacific / 8 am Eastern / 1 pm UK.
The official launch date of the Apple Vision Pro is Friday, February 2, 2024. The headset will go on sale at all U.S. Apple Store locations and the online U.S. Apple Store, according to Apple.
Price and Included Items
The Apple Vision Pro will retail at $3,499 with 256GB of storage. Apple Vision Pro comes with a Solo Knit Band and Dual Loop Band, giving users two options for the fit that works best for them. Apple Vision Pro also includes a Light Seal, two Light Seal Cushions, an Apple Vision Pro Cover for the front of the device, a Polishing Cloth, a Battery, a USB-C Charge Cable, and a USB-C Power Adapter.
Available Accessories:
- ZEISS Optical Inserts: Readers will be available for $99 (U.S.).
- ZEISS Optical Inserts: Prescription will be available for $149 (U.S.).