Everything you need to know about ARKit 3
This is a much bigger deal than most people realize right now, and it’s going to be very cool.
It’s a new year, at a new WWDC, and new versions of everything from Apple are flying in from every direction. There’s a lot to be excited about. So much, in fact, that Augmented Reality didn’t get a ton of time on stage. That doesn’t mean Apple is dedicating fewer resources to it; in fact, the opposite is true. Apple not only updated the way ARKit functions to make it more useful in more situations, but also added a pair of powerful new creator tools to make sure building for AR isn’t limited to technically-minded developers.
And with more accessible AR comes more unique and compelling AR experiences, which is going to be a great thing for everyone.
Everything new in ARKit 3
The number bump at the end of ARKit brings a lot of small but important changes, most of which improve on existing capabilities. ARKit can now detect many images simultaneously, and has an automated system for estimating the size of objects you want to drop into the real world. Face tracking for things like real-time Animoji in FaceTime can support up to three faces simultaneously, and shared sessions, where multiple people can jump into the same AR game, are now much easier to build. Individually these are small improvements, but together they’ll make existing AR experiences feel smoother and more capable.
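For developers, most of these smaller additions surface as simple flags on a tracking configuration. A minimal sketch of what that setup might look like, assuming a standard world-tracking session and an "AR Resources" asset group (both placeholder assumptions, not from the article):

```swift
import ARKit

// Sketch: opting into several of ARKit 3's smaller additions on a
// world-tracking configuration. The asset group name and the session
// object this would run on are assumed.
let configuration = ARWorldTrackingConfiguration()

// Detect and track several reference images at once.
configuration.detectionImages = ARReferenceImage.referenceImages(
    inGroupNamed: "AR Resources", bundle: nil) ?? []
configuration.maximumNumberOfTrackedImages = 4

// Let ARKit estimate the physical size of detected images automatically.
configuration.automaticImageScaleEstimationEnabled = true

// Share the session with nearby devices for multi-user experiences.
configuration.isCollaborationEnabled = true

// sceneView.session.run(configuration)
```

Each of these is opt-in, so existing ARKit apps keep working unchanged until a developer turns a feature on.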
The two big user-facing features, the ones folks likely won’t notice as new but will appreciate once they see them working, are called People Occlusion and simultaneous camera support. The first is all about making AR seem more real by ensuring an AR object can’t just phase through a person like it’s a ghost. If a person walks in front of where you have positioned an AR creation, that AR creation will be hidden behind the person as though it were really there. If a person walks behind that AR creation, the person will disappear behind it. Just like a real object in the real world, and when it works it will make AR feel that much more real.
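Under the hood, People Occlusion is a single frame-semantics option a developer enables, with a capability check since it depends on newer hardware. A minimal sketch, assuming an existing world-tracking session:

```swift
import ARKit

// Sketch: turning on People Occlusion. The .personSegmentationWithDepth
// frame semantic asks ARKit to hide virtual content behind people based
// on their estimated depth. Hardware support should be checked first.
let configuration = ARWorldTrackingConfiguration()

if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
}

// arView.session.run(configuration)
```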
Simultaneous camera support is kind of fascinating. Apple is making it so developers can use the front and rear cameras on an iPhone or iPad at the same time when building for Augmented Reality. There are a lot of very cool applications for this, especially with the great face tracking already available in ARKit. Imagine a Pokemon Go session where you’re taking a photo with your latest capture, and the app could use your facial expressions to make the creature more responsive. When you laugh, it does a trick, for example. There’s a ton more to be done with this kind of access, but that’s likely one of the ways we’ll first see it used.
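In code, this works by letting a rear-camera world-tracking session also track the user’s face with the front camera. A hedged sketch of that configuration:

```swift
import ARKit

// Sketch: a rear-camera world-tracking session that simultaneously
// tracks the user's face with the front camera, new in ARKit 3.
let configuration = ARWorldTrackingConfiguration()

if ARWorldTrackingConfiguration.supportsUserFaceTracking {
    // Face anchors from the front camera arrive alongside world data,
    // so an app can react to the user's expression mid-session.
    configuration.userFaceTrackingEnabled = true
}

// session.run(configuration)
```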
Finally, Apple is using its body tracking AR tech to allow for motion capture in AR. This could be used in a couple of ways, both of which are super important. Creators could quickly record actual people and use their movements to make their AR creations more lifelike, or AR apps could use body position to make interactive experiences with people. Imagine being able to dance with an AR creation and have the movements of the human in the scene impact the movements of the AR creation.
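For developers, body tracking is its own session configuration that delivers body anchors, whose tracked skeleton exposes joint transforms an app could use to drive a character. A sketch under those assumptions (the class name and delegate wiring here are illustrative, not from the article):

```swift
import ARKit

// Sketch: receiving ARKit 3 body-tracking updates. ARBodyAnchor's
// skeleton exposes per-joint transforms that could drive a rigged
// character or record motion for later playback.
final class BodyTracker: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        guard ARBodyTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARBodyTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            // Model-space transform of the head joint, for example.
            let head = bodyAnchor.skeleton.modelTransform(for: .head)
            _ = head // drive a character rig here
        }
    }
}
```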
All of this adds up to a more immersive, creative environment for developers and users alike. AR apps are going to continue to feel more and more realistic, and encourage your friends to participate with you while you play. It’ll be a little while before we see all of this used together in a single experience, but Apple has without a doubt expanded the possibilities significantly here.
Better creator tools!
Features and documentation are important, and something we’ve seen Apple work hard to improve upon over the last couple of years, but with ARKit 3 there are two new tools to make things easier for AR creators. RealityKit and Reality Composer are new with ARKit 3, and both add some much-needed ease of use to the creative process.
RealityKit is, as the name suggests, all about helping AR creations look and feel more real when dropped into the world. This includes tools for improved light management, audio deployment, and animation of fully-formed AR creations. If you’ve made an AR experience elsewhere, it should be easier to import into ARKit and start experimenting from there. And if you’ve never worked with AR before but you have a lot of experience in 3D modeling and animation, this new toolkit should help reduce some of the learning time in creating AR experiences.
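The flavor of RealityKit is that a few lines anchor a model in the world and play its animations, with lighting and audio handled for you. A minimal sketch; the "toy_robot" USDZ name is a placeholder, and the view would come from the app’s own hierarchy:

```swift
import RealityKit

// Sketch: placing a USDZ model in the world with RealityKit.
let arView = ARView(frame: .zero)

// Anchor content to the first horizontal plane ARKit finds.
let anchor = AnchorEntity(plane: .horizontal)

if let model = try? Entity.load(named: "toy_robot") {
    anchor.addChild(model)
    // Play any animations bundled with the model.
    for animation in model.availableAnimations {
        model.playAnimation(animation.repeat())
    }
}

arView.scene.addAnchor(anchor)
```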
Where RealityKit is all about giving you developer tools more targeted to your needs, Reality Composer lets you jump straight into the creative process. You can take any USDZ file and drop it into the real world, quickly add simple animations and audio to those files, and then record them for whatever you need. You can quickly build your own AR world and export it for others to experience, and it can all be done start to finish on an iPhone or iPad. You can quickly send these creations to a Mac for more precise tuning if you want, but Apple’s goal here is clearly to get people building for AR regardless of skill level. Combined with the existing Swift Playgrounds for ARKit, it’s clear Apple is ensuring the resources exist for anyone to get started with AR and make something fun.
ARKit 3 and all of its trimmings are currently available as part of the iOS 13 beta, which you can sign up for to try for yourself if you don’t want to wait for the public release.