Lighting and Shadows in Augmented Reality

Jason Marziani
Jun 8, 2018

Won’t lie, it feels strange picking a topic as specific as lighting and shadows for my first piece of writing on augmented reality. It feels like I should be writing about all the future implications of the tech, about persistence and how we’re all going to share the same experience, or about how digital goods are going to shake up retail harder than Amazon. Those are fine topics for another day, but I’ve been stoked about lighting in augmented reality, so that’s where I’m going to start.

But first, dinosaurs!

These scenes were shot on my iPhone X and feature ARKit’s Light Estimation applied to the same 3D model in Unity3D, without any filters or post-production.

Look around the space you’re in. Is it dark or bright? Is it lit by lamps, by overhead lights, or by windows and light from the sun? What color would you use to describe the space? Is it warm and soft? Cold and blue?

Pick an object on your desk or on the floor. Where do its shadows lie? Are they directly underneath, or do they fall slanted to the side?

We don’t often think about this stuff. Our eyes start to see within a few weeks of birth, and for those of you reading with kids, you know how intense color, shape, light, and shadow are for their little brains. But 6 months in and it’s all old news. Babies can find things, grip up on ’em, put ’em in their mouths. It takes about 6 months to get it.

And how far removed are you from that 6 months? A lot. It’s cool, I’m old, too.

Light is how our eyes interpret the objects within our space, and shadows give clues to our eyes about where those objects are positioned in space. We don’t have to think about it; it’s just how Earth works. Without light, or more specifically the reflection of light, we cannot see. Without shadows, we struggle with depth perception.

It’s the middle of the night. You wake up, parched. You open your eyes and see a glass of water you thoughtfully placed on your nightstand. You reach for it. And what happens?

You knock the glass over. Water spills everywhere.

There are a couple of factors working against you right now. You’re still half asleep and barely paying attention. Your arm hasn’t moved in a while and it lags. But a big piece of this is that the room is dark, there are no shadows or shading on the glass, and your brain cannot accurately determine the distance to your drink. You obviously had enough clues to reach for the glass, you’ve got the gist of where it is, but there isn’t enough information for your brain to process to keep you from knocking it over.

Lighting and shading are difficult phenomena to study in the real world. We can’t remove shadows without removing light, and once we remove light we can’t observe an object at all. The opposite is true when working in 3D programs, which is where we create digital assets for augmented reality. Digital 3D tools start with no lighting, and it’s up to the artist to place lights in the scene to convey texture, depth, and placement.

Lighting and shadows sell realism and believability. As artists working in augmented reality, we’re tasked with replicating real-world lighting conditions in a 3D digital space so that our creations render blended in with their physical surroundings. By blending digital assets into the ambiance of the real world, digital goods feel natural and can coexist alongside physical goods within the same space. Over time, people might accept augmented reality and digital goods much like you accept a clock hanging on a wall or a plant sitting on your table. If it feels natural to your environment, why not? That’s the promise of augmented reality: a future where the digital and the real world blend in a way where the division between them becomes blasé.

Much of that will have to do with how accurately we can approximate lighting conditions.

The tech ATM

ARKit and ARCore expose two values via Light Estimation for content creators to work with: a light intensity value and an average color tone value in units of Kelvin. The best way I’ve found to approach lighting with these values in Unity3D is to set a single real-time directional light’s intensity to the Light Estimation intensity value. After I’ve set up the project’s RenderSettings to use a gradient as the Environment Lighting, I convert Light Estimation’s color tone from Kelvin to RGB and apply the RGB color value at different percentages to the top, equator, and bottom of the gradient. This video shows the difference with and without Light Estimation applied to the render using this technique.
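Here’s a minimal sketch of that setup in Unity C#. The class name, the per-band weights, and the Apply entry point are my own placeholders, not anything prescribed by ARKit or ARCore; I’m assuming the averaged intensity and Kelvin values arrive each frame from whichever AR plugin you’re using, and that your Unity version includes Mathf.CorrelatedColorTemperatureToRGB (otherwise, swap in your own Kelvin-to-RGB approximation).

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch only: feed averaged Light Estimation values into one real-time
// directional light and Unity's gradient (Trilight) Environment Lighting.
public class LightEstimationApplier : MonoBehaviour
{
    public Light directionalLight;              // the single real-time directional light

    // How much of the estimated color each band of the gradient receives.
    // These percentages are placeholders; tune them to taste.
    [Range(0f, 1f)] public float skyWeight = 1.0f;
    [Range(0f, 1f)] public float equatorWeight = 0.7f;
    [Range(0f, 1f)] public float groundWeight = 0.4f;

    void Awake()
    {
        // Equivalent to setting Environment Lighting to "Gradient" in the Lighting window.
        RenderSettings.ambientMode = AmbientMode.Trilight;
    }

    // Call once per frame with the latest Light Estimation sample:
    // `intensity` already normalized to Unity's light intensity range,
    // `kelvin` the averaged color temperature from ARKit/ARCore.
    public void Apply(float intensity, float kelvin)
    {
        directionalLight.intensity = intensity;

        // Kelvin -> RGB, then apply that color at different percentages
        // to the top, equator, and bottom of the gradient.
        Color estimated = Mathf.CorrelatedColorTemperatureToRGB(kelvin);
        RenderSettings.ambientSkyColor = estimated * skyWeight;
        RenderSettings.ambientEquatorColor = estimated * equatorWeight;
        RenderSettings.ambientGroundColor = estimated * groundWeight;
    }
}
```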

SIDE NOTE 1: These examples use data from ARKit 1.5. Recently released in beta, ARKit 2.0 claims to improve lighting and reflections using image mapping. Stay tuned.

SIDE NOTE 2: The example graphic that follows the light estimation description in the Fundamental Concepts section of Google’s ARCore Overview portrays an effect that is not currently possible to achieve with the data ARCore provides. Whoops! The graphic shows an example phone screen of two cats, with intensity and color variation applied individually to each cat based on its position on screen and the light and shadows in the environment. Both ARKit and ARCore provide a single averaged light estimation for each frame of video received from the camera. When applied, those values affect both cats evenly; there is no way to shade each cat individually from a single averaged value.

Forward Progress

There are a couple of things we can do right now with the current light estimation values, plus some outside sources, to help sell realism in augmented reality. Unity’s shadows tend to look too strong under low light, so apply the light estimation intensity to the opacity of shadows to help blend them into darker environments. Morning and evening light casts long shadows, while noon light tends to come from overhead and cast short ones, so update Unity’s light rotation based on the time of day to help match shadow skews to real-world conditions.
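A rough sketch of both tweaks, with the class name and the crude time-of-day curve as my own assumptions rather than anything built into ARKit, ARCore, or Unity:

```csharp
using System;
using UnityEngine;

// Sketch only: soften shadows in dim scenes and skew shadow direction
// based on the local time of day.
public class ShadowTweaks : MonoBehaviour
{
    public Light directionalLight;

    // Tie shadow opacity to the estimated light intensity so shadows
    // fade in dark environments instead of staying harsh and black.
    public void ApplyShadowOpacity(float estimatedIntensity)
    {
        directionalLight.shadowStrength = Mathf.Clamp01(estimatedIntensity);
    }

    // Rotate the light with the clock: low elevation (long shadows) in the
    // morning and evening, high elevation (short shadows) around noon.
    // A deliberately crude stand-in for real sun-position math.
    public void ApplyTimeOfDayRotation()
    {
        float hours = (float)DateTime.Now.TimeOfDay.TotalHours;   // 0-24
        // 6:00 and 18:00 map near the horizon, 12:00 maps overhead;
        // keep a minimum elevation so the light never goes fully horizontal.
        float elevation = Mathf.Clamp((1f - Mathf.Abs(hours - 12f) / 6f) * 90f, 5f, 90f);
        directionalLight.transform.rotation = Quaternion.Euler(elevation, 0f, 0f);
    }
}
```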

As ARKit and ARCore mature, these services should provide more details about environment lighting conditions. I expect to soon see estimated positions and rotations for light sources, along with light type estimation for those sources (spot, point, directional). This goes along with the idea of Permanence, a concept so dense I’ve got no shot at covering it here, but the short of it is adding history to AR tracking data, and including light source estimation within that history.

Better shadow rendering on mobile is a must. Unity in particular has some issues with shadow artifacts on mobile and turns off shadows beyond a defined distance from the camera as a performance optimization. Augmented reality requires a consistently high level of detail and shadow rendering at distances beyond what these devices typically handle in games. To meet the high demands of AR on mobile, services that offload rendering to the cloud may become more prominent in this space, especially as faster data speeds like 5G and omnipresent wifi networking become available.
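If you need shadows farther from the camera today, you can push Unity’s cutoff out yourself and pay the cost. The value below is a placeholder, not a recommendation:

```csharp
using UnityEngine;

// Sketch only: extend the distance at which Unity stops rendering
// real-time shadows. Larger values cost shadow-map resolution and
// fill rate on mobile, so profile on target devices.
public class ARShadowDistance : MonoBehaviour
{
    public float shadowDistanceMeters = 30f;   // placeholder; tune per device

    void Start()
    {
        QualitySettings.shadowDistance = shadowDistanceMeters;
    }
}
```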

Who’s moving in this space?

Obvious players: Apple and Google.
RNDR, Umbra 3D. These are cloud-based rendering services focused on AR.
6D.AI. They claim to have highly accurate object detection and density recognition, which could play a role in detecting shadows; and if you can detect shadows and know an object’s size and shape, you might accurately estimate a light source. Maybe. Probably. I just made that up, so we’ll see.

Thanks for reading! If you’ve got ideas to contribute to this conversation, please comment. Anyone working in wearables, I’d love to hear your approach and what’s available. If you like what you read and want to see more, clap me some love! Follow here, on Twitter @lilwins, or connect with me for updates on the next read.


Jason Marziani

Habitual shaker upper. Sr. Engineering Manager of 3D Experiences @ Matterport