Meta's Prototype 'Codec Avatars' Now Support Changeable Hairstyles

Meta's prototype photorealistic 'Codec Avatars' now support changeable hairstyles, separately modeling the head and hair.

For around a decade now, Meta has been researching and developing the technology it calls Codec Avatars: photorealistic digital representations of humans driven in real time by the face and eye tracking of VR headsets. The highest-quality prototype achieves the remarkable feat of crossing the uncanny valley, in our experience.

The goal of Codec Avatars is to deliver social presence, the subconscious feeling that you're truly with another person despite them not physically being there. No shipping technology today can do this. Video calls don't even come close.

In this interview, it's likely the avatars were being decoded and rendered by a high-end PC, after both participants underwent a long scan in a multi-camera array.

To eventually ship Codec Avatars, Meta has been working on increasing the system's realism and adaptability, reducing the real-time rendering requirements, and making it possible to generate avatars from a smartphone scan.

Generating a Codec Avatar originally required a massive custom capture array of more than 100 cameras and hundreds of lights, but last year Meta moved to using this array only to train a 'universal model'. New Codec Avatars can then be generated from a selfie video in which you rotate your head. However, for full-quality Codec Avatars, this capture takes around an hour to process on a high-end server GPU.

A Universal Relightable Gaussian Codec Avatar generated by a phone scan, rendered in real time on PC VR last year.
While Meta had shown off lower-quality Codec Avatars generated by a smartphone scan as early as 2022, last year's work brought this advantage to the higher-quality Codec Avatars by moving to a Gaussian splatting approach.

In recent years, Gaussian splatting has done for realistic volumetric rendering what large language models (LLMs) did for chatbots, propelling the technology from an expensive niche to shipping products like Varjo Teleport and Niantic's Scaniverse. These newer Gaussian Codec Avatars are also inherently relightable, making them highly suitable for practical use in VR and mixed reality.

Apple is also using Gaussian splatting for its new Personas in visionOS 26, which aren't quite at the same quality as Meta's research but are actually available in a shipping product.

Meta's latest research, presented in a paper called "HairCUP: Hair Compositional Universal Prior for 3D Gaussian Avatars", builds on last year's Gaussian Codec Avatars work by adding a compositional split between the head and hair. In a shipping system, this would allow the user to swap out their hairstyle from a library of options, or from their own prior scans, without needing to perform a new face scan. By its nature, the new approach also improves the seam between the hair and face, such as the fringe, and could better support hats in the future.

Meta is getting closer than ever to shipping Codec Avatars as an actual feature of its Horizon OS headsets. However, there are still multiple roadblocks.

For starters, neither Quest 3 nor Quest 3S has eye tracking or face tracking, and there's no indication that Meta plans to imminently launch another headset with these capabilities. Quest Pro had both, but it was discontinued at the start of this year.

The other issue is the rendering requirements. While Meta showed off lower-quality Codec Avatars rendered on a Quest 2 years ago, the higher-quality versions have to date been rendered by PC graphics cards.
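To make the compositional idea concrete, here is a toy sketch, not Meta's implementation: if an avatar is represented as a set of 3D Gaussians, modeling head and hair as separate subsets means swapping hairstyles is just exchanging one subset while the head stays fixed. The `Gaussian` structure and `compose_avatar` helper below are hypothetical simplifications for illustration.

```python
from dataclasses import dataclass

@dataclass
class Gaussian:
    # Minimal stand-in for one 3D Gaussian splat: center, scale, opacity.
    # (Real systems also store rotation and view-dependent color.)
    position: tuple
    scale: float
    opacity: float

def compose_avatar(head_gaussians, hair_gaussians):
    """Compose a full avatar from independent head and hair Gaussian sets.

    Because the two sets are modeled separately, replacing the hair
    argument swaps the hairstyle without touching (or rescanning) the head.
    """
    return list(head_gaussians) + list(hair_gaussians)

# Toy usage: one "head" splat plus two interchangeable hairstyles.
head = [Gaussian((0.0, 0.0, 0.0), 1.0, 1.0)]
short_hair = [Gaussian((0.0, 0.1, 0.0), 0.2, 0.8)]
long_hair = [
    Gaussian((0.0, 0.1, 0.0), 0.5, 0.8),
    Gaussian((0.0, -0.1, 0.0), 0.5, 0.8),
]

avatar_a = compose_avatar(head, short_hair)  # original style
avatar_b = compose_avatar(head, long_hair)   # swapped style, same head
```

The point of the split is visible in the last two lines: both avatars share the identical head subset, so only the hair data changes between them.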
Apple Vision Pro proves that it's possible to render Gaussian avatars on-device, but Quest 3 is slightly less powerful, and Meta lacks Apple's full end-to-end control of the hardware and software stack.

One possibility is that Meta launches a rudimentary flatscreen version of Codec Avatars first, to let you join WhatsApp and Messenger video calls with a more realistic form than your Meta Avatar.

Meta Connect 2025 will take place from September 17, and the company might share more about its progress on Codec Avatars then.