HyperHuman: Learning pose-independent human avatars with enhanced explicit constraints


Abstract

Animating virtual avatars with free-viewpoint control through implicit neural radiance field rendering has attracted considerable attention. Previous studies have attempted to capture the dynamic changes of the human body by improving the representation of the neural radiance field. A prevalent approach employs a pose-dependent representation with explicit motion-space constraints to animate both rigid and non-rigid, lifelike human motion. However, pose-driven deformation struggles to model explicit mesh topology: topological changes require continuous deformation fields that accurately reflect human motion, which complicates the detailed rendering of complex human surfaces. In this work, we propose HyperHuman, a novel framework that lifts the deformation field into a higher-dimensional space while remaining pose-independent. Our key insight is to model the deformation field with explicit constraints that lift the human surface into a higher-dimensional representation. Specifically, we first introduce a closest-point representation that establishes a pose-independent, generalizable deformation field anchored by an explicit constraint. We then use a deep network to model the motion space as a higher-dimensional, continuous topological transformation field. Extensive experiments demonstrate the superiority of HyperHuman over state-of-the-art methods, and an ablation study illustrates the effectiveness of each component.
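The core idea the abstract describes is a deformation field that maps observed points into a higher-dimensional canonical space, conditioned on a pose-independent surface feature rather than on the pose directly. Below is a minimal PyTorch sketch of that idea, in the spirit of hyper-space deformation fields such as HyperNeRF; the class name, layer sizes, conditioning dimension, and `ambient_dim` are all illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class HyperDeformationField(nn.Module):
    """Minimal sketch: lift observation-space points into a
    (3 + ambient_dim)-dimensional canonical space. All dimensions
    here are assumptions for illustration."""

    def __init__(self, in_dim=3, cond_dim=69, hidden=256, ambient_dim=2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            # output: a 3-D canonical offset plus extra ambient coordinates
            nn.Linear(hidden, 3 + ambient_dim),
        )

    def forward(self, x, cond):
        # x:    (N, 3) observation-space points
        # cond: (N, cond_dim) per-point conditioning, e.g. a pose-independent
        #       closest-point-on-surface feature (an assumption here)
        out = self.mlp(torch.cat([x, cond], dim=-1))
        offset, ambient = out[..., :3], out[..., 3:]
        # canonical 3-D location concatenated with the lifted ambient coords
        return torch.cat([x + offset, ambient], dim=-1)

# Usage: map sampled points into the lifted higher-dimensional space,
# which a canonical radiance field would then consume.
points = torch.rand(1024, 3)
cond = torch.rand(1024, 69)
lifted = HyperDeformationField()(points, cond)  # shape (1024, 5)
```

Conditioning on a surface-anchored feature instead of the pose vector is what would keep the field pose-independent: the same network generalizes across poses because its input already encodes where each point sits relative to the body surface.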
