AI Revolutionizes Single-Image 3D Character Animation

Abstract

Reconstructing object deformation from a single image remains a significant challenge in computer vision and graphics. Existing methods typically rely on multi-view video to recover deformation, limiting their applicability under constrained scenarios. To address this, we propose DeformSplat, a novel framework that effectively guides 3D Gaussian deformation from only a single image. Our method introduces two main technical contributions. First, we present Gaussian-to-Pixel Matching which bridges the domain gap between 3D Gaussian representations and 2D pixel observations. This enables robust deformation guidance from sparse visual cues. Second, we propose Rigid Part Segmentation consisting of initialization and refinement. This segmentation explicitly identifies rigid regions, crucial for maintaining geometric coherence during deformation. By combining these two techniques, our approach can reconstruct consistent deformations from a single image. Extensive experiments demonstrate that our approach significantly outperforms existing methods and naturally extends to various applications, such as frame interpolation and interactive object manipulation.

A research team affiliated with UNIST has reported a new AI technology that can animate 3D characters to mimic the exact movements shown in a single 2D image, all while preserving natural proportions and avoiding distortion. This development could help lower the barriers to creating 3D content for the metaverse, animation, and gaming industries.

Led by Professor Kyungdon Joo of the Graduate School of Artificial Intelligence at UNIST, the team created DeformSplat, an innovative AI framework that adjusts the pose of 3D characters generated through Gaussian modeling while maintaining their shape and realism even when viewed from different angles.

3D Gaussian Splatting is a technique that reconstructs 3D objects from 2D images, enabling realistic rendering of models from flat photos. However, animating these models, as in cartoons or video games, has traditionally required multiple images or video footage captured from different angles. Without such data, models often deform unnaturally, with limbs bending oddly or stretching in unrealistic ways.
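To make the representation concrete, here is a minimal toy sketch of the core Gaussian Splatting idea: each Gaussian carries a 3D position, an opacity, and a color, and rendering projects the Gaussians into the image and alpha-composites them front to back. The function names and the simplified pinhole projection below are illustrative assumptions, not the paper's implementation (which also handles covariances, view-dependent color, and differentiable rasterization).

```python
import numpy as np

def project_points(means, focal, cx, cy):
    """Project 3D Gaussian centers to 2D pixels with a pinhole camera."""
    z = means[:, 2]
    u = focal * means[:, 0] / z + cx
    v = focal * means[:, 1] / z + cy
    return np.stack([u, v], axis=1), z

def composite(colors, opacities, depths):
    """Alpha-composite Gaussians front to back along one ray."""
    order = np.argsort(depths)            # nearest Gaussian first
    out, transmittance = np.zeros(3), 1.0
    for i in order:
        out += transmittance * opacities[i] * colors[i]
        transmittance *= (1.0 - opacities[i])
    return out

# Two toy Gaussians: one red, one blue, at different depths.
means = np.array([[0.0, 0.0, 2.0], [0.1, 0.0, 4.0]])
pix, depth = project_points(means, focal=500.0, cx=320.0, cy=240.0)
color = composite(np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]),
                  np.array([0.6, 0.8]), depth)
print(pix[0], color)  # center Gaussian lands at the principal point
```

The point of the sketch is that the scene is a cloud of independent Gaussians: deforming the model means moving these primitives, which is why unconstrained single-image edits can stretch them unnaturally.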

DeformSplat enables the animation of a 3D character using just a single photograph. In tests, the animated characters maintained their proportions and shape from various perspectives, whether viewed from the side, back, or front, without any noticeable distortion. For example, when a pose where the character raises an arm is input, the AI reproduces this movement accurately, regardless of the viewing angle.

This is made possible through two main technological innovations. First, Gaussian-to-Pixel Matching links the 3D Gaussian points of the model with 2D pixels from the photo, allowing the system to transfer pose information directly onto the 3D model. Second, Rigid Part Segmentation automatically identifies and groups rigid regions, such as the limbs or torso, ensuring that these parts move naturally without bending or stretching unnaturally during animation. Together, these techniques enable realistic and distortion-free movement of 3D characters based on just a single photograph.
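The two ideas can be sketched in toy form: pair each projected Gaussian center with its nearest observed 2D feature (a stand-in for Gaussian-to-Pixel Matching), and group Gaussians that share the same motion (a stand-in for Rigid Part Segmentation). Both functions below are hypothetical simplifications; the paper's actual matching criterion and its segmentation initialization/refinement are not reproduced here.

```python
import numpy as np

def match_gaussians_to_pixels(means3d, pixels2d, focal, cx, cy):
    """For each Gaussian, return the index of the nearest observed 2D pixel
    (toy nearest-neighbor matching under a pinhole projection)."""
    z = means3d[:, 2]
    proj = np.stack([focal * means3d[:, 0] / z + cx,
                     focal * means3d[:, 1] / z + cy], axis=1)
    # Pairwise distances between projected centers and observed pixels.
    d = np.linalg.norm(proj[:, None, :] - pixels2d[None, :, :], axis=2)
    return d.argmin(axis=1)

def group_rigid_parts(displacements, tol=1e-2):
    """Greedily group points whose displacement vectors nearly coincide
    (pure-translation toy; real rigid parts also share a rotation)."""
    labels = -np.ones(len(displacements), dtype=int)
    next_label = 0
    for i, d in enumerate(displacements):
        if labels[i] >= 0:
            continue
        same = np.linalg.norm(displacements - d, axis=1) < tol
        labels[same] = next_label
        next_label += 1
    return labels

# Two Gaussians matched against two observed feature pixels.
means = np.array([[0.0, 0.0, 2.0], [0.4, 0.0, 2.0]])
pixels = np.array([[321.0, 240.0], [419.0, 241.0]])
matches = match_gaussians_to_pixels(means, pixels, 500.0, 320.0, 240.0)

# Two points moving together form one rigid part; the third moves alone.
parts = group_rigid_parts(np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 2.0]]))
print(matches, parts)
```

In this toy setting, the matches provide the 2D targets that drive deformation, while the part labels constrain points in the same group to move together, which is the intuition behind keeping limbs and torso from bending or stretching.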

Professor Joo explained, "Previous methods struggled to animate 3D objects from a single image without distortions. Our approach considers the structural properties of objects, distinguishing rigid regions and generating realistic movements. This could make 3D content creation more accessible and affordable, especially for industries like gaming and animation."

The findings of this research were selected for presentation at SIGGRAPH Asia 2025, the world's leading conference on computer graphics and interactive techniques. Organized by the Association for Computing Machinery (ACM), the event took place in Hong Kong from November 15 to 18, 2025. The study was supported by the Ministry of Science and ICT (MSIT) and the UNIST Graduate School of Artificial Intelligence.

Journal Reference

Jinhyeok Kim, Jaehun Bang, Seunghyun Seo, and Kyungdon Joo, "Rigidity-Aware 3D Gaussian Deformation from a Single Image," SA Conference Papers '25 (2025).
