
instance-rig

Automated 3D humanoid rigging system that leverages 2D deep learning models to create functional skeletons and skin weights for static meshes.

Process

Instance-rig process of skeleton and skin-weight generation.

Instance-rig simplifies the complex task of 3D rigging by projecting it into the 2D domain. It uses a pre-trained 2D pose estimation model (BodyPix) to analyze a rendered view of the 3D mesh. The insights from the 2D analysis, such as joint locations and body part segmentation, are then mapped back onto the 3D geometry to construct a hierarchical skeleton and generate smooth skin weights.

The method is robust across a wide range of inputs and can operate on almost any mesh that exhibits a bipedal humanoid shape, without requiring manual cleanup or topology-specific constraints. This makes it especially well suited for fast, fully automatic processing pipelines.

Instance-rig has been validated in large-scale exhibition environments, where thousands of visitors were scanned, reconstructed, and rigged fully automatically. In these real-world scenarios, the system consistently delivered complete, animation-ready humanoid rigs in under one second for meshes captured in standard T-pose or A-pose configurations.

How it Works

The process is fully automated and consists of two main phases:

1. Skeleton Construction

The system builds a hierarchical skeleton by identifying key anatomical landmarks.

  • 2D Projection: The 3D mesh is rendered into a 2D image from a frontal camera view.
  • Keypoint Detection: BodyPix analyzes the image to find 2D pixel coordinates for 17 key joints (e.g., shoulders, knees).
  • 3D Triangulation: These 2D points are projected back into 3D space by ray-casting against the original mesh.
  • Hierarchy Generation: Missing intermediate joints (like the spine and neck) are interpolated from the detected points to form a complete, parent-child bone hierarchy.
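The back-projection in the steps above can be sketched in a few lines of NumPy. This is a simplified stand-in, not the library's implementation: instead of ray-casting against mesh triangles, it snaps each 2D keypoint to the frontal-most of the nearest vertices in the projection plane, and the orthographic frontal camera is an assumption.

```python
import numpy as np

def lift_keypoints(vertices, keypoints_px, image_size):
    """Lift 2D pixel keypoints back onto the 3D mesh (nearest-vertex sketch).

    A crude stand-in for true ray-casting: each pixel is mapped into the
    mesh's XY bounding box, then snapped to the frontal-most (largest Z)
    of the nearest vertices in the projection plane.
    """
    w, h = image_size
    lo, hi = vertices.min(axis=0), vertices.max(axis=0)
    joints = []
    for px, py in keypoints_px:
        # Pixel -> world XY (image Y points down, world Y points up).
        x = lo[0] + (px / w) * (hi[0] - lo[0])
        y = hi[1] - (py / h) * (hi[1] - lo[1])
        # Distance measured in the projection plane only (depth ignored).
        d = np.hypot(vertices[:, 0] - x, vertices[:, 1] - y)
        near = np.argsort(d)[:8]  # a handful of candidate vertices
        joints.append(vertices[near[np.argmax(vertices[near, 2])]])
    return np.array(joints)
```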

You can learn more about 3D keypoint detection in the muke repository.

2. Skin Weight Generation

The mesh vertices are bound to the skeleton using a "weight painting" process derived from segmentation maps.

  • Part Segmentation: BodyPix generates a 2D map identifying body parts (e.g., torso, left arm).
  • Refinement: The system programmatically subdivides broad regions (like the torso) to align with the more detailed spinal joints of the generated skeleton.
  • Influence Mapping: 3D vertices are mapped to the 2D segmentation map using UV coordinates (derived from world-space XY coordinates).
  • Weight Smoothing: Binary influence maps for each joint are blurred to create soft gradients, ensuring smooth mesh deformation.
  • Binding: Each vertex is assigned weights based on the blurred influence maps, linking it to the most relevant joints.
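The smoothing and binding steps can be illustrated with a minimal NumPy sketch. The filter type, kernel size, and mask layout here are illustrative assumptions, not the pipeline's actual parameters: per-joint binary masks are blurred with a box filter, then normalized so each pixel's influences sum to one.

```python
import numpy as np

def box_blur(img, k):
    """Simple k x k box filter with edge padding (pure NumPy)."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def masks_to_weights(masks, kernel=5):
    """Blur per-joint binary masks and normalize them into skin weights."""
    blurred = np.stack([box_blur(m, kernel) for m in masks], axis=-1)
    total = blurred.sum(axis=-1, keepdims=True)
    # Normalize so weights sum to 1 wherever any joint has influence.
    return np.divide(blurred, total, out=np.zeros_like(blurred),
                     where=total > 0)
```

Along a boundary between two parts, the blurred maps overlap and both joints receive fractional weight, which is what produces the soft deformation gradient.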

Installation

You can install the package directly via pip:

pip install instance-rig

Usage

To rig a 3D mesh (e.g., an .obj file), simply run the command line tool. The default output is a .glb file.

instance-rig input_mesh.obj

Options

  • --output <file>: Specify the output file path. Supported formats are .glb, .gltf, and .dae (experimental, mesh-only).
  • --smooth-weights: Enable additional smoothing for skin weights.
  • --smooth-weights-factor <int>: Set the filter size for weight smoothing (default: 20).
  • --t-pose: Apply a correction to bring the mesh into T-pose.
  • --debug: Display debug frames and additional information during processing.
  • --skip-prediction: Attempt to use cached prediction results if available.

Example

Rig a mesh and save it as a GLTF file with custom weight smoothing:

instance-rig character.obj --output character.gltf --smooth-weights --smooth-weights-factor 30

Gradio

To run instance-rig as a service, a basic Gradio interface is already available. It accepts zip files containing 3D assets (OBJ, MTL, and texture files), unpacks and processes the data, performs the automatic rigging, and returns the result as a zipped GLB file suitable for downstream use.

uv run --group demo demo/gradio-demo.py

Model Assets and Caching

The required model files are automatically downloaded on the first run and stored in a user-writable cache directory:

  • Linux/macOS: ~/.cache/instancerig
  • Windows: %LOCALAPPDATA%\instancerig

You can override this location by setting the INSTANCERIG_CACHE_DIR environment variable. For offline environments, pre-download the model files and place them in this directory.

Limitations

  • Pose Assumption: The system assumes the input mesh is centered and facing forward (typically +Z or -Z, depending on the coordinate system) in a T-pose or A-pose.
  • UV Mapping: UV coordinates for the segmentation lookup are generated by normalizing the X and Y world coordinates. This works best for meshes that are aligned with the world axes.
  • Single View: The analysis relies on a single frontal view, so occluded parts or complex poses may not be rigged correctly.
  • Rig Compatibility: The generated rig does not follow common production standards such as Mixamo-style skeletons. As a result, additional custom tooling or conversion steps are required to animate the output reliably in environments like Blender or Unity.
  • Collada Export: The .dae export is currently experimental and only supports mesh data. Joints and skin weights are not included in the export.

Credits

Developed at the Immersive Arts Space, Zurich University of the Arts (ZHdK).
Maintained by Florian Bruggisser.
