
Mixture of Horizons in Action Chunking

Blog | Paper | Model

📖 Introduction

Figure 1: Trade-off between long-term foresight and short-term precision induced by a single horizon.
Figure 2: Overview of the proposed mixture-of-horizons strategy.

  • VLA models' performance is sensitive to the action chunk length (horizon): a single horizon induces an inherent trade-off between long-term foresight and short-term precision.

  • We propose Mixture of Horizons (MoH), a plug-and-play strategy that fuses multiple horizons within a single policy to inherit the strengths of both with minimal training or inference overhead.

  • MoH enables Dynamic Inference, selecting stable actions through cross-horizon consensus for higher efficiency and robustness.
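The cross-horizon consensus idea can be illustrated with a minimal sketch: given action chunks predicted by heads with different horizons, execute only the prefix on which all heads agree, then replan. The function name, the L2 disagreement metric, and the `tol` threshold are illustrative assumptions, not the repository's actual implementation.

```python
import numpy as np

def consensus_prefix(chunks, tol=0.05):
    """Return the longest action prefix on which all horizon heads agree.

    chunks : list of (H_i, action_dim) arrays predicted by different
             horizon heads, assumed aligned to the same start timestep.
    tol    : maximum per-step L2 disagreement tolerated (hypothetical).
    """
    min_len = min(len(c) for c in chunks)
    ref = chunks[0]
    for t in range(min_len):
        # Largest disagreement across horizons at step t.
        spread = max(np.linalg.norm(c[t] - ref[t]) for c in chunks[1:])
        if spread > tol:
            # Execute only the agreed-upon prefix; replan afterwards.
            return ref[:t]
    return ref[:min_len]
```

In this sketch, strong agreement lets the policy commit to more steps per model call (higher efficiency), while disagreement triggers early replanning (robustness).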


🚀 Quick Start

1. Environment Setup

Clone the repository and set up the conda environment:

git clone git@github.com:Timsty1/MixtureOfHorizons.git
conda create -n moh -y python=3.10
conda activate moh
pip install uv
cd MixtureOfHorizons
uv pip install -r requirements.txt
pip install packages/libero
pip install packages/openpi-client

2. Modify Transformers Library

This implementation requires modifying the transformers library to support the PyTorch versions of the $\pi$-series models, which rely on Gemma, PaliGemma, and SigLIP.

First, locate your conda installation directory:

conda info --base

Then, copy the provided files to the transformers library directory (replace YOUR_CONDA_DIR with the path found above):

cp -r ./src/openpi/models_pytorch/transformers_replace/* YOUR_CONDA_DIR/envs/moh/lib/python3.10/site-packages/transformers/

🔥 Training

Data & Model Preparation

  1. Dataset: Download the LIBERO training set in LeRobot format from HuggingFace.
  2. Base Models: Download the PyTorch base models for the $\pi$ series (converted from JAX) from here.
  3. Normalization Stats: Run the script 'scripts/compute_norm_stats.py' to compute normalization statistics, or use the pre-computed file provided in our Model Repo.
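The normalization step amounts to computing per-dimension statistics over all actions in the training set. Below is a minimal sketch of that idea; the real `scripts/compute_norm_stats.py` may store different fields (e.g. quantiles) under different names.

```python
import numpy as np

def compute_norm_stats(actions):
    """Compute per-dimension mean/std over an (N, action_dim) array.

    Illustrative sketch only; the repository's script may differ.
    """
    return {
        "mean": actions.mean(axis=0).tolist(),
        # Small epsilon avoids division by zero for constant dimensions.
        "std": (actions.std(axis=0) + 1e-6).tolist(),
    }

def normalize(a, stats):
    """Apply the computed statistics to a raw action."""
    return (a - np.array(stats["mean"])) / np.array(stats["std"])
```

At training and inference time, actions would be mapped through `normalize` (and its inverse) so the policy operates on a standardized action space.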

Configuration

Before training, update the project path in your training scripts (train.py, train_pytorch.py, or train_pytorch_moh.py):

import sys
sys.path.append("YOUR_PROJ_DIR/MixtureOfHorizons/src")

Next, modify specific configurations (e.g., repo_id, pytorch_weight_path) in:

  • scripts/run_train_moh.sh
  • src/openpi/training/config.py

Run Training

We trained our models on 4x A100 (80G) GPUs for 30k iterations with a batch size of 32. To start training with the MoH strategy:

bash scripts/run_train_moh.sh

🦾 Evaluation on LIBERO

We provide scripts for both standard evaluation and evaluation using Dynamic Inference. Our checkpoints are available at HuggingFace.

Prerequisites:

  • Modify scripts/eval_on_libero.sh (or _dynamic.sh) to point to your YOUR_PROJ_DIR and correct checkpoint paths.

Run Evaluation

Once configured, execute the evaluation scripts:

# Standard Evaluation
bash scripts/eval_on_libero.sh

# Evaluation with Dynamic Inference
bash scripts/eval_on_libero_dynamic.sh

Resources: Video records and raw logs of our experiments are available on Google Drive.

❤️ Acknowledgment

We express our gratitude to OpenPi, LIBERO, and RoboTwin for their open-source contributions.

📝 Citation

If you find this paper, the models, or the code helpful, please cite our paper. Thank you for your support!

@article{jing2025mixture_of_horizons,
  title={Mixture of Horizons in Action Chunking},
  author={Jing, Dong and Wang, Gang and Liu, Jiaqi and Tang, Weiliang and Sun, Zelong and Yao, Yunchao and Wei, Zhenyu and Liu, Yunhui and Lu, Zhiwu and Ding, Mingyu},
  journal={arXiv preprint arXiv:2511.19433},
  year={2025}
}

About

Official Release of "Mixture of Horizons in Action Chunking"
