ArXiv paper | Python API | C++ API | ROS2
cuVSLAM is a library by NVIDIA that provides various visual tracking camera modes and Simultaneous Localization and Mapping (SLAM) capabilities. Leveraging CUDA acceleration and a rich feature set, cuVSLAM delivers highly accurate, computationally efficient, real-time performance.
The quickest way to get started is to install PyCuVSLAM from a pre-built wheel and explore the examples.
To use cuVSLAM in a ROS2 environment:
cuVSLAM is a highly optimized visual tracking library validated across numerous public datasets and popular robotic camera setups. For detailed benchmarking and validation results, please refer to our technical report.
The accuracy and robustness of cuVSLAM can be influenced by several factors. If you experience performance issues, please check your system against these common causes:
- Hardware Overload: Hardware overload can negatively impact visual tracking, resulting in dropped frames or insufficient computational resources for cuVSLAM. Disable intensive visualization or image-saving operations to improve performance. For expected performance metrics on Jetson embedded platforms, see our technical report.
- Intrinsic and Extrinsic Calibration: Accurate camera calibration is crucial. Ensure your calibration parameters are precise. For more details, refer to our guide on image undistortion. If you're new to calibration, consider working with experienced vendors.
- Synchronization and Timestamps: Accurate synchronization significantly impacts cuVSLAM performance. Make sure multi-camera images are captured simultaneously—ideally through hardware synchronization—and verify correct relative timestamps across cameras. Refer to our multi-camera hardware assembly guide for building a rig with synchronized RealSense cameras.
- Frame Rate: Frame rate significantly affects performance. The ideal frame rate depends on translational and rotational velocities. Typically, 30 FPS is suitable for most "human-speed" motions; adjust accordingly for faster movements.
- Resolution: Image resolution matters. VGA resolution or higher is recommended. cuVSLAM efficiently handles relatively high-resolution images thanks to CUDA acceleration.
- Image Quality: Ensure good image quality by using suitable lenses, correct exposure, and proper white balance to avoid clipping large image regions. For significant distortion or external objects within the camera's field of view, refer to our guide on static masking.
- Motion Blur: Excessive motion blur can negatively impact tracking. Keep exposure times short enough to minimize motion blur. If that isn't feasible, consider increasing the frame rate or try the Mono-Depth or Stereo Inertial tracking modes.
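Two of the checks above—synchronization and frame rate—can be verified offline before feeding data to cuVSLAM. Below is a minimal sketch of such a check; `check_sync` is an illustrative helper (not part of the cuVSLAM API), assuming your capture pipeline can export per-camera timestamps in nanoseconds:

```python
def check_sync(timestamps_ns: dict[str, list[int]],
               max_skew_ns: int = 5_000_000,
               target_fps: float = 30.0,
               fps_tolerance: float = 0.2) -> list[str]:
    """Report synchronization and frame-rate problems in a multi-camera log.

    timestamps_ns maps a camera name to its per-frame capture timestamps (ns).
    """
    issues = []
    names = sorted(timestamps_ns)
    # Pairwise skew: frame i should be captured (near) simultaneously on all cameras.
    n_frames = min(len(timestamps_ns[n]) for n in names)
    for i in range(n_frames):
        frame = [timestamps_ns[n][i] for n in names]
        skew = max(frame) - min(frame)
        if skew > max_skew_ns:
            issues.append(f"frame {i}: inter-camera skew {skew} ns exceeds {max_skew_ns} ns")
    # Effective frame rate per camera, from the median inter-frame interval.
    for name in names:
        ts = timestamps_ns[name]
        if len(ts) < 2:
            continue
        deltas = sorted(b - a for a, b in zip(ts, ts[1:]))
        median_dt = deltas[len(deltas) // 2]
        fps = 1e9 / median_dt
        if abs(fps - target_fps) / target_fps > fps_tolerance:
            issues.append(f"{name}: effective rate {fps:.1f} FPS, expected ~{target_fps} FPS")
    return issues
```

On a healthy hardware-synchronized log the helper returns an empty list; a constant inter-camera offset larger than a few milliseconds, or a wandering frame interval, shows up as explicit messages you can fix before blaming the tracker.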
See Troubleshooting
PyCuVSLAM is the Python wrapper (bindings) for the cuVSLAM library.
Pre-built wheels are available on the cuVSLAM releases page for the following configurations:
| Ubuntu | Python | CUDA | Architectures |
|---|---|---|---|
| 22.04 | 3.10 | 12, 13 | x86_64, aarch64 |
| 24.04+ | 3.12+ | 12, 13 | x86_64, aarch64 |
Prerequisite: CUDA Toolkit 12 or 13 must be installed separately; it is not included in the wheels.
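The support matrix above can be encoded as a small helper that tells you which wheel to look for. This is an illustrative sketch; the returned tag string is an assumption for readability, so check the actual wheel filenames on the releases page:

```python
import platform
import sys

def expected_wheel_tags(cuda_major: int,
                        py_version: tuple = tuple(sys.version_info[:2]),
                        arch: str = platform.machine()) -> str:
    """Map a (CUDA, Python, architecture) triple to the wheel tags to look
    for on the releases page, following the support matrix above."""
    if cuda_major not in (12, 13):
        raise ValueError(f"CUDA {cuda_major} is unsupported; wheels need CUDA 12 or 13")
    if arch not in ("x86_64", "aarch64"):
        raise ValueError(f"no pre-built wheels for architecture {arch!r}")
    # Ubuntu 22.04 wheels target Python 3.10; Ubuntu 24.04+ wheels target 3.12+.
    if py_version != (3, 10) and py_version < (3, 12):
        raise ValueError(
            f"no pre-built wheel for Python {py_version[0]}.{py_version[1]}; "
            "see Install from Source")
    return f"cu{cuda_major}-cp{py_version[0]}{py_version[1]}-{arch}"
```

For example, a Python 3.10 machine with CUDA 12 on x86_64 maps to the `cu12`/`cp310`/`x86_64` wheel, while Python 3.11 raises because no pre-built wheel exists for it.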
To install (virtual environment is recommended):
- Go to the releases page.
- Download the wheel matching your CUDA version (`cu12` or `cu13`), Python version, and platform (`x86_64` or `aarch64`).
- Install with pip:
  ```shell
  pip install cuvslam-*.whl
  ```

If a pre-built wheel is not available for your system, see Install from Source below.
Note: cuVSLAM must be built before installing from source.
To install PyCuVSLAM from the repository (a virtual environment is recommended):

```shell
CUVSLAM_BUILD_DIR=<path-to-cuvslam-build> pip install python/
```

`CUVSLAM_BUILD_DIR` is required for the build script to find `libcuvslam.so`.

Warning: Due to scikit-build-core limitations, the bindings must be reinstalled after rebuilding `libcuvslam`.
Pre-built C++ libraries are available on the releases page for Ubuntu 22.04/24.04 on x86_64 and Jetson (aarch64), with CUDA 12 and CUDA 13.
For Python usage, pre-built wheels are the recommended approach.
- Ubuntu 22+ (22.04 & 24.04 tested), x86_64/aarch64 (desktop/laptop & NVIDIA Jetson Orin/Thor)
- CUDA Toolkit 12 or 13, JetPack 6.1/6.2/7.0/7.1
- git + git-lfs to clone this repository
- CMake 3.19+, gcc
- Python 3.9+ (for Python bindings, examples, and some tools)

```shell
apt update && apt install g++ cmake git git-lfs python3-dev
```
In the repository root, build the C++ code in one of two ways:
- Build manually:
  ```shell
  mkdir build
  cd build
  cmake ..
  make -j
  ```
- Set the source & build paths for `build_release.sh` and run it.

  Important: Before running `build_release.sh`, set the paths using one of these options:
  - Set them in `~/.bashrc` (or your shell's equivalent login script), which is useful when switching between cuvslam branches:
    ```shell
    export CUVSLAM_SRC_DIR=<path-to-cuvslam-src>
    export CUVSLAM_DST_DIR=<path-to-cuvslam-build>
    ```
  - Update the SRC & DST paths in `build_release.sh` directly.
Requires SSH access to the remote device.

```shell
./copy_to_remote.sh <jetson-host>
ssh <jetson-host> 'export CUVSLAM_SRC_DIR=~/cuvslam/src CUVSLAM_DST_DIR=~/cuvslam/build && ~/cuvslam/src/build_release.sh'
./copy_from_remote.sh <jetson-host>
```

- Create a virtual environment and install the Rerun SDK:
  ```shell
  python3 -m venv .venv
  source .venv/bin/activate
  pip install rerun-sdk==0.22.1
  ```
- Specify the virtual env with the `CUVSLAM_TOOLS_PYENV` environment variable (defaults to `.venv` in the repository root).
- Update `build_release.sh` and set `USE_RERUN` to `ON`.
- Run any tool from the `tools` folder.
Q: What Python versions are supported by PyCuVSLAM?
A: Pre-built wheels are available for Python 3.10 (Ubuntu 22.04) and Python 3.12 or later (Ubuntu 24.04+). When built from source, PyCuVSLAM supports Python 3.9 and later.
Having problems running cuVSLAM or PyCuVSLAM, or have a suggestion? We'd love to hear your feedback in the issues tab.
This project is licensed under the NVIDIA Community License; for details, refer to the LICENSE file.
If you find this work useful in your research, please consider citing:
@article{korovko2025cuvslam,
title={cuVSLAM: CUDA accelerated visual odometry and mapping},
author={Alexander Korovko and Dmitry Slepichev and Alexander Efitorov and Aigul Dzhumamuratova and Viktor Kuznetsov and Hesam Rabeti and Joydeep Biswas and Soha Pouya},
year={2025},
eprint={2506.04359},
archivePrefix={arXiv},
primaryClass={cs.RO},
url={https://arxiv.org/abs/2506.04359},
}

