vulab-AI
Welcome to VULab: Visual Understanding Lab at CWRU! 👋

Our research lab is dedicated to advancing visual understanding through cutting-edge AI and machine learning techniques. Our mission is to explore and develop innovative solutions in visual language models (VLMs), 3D vision, and more, to push the boundaries of how machines perceive and interpret the world.

Amazing Projects at VULab

Large Foundation Models (LLMs, VLMs, VLAs)

3D Vision

Embodied AI

Explore these projects and more to learn how we're shaping the future of visual AI!

Lab members

  • Yu Yin (PI and Lab Director)
  • Yiren Lu (Ph.D.)
  • Disheng Liu (Ph.D.)
  • Jerry Peng (Ph.D.)
  • Tuo Liang (Ph.D.)
  • Chaoda Song (Ph.D.)
  • Yanyan Zhang (Ph.D.)
  • Yunlai Zhou (MS)
  • Yichen Duan (MS)

Pinned repositories

  1. Awesome-Spatial-VLMs

    The official, community-maintained resource for the survey paper Spatial Intelligence in Vision-Language Models: A Comprehensive Survey.


  2. BARD-GS

    Project page for BARD-GS: Blur-Aware Reconstruction of Dynamic Scenes via Gaussian Splatting, accepted to CVPR 2025


  3. Segment-then-Splat

    Project page for Segment-then-Splat: Unified 3D Open-Vocabulary Segmentation via Gaussian Splatting, accepted to NeurIPS 2025.


  4. GSMem


  5. YESBUT-v2

    We introduce YesBut-v2, a benchmark for assessing AI's ability to interpret juxtaposed comic panels with contradictory narratives. Unlike existing benchmarks, it emphasizes visual understanding…

