
REF-VLM: Triplet-Based Referring Paradigm for Unified Visual Decoding


Official PyTorch implementation of "REF-VLM: Triplet-Based Referring Paradigm for Unified Visual Decoding" [ICCV 2025 under review].

Updates


This repository contains the official implementation and dataset of the following paper:

REF-VLM: Triplet-Based Referring Paradigm for Unified Visual Decoding

Abstract: Multimodal Large Language Models (MLLMs) demonstrate robust zero-shot capabilities across diverse vision-language tasks after training on mega-scale datasets. However, dense prediction tasks, such as semantic segmentation and keypoint detection, pose significant challenges for MLLMs when represented solely as text outputs. Simultaneously, current MLLMs utilizing latent embeddings for visual task decoding generally demonstrate limited adaptability to both multi-task learning and multi-granularity scenarios. In this work, we present REF-VLM, an end-to-end framework for unified training of various visual decoding tasks. To address complex visual decoding scenarios, we introduce the Triplet-Based Referring Paradigm (TRP), which explicitly decouples three critical dimensions in visual decoding tasks through a triplet structure: concepts, decoding types, and targets. TRP employs symbolic delimiters to enforce structured representation learning, enhancing the parsability and interpretability of model outputs. Additionally, we construct the Visual-Task Instruction Following Dataset (VT-Instruct), a large-scale multi-task dataset containing over 100 million multimodal dialogue samples across 25 task types. Beyond text inputs and outputs, VT-Instruct incorporates various visual prompts such as point, box, scribble, and mask, and generates outputs composed of text and visual units such as box, keypoint, depth, and mask. The combination of different visual prompts and visual units generates a wide variety of task types, expanding the applicability of REF-VLM significantly. Both qualitative and quantitative experiments demonstrate that our REF-VLM outperforms other MLLMs across a variety of standard benchmarks.
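
For intuition only, the sketch below illustrates what parsing a TRP-style response into (concept, decoding type, target) triplets could look like; the delimiter tags and placeholder tokens are hypothetical and do not reflect the exact symbols used by REF-VLM.

import re

# Hypothetical TRP-style answer: each referred object is expressed as a
# (concept, decoding type, target placeholder) triplet wrapped in symbolic delimiters.
answer = (
    "Two objects are referred to: "
    "<trp><concept>dog</concept><type>mask</type><target>[UNIT-0]</target></trp> and "
    "<trp><concept>ball</concept><type>box</type><target>[UNIT-1]</target></trp>."
)

TRIPLET = re.compile(
    r"<trp><concept>(.*?)</concept><type>(.*?)</type><target>(.*?)</target></trp>"
)

# Each target placeholder would index a latent embedding consumed by a
# task-specific decoder (e.g. mask, box, keypoint, depth).
for concept, decode_type, target in TRIPLET.findall(answer):
    print(concept, decode_type, target)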

Todo

  1. Release the training and inference code.
  2. Release the checkpoints.
  3. Release the VT-Instruct dataset.
  4. Release the demo.

Get Started

Install

Dependencies

  1. This project is built on XTuner. Please refer to the official documentation of these toolkits for installation guidance.
  2. Dataset loading is based on detectron2.
  3. MMDetection
  4. COCO 2018 Panoptic Segmentation Task API (panopticapi; a quick import check is sketched after this list)
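
Optional sanity check that the toolkits above import correctly (the module names below are the standard import names for each package; adjust if your installation differs):

import importlib

# Standard import names for the toolkits listed above.
for pkg in ("xtuner", "detectron2", "mmdet", "panopticapi"):
    try:
        mod = importlib.import_module(pkg)
        print(f"{pkg}: {getattr(mod, '__version__', 'installed')}")
    except ImportError as err:
        print(f"{pkg}: MISSING ({err})")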

Configure accelerate:

accelerate config

Dataset

Coming soon.

Checkpoint

Coming soon.

REF-VLM/
├── checkpoints
│   ├── vicuna_7b
│   │   ├── stage1
│   │   │   ├── instances.json
│   │   │   └── refs(unc).p
│   │   ├── stage2
│   │   └── hf_model

Demo

To launch a Gradio web demo, use the following command. Note that the model runs inference in torch.float16, which requires a GPU with at least 16 GB of memory.

python demo/app.py --config /path/to/config
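
For reference, a minimal sketch of loading the exported hf_model checkpoint in torch.float16 with Hugging Face transformers; this assumes the checkpoint is compatible with the standard Auto classes, whereas the actual demo builds the model from its config via demo/app.py.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical path, following the checkpoint layout shown above.
ckpt = "checkpoints/vicuna_7b/hf_model"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(ckpt, torch_dtype=torch.float16)
model = model.to("cuda").eval()  # fp16 weights of a 7B model need roughly 14-16 GB of GPU memory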

Train

After preparing the data, you can train the model with the following commands:

Stage 1

NPROC_PER_NODE=8 xtuner train configs/train_stage1.py --deepspeed deepspeed_zero2

Stage 2

NPROC_PER_NODE=8 xtuner train configs/train_stage2.py --deepspeed deepspeed_zero2

Stage 3

NPROC_PER_NODE=8 xtuner train configs/train_stage3_keypoint.py --deepspeed deepspeed_zero2

Cite
