Texture Vector-Quantization and Reconstruction Aware Prediction for Generative Super-Resolution (ICLR 2026)
Qifan Li, Jiale Zou, Jinhua Zhang, Wei Long, Xingyu Zhou, Shuhang Gu
⭐If you like this work, please help star this repo. Thanks!🤗
# git clone this repository
git clone https://github.com/CVL-UESTC/TVQ-RAP.git
cd TVQ-RAP
# create new anaconda env
conda create -n TVQRAP python -y
conda activate TVQRAP
# install python dependencies
pip install -r requirements.txt
Download the pretrained SR model from Releases and place it in the trained_weights folder.
You can place any testing images in the test_images folder. Then specify the corresponding data path in options/TVQRAP_test.yml.
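For reference, the data-path entry in options/TVQRAP_test.yml typically follows the BasicSR-style layout sketched below; the exact keys and the dataset name here are assumptions, so check the released config file for the real field names:

```yaml
datasets:
  test:
    name: my_test_set          # hypothetical dataset name
    dataroot_lq: test_images   # folder holding the low-quality inputs to super-resolve
```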
Then, you can get the SR outputs by running the following command:
bash infer.sh
The SR outputs will be written to the results folder specified in the same configuration file.
Download the testing data (ImageNet-Test + RealSR + RealSet65).
Run inference on each dataset one by one to obtain the SR results.
After generating the SR images, compute the evaluation metrics with:
# no-reference metrics
python test-non-reference-metrics.py
# reference metrics
python test-reference-metrics.py
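To illustrate what a full-reference metric measures, the sketch below computes PSNR between an SR output and its ground truth with plain NumPy. This is not the repo's actual evaluation script; the function name and toy data are illustrative only:

```python
import numpy as np

def psnr(sr: np.ndarray, gt: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between an SR image and its ground truth."""
    mse = np.mean((sr.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a flat ground-truth patch and a uniformly perturbed "SR" patch.
gt = np.full((8, 8), 128.0)
sr = gt + 1.0  # uniform error of 1 intensity level -> MSE = 1
print(round(psnr(sr, gt), 2))  # 10 * log10(255^2 / 1) ≈ 48.13
```

In practice the released scripts evaluate whole image folders; this sketch only shows the per-image computation.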
Download training dataset: ImageNet.
Download the testing data ImageNet-Test.
Specify the training/testing data paths in the corresponding configuration file options/xxx.yml.
Pretrained weights for all stages can be found on Hugging Face. You can train any individual stage starting from our released weights, or train all stages from scratch yourself.
Train the tokenizer (Stage I, 4 x 24GB GPUs):
bash train1.sh
Specify the path to the pretrained Stage I weights (either the ones we provide or those trained by yourself) in the corresponding field of options/TVQRAP_stage2.yml.
Then, train the Predictor with cross-entropy loss (Stage II, 2 x 24GB GPUs):
bash train2.sh
Specify the path to the pretrained Stage II weights (either the ones we provide or those trained by yourself) in the corresponding field of options/TVQRAP_stage3.yml.
Then, train the Predictor with cross-entropy loss (Stage III, 2 x 24GB GPUs):
bash train3.sh
Please cite us if our work is useful for your research.
@article{li2025texture,
title={Texture Vector-Quantization and Reconstruction Aware Prediction for Generative Super-Resolution},
author={Li, Qifan and Zou, Jiale and Zhang, Jinhua and Long, Wei and Zhou, Xingyu and Gu, Shuhang},
journal={arXiv preprint arXiv:2509.23774},
year={2025}
}
This project is based on BasicSR and CodeFormer.
If you have any questions, feel free to contact me at qifanli.lqf@gmail.com.

