Abstract: Adverse weather conditions such as haze and fog often significantly reduce the performance of crowd counting models. An intuitive solution is to preprocess degraded images with image restoration techniques prior to crowd counting. However, this solution introduces additional computational complexity, and the restored images may contain noise and artifacts that are harmful to the subsequent crowd counting task. To mitigate these two issues, we integrate an image restoration module (IRM) into a unified framework, yielding an effective network for crowd counting and localization in haze and rain. The lightweight IRM guides the network to learn haze-aware knowledge in feature space during training and is removed in the inference phase, adding no computational cost. In addition, two new datasets are constructed to evaluate crowd counting methods in haze and rain. Extensive experiments on both synthetic and real-world datasets demonstrate the superiority of the proposed method.
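The key design described in the abstract, an auxiliary restoration branch that shapes the shared features during training and is dropped entirely at inference, can be sketched as follows. This is a toy NumPy illustration under assumed shapes; the class, layer names, and dimensions are hypothetical and are not the actual Dehaze-P2PNet architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyDehazeCounter:
    """Toy stand-in: a shared encoder, a counting head, and a
    training-only restoration head (IRM). Hypothetical sketch, not
    the real Dehaze-P2PNet model."""

    def __init__(self, dim=8):
        self.enc = rng.standard_normal((3, dim))        # "encoder" weights
        self.count_head = rng.standard_normal((dim, 1)) # density predictor
        self.irm_head = rng.standard_normal((dim, 3))   # restoration head

    def forward(self, pixels, training):
        feats = pixels @ self.enc                # shared feature space
        density = feats @ self.count_head        # per-pixel density
        count = float(np.maximum(density, 0).sum())
        if training:
            # The IRM predicts a restored image; its loss pushes the
            # shared encoder toward haze-aware features. This branch
            # exists only during training.
            restored = feats @ self.irm_head
            return count, restored
        # At inference the IRM branch is skipped, so it adds no cost.
        return count, None

model = TinyDehazeCounter()
hazy = rng.random((16, 3))  # 16 "pixels", RGB
count, restored = model.forward(hazy, training=True)
count_infer, restored_infer = model.forward(hazy, training=False)
```

The counting path is identical in both modes, so removing the IRM at inference changes the cost but not the predicted count.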
| Crowd Dataset | Google Drive | BaiduNetdisk |
|---|---|---|
| Hazy-JHU | [Google] | [BaiduNetdisk] |
| Hazy-ShanghaiTech | [Google] | [BaiduNetdisk] |
| Hazy-ShanghaiTechRGBD | [Google] | |
| Rainy-ShanghaiTechRGBD | [Google] | |
*(Figure: qualitative comparison of the input image, our result, and the restored image.)*
| Image Name | Predicted Count | Ground Truth |
|---|---|---|
| Rainy-ShanghaiTechRGBD/IMG_0895.jpg | 15 | 14 |
| Hazy-ShanghaiTechRGBD/IMG_3.jpg | 91 | 85 |
| Hazy-ShanghaiTech/PartA/IMG_160.jpg | 117 | 121 |
| Hazy-JHU/IMG_0895.jpg | 1200 | 945 |
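Crowd counting accuracy is commonly summarized by the mean absolute error (MAE) and root mean squared error (RMSE) between predicted and ground-truth counts. As a quick illustration, the sample predictions in the table above can be scored like this (the count pairs are taken from the table; the metric functions are standard definitions, not this repo's evaluation code):

```python
# (predicted, ground-truth) count pairs from the table above.
pairs = [
    (15, 14),     # Rainy-ShanghaiTechRGBD/IMG_0895.jpg
    (91, 85),     # Hazy-ShanghaiTechRGBD/IMG_3.jpg
    (117, 121),   # Hazy-ShanghaiTech/PartA/IMG_160.jpg
    (1200, 945),  # Hazy-JHU/IMG_0895.jpg
]

def mae(pairs):
    # Mean absolute error over per-image counts.
    return sum(abs(p - g) for p, g in pairs) / len(pairs)

def rmse(pairs):
    # Root mean squared error over per-image counts.
    return (sum((p - g) ** 2 for p, g in pairs) / len(pairs)) ** 0.5

print(mae(pairs))  # 66.5
```

Note that RMSE is dominated by the one large error (Hazy-JHU), which is why both metrics are usually reported together.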
Pretrained weights (dataset → checkpoint):
- Hazy-JHU → Hazy_JHU_best.pth
- Hazy-ShanghaiTech PartA → DH_SHTA_best.pth
- Hazy-ShanghaiTech PartB → DH_SHTB_best.pth
- Hazy-ShanghaiTechRGBD → Hazy_SHTRGBD_best.pth
- Rainy-ShanghaiTechRGBD → Rainy_SHTRGBD_best.pth
- vgg16_bn-6c64b313.pth
Requirements:
torch
torchvision
tensorboardX
easydict
pandas
numpy
scipy
matplotlib
Pillow
opencv-python

```shell
git clone https://github.com/lizhangray/Dehaze-P2PNet.git
pip install -r requirements.txt
```

Dehaze-P2PNet
|- assets
|- crowd_datasets
|- datasets
|- Hazy_JHU
|- test_data
|- train_data
|- val
|- Hazy_ShanghaiTech
|- PartA
|- test_data
|- train_data
|- val
....
|- Hazy_ShanghaiTechRGBD
....
|- models
|- util
|- weights
|- DH_SHTA_best.pth
|- DH_SHTB_best.pth
....
|- vgg16_bn-6c64b313.pth
|- engine.py
|- run_test.py

```shell
python run_test.py --dataset_file NAME_OF_DATASET --weight_path CHECKPOINT_PATH

# e.g., Hazy-JHU
python run_test.py --dataset_file Hazy_JHU --weight_path weights/Hazy_JHU_best.pth
# e.g., Hazy-ShanghaiTech PartA
python run_test.py --dataset_file Hazy_SHTA --weight_path weights/DH_SHTA_best.pth
# e.g., Hazy-ShanghaiTech PartB
python run_test.py --dataset_file Hazy_SHTB --weight_path weights/DH_SHTB_best.pth
# e.g., Hazy-ShanghaiTechRGBD
python run_test.py --dataset_file Hazy_SHARGBD --weight_path weights/Hazy_SHTRGBD_best.pth
# e.g., Rainy-ShanghaiTechRGBD
python run_test.py --dataset_file Rainy_SHARGBD --weight_path weights/Rainy_SHTRGBD_best.pth
```

There are two parameters that must be provided:

- `--dataset_file`: name of the dataset (Hazy_JHU | Hazy_SHARGBD | Hazy_SHTA | Hazy_SHTB | Rainy_SHARGBD)
- `--weight_path`: path to a pretrained checkpoint, e.g., `weights/Hazy_JHU_best.pth`
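For reference, the two required flags correspond to an argument parser along these lines. This is a minimal sketch of how such flags are typically declared with `argparse`, not the repo's actual `run_test.py`; only the flag names and dataset choices come from the usage above.

```python
import argparse

def build_parser():
    # Hypothetical sketch of the two required test-time arguments;
    # see run_test.py in the repository for the real parser.
    parser = argparse.ArgumentParser(description="Dehaze-P2PNet testing")
    parser.add_argument(
        "--dataset_file", required=True,
        choices=["Hazy_JHU", "Hazy_SHARGBD", "Hazy_SHTA", "Hazy_SHTB", "Rainy_SHARGBD"],
        help="name of the dataset to evaluate",
    )
    parser.add_argument(
        "--weight_path", required=True,
        help="path to a pretrained checkpoint, e.g. weights/Hazy_JHU_best.pth",
    )
    return parser

args = build_parser().parse_args(
    ["--dataset_file", "Hazy_JHU", "--weight_path", "weights/Hazy_JHU_best.pth"]
)
print(args.dataset_file)  # Hazy_JHU
```

Using `choices` makes the parser reject dataset names outside the five listed above with a clear error message.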
Please cite this paper if it is helpful for your work:

```
@InProceedings{yuan2024crowd,
  author    = {Yuan, Weijun and Li, Zhan and Li, Xiaohan and Fang, Liangda and Zhang, Qingfeng and Qiu, Zhixiang},
  title     = {Crowd Counting and Localization in Haze and Rain},
  booktitle = {2024 IEEE International Conference on Multimedia and Expo (ICME)},
  year      = {2024}
}
```











