📄 Paper: Seeing Beyond – Extrapolative Domain Adaptive Panoramic Segmentation
Yuanfan Zheng1, Kunyu Peng2, Xu Zheng3, Kailun Yang*1
1Hunan University · 2IAR, Karlsruhe Institute of Technology (KIT) · 3Hong Kong University of Science and Technology (HKUST)
Download the following datasets: Cityscapes, WildPASS2K, DensePASS, SynPASS, GTA5, and ACDC.
Convert the label IDs and generate the class indices used for RCS (Rare Class Sampling) with the commands below.
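As background, RCS (Rare Class Sampling, introduced in DAFormer) samples training images that contain rare classes more often. Below is a self-contained NumPy sketch of how such a sampling distribution can be computed from label maps; `rcs_class_probs` and the toy labels are illustrative only, not code from this repo:

```python
import numpy as np

def rcs_class_probs(label_maps, num_classes, temperature=0.01):
    """Rare Class Sampling distribution: rarer classes get higher probability."""
    counts = np.zeros(num_classes, dtype=np.int64)
    for lab in label_maps:
        ids, n = np.unique(lab, return_counts=True)
        keep = ids != 255                   # 255 is the ignore index
        counts[ids[keep]] += n[keep]
    freq = counts / counts.sum()            # per-class pixel frequency f_c
    logits = (1.0 - freq) / temperature     # P(c) proportional to exp((1 - f_c) / T)
    logits -= logits.max()                  # shift for numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Toy example: class 0 dominates, class 2 is rare.
labels = [np.array([[0, 0, 0, 1],
                    [0, 0, 1, 2]])]
probs = rcs_class_probs(labels, num_classes=3)
```

The conversion commands below produce the per-dataset class index files that this kind of sampling consumes.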
```bash
# =================================================================================
# 1. Open-Set PIN2PAN: Cityscapes, WildPASS2K → DensePASS
# =================================================================================
# Source Domain (Cityscapes)
python tools/convert_datasets_pass/cityscapes_13_train.py /path/to/Cityscapes --nproc 8
# Target Domain (WildPASS2K, empty labels)
python tools/convert_datasets_pass/target_empoty.py /path/to/WildPASS2K --nproc 8
# Test Domain (DensePASS)
python tools/convert_datasets_pass/DensePASS_13.py /path/to/DensePASS --nproc 8

# =================================================================================
# 2. Open-Set SynPASS, WildPASS2K → DensePASS
# =================================================================================
# Source Domain (SynPASS)
python tools/convert_datasets_pass/SynPASS_13.py /path/to/SynPASS --nproc 8 --split train --mapping train
# Test Domain (DensePASS)
python tools/convert_datasets_pass/DensePASS_11.py /path/to/DensePASS --nproc 8

# =================================================================================
# 3. Open-Set GTA → SynPASS
# =================================================================================
# Source Domain (GTA5)
python tools/convert_datasets_pass/gta_13.py /path/to/GTA5 --nproc 8
# Test Domain (SynPASS val & test)
python tools/convert_datasets_pass/SynPASS_13.py /path/to/SynPASS --nproc 8 --split val --mapping test
python tools/convert_datasets_pass/SynPASS_13.py /path/to/SynPASS --nproc 8 --split test --mapping test

# =================================================================================
# 4. Open-Set SynPASS → ACDC
# =================================================================================
# ACDC Dataset (train, val & test)
python tools/convert_datasets_pass/ACDC_13.py /path/to/ACDC --nproc 8 --split train
python tools/convert_datasets_pass/ACDC_13.py /path/to/ACDC --nproc 8 --split val
python tools/convert_datasets_pass/ACDC_13.py /path/to/ACDC --nproc 8 --split test
```

Install PyTorch (CUDA 11.8 build):

```bash
pip install torch==2.0.0 torchvision==0.15.1 torchaudio==2.0.1 --index-url https://download.pytorch.org/whl/cu118
```
Then install the remaining dependencies and the bundled mmcv build:

```bash
pip install -r requirements.txt

# Download and install mmcv-1.3.7
wget https://github.com/zyfone/EDA-PSeg/releases/download/0.0/mmcv-1.3.7.zip
unzip mmcv-1.3.7.zip
cd mmcv-1.3.7
pip install -e . -v
```

Train with a single GPU:
```bash
# Cityscapes → DensePASS
CUDA_VISIBLE_DEVICES=0 python run_experiments.py --config configs/daformer/city2dense_uda_openset_graph.py
# SynPASS → DensePASS
CUDA_VISIBLE_DEVICES=0 python run_experiments.py --config configs/daformer/syn2dense_uda_openset_graph.py
# GTA → SynPASS
CUDA_VISIBLE_DEVICES=0 python run_experiments.py --config configs/daformer/gta2syn_uda_openset_graph.py
# SynPASS → ACDC
CUDA_VISIBLE_DEVICES=0 python run_experiments.py --config configs/daformer/syn2acdc_uda_openset_graph.py
```
Download the pretrained weights:

```bash
# MiT-B5 weights
wget https://github.com/zyfone/EDA-PSeg/releases/download/0.0/mit_b5.pth
# MobileSAM weights
wget https://github.com/zyfone/EDA-PSeg/releases/download/0.0/mobile_sam.pt
```

Then update the weight paths in the following files.

File: `configs/_base_/models/daformer_conv1_mitb5.py`
```python
model = dict(
    type='EncoderDecoder',
    pretrained='/path/mit_b5.pth',  # path to the MiT-B5 weights
)
```
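The config above lives under `configs/_base_/`, and mmseg builds the final config by recursively merging child configs over their `_base_` parents. A simplified pure-Python sketch of that merge behavior (`merge_cfg` is a hypothetical helper, not mmcv's actual implementation):

```python
def merge_cfg(base, override):
    """Recursively merge an override dict into a base dict (mmcv-style _base_ inheritance)."""
    merged = dict(base)
    for key, val in override.items():
        if isinstance(val, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_cfg(merged[key], val)   # merge nested dicts key by key
        else:
            merged[key] = val                           # override scalars and lists outright
    return merged

base = dict(model=dict(type='EncoderDecoder', pretrained=None))
child = dict(model=dict(pretrained='/path/mit_b5.pth'))
cfg = merge_cfg(base, child)
# cfg['model'] keeps type='EncoderDecoder' and gains the pretrained path
```

This is why setting `pretrained` in the `_base_` model file propagates to every experiment config that inherits from it.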
File: `mmseg/models/uda/dacs.py`

```python
# Load the MobileSAM checkpoint
sam_checkpoint = "/path/mobile_sam.pt"  # path to the MobileSAM weights
```
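`dacs.py` follows the DACS line of work, which builds mixed training samples by pasting pixels of selected source classes onto target images (ClassMix). A generic NumPy sketch of that mixing step (`class_mix` and the toy arrays are illustrative, not the repo's code):

```python
import numpy as np

def class_mix(src_img, src_lab, tgt_img, tgt_lab, rng):
    """Paste pixels of half of the source classes onto the target image/label."""
    classes = np.unique(src_lab)
    classes = classes[classes != 255]                       # drop the ignore index
    chosen = rng.choice(classes, size=max(1, len(classes) // 2), replace=False)
    mask = np.isin(src_lab, chosen)                         # pixels to copy from source
    mixed_img = np.where(mask[..., None], src_img, tgt_img)
    mixed_lab = np.where(mask, src_lab, tgt_lab)
    return mixed_img, mixed_lab

rng = np.random.default_rng(0)
src_img = np.zeros((4, 4, 3))                # source image, one class per row
src_lab = np.array([[0]*4, [1]*4, [2]*4, [3]*4])
tgt_img = np.ones((4, 4, 3))                 # target image, pseudo-labeled as class 9
tgt_lab = np.full((4, 4), 9)
img, lab = class_mix(src_img, src_lab, tgt_img, tgt_lab, rng)
```

In self-training, `tgt_lab` would be the model's own pseudo-labels rather than ground truth.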
Evaluate a trained model:

```bash
python -m tools.test ${CONFIG_FILE} ${CHECKPOINT_FILE} \
    --eval h_score --show-dir ${SHOW_DIR} --opacity 1
```
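Here `h_score` presumably denotes the open-set H-Score: the harmonic mean of the accuracy on known (closed-set) classes and the accuracy on the unknown class, so a model must do well on both to score high. A minimal sketch of that metric (the repo's exact definition may differ):

```python
def h_score(known_acc, unknown_acc):
    """Harmonic mean of known-class and unknown-class accuracy."""
    if known_acc + unknown_acc == 0:
        return 0.0
    return 2 * known_acc * unknown_acc / (known_acc + unknown_acc)

# e.g. 80% mean accuracy on known classes, 50% on the unknown class:
h = h_score(0.8, 0.5)   # ≈ 0.615
```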
Key code locations:

- Path: `mmseg/models/decode_heads/daformer_head_graph.py`
  Key functions: `node_sample()` → `_node_completion()` → `update_seed()` → `_forward_aff()` → `_forward_qu()`
- Path: `mmseg/models/decode_heads/euler_margin.py`
  Key functions: `Euler_Attention()` → `EulerFormer()` → `NeuralSort()`
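The `NeuralSort()` above presumably follows the NeuralSort relaxation of Grover et al., which replaces hard sorting with a differentiable, row-stochastic relaxed permutation matrix. A generic NumPy sketch of that operator (not the repo's implementation):

```python
import numpy as np

def neural_sort(s, tau=1.0):
    """Relaxed permutation matrix that approximately sorts scores s in descending order."""
    n = s.shape[0]
    abs_diff = np.abs(s[:, None] - s[None, :])       # pairwise |s_j - s_k|
    b = abs_diff.sum(axis=1)                         # sum_k |s_j - s_k|
    scaling = n + 1 - 2 * np.arange(1, n + 1)        # coefficient (n + 1 - 2i) for row i
    logits = (scaling[:, None] * s[None, :] - b[None, :]) / tau
    logits -= logits.max(axis=1, keepdims=True)      # shift for numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)          # each row is a softmax distribution

s = np.array([0.1, 2.0, -1.0])
P = neural_sort(s, tau=0.1)
# per-row argmax recovers the descending order of s: rows point at indices 1, 0, 2
```

As `tau → 0` the rows approach one-hot vectors, i.e. the exact sorting permutation, while gradients still flow through `s` for larger `tau`.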
Our work builds upon and integrates ideas from DAFormer, DACS, SegFormer (MiT-B5), MobileSAM, and MMSegmentation/mmcv.
For questions or collaboration, email: 478756030@qq.com

Please consider citing our paper if you use the code or data from this work. Thanks a lot :)
```bibtex
@inproceedings{zheng2026seeing,
  title={Seeing Beyond: Extrapolative Domain Adaptive Panoramic Segmentation},
  author={Zheng, Yuanfan and Peng, Kunyu and Zheng, Xu and Yang, Kailun},
  booktitle={2026 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2026}
}
```