
Commit 26d6d94
radseg research post added
1 parent ec61768

3 files changed: 24 additions & 1 deletion

File tree

_bibliography/references.bib

Lines changed: 1 addition & 1 deletion

@@ -26,7 +26,7 @@ @inproceedings{alama2026radseg
   title = {RADSeg: Unleashing Parameter and Compute Efficient Zero-Shot Open-Vocabulary Segmentation Using Agglomerative Models},
   author = {Alama, Omar and Jariwala, Darshil and Bhattacharya, Avigyan and Kim, Seungchan and Wang, Wenshan and Scherer, Sebastian},
   year = {2026},
-  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Findings},
+  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Findings},
   url = {https://arxiv.org/abs/2511.19704},
   abstract = {Open-vocabulary semantic segmentation (OVSS) underpins many vision and robotics tasks that require generalizable semantic understanding. Existing approaches either rely on limited segmentation training data, which hinders generalization, or apply zero-shot heuristics to vision-language models (e.g., CLIP), while the most competitive approaches combine multiple models to improve performance at the cost of high computational and memory demands. In this work, we leverage an overlooked agglomerative vision foundation model, RADIO, to improve zero-shot OVSS along three key axes simultaneously: mIoU, latency, and parameter efficiency. We present the first comprehensive study of RADIO for zero-shot OVSS and enhance its performance through self-correlating recursive attention, self-correlating global aggregation, and computationally efficient mask refinement. Our approach, RADSeg, achieves 6-30% mIoU improvement in the base ViT class while being 3.95x faster and using 2.5x fewer parameters. Surprisingly, RADSeg-base (105M) outperforms previous combinations of huge vision models (850-1350M) in mIoU, achieving state-of-the-art accuracy with substantially lower computational and memory cost.}
}

_posts/2026-02-27-radseg.md

Lines changed: 23 additions & 0 deletions

@@ -0,0 +1,23 @@
+---
+layout: post
+title: "RADSeg: Unleashing Parameter and Compute Efficient Zero-Shot Open-Vocabulary Segmentation Using Agglomerative Models"
+date: 2026-02-27 10:33:01
+categories: research
+description: "RADSeg is a dense, language-aligned feature encoder that enables low-parameter, low-latency open-vocabulary semantic segmentation in 2D and 3D."
+author: "Seungchan Kim"
+published: true
+redirect: "https://radseg-ovss.github.io/"
+show_sidebar: false
+# slim_content_width: true
+permalink: /radseg/
+image: /img/posts/2026-02-27-radseg/radseg.png
+datatable: true
+title_image: None
+hero_image: /img/posts/2026-02-27-radseg/radseg.png
+hero_height: is-large
+remove_hero_title: false
+menubar_toc: false
+tags: Perception, Learning
+---
+
+RADSeg is a dense, language-aligned feature encoder that enables low-parameter, low-latency open-vocabulary semantic segmentation in 2D and 3D. By enhancing the spatial locality of RADIO features, RADSeg outperforms previous state-of-the-art methods in accuracy while remaining highly efficient.
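The core zero-shot mechanism behind language-aligned OVSS encoders like RADSeg can be sketched in a few lines: each pixel's dense feature vector is matched against per-class text embeddings by cosine similarity, and the best-matching class wins. The sketch below is illustrative only; the function name, shapes, and toy inputs are assumptions for exposition and do not reflect RADSeg's actual code or API.

```python
import numpy as np

def segment(pixel_feats: np.ndarray, text_embeds: np.ndarray) -> np.ndarray:
    """Zero-shot per-pixel classification by cosine similarity.

    pixel_feats: (H, W, D) dense, language-aligned image features.
    text_embeds: (C, D) one text embedding per class prompt.
    Returns an (H, W) map of class indices.
    """
    # L2-normalize both sides so dot products become cosine similarities
    f = pixel_feats / np.linalg.norm(pixel_feats, axis=-1, keepdims=True)
    t = text_embeds / np.linalg.norm(text_embeds, axis=-1, keepdims=True)
    sims = f @ t.T               # (H, W, C): similarity to each class
    return sims.argmax(axis=-1)  # hard label per pixel

# Toy example: a 2x2 "image" with 3-dim features and 2 class prompts
feats = np.array([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
                  [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]])
texts = np.array([[1.0, 0.0, 0.0],   # class 0 prompt embedding
                  [0.0, 1.0, 0.0]])  # class 1 prompt embedding
labels = segment(feats, texts)  # left column -> class 0, right -> class 1
```

The open-vocabulary property follows from the text side: swapping in embeddings for a new set of class prompts requires no retraining, which is why improving the dense feature quality (as RADSeg does for RADIO features) directly improves segmentation accuracy.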
img/posts/2026-02-27-radseg/radseg.png

333 KB (binary file added)
