<!doctype html>
<html>
<head>
<title>MultiX Archived News</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link href="css/frame.css" media="screen" rel="stylesheet" type="text/css" />
<link href="css/controls.css" media="screen" rel="stylesheet" type="text/css" />
<link href="css/custom.css" media="screen" rel="stylesheet" type="text/css" />
<link href='https://fonts.googleapis.com/css?family=Open+Sans:400,700' rel='stylesheet' type='text/css'>
<link href='https://fonts.googleapis.com/css?family=Open+Sans+Condensed:300,700' rel='stylesheet' type='text/css'>
<link href="https://fonts.googleapis.com/css?family=Source+Sans+Pro:400,700" rel="stylesheet">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script src="js/menu.js"></script>
</head>
<body>
<div class="menu-container"></div>
<div class="content-container">
<div class="content">
<div class="content-table flex-column">
<div class="flex-row">
<div class="flex-item flex-column publications">
<h2>Archived News 2024</h2>
<hr>
<ul>
<!-------------------------------------------------------------------------------------------->
<li>
<p class="text-small-margin">
<span style="color: #1b7677;">[June 2024]</span>
Tim Alpherts' paper "<a href="https://dl.acm.org/doi/10.1145/3630106.3658976">Perceptive Visual Urban Analytics is Not (Yet) Suitable for Municipalities</a>" has been accepted and published at the ACM FAccT Conference.
</p>
</li>
<!-------------------------------------------------------------------------------------------->
<li>
<p class="text-small-margin">
<span style="color: #1b7677;">[February 2024]</span>
Yen-Chia Hsu's project "Citizen Science for Spotting Toxic Clouds with AI" is funded by the Dutch NWO's Open Science Fund (€41,721).
</p>
</li>
<!-------------------------------------------------------------------------------------------->
<li>
<p class="text-small-margin">
<span style="color: #1b7677;">[January 2024]</span>
UvA hosted the <a href="https://www.mmm2024.org/">MMM 2024 conference</a>, and the MultiX lab played an important role in the organization!
</p>
</li>
<!-------------------------------------------------------------------------------------------->
<li>
<p class="text-small-margin">
<span style="color: #1b7677;">[December 2023]</span>
MultiX is represented at NeurIPS 2023 (<a href="https://arxiv.org/abs/2310.18713">"Episodic Multi-Task Learning with Heterogeneous Neural Processes"</a>).
</p>
</li>
<!-------------------------------------------------------------------------------------------->
<li>
<p class="text-small-margin">
<span style="color: #1b7677;">[December 2023]</span>
Yen-Chia Hsu's previous work was featured on the Dutch TV program EenVandaag (<a href="https://eenvandaag.avrotros.nl/item/eenvandaag-15-12-2023/">link to the TV program, 00:08 to 11:40</a>, and <a href="https://eenvandaag.avrotros.nl/item/gezondheid-omwonenden-verbetert-meteen-als-kooksfabriek-zoals-die-van-tata-steel-sluit-toont-amerikaans-onderzoek-aan/">link to the news article</a>).
</p>
</li>
<!-------------------------------------------------------------------------------------------->
<li>
<p class="text-small-margin">
<span style="color: #1b7677;">[October 2023]</span>
MultiX is represented at ICCV 2023 (<a href="https://openaccess.thecvf.com/content/ICCV2023/html/van_Noord_Protoype-based_Dataset_Comparison_ICCV_2023_paper.html">"Prototype-based Dataset Comparison"</a> and <a href="https://openaccess.thecvf.com/content/ICCV2023/html/Long_Cross-modal_Scalable_Hierarchical_Clustering_in_Hyperbolic_space_ICCV_2023_paper.html">"Cross-modal Scalable Hierarchical Clustering in Hyperbolic Space"</a>).
</p>
</li>
</ul>
<h2>Archived News 2023</h2>
<hr>
<ul>
<!-------------------------------------------------------------------------------------------->
<li>
<p class="text-small-margin">
<span style="color: #1b7677;">[June 2023]</span>
Two papers (<a href="https://openaccess.thecvf.com/content/CVPR2023W/XAI4CV/html/Gulshad_Hierarchical_Explanations_for_Video_Action_Recognition_CVPRW_2023_paper.html">"Hierarchical Explanations for Video Action Recognition"</a>, <a href="https://openaccess.thecvf.com/content/CVPR2023W/NFVLR/html/Huang_Causalainer_Causal_Explainer_for_Automatic_Video_Summarization_CVPRW_2023_paper.html">"Causalainer: Causal Explainer for Automatic Video Summarization"</a>) have been published by
Sadaf Gulshad and Jia-Hong Huang in the workshops of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
</p>
</li>
<!-------------------------------------------------------------------------------------------->
<li>
<p class="text-small-margin">
<span style="color: #1b7677;">[June 2023]</span>
A paper (<a href="https://ieeexplore.ieee.org/document/10144782">"PanorAMS: Automatic Annotation for Detecting Objects in Urban Context"</a>) has been published by Inske Groenen in the IEEE Transactions on Multimedia journal.
</p>
</li>
<!-------------------------------------------------------------------------------------------->
<li>
<p class="text-small-margin">
<span style="color: #1b7677;">[March 2023]</span>
We released an <a href="https://multix.io/data-science-book-uva/">open source data science course</a> for the UvA Bachelor Informatiekunde Program.
</p>
</li>
<!-------------------------------------------------------------------------------------------->
<li>
<p class="text-small-margin">
<span style="color: #1b7677;">[February 2023]</span>
A paper (<a href="https://www.sciencedirect.com/science/article/pii/S1077314223000024">"MATTE: Multi-task multi-scale attention"</a>) has been published by Gjorgji Strezoski in the Computer Vision and Image Understanding journal.
</p>
</li>
<!-------------------------------------------------------------------------------------------->
<li>
<p class="text-small-margin">
<span style="color: #1b7677;">[February 2023]</span>
A paper (<a href="https://openreview.net/forum?id=3oWo92cQyxL">"Meta Learning to Bridge Vision and Language Models for Multimodal Few-Shot Learning"</a>) has been published by Ivona Najdenkoska in the ICLR conference.
</p>
</li>
<!-------------------------------------------------------------------------------------------->
<li>
<p class="text-small-margin">
<span style="color: #1b7677;">[January 2023]</span>
A paper (<a href="https://openaccess.thecvf.com/content/WACV2023/html/Wu_Expert-Defined_Keywords_Improve_Interpretability_of_Retinal_Image_Captioning_WACV_2023_paper.html">"Expert-Defined Keywords Improve Interpretability of Retinal Image Captioning"</a>) has been published by Jia-Hong Huang in the WACV conference.
</p>
</li>
<!-------------------------------------------------------------------------------------------->
<li>
<p class="text-small-margin">
<span style="color: #1b7677;">[January 2023]</span>
A paper (<a href="https://arxiv.org/abs/2210.06980">"Probabilistic Integration of Object Level Annotations in Chest X-ray Classification"</a>) has been published by Tom van Sonsbeek in the WACV conference.
</p>
</li>
<!-------------------------------------------------------------------------------------------->
<li>
<p class="text-small-margin">
<span style="color: #1b7677;">[January 2023]</span>
A paper (<a href="https://arxiv.org/abs/2210.04637">"Association Graph Learning for Multi-Task Classification with Category Shifts"</a>) has been published by Jiayi Shen in the NeurIPS conference.
</p>
</li>
<!-------------------------------------------------------------------------------------------->
</ul>
</div>
</div>
<div class="flex-row">
<div class="flex-item flex-column publications">
<h2>Archived News 2022</h2>
<hr>
<ul>
<!-------------------------------------------------------------------------------------------->
<li>
<p class="text-small-margin">
<span style="color: #1b7677;">[October 2022]</span>
A paper (<a href="https://www.sciencedirect.com/science/article/pii/S1361841522002341">"Uncertainty-aware report generation for chest X-rays by variational topic inference"</a>) published by Ivona Najdenkoska was runner-up for the best paper award of the Medical Image Analysis journal.
</p>
</li>
<!-------------------------------------------------------------------------------------------->
<li>
<p class="text-small-margin">
<span style="color: #1b7677;">[September 2022]</span>
Marcel Worring received a grant (€1.5M) from NWO for the project "AI4Intelligence: from Multimodal Data to Trustworthy Evidence in Court". <a href="https://ivi.uva.nl/content/news/2022/05/ai4intelligence-project-granted.html?origin=AbIBW%2F3BT%2FqqidVpq41UEg">More information here</a>.
</p>
</li>
<!-------------------------------------------------------------------------------------------->
<li>
<p class="text-small-margin">
<span style="color: #1b7677;">[September 2022]</span>
Our group has moved from Science Park 904 to the new LAB42 building nearby (Science Park 900).
</p>
</li>
<!-------------------------------------------------------------------------------------------->
<li>
<p class="text-small-margin">
<span style="color: #1b7677;">[September 2022]</span>
Nanne van Noord received a grant (€387K) from ClickNL for the AI4FILM project, which develops novel AI techniques tailored to film by learning from analysis, production practice, and theory.
</p>
</li>
<!-------------------------------------------------------------------------------------------->
<li>
<p class="text-small-margin">
<span style="color: #1b7677;">[September 2022]</span>
Yen-Chia Hsu participated in the <a href="https://www.heidelberg-laureate-forum.org/forum/9th-hlf-2022.html">2022 Heidelberg Laureate Forum</a>.
</p>
</li>
<!-------------------------------------------------------------------------------------------->
<li>
<p class="text-small-margin">
<span style="color: #1b7677;">[September 2022]</span>
Yen-Chia Hsu was invited to a panel discussion at the <a href="https://intgovforum.org/en/content/igf-2022-town-hall-37-beyond-the-opacity-excuse-ai-transparency-and-communities">2022 Internet Governance Forum</a>, where he participated in a discussion on AI transparency and communities.
</p>
</li>
<!-------------------------------------------------------------------------------------------->
</ul>
</div>
</div>
</div>
</div>
</div>
</body>
</html>