<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Iro Armeni</title>
<link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet">
<link href="https://fonts.googleapis.com/css2?family=Playfair+Display:wght@600&family=Inter:wght@400;600&display=swap" rel="stylesheet">
<style>
body {
font-family: 'Inter', sans-serif;
}
h1, h2, h3 {
font-family: 'Playfair Display', serif;
}
</style>
</head>
<body class="bg-white text-gray-800 leading-relaxed">
<!-- Navbar -->
<header class="sticky top-0 bg-white shadow z-10">
<nav class="max-w-6xl mx-auto flex justify-between items-center px-6 py-4">
<h1 class="text-xl font-bold tracking-tight">Iro Armeni</h1>
<ul class="flex space-x-4 text-sm font-medium">
<li><a href="#about" class="hover:text-blue-600">About</a></li>
<li><a href="#teaching" class="hover:text-blue-600">Teaching</a></li>
<li><a href="#publications" class="hover:text-blue-600">Publications</a></li>
<li><a href="#datasets" class="hover:text-blue-600">Datasets</a></li>
<li><a href="#contact" class="hover:text-blue-600">Contact</a></li>
</ul>
</nav>
</header>
<!-- Hero -->
<section class="bg-gray-100 py-20 px-6 text-center">
<div class="max-w-3xl mx-auto">
<img src="files/images/iro.jpeg" alt="Iro Armeni" class="w-40 mx-auto rounded-full mb-4">
<h2 class="text-3xl font-semibold mb-4">Assistant Professor, Stanford University</h2>
<p class="text-gray-700 text-lg">Leading the <a href="https://gradientspaces.stanford.edu/" class="text-blue-600 underline">Gradient Spaces</a> group. I work at the intersection of civil engineering, architecture, and machine perception to design and construct data-driven environments across physical and digital space.</p>
<img src="files/images/GradientSpacesLogo.png" alt="Gradient Spaces group logo" style="width:50%;margin-left: 25%">
</div>
</section>
<!-- About -->
<section id="about" class="py-16 px-6">
<div class="max-w-4xl mx-auto space-y-6 text-lg">
<h2 class="text-2xl font-semibold mb-4">About</h2>
<p>My research focuses on developing quantitative, data-driven methods that learn from real-world visual data to generate, predict, and simulate new or renewed built environments that place the human at the center. My goal is to create sustainable, inclusive, and adaptive built environments that can support our current and future physical and digital needs.</p>
<p>As part of my research vision, I am particularly interested in creating spaces that blend from the 100% physical (real reality) to the 100% digital (virtual reality), and anything in between, using Mixed Reality.</p>
<p>Read more in <a href="https://medium.com/@iroarmeni/a-day-in-the-life-of-an-architect-in-the-gradient-world-70c1996710fe" class="text-blue-600 underline">A Day in the Life of an Architect in the Gradient World</a>.</p>
</div>
</section>
<!-- Education -->
<section id="education" class="bg-gray-100 py-16 px-6">
<div class="max-w-4xl mx-auto space-y-6 text-lg">
<h2 class="text-2xl font-semibold mb-4">Education</h2>
<ul class="text-sm space-y-4">
<li class="text-sm"><strong>Postdoctoral Researcher (2023)</strong>, ETH Zurich, DBAUG and DINFK, <i>w/ Prof. Daniel Hall, Prof. Catherine de Wolf, and Prof. Marc Pollefeys</i></li>
<li class="text-sm" style="margin:0px"><strong>Ph.D. (2020)</strong>, Civil and Environmental Engineering with Minor in Computer Science, Stanford University, <i>w/ Prof. Martin Fischer and Prof. Silvio Savarese</i></li>
<li class="text-sm" style="margin:0px"><strong>MSc (2013)</strong>, Computer Science, Ionian University</li>
<li class="text-sm" style="margin:0px"><strong>MEng (2011)</strong>, Architectural Engineering, University of Tokyo</li>
<li class="text-sm" style="margin:0px"><strong>Diploma (2009)</strong>, Architectural Engineering, National Technical University of Athens</li>
<li class="text-sm" style="margin:0px">Worked as an architect and consultant in both the private and public sectors.</li>
</ul>
</div>
<div class="max-w-4xl mx-auto space-y-6 text-lg" style="margin-top:20pt">
<h2 class="text-2xl font-semibold mb-4">Awards</h2>
<ul class="text-sm space-y-4">
<li class="text-sm"><strong>U. V. Helava Award - Best Paper 2025</strong>, ISPRS Journal of Photogrammetry and Remote Sensing journal-wide award for the best paper of 2025</li>
<li class="text-sm" style="margin:0px"><strong>2026 BuiltWorlds Maverick Award on Influence and Education</strong>, Professional recognition from BuiltWorlds for influence and education in the AEC industry</li>
<li class="text-sm" style="margin:0px"><strong>NVIDIA Academic Grant Program (Jan-Jun 2026)</strong>, Worldwide academic computing grant for research on "Multi-agent Video World Models"</li>
<li class="text-sm" style="margin:0px"><strong>Google Research Scholar Program (2025-26)</strong>, Worldwide early-career faculty funding for research on Machine Perception</li>
<li class="text-sm" style="margin:0px"><strong>ETH Zurich Postdoctoral Fellowship (2020-22)</strong>, University-level funding for postdoctoral studies on Machine Perception for Architecture, Construction, and Facility Management</li>
<li class="text-sm" style="margin:0px"><strong>Google Ph.D. Fellowship (2017-20)</strong>, Competitive funding across North America and Europe, for Ph.D. studies on Machine Perception</li>
<li class="text-sm" style="margin:0px"><strong>Stanford CIFE Seed Research Award (2016-17)</strong>, Department-level funding, for research on "Automated Semantic Understanding of Buildings"</li>
<li class="text-sm" style="margin:0px"><strong>Stanford School of Engineering Fellowship, Rick & Melinda Reed Grad. Fellowship (2015-16)</strong>, University-level funding, for Ph.D. studies</li>
<li class="text-sm" style="margin:0px"><strong>EU Marie-Curie Fellowship (2014-15)</strong>, For the project "Automated As-Built Modelling of the Built Infrastructure"</li>
<li class="text-sm" style="margin:0px"><strong>EU Marie-Curie Fellowship (2013-14)</strong>, For the project "BIMAutoGen"</li>
<li class="text-sm" style="margin:0px"><strong>Japanese Government Scholarship (MEXT) (2009-11)</strong>, Competitive, nation-level funding, for MEng degree</li>
<li class="text-sm" style="margin:0px"><strong>Erasmus Scholarship (Jan-Jun 2007)</strong>, The State Scholarships Foundation, EU, University-level funding, foreign exchange studies in ETSAM, Spain</li>
</ul>
</div>
</section>
<!-- Current Teaching -->
<section id="teaching" class="py-16 px-6">
<div class="max-w-4xl mx-auto">
<h2 class="text-2xl font-semibold mb-6">Current Teaching</h2>
<ul class="text-sm space-y-4">
<li><strong>Designing for Gradient Spaces</strong> — CEE 342, Stanford, Spring [<a href="https://gradientspaces.stanford.edu/lectures/designing-gradient-spaces" class="text-blue-600 underline">Website</a>]</li>
<li><strong>Computer Vision for the Built Environment</strong> — CEE 247C, Stanford, Winter [<a href="https://gradientspaces.stanford.edu/computer-vision-built-environment" class="text-blue-600 underline">Website</a>]</li>
<li><strong>AI Applications in AEC</strong> — CEE 329, Stanford, Spring</li>
</ul>
</div>
</section>
<!-- Publications -->
<section id="publications" class="bg-gray-100 py-16 px-6">
<div class="max-w-5xl mx-auto">
<h2 class="text-2xl font-semibold mb-6">Academic Publications</h2>
<div class="space-y-10">
<!-- ReScene4D -->
<div>
<h3 class="text-lg font-semibold">ReScene4D: Temporally Consistent Semantic Instance Segmentation of Evolving Indoor 3D Scenes</h3>
<p class="text-sm text-gray-700">Emily Steiner, Jianhao Zheng, Henry Howard-Jenkins, Chris Xie, <strong>Iro Armeni</strong></p>
<p class="text-sm text-gray-700">CVPR 2026</p>
<div class="space-x-4 text-sm mt-2">
<a href="https://arxiv.org/pdf/2601.11508" class="text-blue-600 underline">PDF</a>
</div>
</div>
<!-- GaussFusion -->
<div>
<h3 class="text-lg font-semibold">GaussFusion: Improving 3D Reconstruction in the Wild with Geometry-Informed Video Generator</h3>
<p class="text-sm text-gray-700">Liyuan Zhu, Manjunath Narayana, Michal Stary, Will Hutchcroft, Gordon Wetzstein, <strong>Iro Armeni</strong></p>
<p class="text-sm text-gray-700">CVPR 2026</p>
<div class="space-x-4 text-sm mt-2">
<!--a href="" class="text-blue-600 underline">PDF</a-->
</div>
</div>
<!-- WildPose -->
<div>
<h3 class="text-lg font-semibold">WildPose: A Unified Framework for Robust Pose Estimation in the Wild</h3>
<p class="text-sm text-gray-700">Jianhao Zheng, Liyuan Zhu, Zihan Zhu, <strong>Iro Armeni</strong></p>
<p class="text-sm text-gray-700">CVPR 2026</p>
<div class="space-x-4 text-sm mt-2">
<!--a href="" class="text-blue-600 underline">PDF</a-->
</div>
</div>
<!-- Deep Sketch -->
<div>
<h3 class="text-lg font-semibold">Deep Sketch-Based 3D Modeling: A Survey</h3>
<p class="text-sm text-gray-700">Alberto Tono, Jiajun Wu, Gordon Wetzstein, <strong>Iro Armeni</strong>, Hariharan Subramonyam, James Landay, and Martin Fischer</p>
<p class="text-sm text-gray-700">Computer Graphics Forum 2026</p>
<div class="space-x-4 text-sm mt-2">
<a href="https://arxiv.org/pdf/2603.03287" class="text-blue-600 underline">PDF</a>
</div>
</div>
<!-- 3D LLM Spatial Relationships -->
<div>
<h3 class="text-lg font-semibold">Do 3D Large Language Models Really Understand 3D Spatial Relationships?</h3>
<p class="text-sm text-gray-700">Xianzheng Ma, Tao Sun, Shuai Chen, Yash Bhalgat Sanjay, Jindong Gu, Angel X. Chang, <strong>Iro Armeni</strong>, Iro Laina, Songyou Peng, and Victor Adrian Prisacariu</p>
<p class="text-sm text-gray-700">ICLR 2026</p>
<div class="space-x-4 text-sm mt-2">
<a href="https://arxiv.org/pdf/2603.23523" class="text-blue-600 underline">PDF</a>
</div>
</div>
<!-- Rectified Point Flow -->
<div>
<h3 class="text-lg font-semibold">Rectified Point Flow: Generic Point Cloud Pose Estimation</h3>
<p class="text-sm text-gray-700">Tao Sun*, Liyuan Zhu*, Shengyu Huang, Shuran Song, <strong>Iro Armeni</strong></p>
<p class="text-sm text-gray-700">NeurIPS 2025 <i>[Spotlight]</i></p>
<div class="space-x-4 text-sm mt-2">
<a href="https://arxiv.org/abs/2506.05282" class="text-blue-600 underline">PDF</a>
<a href="https://rectified-pointflow.github.io/" class="text-blue-600 underline">Website</a>
</div>
</div>
<!-- GuideFlow3D -->
<div>
<h3 class="text-lg font-semibold">GuideFlow3D: Optimization-guided Rectified Flow for Appearance Transfer</h3>
<p class="text-sm text-gray-700">Sayan Deb Sarkar, Sinisa Stekovic, Vincent Lepetit, <strong>Iro Armeni</strong></p>
<p class="text-sm text-gray-700">NeurIPS 2025</p>
<div class="space-x-4 text-sm mt-2">
<a href="https://arxiv.org/pdf/2510.16136" class="text-blue-600 underline">PDF</a>
<a href="https://sayands.github.io/guideflow3d/" class="text-blue-600 underline">Website</a>
</div>
</div>
<!-- Facade Segmentation Solar PV -->
<div>
<h3 class="text-lg font-semibold">Facade Segmentation for Solar Photovoltaic Suitability</h3>
<p class="text-sm text-gray-700">Ayca Duran, Christoph Waibel, Bernd Bickel, <strong>Iro Armeni</strong>, Arno Schlueter</p>
<p class="text-sm text-gray-700">Tackling Climate Change with Machine Learning, Workshop in NeurIPS 2025</p>
<div class="space-x-4 text-sm mt-2">
<a href="https://arxiv.org/pdf/2511.18882" class="text-blue-600 underline">PDF</a>
</div>
</div>
<!-- HouseTour -->
<div>
<h3 class="text-lg font-semibold">HouseTour: A Virtual Real Estate A(I)gent</h3>
<p class="text-sm text-gray-700">Ata Celen, Marc Pollefeys, Daniel Bela Barath, <strong>Iro Armeni</strong></p>
<p class="text-sm text-gray-700">ICCV 2025</p>
<div class="space-x-4 text-sm mt-2">
<a href="https://arxiv.org/pdf/2510.18054" class="text-blue-600 underline">PDF</a>
<a href="https://house-tour.github.io/" class="text-blue-600 underline">Website</a>
</div>
</div>
<!-- ReSpace -->
<div>
<h3 class="text-lg font-semibold">ReSpace: Text-Driven 3D Scene Synthesis and Editing with Preference Alignment</h3>
<p class="text-sm text-gray-700">Martin JJ. Bucher, <strong>Iro Armeni</strong></p>
<p class="text-sm text-gray-700">arXiv preprint</p>
<div class="space-x-4 text-sm mt-2">
<a href="https://arxiv.org/abs/2506.02459" class="text-blue-600 underline">PDF</a>
<a href="https://respace.mnbucher.com/" class="text-blue-600 underline">Website</a>
<a href="https://www.youtube.com/watch?v=2IMHWJqgDPg&ab_channel=GradientSpacesResearchGroup" class="text-blue-600 underline">Video</a>
</div>
</div>
<!-- ReStyle3D -->
<div>
<h3 class="text-lg font-semibold">ReStyle3D: Scene-level Appearance Transfer with Semantic Correspondences</h3>
<p class="text-sm text-gray-700">Liyuan Zhu, Shengqu Cai*, Shengyu Huang*, Gordon Wetzstein, Naji Khosravan, <strong>Iro Armeni</strong></p>
<p class="text-sm text-gray-700">ACM SIGGRAPH 2025</p>
<div class="space-x-4 text-sm mt-2">
<a href="https://arxiv.org/abs/2502.10377" class="text-blue-600 underline">PDF</a>
<a href="https://restyle3d.github.io/" class="text-blue-600 underline">Website</a>
<a href="https://www.youtube.com/watch?v=93FkXriWv2w&ab_channel=GradientSpacesResearchGroup" class="text-blue-600 underline">Video</a>
</div>
</div>
<!-- WildGS-SLAM -->
<div>
<h3 class="text-lg font-semibold">WildGS-SLAM: Monocular Gaussian Splatting SLAM in Dynamic Environments</h3>
<p class="text-sm text-gray-700">Jianhao Zheng*, Zihan Zhu*, Valentin Bieri, Marc Pollefeys, Songyou Peng, <strong>Iro Armeni</strong></p>
<p class="text-sm text-gray-700">CVPR 2025</p>
<div class="space-x-4 text-sm mt-2">
<a href="https://arxiv.org/abs/2504.03886" class="text-blue-600 underline">PDF</a>
<a href="https://wildgs-slam.github.io/" class="text-blue-600 underline">Website</a>
<a href="https://www.youtube.com/watch?v=xXuolzFvddQ&t=11s&ab_channel=GradientSpacesResearchGroup" class="text-blue-600 underline">Video</a>
</div>
</div>
<!-- CrossOver -->
<div>
<h3 class="text-lg font-semibold">CrossOver: Scene Cross-Modal Alignment</h3>
<p class="text-sm text-gray-700">Sayan Deb Sarkar, Ondrej Miksik, Marc Pollefeys, Dániel Barath, <strong>Iro Armeni</strong></p>
<p class="text-sm text-gray-700">CVPR 2025 <i>[Highlight]</i></p>
<div class="space-x-4 text-sm mt-2">
<a href="https://arxiv.org/pdf/2502.15011" class="text-blue-600 underline">PDF</a>
<a href="https://sayands.github.io/crossover/" class="text-blue-600 underline">Website</a>
<a href="https://www.youtube.com/watch?v=8SEoQyaHuKs" class="text-blue-600 underline">Video</a>
</div>
</div>
<!-- LoopSplat -->
<div>
<h3 class="text-lg font-semibold">LoopSplat: Loop Closure by Registering 3D Gaussian Splats</h3>
<p class="text-sm text-gray-700">Liyuan Zhu, Yue Li, Erik Sandström, Shengyu Huang, Konrad Schindler, <strong>Iro Armeni</strong></p>
<p class="text-sm text-gray-700">3DV 2025 <i>[Oral Presentation]</i></p>
<div class="space-x-4 text-sm mt-2">
<a href="https://arxiv.org/abs/2408.10154" class="text-blue-600 underline">PDF</a>
<a href="https://loopsplat.github.io/" class="text-blue-600 underline">Website</a>
</div>
</div>
<!-- Multi-Hexplanes -->
<div>
<h3 class="text-lg font-semibold">Multi-Hexplanes: A Lightweight Map Representation for Rendering and 3D Reconstruction</h3>
<p class="text-sm text-gray-700">Jianhao Zheng, Gabor Valasek, Daniel Barath, <strong>Iro Armeni</strong></p>
<p class="text-sm text-gray-700">WACV 2025 <i>[Oral Presentation]</i></p>
<div class="space-x-4 text-sm mt-2">
<a href="https://openaccess.thecvf.com/content/WACV2025/papers/Zheng_Multi-HexPlanes_A_Lightweight_Map_Representation_for_Rendering_and_3D_Reconstruction_WACV_2025_paper.pdf" class="text-blue-600 underline">PDF</a>
</div>
</div>
<!-- MAP-ADAPT -->
<div>
<h3 class="text-lg font-semibold">MAP-ADAPT: Real-Time Quality-Adaptive Semantic 3D Maps</h3>
<p class="text-sm text-gray-700">Jianhao Zheng, Daniel Barath, Marc Pollefeys, <strong>Iro Armeni</strong></p>
<p class="text-sm text-gray-700">ECCV 2024</p>
<div class="space-x-4 text-sm mt-2">
<a href="https://arxiv.org/pdf/2406.05849" class="text-blue-600 underline">PDF</a>
<a href="https://www.youtube.com/watch?v=MB2D2j-rJ8E&ab_channel=JianhaoZheng" class="text-blue-600 underline">Video</a>
<a href="https://map-adapt.github.io/" class="text-blue-600 underline">Website</a>
</div>
</div>
<!-- Where Am I? -->
<div>
<h3 class="text-lg font-semibold">"Where am I?" Scene Retrieval with Language</h3>
<p class="text-sm text-gray-700">Jiaqi Chen, Daniel Barath, <strong>Iro Armeni</strong>, Marc Pollefeys, Hermann Blum</p>
<p class="text-sm text-gray-700">ECCV 2024</p>
<div class="space-x-4 text-sm mt-2">
<!--a href="" class="text-blue-600 underline">PDF</a-->
</div>
</div>
<!-- I-Design -->
<div>
<h3 class="text-lg font-semibold">I-Design: Personalized LLM Interior Designer</h3>
<p class="text-sm text-gray-700">Ata Çelen, Guo Han, Konrad Schindler, Luc Van Gool, <strong>Iro Armeni*</strong>, Anton Obukhov*, Xi Wang*</p>
<p class="text-sm text-gray-700">CV4Metaverse, Workshop in ECCV 2024</p>
<div class="space-x-4 text-sm mt-2">
<a href="https://arxiv.org/pdf/2404.02838" class="text-blue-600 underline">PDF</a>
<a href="https://www.youtube.com/watch?v=Qx2Z3rPb5k0&feature=youtu.be" class="text-blue-600 underline">Video</a>
<a href="https://atcelen.github.io/I-Design/" class="text-blue-600 underline">Website</a>
</div>
</div>
<!-- NSS -->
<div>
<h3 class="text-lg font-semibold">Nothing Stands Still: A Spatiotemporal Benchmark on 3D Point Cloud Registration Under Large Geometric and Temporal Change</h3>
<p class="text-sm text-gray-700">Tao Sun, Yan Hao, Shengyu Huang, Silvio Savarese, Konrad Schindler, Marc Pollefeys, <strong>Iro Armeni</strong></p>
<p class="text-sm text-gray-700">ISPRS Journal of Photogrammetry and Remote Sensing 2025</p>
<div class="space-x-4 text-sm mt-2">
<a href="https://arxiv.org/pdf/2311.09346" class="text-blue-600 underline">PDF</a>
<a href="https://www.nothing-stands-still.com/" class="text-blue-600 underline">Website</a>
</div>
</div>
<!-- Living Scenes -->
<div>
<h3 class="text-lg font-semibold">Living Scenes: Multi-object Relocalization and Reconstruction in Changing 3D Environments</h3>
<p class="text-sm text-gray-700">Liyuan Zhu, Shengyu Huang, Konrad Schindler, <strong>Iro Armeni</strong></p>
<p class="text-sm text-gray-700">CVPR 2024 <i>[Highlight]</i></p>
<div class="space-x-4 text-sm mt-2">
<a href="https://arxiv.org/abs/2312.09138" class="text-blue-600 underline">PDF</a>
<a href="https://www.youtube.com/watch?v=U4tCXFGDhWk&ab_channel=LiyuanZhu" class="text-blue-600 underline">Video</a>
<a href="https://www.zhuliyuan.net/livingscenes" class="text-blue-600 underline">Website</a>
</div>
</div>
<!-- Multiway PC -->
<div>
<h3 class="text-lg font-semibold">Multiway Point Cloud Mosaicking with Diffusion and Global Optimization</h3>
<p class="text-sm text-gray-700">Shengze Jin, <strong>Iro Armeni</strong>, Marc Pollefeys, Daniel Barath</p>
<p class="text-sm text-gray-700">CVPR 2024</p>
<div class="space-x-4 text-sm mt-2">
<a href="https://arxiv.org/pdf/2404.00429" class="text-blue-600 underline">PDF</a>
<a href="https://www.youtube.com/watch?v=dnzhKfPIoWg&ab_channel=ShengzeJin" class="text-blue-600 underline">Video</a>
</div>
</div>
<!-- Semantically Guided Feature Matching for Visual SLAM -->
<div>
<h3 class="text-lg font-semibold">Semantically Guided Feature Matching for Visual SLAM</h3>
<p class="text-sm text-gray-700">Oguzhan Ilter, <strong>Iro Armeni</strong>, Marc Pollefeys, Daniel Barath</p>
<p class="text-sm text-gray-700">ICRA 2024</p>
<div class="space-x-4 text-sm mt-2">
<a href="https://ieeexplore.ieee.org/document/10610238" class="text-blue-600 underline">PDF</a>
</div>
</div>
<!-- Volumetric Semantically Consistent 3D Panoptic Mapping -->
<div>
<h3 class="text-lg font-semibold">Volumetric Semantically Consistent 3D Panoptic Mapping</h3>
<p class="text-sm text-gray-700">Yang Miao, <strong>Iro Armeni</strong>, Marc Pollefeys, Daniel Barath</p>
<p class="text-sm text-gray-700">IROS 2024 <i>[Oral Presentation]</i></p>
<div class="space-x-4 text-sm mt-2">
<a href="https://browse.arxiv.org/pdf/2309.14737.pdf" class="text-blue-600 underline">PDF</a>
</div>
</div>
<!-- QReg -->
<div>
<h3 class="text-lg font-semibold">Q-REG: End-to-End Trainable Point Cloud Registration with Surface Curvature</h3>
<p class="text-sm text-gray-700">Shengze Jin, Daniel Barath, Marc Pollefeys, <strong>Iro Armeni</strong></p>
<p class="text-sm text-gray-700">3DV 2024</p>
<div class="space-x-4 text-sm mt-2">
<a href="https://browse.arxiv.org/pdf/2309.16023.pdf" class="text-blue-600 underline">PDF</a>
</div>
</div>
<!-- SGAligner -->
<div>
<h3 class="text-lg font-semibold">SGAligner: 3D Scene Alignment with Scene Graphs</h3>
<p class="text-sm text-gray-700">Sayan Deb Sarkar, Ondrej Miksik, Marc Pollefeys, Daniel Barath, <strong>Iro Armeni</strong></p>
<p class="text-sm text-gray-700">ICCV 2023</p>
<div class="space-x-4 text-sm mt-2">
<a href="https://arxiv.org/pdf/2304.14880.pdf" class="text-blue-600 underline">PDF</a>
<a href="https://sayandebsarkar.com/sgaligner" class="text-blue-600 underline">Website</a>
</div>
</div>
<!-- ARrow -->
<div>
<h3 class="text-lg font-semibold">ARrow: A Real-Time AR Rowing Coach</h3>
<p class="text-sm text-gray-700">Elena Iannuci, Zhu-Tian Chen, <strong>Iro Armeni</strong>, Marc Pollefeys, Hanspeter Pfister, Johanna Beyer</p>
<p class="text-sm text-gray-700">EuroVis 2023 <i>[Best Short Paper Honorable Mention Award]</i></p>
<div class="space-x-4 text-sm mt-2">
<a href="http://arxiv.org/abs/2305.02398" class="text-blue-600 underline">PDF</a>
</div>
</div>
<!-- Learning-Based Relational Object Matching Across Views -->
<div>
<h3 class="text-lg font-semibold">Learning-Based Relational Object Matching Across Views</h3>
<p class="text-sm text-gray-700">Cathrin Elich, <strong>Iro Armeni</strong>, Martin R. Oswald, Marc Pollefeys, Joerg Stueckler</p>
<p class="text-sm text-gray-700">ICRA 2023</p>
<div class="space-x-4 text-sm mt-2">
<a href="http://arxiv.org/abs/2305.02398" class="text-blue-600 underline">PDF</a>
</div>
</div>
<!-- HoloLabel -->
<div>
<h3 class="text-lg font-semibold">HoloLabel: Augmented Reality User-In-The-Loop Online Annotation Tool for As-Is Building Information</h3>
<p class="text-sm text-gray-700">Dhruv Agrawal*, Janik Lobsiger*, Jessica Bo, Véronique Kaufmann, <strong>Iro Armeni</strong></p>
<p class="text-sm text-gray-700">EC3 2022</p>
<div class="space-x-4 text-sm mt-2">
<a href="https://ec-3.org/publications/conferences/EC32022/papers/EC32022_174.pdf" class="text-blue-600 underline">PDF</a>
</div>
</div>
<!-- SemSpray -->
<div>
<h3 class="text-lg font-semibold">SemSpray: Virtual Reality As-Is Semantic Information Labeling Tool for 3D Spatial Data</h3>
<p class="text-sm text-gray-700">Yiming Zhao*, Cyprien Fol*, Yuchang Jiang, Tianyu Wu, <strong>Iro Armeni</strong></p>
<p class="text-sm text-gray-700">EC3 2022</p>
<div class="space-x-4 text-sm mt-2">
<a href="https://ec-3.org/publications/conferences/EC32022/papers/EC32022_175.pdf" class="text-blue-600 underline">PDF</a>
</div>
</div>
<!-- Implicity -->
<div>
<h3 class="text-lg font-semibold">ImpliCity: City Modeling From Satellite Images with Deep Implicit Occupancy Fields</h3>
<p class="text-sm text-gray-700">Corinne Stucker, Bingxin Ke, Yuanwen Yue, Shengyu Huang, <strong>Iro Armeni</strong>, Konrad Schindler</p>
<p class="text-sm text-gray-700">ISPRS Congress 2022 <i>[Best Young Author Award]</i></p>
<div class="space-x-4 text-sm mt-2">
<a href="https://arxiv.org/pdf/2201.09968.pdf" class="text-blue-600 underline">PDF</a>
</div>
</div>
<!-- Robust Policies -->
<div>
<h3 class="text-lg font-semibold">Robust Policies via Mid-Level Visual Representations: An Experimental Study in Manipulation and Navigation</h3>
<p class="text-sm text-gray-700">Bryan Chen*, Alexander Sax*, Gene Lewis, <strong>Iro Armeni</strong>, Silvio Savarese, Amir Zamir, Jitendra Malik, Lerrel Pinto</p>
<p class="text-sm text-gray-700">CoRL 2020</p>
<div class="space-x-4 text-sm mt-2">
<a href="https://arxiv.org/pdf/2011.06698.pdf" class="text-blue-600 underline">PDF</a>
</div>
</div>
<!-- 3DSG -->
<div>
<h3 class="text-lg font-semibold">3D Scene Graph: A Structure for Unified Semantics, 3D Space, and Camera</h3>
<p class="text-sm text-gray-700"><strong>Iro Armeni</strong>, Jerry Zhi-Yang He, JunYoung Gwak, Amir R. Zamir, Martin Fischer, Jitendra Malik, Silvio Savarese</p>
<p class="text-sm text-gray-700">ICCV 2019</p>
<div class="space-x-4 text-sm mt-2">
<a href="https://3dscenegraph.stanford.edu/images/3DSceneGraph.pdf" class="text-blue-600 underline">PDF</a>
<a href="https://3dscenegraph.stanford.edu/" class="text-blue-600 underline">Website</a>
</div>
</div>
<!-- SegCloud -->
<div>
<h3 class="text-lg font-semibold">SEGCloud: Semantic Segmentation of 3D Point Clouds</h3>
<p class="text-sm text-gray-700">Lyne P. Tchapmi, Christopher B. Choy, <strong>Iro Armeni</strong>, JunYoung Gwak, Silvio Savarese</p>
<p class="text-sm text-gray-700">3DV 2017 <i>[Spotlight Presentation]</i></p>
<div class="space-x-4 text-sm mt-2">
<a href="https://arxiv.org/pdf/1710.07563.pdf" class="text-blue-600 underline">PDF</a>
<a href="http://segcloud.stanford.edu" class="text-blue-600 underline">Website</a>
</div>
</div>
<!-- 2D-3D-S -->
<div>
<h3 class="text-lg font-semibold">Joint 2D-3D-Semantic Data for Indoor Scene Understanding</h3>
<p class="text-sm text-gray-700"><strong>Iro Armeni*</strong>, Alexander Sax*, Amir R. Zamir, Silvio Savarese</p>
<p class="text-sm text-gray-700">Technical Report 2017</p>
<div class="space-x-4 text-sm mt-2">
<a href="http://buildingparser.stanford.edu/images/2D-3D-S_2017.pdf" class="text-blue-600 underline">PDF</a>
<a href="http://buildingparser.stanford.edu/dataset.html" class="text-blue-600 underline">Website</a>
</div>
</div>
<!-- Building Parser -->
<div>
<h3 class="text-lg font-semibold">3D Semantic Parsing of Large-Scale Indoor Spaces</h3>
<p class="text-sm text-gray-700"><strong>Iro Armeni</strong>, Ozan Sener, Amir R. Zamir, Helen Jiang, Ioannis Brilakis, Martin Fischer, Silvio Savarese</p>
<p class="text-sm text-gray-700">CVPR 2016 <i>[Oral Presentation]</i></p>
<div class="space-x-4 text-sm mt-2">
<a href="http://buildingparser.stanford.edu/images/3D_Semantic_Parsing.pdf" class="text-blue-600 underline">PDF</a>
<a href="https://youtu.be/PB-Ach697Bc" class="text-blue-600 underline">Video</a>
<a href="http://buildingparser.stanford.edu/" class="text-blue-600 underline">Website</a>
</div>
</div>
<!-- State of Research in Automatic As-Built Modelling -->
<div>
<h3 class="text-lg font-semibold">State of Research in Automatic As-Built Modelling</h3>
<p class="text-sm text-gray-700">Viorica Pătrăucean, <strong>Iro Armeni</strong>, Mohammad Nahangi, Jamie Yeung, Ioannis Brilakis, Carl Haas</p>
<p class="text-sm text-gray-700">Advanced Engineering Informatics 2015</p>
<div class="space-x-4 text-sm mt-2">
<a href="https://www.sciencedirect.com/science/article/pii/S1474034615000026" class="text-blue-600 underline">PDF</a>
</div>
</div>
<!-- Dynamic Identification -->
<div>
<h3 class="text-lg font-semibold">A dynamic identification of a historical building using accelerometers with interface modules and a digital synchronization method</h3>
<p class="text-sm text-gray-700">Luigi Spedicato, <strong>Iro Armeni</strong>, Nicola Ivan Giannoccaro, Markos Avlonitis, Sozon Papavlasopoulos</p>
<p class="text-sm text-gray-700">Periodical of Key Engineering Materials 2015</p>
<div class="space-x-4 text-sm mt-2">
<a href="https://www.scientific.net/kem.628.204" class="text-blue-600 underline">PDF</a>
</div>
</div>
<!-- Pedestrian Navigation -->
<div>
<h3 class="text-lg font-semibold">Pedestrian navigation and shortest path: Preference versus distance</h3>
<p class="text-sm text-gray-700"><strong>Iro Armeni</strong>, Konstantinos Chorianopoulos</p>
<p class="text-sm text-gray-700">Workshop, International Conference on Intelligent Environments 2013</p>
<div class="space-x-4 text-sm mt-2">
<a href="https://ebooks.iospress.nl/doi/10.3233/978-1-61499-286-8-647" class="text-blue-600 underline">PDF</a>
</div>
</div>
<!-- Coderch -->
<div>
<h3 class="text-lg font-semibold">Resume of the thesis "More than a machine" by J. A. Coderch</h3>
<p class="text-sm text-gray-700"><strong>Iro Armeni*</strong>, Telesilla Bristogianni*</p>
<p class="text-sm text-gray-700">Technical Chronicles, Technical Chamber of Greece 2020</p>
</div>
<!-- Add more publications similarly -->
</div>
</div>
</section>
<!-- Books&Chapters -->
<section id="chapters" class="py-16 px-6">
<div class="max-w-5xl mx-auto">
<h2 class="text-2xl font-semibold mb-6">Books & Chapters</h2>
<div class="space-y-10">
<!-- AI for Reuse -->
<div>
<h3 class="text-lg font-semibold">Artificial Intelligence for Predicting Reuse Patterns</h3>
<p class="text-sm text-gray-700"><strong>Iro Armeni</strong>, Deepika Raghu, Catherine De Wolf</p>
<p class="text-sm text-gray-700"><i>in "A Circular Built Environment in the Digital Age"</i></p>
<p class="text-sm text-gray-700">Eds. Catherine De Wolf, Sultan Çetin, Nancy Bocken</p>
<p class="text-sm text-gray-700"><i>Springer Nature</i></p>
<div class="space-x-4 text-sm mt-2">
<a href="https://link.springer.com/book/10.1007/978-3-031-39675-5" class="text-blue-600 underline">link</a>
</div>
</div>
</div>
</div>
</section>
<!-- Datasets -->
<section id="datasets" class="bg-gray-100 py-16 px-6">
<div class="max-w-5xl mx-auto">
<h2 class="text-2xl font-semibold mb-6">Datasets</h2>
<ul class="text-sm space-y-4">
<li><strong>HouseTour</strong>, 2025 [<a href="https://house-tour.github.io/" class="text-blue-600 underline">Website</a>]</li>
<li><strong>Nothing Stands Still (NSS)</strong>, 2024 [<a href="https://www.nothing-stands-still.com/" class="text-blue-600 underline">Website</a>]</li>
<li><strong>3D Scene Graph</strong>, 2019 [<a href="https://3dscenegraph.stanford.edu/" class="text-blue-600 underline">Website</a>]</li>
<li><strong>Stanford 2D-3D-Semantics Dataset (2D-3D-S)</strong>, 2017 [<a href="http://buildingparser.stanford.edu/dataset.html" class="text-blue-600 underline">Website</a>]</li>
<li><strong>Stanford Large-Scale 3D Indoor Spaces Dataset (S3DIS)</strong>, 2016 [<a href="http://buildingparser.stanford.edu/dataset.html" class="text-blue-600 underline">Website</a>]</li>
</ul>
</div>
</section>
<!-- Contact -->
<section id="contact" class="py-16 px-6">
<div class="max-w-xl mx-auto text-center">
<h2 class="text-2xl font-semibold mb-4">Contact</h2>
<p class="text-sm mb-2">📧 <a href="mailto:iarmeni@stanford.edu" class="text-blue-600 underline">iarmeni@stanford.edu</a></p>
<p class="text-sm mb-2">📍 473 Via Ortega, Stanford, Room 233</p>
<div class="mt-4 flex justify-center space-x-6 text-gray-600">
<a href="https://scholar.google.com/citations?hl=en&user=m2oTZkIAAAAJ" class="text-sm hover:text-black">Google Scholar</a>
<a href="https://twitter.com/ir0armeni" class="text-sm hover:text-black">Twitter</a>
<a href="https://www.linkedin.com/in/iro-armeni-a4414861" class="text-sm hover:text-black">LinkedIn</a>
<a href="https://bsky.app/profile/ir0armeni.bsky.social" class="text-sm hover:text-black">BlueSky</a>
</div>
</div>
</section>
<footer class="text-center text-xs py-6 text-gray-500">
© 2025 Iro Armeni. All rights reserved.
</footer>
</body>
</html>