Selected Publications

This paper presents the world's first collection of datasets recorded with an event-based camera for high-speed robotics. The data also include intensity images, inertial measurements, and ground truth from a motion-capture system. An event-based camera is a revolutionary vision sensor with three key advantages: a measurement rate almost 1 million times faster than standard cameras, a latency of 1 microsecond, and a high dynamic range of 130 decibels (compared with 60 dB for standard cameras). These properties enable the design of a new class of algorithms for high-speed robotics, where standard cameras suffer from motion blur and high latency. All the data are released both as text files and as binary (i.e., rosbag) files.
In IJRR’17
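
As a rough illustration of how the text-file data might be consumed, here is a minimal Python loader. It is a sketch, not the dataset's official tooling: it assumes each line of an events file holds "timestamp x y polarity" (a common convention for event-camera data), and the names Event and read_events are illustrative.

    # Minimal sketch of a loader for event data stored as text files.
    # Assumed line format (not stated above): "timestamp x y polarity",
    # e.g. "0.003811 52 41 1".

    from typing import Iterator, NamedTuple


    class Event(NamedTuple):
        t: float       # timestamp in seconds
        x: int         # pixel column
        y: int         # pixel row
        polarity: int  # 1 for brightness increase, 0 (or -1) for decrease


    def read_events(path: str) -> Iterator[Event]:
        """Yield events one by one, preserving their temporal order."""
        with open(path) as f:
            for line in f:
                fields = line.split()
                if len(fields) != 4:
                    continue  # skip headers or malformed lines
                t, x, y, p = fields
                yield Event(float(t), int(x), int(y), int(p))


    # Example: print the first event of a recording.
    for ev in read_events("events.txt"):
        print(ev)
        break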

In this paper, we introduce the problem of Event-based Multi-View Stereo (EMVS) for event cameras and propose a solution to it. Unlike traditional MVS methods, which address the problem of estimating dense 3D structure from a set of known viewpoints, EMVS estimates semi-dense 3D structure from an event camera with a known trajectory. Our algorithm produces accurate, semi-dense depth maps and is computationally very efficient (it runs in real time on a CPU or even a smartphone processor).
In BMVC’16
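
Since the trajectory is known, the core idea lends itself to back-projecting each event's viewing ray into a discretized volume and looking for cells crossed by many rays. The sketch below illustrates that general ray-voting scheme; it is a simplified stand-in, not the paper's actual algorithm, and the intrinsics, depth range, and names (vote, dsi) are made up for illustration.

    # Illustrative ray-voting over a discretized depth volume. This is a
    # simplified stand-in for event-based multi-view stereo, NOT the paper's
    # implementation; intrinsics, resolution, and depth range are examples.

    import numpy as np

    W, H = 240, 180                        # assumed sensor resolution
    K = np.array([[200.0, 0.0, W / 2.0],   # assumed pinhole intrinsics
                  [0.0, 200.0, H / 2.0],
                  [0.0, 0.0, 1.0]])
    depths = np.linspace(0.5, 5.0, 50)     # depth hypotheses (meters)

    # One vote counter per (depth, row, column) cell of a reference view.
    dsi = np.zeros((len(depths), H, W))

    def vote(x, y, R, t):
        """Back-project the event at pixel (x, y) and vote for the cells its
        viewing ray crosses. (R, t) is the known pose of the event's camera
        relative to the reference view (from the known trajectory)."""
        ray = np.linalg.inv(K) @ np.array([x, y, 1.0])  # ray in camera frame
        for i, d in enumerate(depths):
            p_ref = R @ (d * ray) + t      # 3D point at depth d, ref. frame
            uvw = K @ p_ref                # project into the reference view
            if uvw[2] <= 0:
                continue                   # point behind the reference camera
            u, v = int(round(uvw[0] / uvw[2])), int(round(uvw[1] / uvw[2]))
            if 0 <= u < W and 0 <= v < H:
                dsi[i, v, u] += 1

    # After voting all events, a semi-dense depth map can be read off where
    # the vote count peaks: depths[dsi.argmax(axis=0)], masked at weak peaks.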

In contrast to standard cameras, which produce frames at a fixed rate, event cameras respond asynchronously to pixel-level brightness changes, thus enabling the design of new algorithms for high-speed applications with latencies of microseconds. However, this advantage comes at a cost: because the output is composed of a sequence of events, traditional computer-vision algorithms are not applicable, and a paradigm shift is needed. We present an event-based approach for ego-motion estimation, which provides pose updates upon the arrival of each event, thus virtually eliminating latency. Our method is the first to address and demonstrate event-based pose tracking for six-degrees-of-freedom (6-DOF) motions in realistic and natural scenes, and it is able to track high-speed motions. The method is successfully evaluated in both indoor and outdoor scenes.
In arXiv’16
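
The defining architectural point is that the state is corrected once per event rather than once per frame. The skeleton below only illustrates that event-driven update pattern; the residual, gain, and class name are placeholders, and the paper's actual probabilistic formulation is more involved.

    # Schematic of per-event pose tracking: the 6-DOF estimate is refined as
    # each event arrives, so latency is bounded by one cheap update, not by
    # a frame interval. The update rule is a PLACEHOLDER, not the paper's.

    import numpy as np

    class PerEventTracker:
        def __init__(self, pose0):
            # 6-DOF state: translation (tx, ty, tz), rotation (rx, ry, rz)
            self.pose = np.asarray(pose0, dtype=float)

        def residual(self, event):
            """Placeholder: how inconsistent one event is with the current
            pose, given a known scene model (omitted here)."""
            return np.zeros(6)

        def update(self, event, gain=1e-3):
            """Apply one tiny correction per incoming event."""
            self.pose -= gain * self.residual(event)
            return self.pose

    tracker = PerEventTracker(pose0=np.zeros(6))
    event_stream = []            # events would arrive here asynchronously
    for event in event_stream:
        pose = tracker.update(event)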

The transition of visual-odometry technology from research demonstrators to commercial applications naturally raises the question: "what is the optimal camera for vision-based motion estimation?" This question is crucial, as the choice of camera has a tremendous impact on the robustness and accuracy of the visual-odometry algorithm employed. While many properties of a camera (e.g., resolution, frame rate, global vs. rolling shutter) could be considered, in this work we focus on evaluating the impact of the camera's field of view (FoV) and optics (i.e., fisheye or catadioptric) on the quality of the motion estimate. Since motion-estimation performance depends strongly on the geometry of the scene and the motion of the camera, we analyze two common operational environments in mobile robotics: an urban environment and an indoor scene.
In ICRA’16
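
As a small worked example of why the optics matter, the snippet below compares where a ray at angle theta from the optical axis lands on the sensor under a standard pinhole model versus an equidistant fisheye model; the focal length is an arbitrary example value, and the catadioptric case is omitted.

    # Radial image position of a ray at angle theta from the optical axis:
    #   pinhole:             r = f * tan(theta)  (diverges as theta -> 90 deg)
    #   equidistant fisheye: r = f * theta       (finite even past 90 deg)
    # A fisheye thus maps a much wider field of view onto the same sensor.

    import math

    f = 300.0  # focal length in pixels (arbitrary example value)

    for deg in (10, 45, 80, 89):
        theta = math.radians(deg)
        print(f"theta={deg:2d} deg: pinhole r={f * math.tan(theta):8.1f} px, "
              f"fisheye r={f * theta:6.1f} px")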

Recent Publications

  • EVO: A Geometric Approach to Event-based 6-DOF Parallel Tracking and Mapping in Real-time

  • The Event-Camera Dataset and Simulator: Event-based Data for Pose Estimation, Visual Odometry, and SLAM

  • EMVS: Event-based Multi-View Stereo

  • Event-based, 6-DOF Camera Tracking for High-Speed Applications

  • Benefit of Large Field-of-View Cameras for Visual Odometry

Recent Posts

Our paper EVO: A Geometric Approach to Event-based 6-DOF Parallel Tracking and Mapping in Real-time has been accepted for publication in the IEEE Robotics and Automation Letters (RA-L) and for presentation at ICRA’17!

Our paper EMVS: Event-based Multi-View Stereo has received the BMVC’16 Best Industry Paper Award!

Our paper EMVS: Event-based Multi-View Stereo, on monocular 3D reconstruction with an event camera, has been accepted for oral presentation at BMVC’16!

Teaching

I am a teaching assistant for the course Vision Algorithms for Mobile Robotics, given at ETH Zürich during the Fall Semester 2016.

I also occasionally supervise student projects. The list of projects currently available can be found here.

Contact