Selected Publications

In this paper, we introduce the problem of Event-based Multi-View Stereo (EMVS) for event cameras and propose a solution to it. Unlike traditional MVS methods, which address the problem of estimating dense 3D structure from a set of known viewpoints, EMVS estimates semi-dense 3D structure from an event camera with a known trajectory. Our algorithm produces accurate, semi-dense depth maps and is computationally very efficient (it runs in real time on a CPU, or even on a smartphone processor).
In IJCV’17
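
For intuition, here is a minimal Python sketch of a space-sweep scheme in the spirit of EMVS: each event's viewing ray votes in a volume of depth hypotheses (a Disparity Space Image, DSI) anchored at a reference view, and per-pixel maxima of the vote count yield a semi-dense depth map. The resolution, intrinsics, and threshold below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

W, H = 64, 48                          # toy sensor resolution (assumed)
K = np.array([[60.0, 0.0, W / 2],      # assumed pinhole intrinsics
              [0.0, 60.0, H / 2],
              [0.0, 0.0, 1.0]])
K_inv = np.linalg.inv(K)
depths = np.linspace(0.5, 3.0, 50)     # depth hypotheses of the DSI
dsi = np.zeros((H, W, len(depths)))    # per-pixel, per-depth ray counts

def vote(x, y, R, t):
    """Back-project one event at pixel (x, y); (R, t) maps event-camera
    coordinates to the reference frame: p_ref = R @ p_cam + t."""
    ray = R @ (K_inv @ np.array([x, y, 1.0]))   # ray direction in ref frame
    for k, d in enumerate(depths):
        if abs(ray[2]) < 1e-9:
            continue
        lam = (d - t[2]) / ray[2]               # intersect plane z = d
        if lam <= 0:
            continue
        u, v, w = K @ (t + lam * ray)
        u, v = int(round(u / w)), int(round(v / w))
        if 0 <= u < W and 0 <= v < H:
            dsi[v, u, k] += 1.0

# ... call vote() for every event, with the pose interpolated at its timestamp ...

scores = dsi.max(axis=2)
depth_map = depths[dsi.argmax(axis=2)]          # best hypothesis per pixel
depth_map[scores < 5] = np.nan                  # assumed confidence threshold
```

Because events are fired mainly by moving edges, the vote maxima concentrate on scene edges, which is why the resulting depth map is semi-dense.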

In contrast to standard cameras, which produce frames at a fixed rate, event cameras respond asynchronously to pixel-level brightness changes, thus enabling the design of new algorithms for high-speed applications with latencies of microseconds. However, this advantage comes at a cost: because the output consists of a sequence of events rather than frames, traditional computer-vision algorithms are not applicable, and a paradigm shift is needed. We present an event-based approach for ego-motion estimation, which provides pose updates upon the arrival of each event, thus virtually eliminating latency. Our method is the first to address and demonstrate event-based pose tracking of six-degrees-of-freedom (DOF) motions in realistic and natural scenes, and it is able to track high-speed motions. The method is successfully evaluated in both indoor and outdoor scenes.
In PAMI’17
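
To make the asynchronous, per-event paradigm concrete, here is a small illustrative sketch (not the tracker from the paper): a per-pixel "time surface" that is updated in constant time by every incoming event and can be queried at any instant, so downstream estimates never have to wait for a frame. The resolution and decay constant are assumed values.

```python
import math

W, H, TAU = 240, 180, 0.05             # assumed resolution and decay (seconds)
last_t = [[-math.inf] * W for _ in range(H)]

def on_event(t, x, y, polarity):
    """Called once per incoming event; state is updated asynchronously,
    with no notion of a frame."""
    last_t[y][x] = t

def activation(t_now, x, y):
    """Exponentially decaying response: recently active pixels score near 1."""
    return math.exp(-(t_now - last_t[y][x]) / TAU)
```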

In this paper, we present the first state estimation pipeline that leverages the complementary advantages of a standard camera and an event camera by fusing, in a tightly-coupled manner, events, standard frames, and inertial measurements. Furthermore, we use our pipeline to demonstrate, to the best of our knowledge, the first autonomous quadrotor flight using an event camera for state estimation, unlocking flight scenarios that were not reachable with traditional visual-inertial odometry, such as low-light environments and high-dynamic-range scenes.
In arXiv

We propose a novel, accurate, tightly-coupled visual-inertial odometry pipeline for event cameras that leverages their outstanding properties to estimate the camera ego-motion in challenging conditions, such as high-speed motion or high-dynamic-range scenes. The method tracks a set of features (extracted on the image plane) through time. To achieve this, we consider events in overlapping spatio-temporal windows and align them using the current camera motion and scene structure, yielding motion-compensated event frames. We then combine these feature tracks in a keyframe-based, visual-inertial odometry algorithm based on nonlinear optimization to estimate the camera’s 6-DOF pose and velocity.
In BMVC’17
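
The motion-compensation step can be sketched as follows. For simplicity, the sketch assumes a single constant optical flow over the window, whereas the actual method derives the warp from the estimated camera motion and scene structure; with the correct flow, events fired by the same edge stack onto the same pixels, yielding a sharp frame on which features can be tracked.

```python
import numpy as np

W, H = 240, 180                      # assumed sensor resolution

def motion_compensated_frame(events, flow, t_ref):
    """events: iterable of (t, x, y, polarity) with polarity in {-1, +1}.
    flow: (vx, vy) in pixels per second, predicted from the current
    motion estimate. Events are warped to time t_ref and accumulated."""
    frame = np.zeros((H, W))
    vx, vy = flow
    for t, x, y, p in events:
        xc = int(round(x - vx * (t - t_ref)))   # shift event back to t_ref
        yc = int(round(y - vy * (t - t_ref)))
        if 0 <= xc < W and 0 <= yc < H:
            frame[yc, xc] += p                  # signed accumulation
    return frame
```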

We present EVO, an event-based visual-odometry pipeline: a geometric approach to 6-DOF parallel tracking and mapping with an event camera. The method interleaves camera-pose tracking with the reconstruction of a semi-dense 3D map of the environment, and runs in real time.
In RA-L’17

This paper presents the world’s first collection of datasets recorded with an event-based camera for high-speed robotics. The data also include intensity images, inertial measurements, and ground truth from a motion-capture system. An event-based camera is a revolutionary vision sensor with three key advantages: a measurement rate almost one million times higher than that of standard cameras, a latency of 1 microsecond, and a dynamic range of 130 decibels (versus about 60 dB for standard cameras). These properties enable the design of a new class of algorithms for high-speed robotics, where standard cameras suffer from motion blur and high latency. All the data are released both as text files and as binary (i.e., rosbag) files.
In IJRR’17
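
As an example, the text-file release can be loaded with a few lines of NumPy. The "one event per line: timestamp x y polarity" layout assumed below should be checked against the dataset documentation.

```python
import numpy as np

# Assumed layout: one event per line, "t x y p", t in seconds, p in {0, 1}.
events = np.loadtxt("events.txt")
t = events[:, 0]
x, y = events[:, 1].astype(int), events[:, 2].astype(int)
p = events[:, 3].astype(int)
print(f"{len(t)} events spanning {t[-1] - t[0]:.3f} s")
```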

In this paper, we introduce the problem of Event-based Multi-View Stereo (EMVS) for event cameras and propose a solution to it. Unlike traditional MVS methods, which address the problem of estimating dense 3D structure from a set of known viewpoints, EMVS estimates semi-dense 3D structure from an event camera with a known trajectory. Our algorithm produces accurate, semi-dense depth maps and is computationally very efficient (it runs in real time on a CPU, or even on a smartphone processor).
In BMVC’16

The transition of visual-odometry technology from research demonstrators to commercial applications naturally raises the question: “what is the optimal camera for vision-based motion estimation?” This question is crucial, as the choice of camera has a tremendous impact on the robustness and accuracy of the employed visual-odometry algorithm. While many properties of a camera (e.g., resolution, frame rate, global vs. rolling shutter) could be considered, in this work we focus on evaluating the impact of the camera’s field of view (FoV) and optics (i.e., fisheye or catadioptric) on the quality of the motion estimate. Since motion-estimation performance depends strongly on the geometry of the scene and the motion of the camera, we analyze two common operational environments in mobile robotics: an urban environment and an indoor scene.
In ICRA’16

Recent Posts

My paper EMVS: Event-Based Multi-View Stereo - 3D Reconstruction with an Event Camera in Real-Time about semi-dense 3D reconstruction with an event camera has been accepted to the International Journal of Computer Vision!

This work is the first to show that event cameras can be used to provide accurate, semi-dense 3D maps of a given environment, without explicitly trying to solve data association. You can watch the video here!

I am happy to announce today that my team achieved the first ever closed-loop autonomous flight using an event camera for state estimation! Watch the video here! This achievement is the product of several years of research, and I am very proud of the result. Thanks to the event camera, our quadrotor can “see” at high speed, even in dark environments. The algorithm running onboard the quadrotor is largely based on my recent paper, Real-time Visual-Inertial Odometry for Event Cameras using Keyframe-based Nonlinear Optimization, which we extended to use standard frames as an additional sensing modality in a follow-up paper: Hybrid, Frame and Event based Visual Inertial Odometry for Robust, Autonomous Navigation of Quadrotors.

Our paper Real-time Visual-Inertial Odometry for Event Cameras using Keyframe-based Nonlinear Optimization about visual-inertial odometry using an event camera has been accepted at BMVC’17 for oral presentation (acceptance rate: 5.6 %)!

You can watch the video here!

Our paper EVO: A Geometric Approach to Event-based 6-DOF Parallel Tracking and Mapping in Real-time has been accepted for publication in the Robotics and Automation Letters (RA-L), and for presentation at ICRA’17!

Our paper EMVS: Event-based Multi-View Stereo received the BMVC’16 Best Industry Paper Award!

Teaching

I am a teaching assistant for the course Vision Algorithms for Mobile Robotics given at ETH Zürich.

I also occasionally supervise student projects. The list of projects currently available can be found here.

Contact