pith. machine review for the scientific record.

arxiv: 1802.06898 · v4 · submitted 2018-02-19 · 💻 cs.CV · cs.RO

Recognition: unknown

EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras

Authors on Pith: no claims yet
classification 💻 cs.CV · cs.RO
keywords: flow, cameras, optical, self-supervised, estimation, event, event-based, events
0 comments
Original abstract

Event-based cameras have shown great promise in a variety of situations where frame-based cameras suffer, such as high-speed motion and high-dynamic-range scenes. However, developing algorithms for event measurements requires a new class of hand-crafted algorithms. Deep learning has shown great success in providing model-free solutions to many problems in the vision community, but existing networks have been developed with frame-based images in mind, and there does not exist the wealth of labeled data for events that there does for images for supervised training. To address these points, we present EV-FlowNet, a novel self-supervised deep learning pipeline for optical flow estimation for event-based cameras. In particular, we introduce an image-based representation of a given event stream, which is fed into a self-supervised neural network as the sole input. The corresponding grayscale images, captured from the same camera at the same time as the events, are then used as a supervisory signal to provide a loss function at training time, given the estimated flow from the network. We show that the resulting network is able to accurately predict optical flow from events alone in a variety of different scenes, with performance competitive with image-based networks. This method not only allows for accurate estimation of dense optical flow, but also provides a framework for the transfer of other self-supervised methods to the event-based domain.
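As a concrete illustration of the "image-based representation of a given event stream" the abstract describes, an event window can be summarized into a fixed-size image of per-pixel event counts and most-recent timestamps, with one channel pair per polarity. This is a minimal sketch for illustration; the 4-channel layout and normalization below are assumptions, not the paper's verbatim definition.

```python
import numpy as np

def events_to_image(xs, ys, ps, ts, height, width):
    """Summarize an event stream (pixel coords, polarities, timestamps)
    into a 4-channel image: channels 0/1 hold per-pixel event counts for
    positive/negative polarity, channels 2/3 hold the most recent
    normalized timestamp for each polarity. Layout is an assumption."""
    ts = np.asarray(ts, dtype=np.float64)
    img = np.zeros((4, height, width), dtype=np.float32)
    # Normalize timestamps to [0, 1] over the window so the representation
    # does not depend on the absolute time origin.
    span = ts.max() - ts.min() if ts.size > 1 else 1.0
    t_norm = (ts - ts.min()) / max(span, 1e-9)
    for x, y, p, t in zip(xs, ys, ps, t_norm):
        c = 0 if p > 0 else 1                        # channel pair by polarity
        img[c, y, x] += 1.0                          # event count
        img[2 + c, y, x] = max(img[2 + c, y, x], t)  # latest timestamp wins
    return img
```

At training time, per the abstract, the predicted flow is used with the grayscale frames bracketing the event window as a supervisory signal (a photometric loss), so no ground-truth flow labels are needed.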

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Match-Any-Events: Zero-Shot Motion-Robust Feature Matching Across Wide Baselines for Event Cameras

    cs.CV · 2026-04 · unverdicted · novelty 7.0

    A single attention-based model trained on synthetic wide-baseline event data achieves zero-shot feature matching across unseen datasets with a reported 37.7% improvement over prior event matching methods.

  2. SNNF: An SNN-based Near-Sensor Noise Filter for Dynamic Vision Sensors

    cs.NE · 2026-05 · unverdicted · novelty 4.0

    SNNF uses an event-based binary image and single-layer SNN to achieve 0.89 AUC in distinguishing signal from noise in DVS while using only 11-40% of the resources of prior filters.