[C95] All-Digital Event-based Vision Sensor with Scene Adaptive Power-Saving Pixels and Three-Layer Neural Network for Object Detection

Abstract

We present an all-digital synchronous event-based vision sensor (EVS) with adaptive power-saving pixels and in-sensor object-of-interest (OOI) extraction. The 200×128 single-photon avalanche diode (SPAD) array employs three scene-adaptive power-saving (PS) schemes, saturation-based (SAT-PS), temporal (T-PS), and spatial (S-PS), which suppress power consumption under saturation, short-window abrupt changes, and spatial redundancy, respectively. A three-layer hybrid spiking/binary neural network generates 8×8 tile-wise OOI signals to gate informative-event-focused readout. Fabricated in a 90 nm process, the prototype consumes 9.85 mW at 200 lux, achieving 47.6% pixel-power and 21.6% core-power savings, with up to 53.77% pixel-power and 51.4% total-power savings in composite scenes. The measured throughput is 5.58 kfps and 714 Meps (DAQ-limited). Due to IR-drop-induced margin loss, real-time OOI detection was validated on an FPGA (Dice score = 0.7452).
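As an illustration of the tile-wise OOI gating and the Dice metric mentioned above, the following minimal NumPy sketch masks a binary event frame with 8×8-tile OOI flags and computes a Dice score. The array shapes, the function names (ooi_gate, dice_score), and the simple binary masking are assumptions for illustration only, not the paper's actual in-sensor datapath.

import numpy as np

H, W, TILE = 128, 200, 8  # 200x128 sensor; 8x8-pixel tiles (assumed layout)

def ooi_gate(event_map: np.ndarray, ooi_tiles: np.ndarray) -> np.ndarray:
    """Zero out events in tiles not flagged as object-of-interest.

    event_map : (H, W) binary event frame
    ooi_tiles : (H // TILE, W // TILE) binary OOI flag per 8x8 tile
    """
    # Expand each tile flag back to pixel resolution, then mask the events.
    mask = np.kron(ooi_tiles, np.ones((TILE, TILE), dtype=event_map.dtype))
    return event_map * mask

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient: 2*|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total > 0 else 1.0

# Toy usage with random data (illustration only).
rng = np.random.default_rng(0)
events = (rng.random((H, W)) < 0.05).astype(np.uint8)
ooi = (rng.random((H // TILE, W // TILE)) < 0.3).astype(np.uint8)
gated = ooi_gate(events, ooi)
print(dice_score(gated > 0, events > 0))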

Publication
Symposium on VLSI Technology & Circuits (VLSI)
Chanwook Hwang (황찬욱)
Combined MS-PhD student
Woosung Chung (정우성)
Combined MS-PhD student