Vision Sensors with Pixel-Parallel Cellular Processor Arrays
Piotr Dudek
The University of Manchester

Jan. 24, 2013, 11:10 a.m.


The lecture overviews the design and implementation of vision sensors - microelectronic devices which combine image sensing and processing on a single silicon die. In a way somewhat resembling the vertebrate retina, these 'vision chips' perform preliminary image processing directly on the sensory plane and are capable of very high processing speed at very low power consumption. This makes them particularly suitable for embedded machine vision in applications such as autonomous robots, automated surveillance, or high-speed industrial inspection systems.

The key technologies and concepts behind vision sensor devices will be introduced, and the lecture will be illustrated with case studies of actual CMOS vision chip implementations. The principles of using massively parallel fine-grain cellular processor arrays for low-level image processing will be presented. The analogy to topographic sensory processing networks in the mammalian brain will be elucidated. The device architectures and fundamental circuit design issues will be overviewed, and programming techniques used to map image processing algorithms onto fine-grain massively parallel processor arrays will be discussed. The presented devices will include the SCAMP-5 chip, based on a 256x256 array of "analogue microprocessors", and the PAV-3D chip, integrating sensing, analogue and digital asynchronous/synchronous processing in separate layers of a stacked 3D CMOS technology with through-silicon vias (TSVs). The talk will include experimental results (videos) obtained with a smart-camera system based on the SCAMP vision chip in a number of vision applications including image filtering, active contour techniques, object recognition, neural networks, high-speed object tracking (image analysis at 100,000 frames per second), and ultra low-power surveillance systems.
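To give a flavour of the pixel-parallel processing model described above, the sketch below simulates a fine-grain cellular processor array in which every "pixel processor" executes the same neighbourhood instruction simultaneously (SIMD). This is an illustrative assumption, not the SCAMP-5 instruction set or API; the `diffuse` function and its edge-replication rule are invented here purely to show the idea of a single instruction applied across the whole array.

```python
def diffuse(image, steps=1):
    """One SIMD 'instruction' applied by all pixel processors at once:
    each cell replaces its value with the mean of itself and its four
    neighbours (edge cells reuse their own value for missing neighbours).
    `image` is a list of equal-length rows of floats; the input is not
    mutated, a new array is returned."""
    h, w = len(image), len(image[0])
    for _ in range(steps):
        nxt = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                c = image[y][x]
                # Missing neighbours at the array boundary are replaced
                # by the cell's own value (simple boundary condition).
                n = image[y - 1][x] if y > 0 else c
                s = image[y + 1][x] if y < h - 1 else c
                wv = image[y][x - 1] if x > 0 else c
                e = image[y][x + 1] if x < w - 1 else c
                nxt[y][x] = (c + n + s + wv + e) / 5.0
        image = nxt
    return image

# A single bright pixel spreads to its neighbours step by step,
# the way charge-mode diffusion spreads activity across an array.
frame = [[0.0] * 5 for _ in range(5)]
frame[2][2] = 1.0
out = diffuse(frame, steps=2)
```

On a real vision chip each iteration of the inner double loop would instead happen in a single parallel step, one processor per pixel, which is what makes frame rates in the tens of thousands per second feasible.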


