
Vision and natural computation


Team leader: Ryad Benosman


Introduction

Rather than studying biological retinas, our team designs and builds artificial retinas in silicon semiconductor technology and develops the corresponding biology-inspired processing and computation techniques.

Presentation

The research carried out by the team is motivated and driven by the conviction that artificial vision and imaging systems can only progress if they leave behind the unnatural limitation of frame-based acquisition and are instead driven and controlled by the dynamics of the observed visual scene itself. This conviction is supported by the obvious superiority of biological vision systems, which function in exactly this way.

We assert that artificial vision based on frame-free, biomimetic vision acquisition and processing devices has the potential to reach entirely new levels of performance and functionality, comparable to those found in biological systems. Based on recent research, it can be predicted with high confidence that future artificial vision systems, if they are to succeed in demanding applications (such as autonomous robot navigation, high-speed motor control, visual feedback loops or efficient stimulation of retinal cells), must exploit the power of the biomimetic, asynchronous, frame-free approach.

The “Neuromorphic Integrated Circuits” sub-team exploits principles of biological neural systems in the design of full-custom CMOS vision and image sensors, so-called “silicon retinas”. These artificial retinas generate, in real time, outputs that correspond directly to the signals observed at the corresponding levels of biological retinas. Visual information is encoded in asynchronous spike signals and is delivered in a form the visual cortex can directly understand. In terms of compactness, energy consumption and autonomy, this technology also opens the door to a vast variety of applications, from consumer devices to implantable intraocular prostheses.

The direct modeling of retinal functions and operational principles in highly integrated electronic circuits allows retina operation to be reproduced, studied and experimentally assessed, including defects and diseases, and can potentially help in devising novel ways of medical diagnosis and treatment. Besides medical applications, the technology has the potential to fuel progress in sensor-based robotics and computer vision. Fast sensorimotor action through visual feedback loops, based on the biological frame-free, event-driven style of vision, supports autonomous robot navigation as well as medical robots in micro-manipulation and image-guided intervention. Related developments touch diverse fields such as human-machine systems involving, for example, gesture recognition.
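
As an illustration of the event-driven output described above, the following Python sketch models an asynchronous event stream as a sequence of time-stamped, signed spikes. The field names and layout are assumptions made for this example; actual silicon retinas emit chip-specific address-event (AER) packets.

from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass(frozen=True)
class Event:
    """One asynchronous spike emitted by a silicon-retina pixel.
    Field names are illustrative; real sensors use chip-specific AER formats."""
    x: int          # pixel column
    y: int          # pixel row
    t: float        # timestamp, microseconds
    polarity: int   # +1 brightness increase, -1 brightness decrease

def events_in_window(events: Iterable[Event], t_start: float, t_end: float) -> Iterator[Event]:
    """Yield the events whose timestamps fall inside [t_start, t_end).
    The sparse, time-stamped stream can be sliced at any temporal
    granularity instead of waiting for a full frame period."""
    for ev in events:
        if t_start <= ev.t < t_end:
            yield ev

# Example: three events, keep only those in the first 500 microseconds.
stream = [Event(10, 4, 120.0, +1), Event(11, 4, 130.5, +1), Event(3, 7, 900.0, -1)]
print(list(events_in_window(stream, 0.0, 500.0)))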

In computer vision and robotics, the bio-inspired event-driven vision technology offers striking advantages over conventional frame-based image acquisition techniques (a toy pixel model sketching how these arise follows the list):

  •  Microsecond temporal resolution (corresponding to tens to hundreds of thousands of frames per second)
  •  Ultra-wide dynamic range (more than 6 orders of magnitude of illumination within a scene can be acquired and processed)
  •  On-chip video compression and redundancy suppression (hardware-based lossless video compression factors of up to 1000 are realized directly in the sensor chip)
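
The toy pixel model below is a simplified sketch, not a description of any particular chip. It shows how the three properties above follow from one mechanism: a pixel emits an event only when its log-illuminance has changed by more than a fixed contrast threshold, so static scenes generate no data (redundancy suppression), changes are reported as soon as they occur (fine temporal resolution), and the logarithm spreads many decades of illumination over equal contrast steps (wide dynamic range). The threshold value and data layout are arbitrary choices for this illustration.

import math

def pixel_events(samples, threshold=0.15):
    """Toy model of a change-detecting pixel.

    `samples` is a list of (timestamp_us, illuminance) pairs for one pixel.
    An event is emitted only when log-illuminance has moved by more than
    `threshold` since the last event; the threshold is an arbitrary choice
    for this sketch.
    """
    events = []
    t0, i0 = samples[0]
    ref = math.log(i0)
    for t, i in samples[1:]:
        diff = math.log(i) - ref
        while abs(diff) >= threshold:
            polarity = 1 if diff > 0 else -1
            events.append((t, polarity))
            ref += polarity * threshold
            diff = math.log(i) - ref
    return events

# A pixel watching a static scene that suddenly brightens at t=30:
trace = [(0, 100.0), (10, 100.0), (20, 100.0), (30, 400.0), (40, 400.0)]
print(pixel_events(trace))   # events are emitted only around the change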

Aims of the research

The research efforts aim at major advances in, and the definition of future trends for, bio-inspired, event-based vision, both in the sensing and in the processing of visual information.

The first major target is to develop and establish the theoretical/mathematical foundations of event-driven, asynchronous, frame-free vision and to formulate a general biomimetic vision paradigm. Based on this groundwork, a novel species of vision and image sensors and associated asynchronous data-processing hardware will be implemented in state-of-the-art VLSI technology.

For the processing of the sensor data, space-time algorithms that take advantage of the real-time, frame-free event information need to be developed. Ideally, a translational framework connecting the fundamentally different realms of frame-based and event-based representations of visual information will be realized, allowing image and vision processing algorithms to be transferred easily between the two worlds.
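
One direction of such a translational framework is straightforward and is sketched below: events can always be accumulated into a conventional frame over a chosen time window, after which any frame-based algorithm applies, at the cost of discarding the fine timing information. The function name and event layout here are illustrative assumptions, not part of the team's published tooling.

import numpy as np

def accumulate_frame(events, width, height, t_start, t_end):
    """Render an event slice into a conventional 2-D frame.

    Events are assumed to be (x, y, t, polarity) tuples; each event in the
    window [t_start, t_end) adds its signed polarity to the corresponding
    pixel, producing a frame of net brightness changes.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, t, polarity in events:
        if t_start <= t < t_end:
            frame[y, x] += polarity
    return frame

# Example: three events binned into a 4x4 frame over a 1 ms window.
evts = [(1, 2, 100.0, +1), (1, 2, 250.0, +1), (3, 0, 900.0, -1)]
print(accumulate_frame(evts, width=4, height=4, t_start=0.0, t_end=1000.0))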

The results of this research will provide fundamentally novel means of sensing and processing visual information for diverse computer vision applications and will eventually lead to new benchmarks for artificial vision systems. Data compression, dynamic range, temporal resolution and power efficiency at the sensor hardware level, together with increased throughput and processing performance at the system/application level, will enable advanced functionality such as 3D vision, object tracking, motor control and visual feedback loops in real time and within limited power budgets. Biomedical applications of the technology in the fields of prosthetics, optogenetics and bio-hybrid artifacts will also be investigated.


Research areas

  • Establish a theoretical/mathematical foundation for bio-inspired, event-based, frame-free artificial vision and formulate a vision paradigm that can serve as the basis for the design and implementation of a new generation of artificial vision systems.

  • Address the challenge of devising novel biomimetic vision sensor and pixel architectures that are sensitive to, and hence activated by, not only illumination change or spatio-temporal contrast – the current state of the art – but also absolute illumination, patterns, textures, color, etc. In conventional image sensors, these features can only be sensed passively and in a frame-based manner, and hence are not available to actively drive a vision device. Pixel circuits that strive to meet this challenge will foreseeably require a substantial amount of focal-plane intelligence and processing capability, based on asynchronous, event-driven electronic circuits.

  • Design and build groundbreaking vision sensors with advanced event-based pixel-level/focal-plane processing that will outperform current state-of-the-art in multiple respects such as temporal resolution, processing speed, dynamic range, sparse data encoding, etc.

  • Apply the sensors in artificial vision systems that presumably will excel in various robotics-related disciplines such as motion detection and analysis, sensorimotor control, visual feedback loops, etc.

  • Address the challenge of transferring the biomimetic vision paradigm into technically – and also economically – feasible technology, e.g. with area-efficient pixels. This quest will likely require the inclusion of non-mainstream fabrication technologies such as 3D silicon, die-stacking, wafer-stacking, backlighting, etc.

  • For the processing of the sensor data, develop space-time algorithms that take advantage of the real-time, frame-free, event-based vision information. Ideally, realize a translational framework connecting the fundamentally different realms of frame-based and event-based representations of visual information, allowing image and vision processing algorithms to be transferred easily between the two worlds.

  • Investigate applications in the field of prosthetics and bio-hybrid artifacts.

 



Support

  • ANR
  • Marie Curie Actions
  • Voir et entendre
  • Horizon 2020
  • Bpifrance
  • ERC
  • Alten
  • GenSight
  • Pixium

