Cain: Automatic Code Generation for Simultaneous Convolutional Kernels on Focal-plane Sensor-processors

Edward Stow*, Abrar Ahsan**, Yingying Li**, Ali Babaei**, Riku Murai*, Sajad Saeedi**, Paul H.J. Kelly*

* Imperial College London ** Toronto Metropolitan University


Abstract

Focal-plane Sensor-processors (FPSPs) are a camera technology that enables low-power, high-frame-rate computation in the image sensor itself, making them suitable for edge computation. To fit into the sensor array, FPSPs are highly resource-constrained, with a limited instruction set and few registers, which makes developing complex algorithms difficult. In this work, we present Cain, a compiler for convolutional filters that targets SCAMP-5, a general-purpose FPSP. Cain generates code to evaluate multiple convolutional kernels at the same time. The generated code avoids the need for hardware multipliers while orchestrating the exploitation of common sub-terms, leading to a large reduction in instruction count compared to both straightforward and prior optimized approaches. We demonstrate the capability enabled by Cain on SCAMP-5 with a robotic navigation task, using Cain to implement a neural network on the focal plane for near-sensor, high-speed, low-power computation.

Cain is open-source: https://github.com/ed741/cain

What is the Project About?

In a focal-plane sensor-processor (FPSP), like the SCAMP-5 device used in this work, you can compute the convolutions that make up the earliest stages of a convolutional neural network (CNN) without moving the data from the sensor to a host processor. You can use Cain to program FPSPs and implement your neural networks on the vision chip. The video below shows an obstacle avoidance task performed on the SCAMP-5 FPSP. The algorithm is based on a convolutional neural network; the kernel convolutions are performed on the vision chip, so there is no need to transfer the images to a remote processor.
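To give a flavour of how this works, the following is a minimal NumPy sketch (not SCAMP-5 ISA code, and not Cain's actual output) of the two ideas behind Cain's code generation: evaluating a convolution using only the operations an FPSP provides (neighbour shifts, additions, and analogue divide-by-two, with no multiplier), and reusing a common sub-term across two kernels computed simultaneously. The function names `shift` and `half` are illustrative stand-ins for the hardware operations.

```python
import numpy as np

def shift(img, dy, dx):
    """out[y, x] = img[y + dy, x + dx]: read a value from a neighbouring
    processing element (wrap-around here; real sensor edges differ)."""
    return np.roll(img, (-dy, -dx), axis=(0, 1))

def half(img):
    """SCAMP-5 has no multiplier, but it can divide a register by two."""
    return img / 2.0

img = np.arange(36, dtype=float).reshape(6, 6)

# Horizontal [1, 2, 1]/4 smoothing without any multiplies, using the
# identity (a + 2b + c)/4 == ((a + b)/2 + (b + c)/2)/2.
h = half(half(shift(img, 0, -1) + img) + half(img + shift(img, 0, 1)))

# Kernel 1: the full 3x3 Gaussian blur (1/16)[[1,2,1],[2,4,2],[1,2,1]],
# obtained by smoothing h vertically the same way.
blur = half(half(shift(h, -1, 0) + h) + half(h + shift(h, 1, 0)))

# Kernel 2: a vertical Sobel filter (scaled by 1/4) reuses the common
# sub-term h, costing only two more shifts and one subtraction.
sobel_y = shift(h, 1, 0) - shift(h, -1, 0)
```

Because `h` is shared, evaluating both kernels together takes fewer instructions than evaluating each from scratch; finding such sharing automatically, across arbitrary sets of kernels and under the register constraints of the device, is the search problem Cain solves.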

Navigation in a Corridor

To demonstrate the use cases for Cain and focal-plane sensor-processors like SCAMP-5, we present AnalogNavNet, a convolutional neural network-based model for collision avoidance and robot navigation.

Navigation inside a Race-track

This video demonstrates how a robot can navigate inside a track while most of the processing is done on the focal plane of the camera. The robot runs a neural network on the focal plane, enabled by Cain.

Navigation with Different Frame Rates

At 20 FPS (top left), the robot crashed in less than 8 seconds. At 40 FPS (top right), the robot crashed in 16 seconds. At 60 FPS and 80 FPS, the robot navigates successfully. A higher frame rate allows the robot to respond in a timely manner.

Contact

If you have any questions, feel free to email us at: edward.stow16@imperial.ac.uk

BibTex

@article{Stow-AR-2022,
  author  = {Edward Stow and Abrar Ahsan and Yingying Li and Ali Babaei and Riku Murai and Sajad Saeedi and Paul H. J. Kelly},
  title   = {Compiling {CNN}s with {C}ain: focal-plane processing for robot navigation},
  journal = {{Autonomous Robots}},
  volume  = {46},
  number  = {8},
  pages   = {893--910},
  year    = {2022}
}


@inproceedings{Stow-LCPC-2021,
  author    = {Edward Stow and Riku Murai and Sajad Saeedi and Paul H. J. Kelly},
  title     = {Cain: Automatic Code Generation for Simultaneous Convolutional Kernels on Focal-plane Sensor-processors},
  booktitle = {{International Workshop on Languages and Compilers for Parallel Computing (LCPC)}},
  year      = {2021}
}

Related Papers

AUKE Compiler

Thomas Debrunner, Sajad Saeedi, Paul H. J. Kelly,
"AUKE: Automatic Kernel Code Generation for an Analogue SIMD Focal-Plane Sensor-Processor Array"
High Performance and Embedded Architecture and Compilation (HiPEAC),
Valencia, Spain, January 21-23, 2019


AnalogNet

Matthew Z Wong, Benoit Guillard, Riku Murai, Sajad Saeedi, and Paul H. J. Kelly

"AnalogNet: Convolutional Neural Network Inference on Analog Focal Plane Sensor Processors"

arXiv:2006.01765

[PDF][CODE]


Camera Tracking

Thomas Debrunner, Sajad Saeedi, Laurie Bose, Andrew J Davison, Paul H. J. Kelly,

"Camera Tracking on Focal-Plane Sensor-Processor Arrays"

High Performance and Embedded Architecture and Compilation (HiPEAC),
Workshop on Programmability and Architectures for Heterogeneous Multicores (MULTIPROG)
Valencia, Spain, January 21-23, 2019
[PDF][VIDEO1][VIDEO2]


BIT-VO

Riku Murai, Sajad Saeedi, and Paul H. J. Kelly

"BIT-VO: Visual Odometry at 300 FPS using Binary Features from the Focal Plane"

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),

Las Vegas, NV, USA, Oct 25-29, 2020

https://arxiv.org/abs/2004.11186

[VIDEO][LINK]