
C^3Net: End-to-End Deep Learning for Efficient Real-Time Visual Active Camera Control

EasyChair Preprint no. 7986

16 pages · Date: May 21, 2022


The need for automated real-time visual systems in applications such as smart camera surveillance, smart environments, and drones necessitates improved methods for visual active monitoring and control. Traditionally, the active monitoring task has been handled through a pipeline of modules such as detection, filtering, and control. However, the many parameters of such methods are difficult to jointly optimize and tune for real-time processing on resource-constrained systems. In this paper, a deep Convolutional Camera Controller Neural Network is proposed that goes directly from visual information to camera movement, providing an efficient solution to the active vision problem. It is trained end-to-end, without bounding box annotations, to control a camera and follow multiple targets from raw pixel values. Evaluation through both a simulation framework and a real experimental setup indicates that the proposed solution is robust to varying conditions and achieves better monitoring performance than traditional approaches, both in the number of targets monitored and in effective monitoring time. The proposed approach is computationally less demanding and runs at over 10 FPS (∼4× speedup) on an embedded smart camera, providing a practical and affordable solution to real-time active monitoring.
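To make the "raw pixels to camera movement" idea concrete, the following is a minimal, hypothetical sketch of such a controller. It is NOT the paper's C^3Net architecture: the layer sizes, kernel shapes, and the `camera_command` function are illustrative assumptions, and the weights are random rather than trained. It only shows the overall mapping a trained end-to-end controller would compute, from a grayscale frame to bounded pan/tilt velocities.

```python
import numpy as np

# Hypothetical sketch (not the paper's actual architecture or weights):
# a tiny convolutional controller mapping a raw grayscale frame directly
# to normalized pan/tilt velocities. In the paper, such a network is
# trained end-to-end without bounding box annotations; here the weights
# are random and serve only to illustrate the forward pass.

def conv2d(x, w, stride=2):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = w.shape
    out_h = (x.shape[0] - kh) // stride + 1
    out_w = (x.shape[1] - kw) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * w)
    return out

rng = np.random.default_rng(0)
k1 = rng.standard_normal((5, 5)) * 0.1       # first conv kernel (assumed size)
k2 = rng.standard_normal((5, 5)) * 0.1       # second conv kernel (assumed size)
W_out = rng.standard_normal((169, 2)) * 0.1  # linear head: 13*13 features -> (pan, tilt)
b_out = np.zeros(2)

def camera_command(frame):
    """Map a (64, 64) grayscale frame in [0, 1] to (pan, tilt) in (-1, 1)."""
    h = np.maximum(conv2d(frame, k1, stride=2), 0.0)  # 30x30 feature map, ReLU
    h = np.maximum(conv2d(h, k2, stride=2), 0.0)      # 13x13 feature map, ReLU
    z = h.reshape(-1) @ W_out + b_out                 # linear regression head
    return np.tanh(z)                                 # bounded velocity commands
```

In a pipeline-based system, the same frame would instead pass through separate detection, filtering, and control modules; collapsing them into one forward pass is what makes the end-to-end approach cheap enough for an embedded smart camera.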

Keyphrases: deep learning, end-to-end learning, real-time active vision, smart camera

BibTeX entry
BibTeX does not have the right entry type for preprints. This is a hack for producing the correct reference:

  @booklet{EasyChair:7986,
  author = {Christos Kyrkou},
  title = {C^3Net: End-to-End Deep Learning for Efficient Real-Time Visual Active Camera Control},
  howpublished = {EasyChair Preprint no. 7986},
  year = {EasyChair, 2022}}