
Background estimation by weightless neural networks

Massimo De Gregorio; Maurizio Giordano
2017

Abstract

Initial background estimation in video processing serves as a bootstrapping model for moving object detection based on background subtraction. In long-term videos, the initial background model may require constant updating as environmental changes occur, whether slowly or suddenly. In this paper we address background initialization together with its continuous updating over time by modeling the video background as the ever-changing states of weightless neural networks. The result is a background estimation method based on a weightless neural network, called BEWiS. The proposed approach is simple: background estimation at each pixel is carried out by weightless neural networks designed to learn pixel color frequencies as the video plays, and all networks share the same rule for memory retention during training. This approach has the advantage of providing a useful background model from the very beginning of the video, since it operates in unsupervised mode. On the other hand, depending on the video scene, the pixel-level learning rule can be tuned to tackle the specificities and difficulties of the scene. The approach has been evaluated on the public Scene Background Initialization 2015 dataset and on the Scene Background Modeling Contest 2016 dataset, showing performance comparable or superior to that of state-of-the-art methods.
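The abstract describes per-pixel weightless neural networks that learn pixel color frequencies under a shared memory-retention rule. The sketch below illustrates that general idea only; it is not the authors' BEWiS implementation, and the thermometer encoding, tuple size, retention factor, and all class and function names are illustrative assumptions.

```python
# Minimal illustrative sketch (NOT the authors' BEWiS code) of a per-pixel
# weightless-neural-network-style background model: each pixel owns a small set of
# RAM nodes addressed by a binarized color code; training reinforces the addressed
# locations, and a shared retention (decay) rule slowly forgets stale colors.
import numpy as np


def thermometer_encode(channel_value, levels=16):
    """Encode one 0-255 channel value as a binary thermometer code of given length."""
    threshold = int(channel_value / 256.0 * levels)
    return [1 if i < threshold else 0 for i in range(levels)]


class PixelWNN:
    """One weightless discriminator per pixel: RAM nodes addressed by bit tuples."""

    def __init__(self, bits_per_channel=16, tuple_size=4, retention=0.99):
        total_bits = 3 * bits_per_channel           # R, G, B thermometer codes
        self.tuple_size = tuple_size
        self.n_rams = total_bits // tuple_size
        # Each RAM node stores a frequency-like weight for every address seen so far.
        self.rams = [dict() for _ in range(self.n_rams)]
        self.retention = retention                  # shared memory-retention factor

    def _addresses(self, rgb):
        bits = []
        for c in rgb:
            bits.extend(thermometer_encode(c))
        for i in range(self.n_rams):
            chunk = bits[i * self.tuple_size:(i + 1) * self.tuple_size]
            yield i, tuple(chunk)

    def train(self, rgb):
        """Observe one frame's color: decay all memories, then reinforce the addressed ones."""
        for ram in self.rams:
            for addr in ram:
                ram[addr] *= self.retention
        for i, addr in self._addresses(rgb):
            self.rams[i][addr] = self.rams[i].get(addr, 0.0) + 1.0

    def response(self, rgb):
        """How strongly the stored model recognizes this color (higher = more background-like)."""
        return sum(self.rams[i].get(addr, 0.0) for i, addr in self._addresses(rgb))


# Usage sketch: feed frames pixel by pixel; frequently observed colors dominate the
# discriminator's response, while transient foreground colors fade under the decay.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = PixelWNN()
    background_color = (30, 90, 200)
    for t in range(200):
        # Mostly the background color, with occasional foreground colors passing by.
        obs = background_color if rng.random() > 0.1 else tuple(rng.integers(0, 256, 3))
        model.train(obs)
    print("background response:", round(model.response(background_color), 1))
    print("foreground response:", round(model.response((250, 10, 10)), 1))
```

In this sketch the shared retention factor plays the role of the common memory-retention rule mentioned in the abstract: colors that stop appearing at a pixel gradually fade from its RAM nodes, so the model can follow both slow and sudden environmental changes without supervision.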
Istituto di Calcolo e Reti ad Alte Prestazioni - ICAR
Istituto di Scienze Applicate e Sistemi Intelligenti "Eduardo Caianiello" - ISASI
Background model
Video surveillance
Weightless neural networks
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/332001
Citations
  • PMC: ND
  • Scopus: 32
  • Web of Science (ISI): ND