
Explicit knowledge distribution in an omnidirectional distributed vision system

Cicirelli, G.; D'Orazio, T.
2004

Abstract

This paper presents an Omnidirectional Distributed Vision System that learns to navigate a robot in an office-like environment without any knowledge of the calibration of the cameras or of the robot control law. The system is composed of several omnidirectional Vision Agents (each implemented with an omnidirectional camera and a computer). The first Vision Agent learns to control the robot with SARSA(λ) reinforcement learning, using the LEM (Learning from Easy Missions) strategy to speed up learning. Once the first Vision Agent has learned the correct policy, it transfers its knowledge to the other Vision Agents. The other Vision Agents may have different (and unknown) intrinsic and extrinsic camera parameters, so a certain amount of re-learning is needed; reinforcement learning is well suited to this. In this paper, we present the structure of the learning system and discuss the optimal values of the learning parameters. In the experiments, the learning phase of the first agent was carried out; then the knowledge propagation and the re-learning stage of three different agents were tested. The experimental results demonstrate the feasibility of the approach and the possibility of porting the system to the actual robot and cameras.
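The abstract names SARSA(λ) as the learning algorithm used by the first Vision Agent. As an illustration only (not the paper's implementation), the following is a minimal tabular SARSA(λ) sketch with replacing eligibility traces; the environment interface, state encoding, and all parameter values here are hypothetical.

```python
import random
from collections import defaultdict


def epsilon_greedy(Q, s, actions, epsilon):
    """Pick a random action with probability epsilon, else the greedy one."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])


def sarsa_lambda_episode(env_step, start_state, actions, Q, alpha=0.1,
                         gamma=0.9, lam=0.8, epsilon=0.1, max_steps=100):
    """Run one SARSA(lambda) episode, updating Q in place.

    env_step(s, a) is a hypothetical environment hook returning
    (reward, next_state, done).
    """
    e = defaultdict(float)                       # eligibility traces
    s = start_state
    a = epsilon_greedy(Q, s, actions, epsilon)
    for _ in range(max_steps):
        r, s_next, done = env_step(s, a)
        a_next = epsilon_greedy(Q, s_next, actions, epsilon)
        # TD error: terminal transitions do not bootstrap.
        delta = r + (0.0 if done else gamma * Q[(s_next, a_next)]) - Q[(s, a)]
        e[(s, a)] = 1.0                          # replacing trace
        for key in list(e):                      # propagate the error backward
            Q[key] += alpha * delta * e[key]
            e[key] *= gamma * lam
        if done:
            break
        s, a = s_next, a_next
    return Q
```

The eligibility traces are what the λ in SARSA(λ) refers to: each TD error is credited not only to the latest state-action pair but, with geometrically decaying weight γλ, to the whole recent trajectory, which typically speeds up credit assignment in navigation tasks like the one described.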
2004
Istituto di Studi sui Sistemi Intelligenti per l'Automazione - ISSIA - Sede Bari
English
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2004)
3
2743
2749
0780384636
http://www.scopus.com/record/display.url?eid=2-s2.0-14044250905&origin=inward
Yes, but type not specified
2004
Sendai
omnidirectional vision system
5
none
Menegatti, E; Cicirelli, G; Simionato, C; D'Orazio, T; Ishiguro, H
273
info:eu-repo/semantics/conferenceObject
04 Conference contribution::04.01 Contribution in conference proceedings
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/104238
Citations
  • PMC: ND
  • Scopus: 3
  • Web of Science (ISI): ND