
Toward the Development of an Integrative Framework for Multimodal Dialogue Processing

D'Ulizia, Arianna; Ferri, Fernando; Grifoni, Patrizia
2008

Abstract

The concept of "universal accessibility" is acquiring an important role in the research area of human-computer interaction (HCI). This trend is driven by the need to simplify access to technological devices, such as mobile phones, PDAs, and portable PCs, by making human-computer interaction more similar to human-human communication. In this direction, multimodal interaction has emerged as a new paradigm of human-computer interaction that advances the implementation of universal accessibility. The main challenge of multimodal interaction, which is also the main topic of this paper, lies in developing a framework that is able to acquire information from any input modality, to give these inputs an appropriate representation with a common meaning, to integrate these individual representations into a joint semantic interpretation, and to determine the best way to react to the interpreted multimodal sentence by activating the appropriate output devices. A detailed description of this framework and its functionalities is given in this paper, along with some preliminary application details.
Istituto di Ricerche sulla Popolazione e le Politiche Sociali - IRPPS
Multimodal languages
System Architecture
Human-Computer Interaction


Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/41156