
IMAGACT: Deriving an Action Ontology from Spoken Corpora

Moneglia Massimo; Gagliardi Gloria; Panunzi Alessandro; Frontini Francesca; Russo Irene; Monachini Monica
2012

Abstract

This paper presents the IMAGACT annotation infrastructure, which uses both corpus-based and competence-based methods for the simultaneous extraction of a language-independent action ontology from English and Italian spontaneous speech corpora. The infrastructure relies on an innovative methodology based on images of prototypical scenes and will identify high-frequency action concepts in everyday life, suitable for implementation across an open set of languages.
DC Field Value Language
dc.authority.orgunit Istituto di linguistica computazionale "Antonio Zampolli" - ILC -
dc.authority.people Moneglia Massimo it
dc.authority.people Gagliardi Gloria it
dc.authority.people Panunzi Alessandro it
dc.authority.people Frontini Francesca it
dc.authority.people Russo Irene it
dc.authority.people Monachini Monica it
dc.collection.id.s 71c7200a-7c5f-4e83-8d57-d3d2ba88f40d *
dc.collection.name 04.01 Contribution in conference proceedings *
dc.contributor.appartenenza Istituto di linguistica computazionale "Antonio Zampolli" - ILC *
dc.contributor.appartenenza.mi 918 *
dc.date.accessioned 2024/02/16 05:20:15 -
dc.date.available 2024/02/16 05:20:15 -
dc.date.issued 2012 -
dc.description.abstracteng This paper presents the IMAGACT annotation infrastructure, which uses both corpus-based and competence-based methods for the simultaneous extraction of a language-independent action ontology from English and Italian spontaneous speech corpora. The infrastructure relies on an innovative methodology based on images of prototypical scenes and will identify high-frequency action concepts in everyday life, suitable for implementation across an open set of languages. -
dc.description.affiliations [1] Università di Firenze; [2] CNR-ILC Pisa -
dc.description.allpeople Moneglia, Massimo; Gagliardi, Gloria; Panunzi, Alessandro; Frontini, Francesca; Russo, Irene; Monachini, Monica -
dc.description.allpeopleoriginal Moneglia, Massimo [1]; Gagliardi, Gloria [1]; Panunzi, Alessandro [1]; Frontini, Francesca [2]; Russo, Irene [2]; Monachini, Monica [2] -
dc.description.fulltext none en
dc.description.note ID_PUMA: /cnr.ilc/2012-A2-021 -
dc.description.numberofauthors 6 -
dc.identifier.isbn 978-90-74029-00-1 -
dc.identifier.uri https://hdl.handle.net/20.500.14243/122911 -
dc.language.iso eng -
dc.relation.alleditors Bunt H. -
dc.relation.conferencedate 3-5 October 2012 -
dc.relation.conferencename Eighth Joint ISO-ACL SIGSEM Workshop on Interoperable Semantic Annotation (ISA-8) -
dc.relation.conferenceplace Pisa, Italy -
dc.relation.firstpage 42 -
dc.relation.ispartofbook Proceedings of the Eighth Joint ISO-ACL SIGSEM Workshop on Interoperable Semantic Annotation (ISA-8) -
dc.relation.lastpage 47 -
dc.subject.keywords Action verbs; Ontology; imagery -
dc.subject.singlekeyword Action verbs *
dc.subject.singlekeyword Ontology *
dc.subject.singlekeyword imagery *
dc.title IMAGACT: Deriving an Action Ontology from Spoken Corpora en
dc.type.driver info:eu-repo/semantics/conferenceObject -
dc.type.full 04 Conference contribution::04.01 Contribution in conference proceedings it
dc.type.miur 273 -
dc.type.referee Yes, but type not specified -
dc.ugov.descaux1 220262 -
iris.orcid.lastModifiedDate 2024/04/04 14:50:15 *
iris.orcid.lastModifiedMillisecond 1712235015985 *
iris.sitodocente.maxattempts 1 -
Appears in collections: 04.01 Contribution in conference proceedings
Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14243/122911