
Oral text reading as a multi-sensory task

Marzi C.; Nadalini A.; Lento A.; Srivastava M.; Todesco A.; Pirrelli V.; Ferro M.
2025

Abstract

Reading aloud involves the complex interplay of visual, motor and lexical processes. While eye movements have been extensively investigated in the reading literature, less is known about the coordination of voice, eye and finger movements in oral and finger-point reading. Here we propose a multimodal perspective on these dynamics, emphasising the contribution of integrating eye-tracking, finger-tracking, and voice recording to a more comprehensive understanding of reading proficiency. Our results show that finger and eye movements are strongly coupled in early readers. Conversely, skilled readers show a more flexible coordination of sensorimotor signals and a more adaptive sensitivity to prosodic structures, with voice articulation slowing at key structural points, such as chunk heads and sentence-final boundaries. These findings provide novel insights into how multimodal coordination evolves with reading expertise, contributing to a more fine-grained understanding of reading fluency.
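The record's keywords include the eye-voice span (EVS): at the moment a reader articulates a word, how far ahead (in words) their eyes have already travelled. The sketch below is a hypothetical illustration of that metric only, not the authors' actual method; all function names and timing data are invented for demonstration.

```python
# Hypothetical illustration of the eye-voice span (EVS) metric: at the moment
# a word is spoken, how many words ahead the reader's eyes are. All names and
# timings below are invented; this is not the paper's implementation.

def eye_voice_span(eye_onsets, voice_onsets):
    """For each spoken word, count how many words ahead the eyes are.

    eye_onsets[i] / voice_onsets[i]: time (in seconds) at which word i is
    first fixated / first articulated. Both lists cover the same word list.
    """
    spans = []
    for i, v_t in enumerate(voice_onsets):
        # Index of the furthest word already fixated when word i is spoken.
        furthest = max((j for j, e_t in enumerate(eye_onsets) if e_t <= v_t),
                       default=i)
        spans.append(furthest - i)  # positive = eyes lead the voice
    return spans

# Invented timings for a five-word sentence (seconds from trial start).
eye = [0.00, 0.25, 0.45, 0.70, 0.95]    # first-fixation onsets
voice = [0.30, 0.60, 0.85, 1.10, 1.35]  # articulation onsets

print(eye_voice_span(eye, voice))  # eyes lead by one word, converging at the end
```

The analogous finger-voice span would substitute finger-pointing onsets for the fixation onsets under the same alignment logic.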
DC Field Value Language
dc.authority.ancejournal LINGUE E LINGUAGGIO en
dc.authority.orgunit Istituto di linguistica computazionale "Antonio Zampolli" - ILC en
dc.authority.orgunit Istituto di Scienza e Tecnologie dell'Informazione "Alessandro Faedo" - ISTI en
dc.authority.people Marzi C. en
dc.authority.people Nadalini A. en
dc.authority.people Lento A. en
dc.authority.people Srivastava M. en
dc.authority.people Todesco A. en
dc.authority.people Pirrelli V. en
dc.authority.people Ferro M. en
dc.collection.id.s b3f88f24-048a-4e43-8ab1-6697b90e068e *
dc.collection.name 01.01 Articolo in rivista *
dc.contributor.appartenenza Istituto di Scienza e Tecnologie dell'Informazione "Alessandro Faedo" - ISTI *
dc.contributor.appartenenza Istituto di linguistica computazionale "Antonio Zampolli" - ILC *
dc.contributor.appartenenza.mi 918 *
dc.contributor.appartenenza.mi 973 *
dc.contributor.area Not assigned *
dc.contributor.area Not assigned *
dc.contributor.area Not assigned *
dc.contributor.area Not assigned *
dc.contributor.area Not assigned *
dc.contributor.area Not assigned *
dc.contributor.area Not assigned *
dc.date.accessioned 2025/07/15 17:19:30 -
dc.date.available 2025/07/15 17:19:30 -
dc.date.firstsubmission 2025/07/15 16:11:33 *
dc.date.issued 2025 -
dc.date.submission 2025/07/15 17:16:45 *
dc.description.abstracteng Reading aloud involves the complex interplay of visual, motor and lexical processes. While eye movements have been extensively investigated in the reading literature, less is known about the coordination of voice, eye and finger movements in oral and finger-point reading. Here we propose a multimodal perspective on these dynamics, emphasising the contribution of integrating eye-tracking, finger-tracking, and voice recording to a more comprehensive understanding of reading proficiency. Our results show that finger and eye movements are strongly coupled in early readers. Conversely, skilled readers show a more flexible coordination of sensorimotor signals and a more adaptive sensitivity to prosodic structures, with voice articulation slowing at key structural points, such as chunk heads and sentence-final boundaries. These findings provide novel insights into how multimodal coordination evolves with reading expertise, contributing to a more fine-grained understanding of reading fluency. -
dc.description.allpeople Marzi, C.; Nadalini, A.; Lento, A.; Srivastava, M.; Todesco, A.; Pirrelli, V.; Ferro, M. -
dc.description.allpeopleoriginal Marzi C.; Nadalini A.; Lento A.; Srivastava M.; Todesco A.; Pirrelli V.; Ferro M. en
dc.description.fulltext restricted en
dc.description.international no en
dc.description.numberofauthors 7 -
dc.identifier.doi 10.1418/117447 en
dc.identifier.isi WOS:001573429600001 -
dc.identifier.scopus 2-s2.0-105011840282 -
dc.identifier.source manual *
dc.identifier.uri https://hdl.handle.net/20.500.14243/549321 -
dc.identifier.url https://www.rivisteweb.it/doi/10.1418/117447 en
dc.language.iso eng en
dc.relation.firstpage 141 en
dc.relation.issue 1 en
dc.relation.lastpage 156 en
dc.relation.medium STAMPA en
dc.relation.numberofpages 16 en
dc.relation.volume XXIV en
dc.subject.keywordseng reading development -
dc.subject.keywordseng multimodal integration -
dc.subject.keywordseng eye-voice span -
dc.subject.keywordseng finger-voice span -
dc.subject.keywordseng adaptive reading -
dc.subject.singlekeyword reading development *
dc.subject.singlekeyword multimodal integration *
dc.subject.singlekeyword eye-voice span *
dc.subject.singlekeyword finger-voice span *
dc.subject.singlekeyword adaptive reading *
dc.title Oral text reading as a multi-sensory task en
dc.type.circulation Internazionale en
dc.type.driver info:eu-repo/semantics/article -
dc.type.full 01 Contributo su Rivista::01.01 Articolo in rivista it
dc.type.impactfactor si en
dc.type.miur 262 -
dc.type.referee Esperti anonimi en
iris.isi.extIssued 2025 -
iris.isi.extTitle ORAL TEXT READING AS A MULTI-SENSORY TASK -
iris.isi.ideLinkStatusDate 2026/04/20 15:00:02 *
iris.isi.ideLinkStatusMillisecond 1776690002600 *
iris.isi.metadataErrorDescription 0 -
iris.isi.metadataErrorType ERROR_NO_MATCH -
iris.isi.metadataStatus ERROR -
iris.mediafilter.data 2025/11/25 03:55:56 *
iris.orcid.lastModifiedDate 2026/04/20 15:00:02 *
iris.orcid.lastModifiedMillisecond 1776690002591 *
iris.scopus.extIssued 2025 -
iris.scopus.extTitle ORAL TEXT READING AS A MULTI-SENSORY TASK -
iris.scopus.ideLinkStatusDate 2026/04/20 15:00:00 *
iris.scopus.ideLinkStatusMillisecond 1776690000516 *
iris.sitodocente.maxattempts 1 -
iris.unpaywall.metadataCallLastModified 28/04/2026 05:03:42 -
iris.unpaywall.metadataCallLastModifiedMillisecond 1777345423012 -
iris.unpaywall.metadataErrorDescription 0 -
iris.unpaywall.metadataErrorType ERROR_NO_MATCH -
iris.unpaywall.metadataStatus ERROR -
isi.authority.ancejournal LINGUE E LINGUAGGIO###1720-9331 *
isi.category OY *
isi.contributor.affiliation Consiglio Nazionale delle Ricerche (CNR) -
isi.contributor.affiliation Consiglio Nazionale delle Ricerche (CNR) -
isi.contributor.affiliation Consiglio Nazionale delle Ricerche (CNR) -
isi.contributor.affiliation Consiglio Nazionale delle Ricerche (CNR) -
isi.contributor.affiliation Consiglio Nazionale delle Ricerche (CNR) -
isi.contributor.affiliation Consiglio Nazionale delle Ricerche (CNR) -
isi.contributor.affiliation Consiglio Nazionale delle Ricerche (CNR) -
isi.contributor.country Italy -
isi.contributor.country Italy -
isi.contributor.country Italy -
isi.contributor.country Italy -
isi.contributor.country Italy -
isi.contributor.country Italy -
isi.contributor.country Italy -
isi.contributor.name Claudia -
isi.contributor.name Andrea -
isi.contributor.name Alessandro -
isi.contributor.name Manu -
isi.contributor.name Alice -
isi.contributor.name Vito -
isi.contributor.name Marcello -
isi.contributor.researcherId C-8034-2012 -
isi.contributor.researcherId DKG-8281-2022 -
isi.contributor.researcherId LII-9266-2024 -
isi.contributor.researcherId NEK-1073-2025 -
isi.contributor.researcherId OXZ-1833-2025 -
isi.contributor.researcherId DXW-8155-2022 -
isi.contributor.researcherId HZR-5504-2023 -
isi.contributor.subaffiliation Inst Computat Linguist CNR ILC -
isi.contributor.subaffiliation Inst Computat Linguist CNR ILC -
isi.contributor.subaffiliation Inst Computat Linguist CNR ILC -
isi.contributor.subaffiliation Inst Computat Linguist CNR ILC -
isi.contributor.subaffiliation Inst Computat Linguist CNR ILC -
isi.contributor.subaffiliation Inst Computat Linguist CNR ILC -
isi.contributor.subaffiliation Inst Computat Linguist CNR ILC -
isi.contributor.surname Marzi -
isi.contributor.surname Nadalini -
isi.contributor.surname Lento -
isi.contributor.surname Srivastava -
isi.contributor.surname Todesco -
isi.contributor.surname Pirrelli -
isi.contributor.surname Ferro -
isi.date.issued 2025 *
isi.description.abstracteng Reading aloud involves the complex interplay of visual, motor and lexical processes. While eye movements have been extensively investigated in the reading literature, less is known about the coordination of voice, eye and finger movements in oral and finger-point reading. Here we propose a multimodal perspective on these dynamics, emphasising the contribution of integrating eye-tracking, finger-tracking, and voice recording to a more comprehensive understanding of reading proficiency. Our results show that finger and eye movements are strongly coupled in early readers. Conversely, skilled readers show a more flexible coordination of sensorimotor signals and a more adaptive sensitivity to prosodic structures, with voice articulation slowing at key structural points, such as chunk heads and sentence-final boundaries. These findings provide novel insights into how multimodal coordination evolves with reading expertise, contributing to a more fine-grained understanding of reading fluency. *
isi.description.allpeopleoriginal Marzi, C; Nadalini, A; Lento, A; Srivastava, M; Todesco, A; Pirrelli, V; Ferro, M; *
isi.document.sourcetype WOS.ESCI *
isi.document.type Article *
isi.document.types Article *
isi.identifier.isi WOS:001573429600001 *
isi.journal.journaltitle LINGUE E LINGUAGGIO *
isi.journal.journaltitleabbrev LINGUE LINGUAGGIO *
isi.language.original English *
isi.publisher.place STRADA MAGGIORE 37, 40125 BOLOGNA, ITALY *
isi.relation.firstpage 141 *
isi.relation.issue 1 *
isi.relation.lastpage 156 *
isi.relation.volume 24 *
isi.title ORAL TEXT READING AS A MULTI-SENSORY TASK *
scopus.authority.ancejournal LINGUE E LINGUAGGIO###1720-9331 *
scopus.category 1203 *
scopus.category 3310 *
scopus.contributor.affiliation Institute for Computational Linguistics (CNR-ILC) -
scopus.contributor.affiliation Institute for Computational Linguistics (CNR-ILC) -
scopus.contributor.affiliation Institute for Computational Linguistics (CNR-ILC) -
scopus.contributor.affiliation Institute for Computational Linguistics (CNR-ILC) -
scopus.contributor.affiliation Institute for Computational Linguistics (CNR-ILC) -
scopus.contributor.affiliation Institute for Computational Linguistics (CNR-ILC) -
scopus.contributor.affiliation Institute for Computational Linguistics (CNR-ILC) -
scopus.contributor.afid 60021199 -
scopus.contributor.afid 60021199 -
scopus.contributor.afid 60021199 -
scopus.contributor.afid 60021199 -
scopus.contributor.afid 60021199 -
scopus.contributor.afid 60021199 -
scopus.contributor.afid 60021199 -
scopus.contributor.auid 36621334800 -
scopus.contributor.auid 57192941272 -
scopus.contributor.auid 58941792900 -
scopus.contributor.auid 60017833000 -
scopus.contributor.auid 60017255400 -
scopus.contributor.auid 14833305800 -
scopus.contributor.auid 15759406100 -
scopus.contributor.country Italy -
scopus.contributor.country Italy -
scopus.contributor.country Italy -
scopus.contributor.country Italy -
scopus.contributor.country Italy -
scopus.contributor.country Italy -
scopus.contributor.country Italy -
scopus.contributor.dptid -
scopus.contributor.dptid -
scopus.contributor.dptid -
scopus.contributor.dptid -
scopus.contributor.dptid -
scopus.contributor.dptid -
scopus.contributor.dptid -
scopus.contributor.name Claudia -
scopus.contributor.name Andrea -
scopus.contributor.name Alessandro -
scopus.contributor.name Manu -
scopus.contributor.name Alice -
scopus.contributor.name Vito -
scopus.contributor.name Marcello -
scopus.contributor.subaffiliation National Research Council; -
scopus.contributor.subaffiliation National Research Council; -
scopus.contributor.subaffiliation National Research Council; -
scopus.contributor.subaffiliation National Research Council; -
scopus.contributor.subaffiliation National Research Council; -
scopus.contributor.subaffiliation National Research Council; -
scopus.contributor.subaffiliation National Research Council; -
scopus.contributor.surname Marzi -
scopus.contributor.surname Nadalini -
scopus.contributor.surname Lento -
scopus.contributor.surname Srivastava -
scopus.contributor.surname Todesco -
scopus.contributor.surname Pirrelli -
scopus.contributor.surname Ferro -
scopus.date.issued 2025 *
scopus.description.abstracteng Reading aloud involves the complex interplay of visual, motor and lexical processes. While eye movements have been extensively investigated in the reading literature, less is known about the coordination of voice, eye and finger movements in oral and finger-point reading. Here we propose a multimodal perspective on these dynamics, emphasising the contribution of integrating eye-tracking, finger-tracking, and voice recording to a more comprehensive understanding of reading proficiency. Our results show that finger and eye movements are strongly coupled in early readers. Conversely, skilled readers show a more flexible coordination of sensorimotor signals and a more adaptive sensitivity to prosodic structures, with voice articulation slowing at key structural points, such as chunk heads and sentence-final boundaries. These findings provide novel insights into how multimodal coordination evolves with reading expertise, contributing to a more fine-grained understanding of reading fluency. *
scopus.description.allpeopleoriginal Marzi C.; Nadalini A.; Lento A.; Srivastava M.; Todesco A.; Pirrelli V.; Ferro M. *
scopus.differences scopus.subject.keywords *
scopus.differences scopus.relation.volume *
scopus.document.type ar *
scopus.document.types ar *
scopus.identifier.doi 10.1418/117447 *
scopus.identifier.eissn 2612-0488 *
scopus.identifier.pui 2039738145 *
scopus.identifier.scopus 2-s2.0-105011840282 *
scopus.journal.sourceid 19700200936 *
scopus.language.iso eng *
scopus.publisher.name Societa Editrice Il Mulino *
scopus.relation.firstpage 141 *
scopus.relation.issue 1 *
scopus.relation.lastpage 156 *
scopus.relation.volume 24 *
scopus.subject.keywords adaptive reading; eye-voice span; finger-voice span; multimodal integration; reading development; *
scopus.title ORAL TEXT READING AS A MULTI-SENSORY TASK *
scopus.titleeng ORAL TEXT READING AS A MULTI-SENSORY TASK *
Appears in collections: 01.01 Journal article
Files in this product:

File: 09_Marzi_et_al_2025_1.pdf (authorized users only)
Description: Oral text reading as a multi-sensory task
Type: Publisher's version (PDF)
License: NOT PUBLIC - Private/restricted access
Size: 892.46 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/549321
Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science: 0