Editorial: Language Representation and Learning in Cognitive and Artificial Intelligence Systems

Esposito Massimo
2020

Abstract

In recent years, the rise of deep learning has transformed the field of Natural Language Processing (NLP), producing neural-network-based models with impressive achievements in tasks such as language modeling (Devlin et al., 2019), syntactic parsing (Pota et al., 2019), machine translation (Artetxe et al., 2017), sentiment analysis (Fu et al., 2019), and question answering (Zhang et al., 2019). This progress has been accompanied by a myriad of new end-to-end neural network architectures that map input text to an output prediction. In parallel, architectures inspired by human cognition have recently appeared (Dominey, 2013; Hinaut and Dominey, 2013; Golosio et al., 2015), aimed at modeling language comprehension and learning with neural models built according to current knowledge of how verbal information is stored and processed in the human brain. Despite the success of deep learning across NLP tasks and the promising attempts of cognitive systems, natural language understanding remains an open challenge for machines. The goal of this Research Topic is to present novel theoretical studies, models, and case studies in NLP as well as Cognitive and Artificial Intelligence (AI) systems, drawing on knowledge and expertise from heterogeneous but complementary disciplines (machine/deep learning, robotics, neuroscience, psychology).
Istituto di Calcolo e Reti ad Alte Prestazioni - ICAR
Natural Language Processing (NLP)
artificial intelligence
cognitive systems
robotics
deep learning
machine learning
language representation and language processing
Files in this item:
File: prod_438143-doc_157116.pdf (restricted to authorized users)
Description: Editorial: Language Representation and Learning in Cognitive and Artificial Intelligence Systems
Type: Published Version (PDF)
License: No license declared (not attributable to items published after 2023)
Size: 175.89 kB
Format: Adobe PDF

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/381099