
Neurosymbolic Graph Enrichment for Grounded World Models

Stefano De Giorgis; Aldo Gangemi; Alessandro Russo
In press

Abstract

The development of artificial intelligence systems capable of understanding and reasoning about complex real-world scenarios is a significant challenge. In this work, we present a novel approach to enhance and exploit LLM reactive capability to address complex problems and interpret deeply contextual real-world meaning. We introduce a method and a tool for creating a multimodal, knowledge-augmented formal representation of meaning that combines the strengths of large language models with structured semantic representations. Our method begins with an image input, utilizing state-of-the-art large language models to generate a natural language description. This description is then transformed into an Abstract Meaning Representation (AMR) graph, which is formalized and enriched with logical design patterns and layered semantics derived from linguistic and factual knowledge bases. The resulting graph is then fed back into the LLM to be extended with implicit knowledge activated by complex heuristic learning, including semantic implicatures, moral values, embodied cognition, and metaphorical representations. By bridging the gap between unstructured language models and formal semantic structures, our method opens new avenues for tackling intricate problems in natural language understanding and reasoning.
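The pipeline the abstract describes (image → LLM caption → AMR graph → knowledge-layered enrichment → LLM extension with implicit knowledge) can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: every function is a hypothetical stand-in for the real components (image captioner, AMR parser, knowledge bases, LLM), and the toy dict-of-edges graph stands in for actual PENMAN/RDF output.

```python
# Illustrative sketch of the enrichment pipeline from the abstract.
# All functions are hypothetical stand-ins; the real system would call
# an LLM captioner, an AMR parser, and external knowledge bases.

def describe_image(image_path: str) -> str:
    """Stand-in for an LLM-based image-to-text description step."""
    return "A person helps an elderly man cross the street."

def parse_to_amr(sentence: str) -> dict:
    """Stand-in for an AMR parser: returns a toy graph as
    {node: [(role, target), ...]} instead of real PENMAN notation."""
    return {
        "help-01": [(":ARG0", "person"), (":ARG1", "man"),
                    (":ARG2", "cross-02")],
        "man": [(":mod", "elderly")],
        "cross-02": [(":ARG1", "street")],
    }

def enrich(graph: dict) -> dict:
    """Stand-in for formalization and enrichment with logical design
    patterns and layered semantics from knowledge bases."""
    enriched = {node: list(edges) for node, edges in graph.items()}
    enriched["help-01"].append((":frame", "Assistance"))  # linguistic layer
    enriched["help-01"].append((":value", "Care"))        # moral-value layer
    return enriched

def extend_with_llm(graph: dict) -> dict:
    """Stand-in for feeding the graph back to the LLM to activate
    implicit knowledge (implicatures, embodied cognition, metaphor)."""
    extended = {node: list(edges) for node, edges in graph.items()}
    extended["cross-02"].append(
        (":implicature", "the man has difficulty walking alone"))
    return extended

description = describe_image("scene.jpg")
amr = parse_to_amr(description)
world_model = extend_with_llm(enrich(amr))
print(len(world_model), "nodes in the final graph")
```

The key design point conveyed here is that each stage only adds edges, never rewrites the parse: the AMR core stays intact while frames, values, and implicatures accumulate as further semantic layers on top of it.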
Istituto di Scienze e Tecnologie della Cognizione - ISTC
Istituto di Scienze e Tecnologie della Cognizione - ISTC - Sede Secondaria Catania
Neurosymbolic AI
Knowledge Representation
Knowledge Extraction
Large Language Models
Graph KAG
Hybrid Reasoning
Files in this record:
_Information_Processing___Management__iTAF_2025_post_review-2.pdf (Adobe PDF, 2.83 MB)
Type: Pre-print document
License: NOT PUBLIC - private/restricted access (authorized users only)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/539442