
Context-dependent explainable daily automations

Gallo S.; Maenza S.; Mattioli A.; Paternò F.
2025

Abstract

The pervasiveness of objects equipped with sensors and actuators in everyday environments has made it possible to create automations that connect their behaviour through trigger-action rules. However, a smart space may contain multiple active automations, possibly created by different people, so the resulting behaviour can differ from what users expect for several reasons. There is therefore a need for explanations that indicate why such problems occur and how to modify the relevant automations to better achieve the current user goals. For such explanations to be effective, they should be context-dependent, because the actual results may depend on the current state of environmental variables. We introduce an approach that supports such adaptive explanations, together with two possible front-ends to present them (one visual and one conversational), and report the feedback received in a preliminary user study, from which general suggestions can be derived about how to present such explanations.
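To make the scenario in the abstract concrete, the following is a minimal sketch (all names and rules are hypothetical, not from the paper) of how two trigger-action automations targeting the same actuator can interact, and how a context-dependent explanation of the resulting behaviour might be generated from the current state of environmental variables:

```python
# Minimal sketch: two trigger-action rules acting on the same actuator,
# plus context-dependent explanations of what happened and why.
# All rule names, variables, and the engine itself are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, Dict, List

State = Dict[str, object]  # current context: sensor readings and actuator states

@dataclass
class Rule:
    name: str
    trigger: Callable[[State], bool]   # condition evaluated on the current context
    action: str                        # actuator variable written when the rule fires
    value: object                      # value written to that variable

def fire(rules: List[Rule], state: State) -> List[str]:
    """Apply every rule whose trigger holds in the current context; return one
    explanation per write, flagging cases where a later rule overrides an
    earlier one (a typical source of unexpected behaviour)."""
    explanations = []
    for rule in rules:
        if rule.trigger(state):
            previous = state.get(rule.action)
            state[rule.action] = rule.value
            msg = (f"'{rule.name}' fired (its trigger holds in the current "
                   f"context) and set {rule.action}={rule.value}")
            if previous is not None and previous != rule.value:
                msg += f", overriding the earlier value '{previous}'"
            explanations.append(msg)
    return explanations

# Two automations, possibly created by different people, target the same light.
rules = [
    Rule("evening-light", lambda s: s["lux"] < 50, "light", "on"),
    Rule("save-energy", lambda s: s["occupancy"] == 0, "light", "off"),
]
state = {"lux": 20, "occupancy": 0}
for line in fire(rules, state):
    print(line)
```

With a different context (say, an occupied room at noon), a different subset of rules fires and the explanations change accordingly, which is the sense in which such explanations are context-dependent.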
Istituto di Scienza e Tecnologie dell'Informazione "Alessandro Faedo" - ISTI
End-user development, Explainability, Trigger-action programming, Internet of things
Files in this item:
File: AXAI-paper07.pdf
Access: open access
Type: Published version (PDF)
License: Creative Commons
Size: 717.55 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/543763
Citations
  • PMC: not available
  • Scopus: 1
  • Web of Science: not available