
Policy learning for time-bounded reachability in continuous-time Markov decision processes via doubly-stochastic gradient ascent

Bortolussi L.
2016

Abstract

Continuous-time Markov decision processes are an important class of models in a wide range of applications, ranging from cyber-physical systems to synthetic biology. A central problem is how to devise a policy to control the system in order to maximise the probability of satisfying a set of temporal logic specifications. Here we present a novel approach based on statistical model checking and an unbiased estimation of a functional gradient in the space of possible policies. The statistical approach has several advantages over conventional approaches based on uniformisation, as it can also be applied when the model is replaced by a black box, and does not suffer from state-space explosion. The use of a stochastic gradient to guide our search considerably improves the efficiency of learning policies. We demonstrate the method on a proof-of-principle non-linear population model, showing strong performance in a non-trivial task.
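To make the approach concrete, the sketch below shows how an unbiased stochastic gradient of a time-bounded reachability probability can be estimated by simulation and used for gradient ascent. It is a minimal illustration under stated assumptions, not the paper's algorithm: the toy CTMDP (`rates`), the tabular softmax policy and all names and parameters are hypothetical, and only one of the two sources of stochasticity (trajectory sampling) is shown. The estimator is the standard score-function identity: the gradient of P(reach goal by T) equals the expectation of the reach indicator times the summed gradients of log π(a|s) along the trajectory.

```python
import numpy as np

# Minimal, hypothetical example: score-function (REINFORCE-style) gradient
# ascent on a tabular softmax policy for time-bounded reachability in a toy
# CTMDP. All model details below are illustrative assumptions, not the model
# or algorithm from the paper.

STATES = [0, 1, 2, 3]   # toy state space; state 3 is the goal
ACTIONS = [0, 1]        # action 1 boosts the "up" transition rate
T_BOUND = 5.0           # time bound of the reachability property

def rates(state, action):
    """Hypothetical action-dependent transition rates out of `state`."""
    up = 2.0 if action == 1 else 0.5
    return {min(state + 1, 3): up, 0: 0.3}

def policy(theta, state):
    """Softmax action probabilities for a tabular parametrisation."""
    logits = theta[state]
    p = np.exp(logits - logits.max())
    return p / p.sum()

def simulate(theta, rng):
    """One trajectory up to T_BOUND. Returns (goal reached, score), where
    score = sum of grad_theta log pi(a|s) over the actions taken, so that
    E[reached * score] is an unbiased estimate of grad P(reach by T)."""
    score = np.zeros_like(theta)
    t, s = 0.0, 0
    while s != 3:
        probs = policy(theta, s)
        a = rng.choice(len(ACTIONS), p=probs)
        score[s] -= probs          # grad log softmax: one-hot minus probs
        score[s, a] += 1.0
        out = rates(s, a)
        total = sum(out.values())
        t += rng.exponential(1.0 / total)   # exponential sojourn time
        if t >= T_BOUND:
            return 0.0, score
        targets = list(out)
        probs_next = np.array(list(out.values())) / total
        s = targets[rng.choice(len(targets), p=probs_next)]
    return 1.0, score

def gradient_ascent(iters=500, batch=100, lr=0.2, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.zeros((len(STATES), len(ACTIONS)))
    for _ in range(iters):
        grad = sum(r * g for r, g in (simulate(theta, rng) for _ in range(batch)))
        theta += lr * grad / batch   # ascend the estimated gradient
    return theta
```

In this toy setting, running `gradient_ascent()` drives the policy toward the fast action, raising the estimated probability of reaching the goal state within the bound; a realistic implementation would add variance-reduction baselines and a richer, possibly functional, policy class.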
Istituto di Scienza e Tecnologie dell'Informazione "Alessandro Faedo" - ISTI
ISBN: 978-3-319-43424-7
Keywords
Embedded systems
Markov processes
Model checking
Stochastic systems
Files in this record:

prod_424323-doc_151310.pdf (not available)
Description: Policy learning for time-bounded reachability in continuous-time Markov decision processes via doubly-stochastic gradient ascent
Type: Publisher's version (PDF)
Size: 421.72 kB
Format: Adobe PDF

prod_424323-doc_157722.pdf (open access)
Description: postprint
Type: Publisher's version (PDF)
Size: 325.1 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/411678
Citations
  • PMC: ND
  • Scopus: 3
  • ISI: ND