Non-uniform hypothesis in deductive databases with uncertainty
Straccia U
2002
Abstract
Many frameworks have been proposed for the management of uncertainty in logic programs (see, e.g., [1] for a list of references). Roughly, they can be classified into annotation-based (AB) and implication-based (IB) approaches. In the AB approach, a rule has the form A : f(β1, ..., βn) <- B1 : β1, ..., Bn : βn, asserting "the certainty of atom A is at least (or is in) f(β1, ..., βn) whenever the certainty of atom Bi is at least (or is in) βi, 1 <= i <= n" (f is an n-ary computable function and βi is either a constant or a variable ranging over a certainty domain). In the IB approach, a rule has the form A <-α B1, ..., Bn, where α is a certainty attached to the rule. Computationally, given an assignment v of certainties to the Bi, the certainty of A is computed by taking the "conjunction" of the certainties v(Bi) and then "propagating" it to the rule head. While the treatment of implication in the AB approach is closer to classical logic, the way rules are fired in the IB approach is more intuitive; broadly, the IB approach is considered easier to use and more amenable to efficient implementation. A common feature of both approaches, however, is that the assumption made about atoms whose logical values cannot be inferred is the same for all atoms: the AB approach adopts the Open World Assumption (OWA), under which the default truth value of any atom is unknown, while in the IB approach this default value is the bottom element of a truth lattice, e.g. false. We believe that it should be possible to associate with a logic program a semantics based on any given hypothesis, representing our default or assumed knowledge.
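
To make the IB evaluation step concrete, the sketch below is a minimal, hypothetical illustration and is not taken from the paper: it fixes the "conjunction" of body certainties to min, propagates to the head by multiplying with the rule certainty α, and, in the spirit of the paper's thesis, starts the computation from a per-atom default hypothesis rather than a uniform bottom value. The function name `evaluate`, the [0, 1] certainty scale, and the choice of operators are all assumptions made for illustration.

```python
# Illustrative sketch of implication-based (IB) rule evaluation.
# Assumptions (not from the paper): certainties lie in [0, 1],
# "conjunction" = min, propagation = multiplication by the rule
# certainty alpha, and defaults come from a per-atom hypothesis.

def evaluate(rules, hypothesis, atoms):
    """Fixpoint-style evaluation of IB rules, starting from the given
    default hypothesis (a mapping atom -> default certainty)."""
    v = {a: hypothesis.get(a, 0.0) for a in atoms}  # non-uniform defaults
    changed = True
    while changed:
        changed = False
        for head, alpha, body in rules:             # rule: head <-alpha body
            body_certainty = min(v[b] for b in body) if body else 1.0
            derived = alpha * body_certainty        # propagate to the head
            if derived > v[head]:                   # keep the highest certainty
                v[head] = derived
                changed = True
    return v

# q is assumed true by default (certainty 1.0); r keeps the default 0.0.
rules = [("p", 0.8, ["q"]), ("s", 0.9, ["r"])]
print(sorted(evaluate(rules, {"q": 1.0}, {"p", "q", "r", "s"}).items()))
# [('p', 0.8), ('q', 1.0), ('r', 0.0), ('s', 0.0)]
```

Under the uniform IB assumption the hypothesis would map every atom to the bottom element (here 0.0); passing a non-uniform mapping such as {"q": 1.0} is the kind of per-atom default knowledge the paper argues a semantics should be able to accommodate.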