From standard reasoning problems to non-standard reasoning problems and
one step further
Uli Sattler University of Manchester, University of Oslo uli.sattler@manchester.ac.uk
Some Advertisement
An Introduction to Description Logic
Franz Baader, Ian Horrocks, Carsten Lutz, Uli Sattler
Description Logics (DLs) have a long tradition in computer science and knowledge representation, being designed so that domain knowledge can be described and so that computers can reason about this knowledge. DLs have recently gained increased importance since they form the logical basis of widely used ontology languages, in particular the web ontology language OWL. Written by four renowned experts, this is the first textbook on Description Logic. It is suitable for self-study by graduates and as the basis for a university course. Starting from a basic DL, the book introduces the reader to their syntax, semantics, reasoning problems and model theory, and discusses the computational complexity of these reasoning problems and algorithms to solve them. It then explores a variety of reasoning techniques, knowledge-based applications and tools, and describes the relationship between DLs and OWL. Franz Baader is a professor in the Institute of Theoretical Computer Science at TU Dresden. Ian Horrocks is a professor in the Department of Computer Science at the University of Oxford. Carsten Lutz is a professor in the Department of Computer Science at the University of Bremen. Uli Sattler is a professor in the Information Management Group within the School of Computer Science at the University of Manchester.
Cover illustration: The Description Logic logo. Courtesy of Enrico Franconi.
Designed by Zoe Naylor.
Get 20% Discount:
www.cambridge.org/9780521695428
and enter the code BAADER2017 at the checkout
… algorithms; 5. Complexity; 6. Reasoning in the EL family of Description Logics; 7. Query answering; 8. Ontology languages and applications; Appendix A. Description Logic terminology; References; Index.
Hardback 978-0-521-87361-1: £59.99 → £47.99 / $79.99 → $63.99
Paperback 978-0-521-69542-8: £29.99 → £23.99 / $39.99 → $31.99
April. 228 x 152 mm, 260pp, 30 b/w illus.
We all know them: given C, D, O, T, A, …, decide/compute …
Non-standard: Justs(α, O), PinPoint(α, O), …, match(C, P, O), unify(P1, P2, O), …, x-mod(Σ, O), …, msc(a, O), lcs(C, D, O), …
Are these also standard reasoning problems?
understand problems:
– worst case – data – parametrised – …
understand solutions:
– see above
– worst case complexity ≠ best case complexity – amenable to optimisation – empirical evaluation
[Plot: number of tests against ontology/reasoner pairs (FaCT, HermiT, JFact, Pellet) on log scales from 10 to 10,000,000, with ST, N·log(N) and N² reference curves]
from our empirical evaluation: how many subsumption tests does classification involve?
x-mod(Σ, O), … Extensions/variants of DLs
… are problems that are based on …
SROIQ: ComSubs(C, D, { C ⊑ ∀R.(A ⊓ C), D ⊑ ∀R.(A ⊓ D) }) = { ∀R.A, ∀R.∀R.A, ∀R.∀R.∀R.A, … }
TBox + ABox → Learner → Hypotheses (sets of axioms)
Do not confuse with (exact) learning of TBoxes (via probing queries)
– taking background knowledge in KB into account
– unbiased: let the data speak!
– unsupervised (no positive/negative examples)
– Semantic Data Mining
correlations in KB? → Hypotheses (sets of axioms)
– a small set of short axioms: n_max, ℓ_max
– in a suitable DL: ALCHI … SROIQ
– free of redundancy: ✓ preferred laconic justifications
✓ informative: ∀α ∈ H : O ⊭ α (we want to mine new axioms)
✓ consistent: O ∪ H ⊭ ⊤ ⊑ ⊥
✓ non-redundant among all hypotheses: there are no H′, H ∈ ℋ with H ≠ H′ and H′ ≡ H
✓ logical strength: maximally strong? minimally strong?
✓ reconciliatory power: so far only loosely related
– learn from association rule mining (ARM):
– count instances, neg instances, non-instances
– make sure you treat a GCI as an axiom and not as a rule
– coverage, support, …, lift
Some useful notation:
        C1  C2  C3  C4  …
Ind1    X   X   X   ?   …
Ind2    X   X           …
Ind3    ?   ?   X   ?   …
Ind4    ?   ?   ?       …
…

Inst(C, O) := { a | O ⊨ C(a) }
UnKn(C, O) := Inst(⊤, O) \ (Inst(C, O) ∪ Inst(¬C, O))
P(C, O) := #Inst(C, O) / #Inst(⊤, O)
Some axiom measures easily adapted from ARM: for a GCI C ⊑ D define its metrics as follows:

Measure        basic                        relativized
Coverage       #Inst(C, O)                  P(C, O)
Support        #Inst(C ⊓ D, O)              P(C ⊓ D, O)
Contradiction  #Inst(C ⊓ ¬D, O)             P(C ⊓ ¬D, O)
Assumption     #(Inst(C, O) ∩ UnKn(D, O))   …
Confidence     P(C ⊓ D, O) / P(C, O)
Lift           P(C ⊓ D, O) / (P(C, O) · P(D, O))

where P(X, O) = #Inst(X, O) / #Inst(⊤, O)
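These ARM-style measures can be sketched in Python; this is an illustration under assumptions, not the talk's implementation: the instance sets Inst(C, O) are assumed to be precomputed (in practice by a DL reasoner), and the function and variable names are mine.

```python
# Sketch: ARM-style measures for a GCI C ⊑ D, given precomputed instance sets.
# Under the open-world assumption an individual is a known instance of C,
# a known instance of ¬C, or unknown (UnKn).

def unkn(inst_c, inst_not_c, all_ind):
    """Individuals whose membership in C is unknown."""
    return all_ind - (inst_c | inst_not_c)

def measures(inst_c, inst_d, inst_not_d, all_ind):
    """Relativized measures for the GCI C ⊑ D."""
    n = len(all_ind)
    p = lambda s: len(s) / n                 # P(X, O) = #Inst(X, O) / #Inst(⊤, O)
    support = inst_c & inst_d                # Inst(C ⊓ D, O)
    contradiction = inst_c & inst_not_d      # Inst(C ⊓ ¬D, O)
    assumption = inst_c & unkn(inst_d, inst_not_d, all_ind)
    return {
        "coverage": p(inst_c),
        "support": p(support),
        "contradiction": p(contradiction),
        "assumption": p(assumption),
        "confidence": p(support) / p(inst_c) if inst_c else 0.0,
        "lift": p(support) / (p(inst_c) * p(inst_d)) if inst_c and inst_d else 0.0,
    }

# Toy data mirroring the example table: 400 individuals,
# A known for Ind1..Ind200, B known for Ind1..Ind180.
all_ind = {f"Ind{i}" for i in range(1, 401)}
inst_a = {f"Ind{i}" for i in range(1, 201)}
inst_b = {f"Ind{i}" for i in range(1, 181)}
m = measures(inst_a, inst_b, set(), all_ind)   # measures for A ⊑ B
```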
                  A   B   C1  C2  …
Ind1 … Ind180     X   X   X   X   …
Ind181 … Ind200   X   ?   X   ?   …
Ind201 … Ind400   ?   ?   ?   ?   …

Measures for A ⊑ B, B ⊑ C1, B ⊑ C2 (relativized):

             A ⊑ B    B ⊑ C1   B ⊑ C2
Coverage     200/400  180/400  180/400
Support      180/400  180/400  180/400
Assumption   20/400   …        …
Confidence   180/200  180/180  180/180
Lift         400/200  400/200  400/180

i.e. Coverage 0.5, 0.45, 0.45; Support 0.45, 0.45, 0.45; Assumption 0.05; Confidence 0.45, 1, 1; Lift 2, 2, 2.22.
Oooops! The contrapositive: an axiom X ⊑ Y is equivalent to ¬Y ⊑ ¬X (read ¬Y as 'the resulting LHS', ¬X as 'the resulting RHS').

Are axiom measures semantically faithful, i.e. is Ass(A ⊑ B, O) = Ass(¬B ⊑ ¬A, O)?

No: axiom measures are not semantically faithful, e.g. Support(A ⊑ B, O) ≠ Support(⊤ ⊑ ¬A ⊔ B, O).
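The non-faithfulness claim can be made concrete with a small sketch; the numbers and the way the disjunction's instance set is approximated are my assumptions, not from the talk.

```python
# Illustration: under the open-world assumption, instance-based support counts
# known instances only, so the logically equivalent axioms A ⊑ B and
# ⊤ ⊑ ¬A ⊔ B can get different support values.

all_ind = {f"Ind{i}" for i in range(1, 401)}
inst_a = {f"Ind{i}" for i in range(1, 201)}        # known instances of A
inst_not_a = {f"Ind{i}" for i in range(201, 401)}  # known instances of ¬A
inst_b = {f"Ind{i}" for i in range(1, 181)}        # known instances of B

def p(s):
    return len(s) / len(all_ind)

# Support(A ⊑ B, O) counts Inst(A ⊓ B, O):
supp_impl = p(inst_a & inst_b)                     # 180/400 = 0.45

# Support(⊤ ⊑ ¬A ⊔ B, O) counts Inst(⊤ ⊓ (¬A ⊔ B), O),
# approximated here as Inst(¬A, O) ∪ Inst(B, O):
supp_top = p(inst_not_a | inst_b)                  # 380/400 = 0.95

assert supp_impl != supp_top                       # not semantically faithful
```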
Goal: learn small sets of (short) axioms

H1 = {A ⊑ B, B ⊑ C1} ≡ {⊤ ⊑ (¬A ⊔ B) ⊓ (¬B ⊔ C1)}

             A ⊑ B  B ⊑ C1  B ⊑ C2  H1
Coverage     0.5    0.45    0.45    1     always!
Support      0.45   0.45    0.45    0.45  min
Assumption   0.05                   0.55  ?
Confidence   0.45   1       1       0.45  support!
Lift         2      2       2.22    1     always!

H2 = {A ⊑ B, B ⊑ C2}

             A ⊑ B  B ⊑ C1  B ⊑ C2  H1      H2
Coverage     0.5    0.45    0.45    0.475?  0.475?
Support      0.45   0.45    0.45    0.45    0.45
Assumption   0.05                   0.05    0.05
Confidence   0.45   1       1       ?       ?
Lift         2      2       2.22    ?       ?
Fix finite sets of
– concepts C, closed under negation
– roles R

π(O, C, R) = { C(a) | O ⊨ C(a), C ∈ C } ∪ { R(a, b) | O ⊨ R(a, b), R ∈ R }
dLen(A, O) = min{ ℓ(A′) | A′ ∪ O ≡ A ∪ O }
fitn(H, O, C, R) = dLen(π(O, C, R), T) − dLen(π(O, C, R), T ∪ H)
Ass(O, H, C, R) = π(O ∪ H, C, R) \ π(O, C, R)
brave(H, O, C, R) = dLen(Ass(O, H, C, R), O)

Axiom set measures are semantically faithful, i.e., H ≡ H′ ⇒
fitn(H, O, C, R) = fitn(H′, O, C, R) and brave(H, O, C, R) = brave(H′, O, C, R)
For the data table above, with H1 = {A ⊑ B, B ⊑ C1} and H2 = {A ⊑ B, B ⊑ C2}:

fitn(H1, A, …) = dLen(π(A, …), ∅) − dLen(π(A, …), H1) = 760 − 380 = 380
fitn(H2, A, …) = dLen(π(A, …), ∅) − dLen(π(A, …), H2) = 760 − 400 = 360
brave(H1, A, …) = dLen(Ass(A, H1, …), A) = 20
brave(H2, A, …) = dLen(Ass(A, H2, …), A) = 40

⇒ H1 ≫ H2
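The braveness values 20 and 40 can be reproduced with a toy computation; this is a simplification I'm assuming (counting newly entailed atomic assertions under forward chaining over atomic subsumptions), not DL-Miner's actual dLen-based definition.

```python
# Sketch: "braveness" of H, measured here as the number of new concept
# assertions that O ∪ H entails beyond O, for hypotheses that are lists
# of atomic subsumptions (lhs, rhs).

def saturate(facts, hypothesis):
    """Forward-chain atomic subsumptions over known facts (copies the input)."""
    facts = {ind: set(cs) for ind, cs in facts.items()}
    changed = True
    while changed:
        changed = False
        for ind, cs in facts.items():
            for lhs, rhs in hypothesis:
                if lhs in cs and rhs not in cs:
                    cs.add(rhs)
                    changed = True
    return facts

def braveness(facts, hypothesis):
    closed = saturate(facts, hypothesis)
    return sum(len(closed[i]) - len(facts[i]) for i in facts)

# Toy ABox from the example: A, B, C1, C2 known for Ind1..Ind180;
# only A and C1 known for Ind181..Ind200; nothing known for the rest.
facts = {}
for i in range(1, 181):
    facts[i] = {"A", "B", "C1", "C2"}
for i in range(181, 201):
    facts[i] = {"A", "C1"}
for i in range(201, 401):
    facts[i] = set()

H1 = [("A", "B"), ("B", "C1")]
H2 = [("A", "B"), ("B", "C2")]
b1 = braveness(facts, H1)   # Ind181..Ind200 gain B only (C1 already known): 20
b2 = braveness(facts, H2)   # Ind181..Ind200 gain B and C2: 40
```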
Example: empty TBox; ABox with individuals labelled X, A, B, C and linked via R (diagram not recoverable):

fitn({X ⊑ ∀R.A}, A, …) = dLen(π(A, …), ∅) − dLen(π(A, …), {X ⊑ ∀R.A}) = 12 − 9 = 3
brave({X ⊑ ∀R.A}, A, …) = dLen(Ass(A, {X ⊑ ∀R.A}, …), A) = 1
TBox + ABox → Learner → Hypotheses: axiom(s), each with measures m1, m2, m3, … (we wanted to mine axioms!)

– semantically faithful: H ≡ H′ ⇒ fitn(O, H, C, R) = fitn(O, H′, C, R), and O ⊨ H ⇒ Ass(O, H, C, R) = ∅
– easy for (1), tricky for (2): dLen(A, O) = min{ ℓ(A′) | A′ ∪ O ≡ A ∪ O }

– do these measures correlate? how independent are they?
– are longer/bigger hypotheses better?
– how do we guide users through these?
Pipeline: TBox + ABox + parameters →
Ontology Cleaner (O) → Hypothesis Constructor (L, Σ) → Hypothesis Evaluator (Q) → Hypothesis Sorter (rf(H), H, qf(H, q))
→ axiom(s) with measures m1, m2, m3, …
Subjective Solution
Easy:
– finitely many concepts Ci in L, thanks to language bias
– for each candidate GCI Ci ⊑ Cj, check O ∪ {Ci ⊑ Cj} ⊭ ⊤ ⊑ ⊥ and O ⊭ Ci ⊑ Cj
– if yes, add it to H

Bonkers! Even for EL, with n concept/role names and max concept length k, there are
~n^k concepts Ci and ~n^(2k) GCIs to test
(e.g. n = 100, k = 4: ~100,000,000 concepts Ci, ~100,000,000² GCIs).
Use a refinement operator to build the Ci, informed by the ABox
– used in concept learning, conceptual blending
– a function ρ : Conc(L) → 𝒫(Conc(L)) such that, for each C ∈ L, C′ ∈ ρ(C): C′ ⊑ C
– proper if, for all C ∈ L, C′ ∈ ρ(C): C′ ≢ C
– complete if, for all C ∈ L, there is some n and C1 ∈ ρⁿ(⊤) with C1 ≡ C and ℓ(C1) ≤ ℓ(C)
– suitable if, for all C, C1 ∈ L: if C1 ⊏ C then there is some n and C2 ≡ C with C1 ∈ ρⁿ(C2)

Great: there are known refinement operators for ALC [LehmannHitzler2010]
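A downward refinement operator can be sketched for a tiny EL-style language; this toy operator is my own illustration (it is downward but makes no claim to be proper or complete), not the ALC operator from the literature.

```python
# Sketch: one-step downward refinement for a tiny EL-style concept language.
# Concepts are "Top", ("atom", A), ("and", C, D), or ("exists", R, C).

ATOMS = ["A", "B"]
ROLES = ["R"]

def length(c):
    """Concept length (counting ⊓ as free, as a simplification)."""
    if c == "Top" or c[0] == "atom":
        return 1
    if c[0] == "and":
        return length(c[1]) + length(c[2])
    if c[0] == "exists":
        return 1 + length(c[2])

def refine(c, max_len):
    """All one-step specialisations of c within the length bound;
    every result is subsumed by c (downward refinement)."""
    out = []
    if c == "Top":
        out += [("atom", a) for a in ATOMS]
        out += [("exists", r, "Top") for r in ROLES]
    elif c[0] == "atom":
        out += [("and", c, ("atom", a)) for a in ATOMS if a != c[1]]
        out += [("and", c, ("exists", r, "Top")) for r in ROLES]
    elif c[0] == "exists":
        out += [("exists", c[1], d) for d in refine(c[2], max_len - 1)]
    elif c[0] == "and":
        out += [("and", d, c[2]) for d in refine(c[1], max_len)]
        out += [("and", c[1], d) for d in refine(c[2], max_len)]
    return [d for d in out if length(d) <= max_len]

cands = refine("Top", 2)   # A, B, and ∃R.⊤
```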
Algorithm 8 DL-Apriori(O, Σ, DL, ℓmax, pmin)
 1: inputs
 2:   O := T ∪ A: an ontology
 3:   Σ: a finite set of terms such that ⊤ ∈ Σ
 4:   DL: a DL for concepts
 5:   ℓmax: a maximal length of a concept such that 1 ≤ ℓmax < ∞
 6:   pmin: a minimal concept support such that 0 < pmin ≤ |Ind(O)|
 7: outputs
 8:   𝒞: the set of suitable concepts
 9: do
10:   𝒞 ← ∅                    % initialise the final set of suitable concepts
11:   𝒟 ← {⊤}                  % initialise the set of concepts yet to be specialised
12:   ρ ← getOperator(DL)      % initialise a suitable operator ρ for DL
13:   while 𝒟 ≠ ∅ do
14:     C ← pick(𝒟)            % pick a concept C to be specialised
15:     𝒟 ← 𝒟 \ {C}            % remove C from the concepts to be specialised
16:     𝒞 ← 𝒞 ∪ {C}            % add C to the final set
17:     ρC ← specialise(C, ρ, Σ, ℓmax)   % specialise C using ρ
18:     𝒟C ← {D ∈ urc(ρC) | there is no D′ ∈ 𝒞 ∪ 𝒟 : D′ ≡ D}   % discard variations
19:     𝒟 ← 𝒟 ∪ {D ∈ 𝒟C | p(D, O) ≥ pmin}   % add suitable specialisations
20:   end while
21:   return 𝒞
Only specialise concepts with enough instances!
Don’t even construct most of the n^k concepts Ci
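The apriori-style loop can be sketched as follows; this is a simplified rendering of the algorithm above, with `refine` and `instances` as placeholders for a suitable refinement operator and reasoner-backed instance retrieval, and syntactic identity standing in for the equivalence check (urc/≡) of the real algorithm.

```python
from collections import deque

def dl_apriori(refine, instances, p_min, max_len):
    """Apriori-style concept search: only concepts whose instance count
    reaches p_min are specialised further, so most of the ~n^k candidate
    concepts are never constructed."""
    suitable = []                 # the final set of suitable concepts
    todo = deque(["Top"])         # concepts yet to be specialised
    seen = {"Top"}
    while todo:
        c = todo.popleft()
        suitable.append(c)
        for d in refine(c, max_len):        # specialise c
            if d in seen:                   # discard variants already met
                continue                    # (real code: logical equivalence)
            seen.add(d)
            if len(instances(d)) >= p_min:  # anti-monotone support pruning
                todo.append(d)
    return suitable

# Hypothetical toy instance function over concept names:
data = {"Top": set(range(10)), "A": {0, 1, 2, 3}, "B": {4}}
toy_refine = lambda c, ml: ["A", "B"] if c == "Top" else (["A_and_B"] if c == "A" else [])
result = dl_apriori(toy_refine, lambda c: data.get(c, set()), p_min=2, max_len=4)
# "B" and "A_and_B" are pruned for lack of support and never specialised.
```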
Pipeline revisited: TBox + ABox + parameters →
Ontology Cleaner (O) → Hypothesis Constructor (L, Σ): DL-Apriori(·), buildRolesTopDown(·), generateHypotheses(·) → Hypothesis Evaluator (Q) → Hypothesis Sorter (rf(H), H, qf(H, q))
→ axiom(s) with measures m1, m2, m3, …
Complete (for the parameters provided).
– hard test case for instance retrieval
– great test case for incremental reasoning: Pellet!
– dLen(A, O) = min{ ℓ(A′) | A′ ∪ O ≡ A ∪ O } is hard to compute, so DL-Miner implements an approximation:
  dLen*(A, O) = ℓ(A) − ℓ(Redundt(A, O))
– throw away all hypotheses that are dominated by another one – i.e., compute the Pareto front wrt the measures provided
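Computing the Pareto front can be sketched directly; the hypothesis names and measure values below are hypothetical, and all measures are assumed to be maximised (minimised ones, like braveness, would be negated first).

```python
# Sketch: keep only hypotheses not dominated by another one, i.e. compute
# the Pareto front with respect to the measures provided.

def pareto_front(hypos):
    """hypos: list of (name, measure_vector); returns the non-dominated ones."""
    def dominates(u, v):
        # u dominates v: at least as good everywhere, strictly better somewhere
        return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))
    return [(n, m) for n, m in hypos
            if not any(dominates(m2, m) for _, m2 in hypos if m2 != m)]

# Hypothetical measure vectors (support, confidence):
hypos = [("H1", (0.45, 0.9)), ("H2", (0.45, 0.5)), ("H3", (0.3, 1.0))]
front = pareto_front(hypos)   # H2 is dominated by H1; H1 and H3 survive
```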
Given a Kinship Ontology,¹ DL Miner mines 536 hypotheses with confidence above 0.9, e.g.:

Woman ⊓ ∃hasChild.⊤ ⊑ Mother
Man ⊓ ∃hasChild.⊤ ⊑ Father
∃hasChild.⊤ ⊑ ∃marriedTo.⊤
∃marriedTo.⊤ ⊑ ∃hasChild.⊤
∃marriedTo.Woman ⊑ Man
∃marriedTo.Mother ⊑ Father
Father ⊑ ∃marriedTo.(∃hasChild.⊤)
Mother ⊑ ∃marriedTo.(∃hasChild.⊤)
∃hasChild.⊤ ⊑ Mother ⊔ Father
∃hasChild.⊤ ⊑ Man ⊔ Woman
∃hasChild.⊤ ⊑ Father ⊔ Woman

Great: it works really well!
¹ UCI Machine Learning Repository
– do these measures correlate? – how independent are they?
– are longer/bigger hypotheses better?
– how do we guide users through these?
Corpus: 21 ontologies with >= 100 RAs
– L is SHI
– RIAs with inverse, composition
– min support = 10
– max concept length in GCIs = 4
– re. readability of hypotheses: what kind of axioms should we roughly aim for?

Length & role depth of axioms in BioPortal taxonomies:
        mean  mode  5%  25%  50%  75%  95%  99%  99.9%
length  2.63  3     2   2    3    3    3    3    5
depth   0.69  1     ·   ·    1    1    1    1    3

Use of DL constructors in BioPortal taxonomies (% of axioms):
C: 99.73   ∃R.C: 67.82   C ⊓ D: 1.15   ∀R.C: 0.46   C ⊔ D: 0.09   ¬C: 0.01

Restricting length of concepts in axioms to 4 (axioms to 8) is fine!
How do the measures correlate?
[Correlation heatmaps over the measures BSUPP, BASSUM, BCONF, BLIFT, BCONVN, BCONVQ, SUPP, ASSUM, CONF, LIFT, CONVN, CONVQ, CONTR, FITN, BRAV, COMPL, DISSIM, LENGTH, DEPTH: (a) handpicked corpus, (b) principled corpus]
How feasible is hypothesis mining? Works fine for classifiable ontologies.
[Bar chart: runtime (%) of the phases OC, HC, HP, HE (ontology cleaning, hypothesis construction, preparatory parsing & classification, hypothesis evaluation) for the handpicked and principled corpora]
How costly are the different measures? Consistency is the most costly measure.
[Bar chart: runtime (%) per measure (AXM1, AXM2, FITN, BRAV, CONS, INFOR, STREN, REDUN, DISSIM, COMPL) for the handpicked and principled corpora]
TBox + ABox → DL Miner → Hypotheses: axiom(s) with measures m1, m2, m3, …

– measures are mostly independent; some are superfluous on positive data (unsurprisingly)
– feasible, provided our ontology is classifiable and our search space isn't too massive

– are longer/bigger hypotheses better?
– how do we guide users through these?
Can we learn hypotheses that are valid? interesting?
…and how does this correlate with measures?

Setup: TBox (SROIQ, |sig| = 522) + ABox (169K CAs, 405K RAs) → DL Miner (SHI, |Ci| <= 4) →
S1: 60 hypotheses, unfocused; S2: 60 hypotheses, focused;
each with 30 high-confidence and 30 low-confidence hypotheses.
Participants judged each hypothesis: Valid? Interesting?
How good/valid are the mined hypotheses?
[Tables: validity (Wrong / Don't know / Correct) and interestingness (rated 1 to 4) of the hypotheses in Survey 1 (unfocused) and Survey 2 (focused); the individual counts are not reliably recoverable]
[Figure, repeated across slides: bar charts of correlation coefficients (negative/positive) between the hypothesis metrics COMPL, SUPP, BSUPP, BRAV, FITN, ASSUM, BASSUM, CONF, BCONF, CONVQ, BCONVQ, DISSIM, DEPTH, LENGTH, LIFT, BLIFT and the survey ratings; panels: (c) Survey 2: Validity, (d) Survey 2: Interestingness]
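The panels above plot correlation coefficients between metric values and survey ratings. As an illustration only, a plain Pearson correlation over per-hypothesis data could be computed as below; the survey's actual correlation measure is not stated on these slides, and the data here is hypothetical.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Hypothetical data: a metric value (e.g. CONF) per hypothesis
# versus that hypothesis's validity rating from a survey.
conf_values = [0.9, 0.7, 0.95, 0.4, 0.8]
ratings = [5, 4, 5, 2, 4]
r = pearson(conf_values, ratings)
```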
[Diagram, repeated across slides: TBox + ABox → DL-Miner → Hypotheses; each hypothesis is an axiom (or set of axioms) annotated with metrics m1, m2, m3, …; we want high confidence/lift/… and low assumptions/braveness]
No – they look alike
Perhaps – with different ABoxes/other sources
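Two of the hypothesis-quality metrics named above, confidence and lift, can be sketched from ABox instance counts. This uses the standard association-rule definitions as an illustration; DL-Miner's exact, ontology-aware definitions may differ, and all counts below are hypothetical.

```python
def confidence(n_c, n_c_and_d):
    """For a candidate axiom C ⊑ D: fraction of C-instances that are also D-instances."""
    return n_c_and_d / n_c if n_c else 0.0

def lift(n_c, n_d, n_c_and_d, n_total):
    """How much more often C- and D-membership co-occur than under independence."""
    expected = (n_c / n_total) * (n_d / n_total)
    return (n_c_and_d / n_total) / expected if expected else 0.0

# Hypothetical counts: 1000 individuals, 50 C-instances,
# 60 D-instances, 45 instances of both.
c = confidence(50, 45)        # 0.9
l = lift(50, 60, 45, 1000)    # 15.0: far above 1, so C and D are strongly associated
```

A lift near 1 would indicate C and D co-occur no more often than chance, making the candidate axiom uninteresting even at high confidence.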
– gives us more than we thought
– expressive axioms are better!
– to better understand relevance of metrics
– but we’ve got the shape now
– stripping superfluous parts from concepts, (sets of) axioms
– for more expressive DLs
– redundancy-free
– ontology-aware
– design of experiments & surveys
– but also rather complex: sooo many design choices
– metrics make “ontology mining” subjective
– requires understanding of logic & reasoners & …
– abduction
– similarity
– good explanations/proofs for entailments, justifications
– good counter-models for non-entailments
– good repair of inconsistent/incoherent ontologies
– …
Cover illustration: The Description Logic logo. Courtesy of Enrico Franconi.
Baader, Horrocks, Lutz, Sattler
An Introduction to Description Logic
Description Logics (DLs) have a long tradition in computer science and knowledge representation, being designed so that domain knowledge can be described and so that computers can reason about this knowledge. DLs have recently gained increased importance since they form the logical basis of widely used ontology languages, in particular the web ontology language OWL. Written by four renowned experts, this is the first textbook on Description Logic. It is suitable for self-study by graduates and as the basis for a university course. Starting from a basic DL, the book introduces the reader to their syntax, semantics, reasoning problems and model theory, and discusses the computational complexity of these reasoning problems and algorithms to solve them. It then explores a variety of reasoning techniques, knowledge-based applications and tools, and describes the relationship between DLs and OWL. Franz Baader is a professor in the Institute of Theoretical Computer Science at TU Dresden. Ian Horrocks is a professor in the Department of Computer Science at the University of Oxford. Carsten Lutz is a professor in the Department of Computer Science at the University of Bremen. Uli Sattler is a professor in the Information Management Group within the School of Computer Science at the University of Manchester.
Designed by Zoe Naylor.
Get 20% Discount:
www.cambridge.org/9780521695428
and enter the code BAADER2017 at the checkout
algorithms; 5. Complexity; 6. Reasoning in the EL family of Description Logics; 7. Query answering; 8. Ontology languages and applications; Appendix A. Description Logic terminology; References; Index.
Hardback 978-0-521-87361-1: £59.99 (£47.99 with discount), $79.99 ($63.99 with discount)
Paperback 978-0-521-69542-8: £29.99 (£23.99 with discount), $39.99 ($31.99 with discount)
April
228 x 152 mm 260pp 30 b/w illus.