Bottom Clause
A Bottom Clause is a Logic Program Clause that is the most specific clause covering a given example relative to a background theory, and thus serves as an extensive representation of that example.
- AKA: Saturation, Starting Clause.
- Context:
- It can be computed using inverse entailment.
- It can correspond to the most specific hypothesis covering a particular example when learning from entailment.
- It can be computed using a bottom clause generation algorithm.
- …
- Example(s):
- Given the background theory $B$:
bird :- blackbird.
bird :- ostrich.
and the example $e$:
flies :- blackbird, normal.
the bottom clause is $H$:
flies :- bird, blackbird, normal.
(see the Python sketch after this list).
- Counter-Example(s):
- See: Bottom Clause Propositionalization (BCP) Algorithm, Clause, Entailment, Inductive Logic Programming, Inverse Entailment, Logic of Generality.
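In the propositional example above, the bottom clause keeps the example's head, and its body is simply the set of all atoms entailed by the background theory $B$ together with the body of the example $e$. The following is a minimal Python sketch of that computation; the encoding of clauses as (head, body) pairs and the function names are illustrative choices, not part of any of the cited algorithms.

```python
# Minimal sketch (propositional case, assumed encoding): the bottom clause keeps
# the example's head, and its body is the set of all atoms entailed by the
# background theory B together with the body of the example e.

def forward_closure(background, atoms):
    """Saturate a set of atoms under propositional Horn clauses given as (head, body) pairs."""
    closure = set(atoms)
    changed = True
    while changed:
        changed = False
        for head, body in background:
            if head not in closure and body <= closure:
                closure.add(head)
                changed = True
    return closure

# Background theory B:  bird :- blackbird.   bird :- ostrich.
B = [("bird", {"blackbird"}), ("bird", {"ostrich"})]

# Example e:  flies :- blackbird, normal.
e_head, e_body = "flies", {"blackbird", "normal"}

# Bottom clause: same head, body saturated under B.
bottom_body = forward_closure(B, e_body)
print(f"{e_head} :- {', '.join(sorted(bottom_body))}.")
# prints: flies :- bird, blackbird, normal.
```

For first-order clauses, practical systems bound this construction, for example via mode declarations and a variable depth limit, as discussed in the França et al. (2014) quote below.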
References
2017
- (Sammut & Webb, 2017) ⇒ Claude Sammut, and Geoffrey I. Webb. (2017). "Bottom Clause". In: Sammut & Webb (2017). DOI: 10.1007/978-1-4899-7687-1_936.
- QUOTE: The bottom clause is a notion from the field of inductive logic programming. It is used to refer to the most specific hypothesis covering a particular example when learning from entailment. When learning from entailment, a hypothesis $H$ covers an example $e$ relative to the background theory $B$ if and only if $B \wedge H \models e$, that is, $B$ together with $H$ entails the example $e$. The bottom clause is now the most specific clause satisfying this relationship w.r.t. the background theory $B$ and a given example $e$.
For instance, given the background theory $B$:
bird :- blackbird.
bird :- ostrich.
and the example $e$:
flies :- blackbird, normal.
the bottom clause is $H$:
flies :- bird, blackbird, normal.
The bottom clause can be used to constrain the search for clauses covering the given example because all clauses covering the example relative to the background theory should be more general than the bottom clause. The bottom clause can be computed using inverse entailment.
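The search-constraint property described in the quote can be illustrated in the same propositional setting: a candidate clause with the same head covers the example relative to $B$ exactly when its body is a subset of the bottom clause's body, i.e., when it is at least as general as the bottom clause. A minimal Python sketch, with illustrative names not taken from the source:

```python
# Minimal propositional sketch: a candidate clause with the same head covers the
# example relative to B exactly when its body is a subset of the bottom clause's
# body, i.e. when it is at least as general as the bottom clause.

def covers(candidate_body, bottom_body):
    """True if the candidate clause is a generalization of the bottom clause."""
    return set(candidate_body) <= set(bottom_body)

# Body of the bottom clause  flies :- bird, blackbird, normal.
bottom_body = {"bird", "blackbird", "normal"}

print(covers({"bird", "normal"}, bottom_body))      # True: covers the example
print(covers({"ostrich", "normal"}, bottom_body))   # False: ostrich is not entailed
```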
2014
- (França et al., 2014) ⇒ Manoel V. M. França, Gerson Zaverucha, and Artur S. d'Avila Garcez (2014). "Fast Relational Learning Using Bottom Clause Propositionalization with Artificial Neural Networks". In: Machine Learning, 94(1), 81-104. DOI:10.1007/s10994-013-5392-1.
- QUOTE: As stated in Sect. 2.1, bottom clauses are extensive representations of an example, possibly having an infinite size. In order to tackle this problem, at least two approaches have been proposed: reducing the size of the clauses during generation or using a statistical approach afterwards. The first can be done as part of the bottom clause generation algorithm (Muggleton 1995), by reducing the variable depth value. Variable depth specifies an upper bound on the number of times that the algorithm can pass through mode declarations and by reducing its value, it is possible to cut a considerable chunk of literals, although causing some information loss. Alternatively, statistical methods such as Pearson's correlation and Principal Component Analysis can be used (a survey of those methods can be found in May et al. 2011), taking advantage of the use of numerical feature vectors as training patterns. A recent method, which has low computational cost, while surpassing most common methods in terms of information loss, is the mRMR algorithm (Ding and Peng 2005), which focuses on balancing minimum redundancy and maximum relevance of features (...)
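The quoted passage concerns bottom clause propositionalization (BCP), in which each example's bottom clause is turned into a numerical feature vector that a standard learner such as a neural network can consume. The sketch below shows one straightforward encoding in which every distinct body literal becomes a binary feature; the literals and example names are illustrative, not taken from the paper, and the paper's actual pipeline additionally applies the depth-reduction and feature-selection (e.g., mRMR) steps described in the quote.

```python
# Hedged sketch of the idea behind bottom clause propositionalization (BCP):
# every distinct body literal seen across the training bottom clauses becomes one
# binary feature, and each bottom clause is mapped to a 0/1 vector over those features.
# The literals and example names below are illustrative only.

bottom_clauses = {
    "e1": ["bird(X)", "blackbird(X)", "normal(X)"],
    "e2": ["bird(X)", "ostrich(X)"],
}

# Fixed feature ordering: the union of body literals observed during training.
features = sorted({lit for body in bottom_clauses.values() for lit in body})

def propositionalize(body, features):
    """Map a bottom clause body to a binary feature vector."""
    present = set(body)
    return [1 if lit in present else 0 for lit in features]

for name, body in bottom_clauses.items():
    print(name, propositionalize(body, features))
# e1 [1, 1, 1, 0]
# e2 [1, 0, 0, 1]
```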
2011
- (May et al., 2011) ⇒ Robert May, Graeme Dandy, and Holger Maier (2011). "Review of Input Variable Selection Methods for Artificial Neural Networks". In: K. Suzuki (Ed.), Artificial Neural Networks — Methodological Advances and Biomedical Applications (pp. 19–44). DOI:10.5772/16004.
2005
- (Ding & Peng, 2005) ⇒ Chris Ding, and Hanchuan Peng (2005). "Minimum Redundancy Feature Selection from Microarray Gene Expression Data". In: Journal of Bioinformatics and Computational Biology, 3(2), 185–205.DOI:10.1142/S0219720005001004.
1994
- (Bain & Muggleton, 1994) ⇒ Michael Bain, and Stephen Muggleton (1994). “Learning Optimal Chess Strategies". In: Machine Intelligence, 13, 291–309.