Knowledge Representation (KR) Theory
A Knowledge Representation (KR) Theory is an artificial intelligence research field dedicated to representing real-world information in a form that a computer system can use to solve complex tasks.
- AKA: Knowledge Representation and Reasoning Theory.
- Context:
- It can study formal methods to specify concepts and the constraints between them.
- It can represent the world through a set of ontological commitments chosen to best approximate the desired portion of reality.
- It can form a Knowledge Representation System that can solve a Knowledge Representation Task by implementing a Knowledge Representation Algorithm (see the illustrative sketch below).
- Example(s):
- Counter-Example(s):
- See: Automated Theorem Proving, Artificial Intelligence, Computer-Aided Diagnosis, Natural Language User Interface, Formalism (Mathematics), Logic, Set Theory, Subset, Semantic Network, Systems Architecture, Universal Learning Theory, Adaptive Resonance Theory, Knowledge Discovery System, Ontology.
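The following is a minimal, hypothetical sketch of the idea in the Context bullets above: a tiny Knowledge Representation System stores facts as triples, applies one hand-coded transitivity rule by naive forward chaining (the Knowledge Representation Algorithm), and answers a query (the Knowledge Representation Task). All names, facts, and the rule are invented for illustration and are not taken from the referenced works.

```python
# Hypothetical mini knowledge base: facts stored as (relation, subject, object) triples.
facts = {("isa", "Fido", "Dog"), ("isa", "Dog", "Mammal")}

def apply_rules(kb):
    """One illustrative rule: if (isa, x, y) and (isa, y, z) then (isa, x, z)."""
    derived = set()
    for (r1, x, y1) in kb:
        for (r2, y2, z) in kb:
            if r1 == r2 == "isa" and y1 == y2:
                derived.add(("isa", x, z))
    return derived - kb  # only the genuinely new facts

def entails(kb, query):
    """Forward-chain to a fixpoint; report whether `query` becomes derivable."""
    kb = set(kb)
    while True:
        if query in kb:
            return True
        new = apply_rules(kb)
        if not new:
            return False
        kb |= new

print(entails(facts, ("isa", "Fido", "Mammal")))  # True
```

A practical system would replace the hand-coded rule with a general inference engine operating over a declared formalism such as frames, rules, description logics, or ontologies.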
References
2018
- (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Knowledge_representation_and_reasoning Retrieved:2018-8-23.
- QUOTE: Knowledge representation and reasoning (KR, KR², KR&R) is the field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can utilize to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language. Knowledge representation incorporates findings from psychology about how humans solve problems and represent knowledge in order to design formalisms that will make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning, such as the application of rules or the relations of sets and subsets. Examples of knowledge representation formalisms include semantic nets, systems architecture, frames, rules, and ontologies. Examples of automated reasoning engines include inference engines, theorem provers, and classifiers. The KR conference series was established to share ideas and progress on this challenging field.
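As a toy illustration of the "relations of sets and subsets" reasoning mentioned in the quote above, the hypothetical sketch below represents each concept as the set of properties it requires, so that one concept subsumes another exactly when its property set is a subset of the other's; this is the kind of check a simple classifier can automate. All concept names and properties are invented.

```python
# Hypothetical concept definitions: each concept is the set of properties it requires.
concepts = {
    "Animal": {"animate"},
    "Mammal": {"animate", "has_fur"},
    "Dog":    {"animate", "has_fur", "barks"},
}

def subsumes(general, specific):
    """`general` subsumes `specific` iff every property required by the
    general concept is also required by the specific one (a subset test)."""
    return concepts[general] <= concepts[specific]

def classify(name):
    """Return all concepts that subsume `name`, i.e. its inferred ancestors."""
    return [c for c in concepts if c != name and subsumes(c, name)]

print(subsumes("Mammal", "Dog"))  # True
print(classify("Dog"))            # ['Animal', 'Mammal']
```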
2004
- (Brachman & Levesque, 2004) ⇒ Ronald Brachman, and Hector Levesque (2004). "Knowledge Representation and Reasoning". Publisher: Morgan Kaufmann. ISBN: 9780080489322, 9781558609327 , ISBN: 9781493303793
1999
- (Brachman et al., 1999) ⇒ Ronald J. Brachman, Deborah L. McGuinness, Peter F. Patel-Schneider, and Alex Borgida (1999). “Reducing” CLASSIC to practice: Knowledge representation theory meets reality. Artificial Intelligence, 114(1-2), 203-237. DOI:10.1016/S0004-3702(99)00078-8
- QUOTE: Most recent key developments in research on knowledge representation (KR) have been of the more theoretical sort, involving worst-case complexity results, solutions to technical challenge problems, etc. While some of this work has influenced practice in Artificial Intelligence, it is rarely—if ever—made clear what is compromised when the transition is made from relatively abstract theory to the real world. CLASSIC is a description logic with an ancestry of extensive theoretical work (tracing back over twenty years to KL-ONE), and several novel contributions to KR theory. Basic research on CLASSIC paved the way for an implementation that has been used significantly in practice, including by users not versed in KR theory. In moving from a pure logic to a practical tool, many compromises and changes of perspective were necessary. We report on this transition and articulate some of the profound influences practice can have on relatively idealistic theoretical work. We have found that CLASSIC has been quite useful in practice, yet still strongly retains most of its original spirit, but much of our thinking and many details had to change along the way.
1988
- (Geller, 1988) ⇒ James Geller (1988). "A knowledge representation theory for natural language graphics".
- ABSTRACT: Natural Language Graphics (NLG) deals with diagram generation driven by natural language utterances. This investigation applies the methods of declarative knowledge representation to NLG systems. Declarative knowledge that can be used for display purposes as well as reasoning purposes is termed “Graphical Deep Knowledge” and described by supplying syntax and semantics of its constructs. A task domain analysis of Graphical Deep Knowledge is presented covering forms, positions, attributes, parts, classes, reference frames, inheritability, etc. Part hierarchies are differentiated into three sub-types. The usefulness of inheritance along part hierarchies is demonstrated, and criticism of traditional inheritance-based knowledge representation formalisms is derived from this finding. The “Linearity Principle of Knowledge Representation” is introduced and used to constrain some of the presented knowledge structures. The analysis leading to Graphical Deep Knowledge also results in the description of two fundamental conjectures about knowledge representation.
The Gricean maxims of cooperative communication are used as another source of constraints for NLG systems. A new maxim for technical languages is introduced, the “Maxim of Unnecessary Variation”. It is argued that common symbolic representations like circuit board diagrams have not yet been described in the literature by explicit feature analysis, and that this is necessary to give a system knowledge about the meaning of the diagrams it is displaying.
Part of the presented theory has been implemented as a generator program that creates pictures from knowledge structures and as an ATN grammar that creates knowledge structures from limited natural language input. The function of the picture generation program (TINA) as a user interface for a circuit board maintenance system (VMES) is demonstrated. Finally an older version of TINA is described that incorporates a module for “Intelligent Machine Drafting” (IMD), a completely new subfield of AI that has been introduced in this research and that relates to Computer Aided Design (CAD). The IMD program does layout and routing for the members of a simple class of functional circuit diagrams based on a policy of symmetry and equal distribution over the available space. This layout/routing is based on cognitive requirements as opposed to physical requirements used by CAD systems.
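As a hypothetical illustration of inheritance along a part hierarchy, one of the themes of the abstract above (though not Geller's actual formalism), the sketch below lets a part that lacks a display attribute inherit its value from the whole it belongs to by walking the part-of chain. All object names and attributes are invented.

```python
# Hypothetical part hierarchy: each object names the whole it is part of
# and may carry its own display attributes.
objects = {
    "board": {"part_of": None, "color": "green"},
    "chip":  {"part_of": "board"},                 # no color of its own
    "pin":   {"part_of": "chip", "color": "gold"},
}

def lookup(obj, attr):
    """Search the object itself, then successive wholes along the
    part-of chain, for a value of `attr`."""
    while obj is not None:
        entry = objects[obj]
        if attr in entry:
            return entry[attr]
        obj = entry["part_of"]
    return None

print(lookup("chip", "color"))  # 'green' -- inherited from the board
print(lookup("pin", "color"))   # 'gold'  -- its own value wins
```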