Internally Represented Concept
An Internally Represented Concept is a concept that is actively represented, for example as a concept record, within an intelligent system.
- AKA: Actively Represented Concept.
- Example(s):
- Your current representation of this concept.
- …
- Counter-Example(s):
- an Externally Represented Concept, such as one described in Wikipedia.
- See: Software Process, Active Data Structure, Conceptual Knowledge.
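An internally represented concept, in the sense above, can be sketched as an active data structure held inside a running system, in contrast to an externally represented concept recorded in a medium such as a wiki page. The sketch below is purely illustrative; the `ConceptRecord` class, its fields, and the `represent` helper are hypothetical names, not an established API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a concept as a record held inside a running
# system (an "active data structure"), not an external description.
@dataclass
class ConceptRecord:
    label: str                                      # e.g. "animal"
    extension: set = field(default_factory=set)     # entities the concept refers to
    relations: dict = field(default_factory=dict)   # inferential role: links to other concepts

# The system's internal store of concept records (illustrative).
kb: dict = {}

def represent(label: str) -> ConceptRecord:
    """Create, or retrieve, the system's internal record for a concept."""
    if label not in kb:
        kb[label] = ConceptRecord(label)
    return kb[label]

animal = represent("animal")
animal.relations["is_a"] = "living thing"
tiger = represent("tiger")
tiger.relations["is_a"] = "animal"
```

Under this sketch, the record for "tiger" exists only inside the system that holds `kb`; a Wikipedia article describing tigers would be the corresponding externally represented concept.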
References
2013
- (Wikipedia, 2013) ⇒ http://en.wikipedia.org/wiki/Concept
- In metaphysics, and especially ontology, a concept is a fundamental category of existence. In contemporary philosophy, there are at least three prevailing ways to understand what a concept is:[1]
- Concepts as mental representations, where concepts are entities that exist in the brain.
- Concepts as abilities, where concepts are abilities peculiar to cognitive agents.
- Concepts as abstract objects, where concepts are the constituents of propositions that mediate between thought, language, and referents.
- ↑ Eric Margolis; Stephen Laurence. "Concepts". Stanford Encyclopedia of Philosophy. Metaphysics Research Lab at Stanford University. http://plato.stanford.edu/entries/concepts/. Retrieved 6 November 2012.
2009
- (Carey, 2009) ⇒ Susan Carey. (2009). “The Origin of Concepts.” Oxford University Press. ISBN:0199887918
- QUOTE: Concepts are units of thought, the constituents of beliefs and theories, and those that interest me here are roughly the grain of single lexical items. Indeed, the meanings of words are paradigm examples of concepts. I am concerned with the mental representation of concepts; I use phrases such as “the infant’s concept animal” to mean the infant’s representation of animals. I assume representations are states of the nervous system that have content, that refer to concrete or abstract entities, to properties, to events. I do not attempt a philosophical analysis of mental representations; I will not try to say how it is that some states of the nervous system have symbolic content. Such a theory would explain how the extension of a given representation is determined, as well as providing a computational account of how that representation fulfills its particular inferential role, how it functions in thought.1 Here I merely assume that such a theory will be forthcoming. In the pages to come, I work backwards from behavioral evidence for some concept’s extension and inferential role to characterize that concept’s content and to specify something of its nature and format of representation.
There are many different types of mental representations and one challenge to cognitive science is to find the principled distinctions among them. Different types of representations may well have theoretically important differences in origins, developmental trajectories, types of conceptual roles, and relations to their extensions. Also, some theories of conceptual development posit shifts in kinds of mental representations available to children of different ages — from a perceptual similarity space to natural kind concepts (Quine, 1977), from sensori-motor to symbolic representations (Piaget, 1954), from implicit to explicit representations (Karmiloff-Smith, 1990), for examples. Such theories depend, of course, on defensible distinctions among types of mental representations.
I will join forces with the many writers who draw a distinction between perceptual representations, on the one hand, and conceptual representations, on the other. Chapter 2 examines the thesis that infants begin with perceptual representations and only construct conceptual representations later in development. Differentiating the perceptual from the conceptual is difficult. There are probably many different distinctions at work here, and most are probably ends of continua rather than categorical. An intuitive characterization of perceptual representations as what things in the world look like, sound like, feel like, taste like, contrasts these with conceptual representations as what things in the world are. Distinctive properties of perceptual representations include, first of all, that their extensions are fixed by virtue of innate, modular, sensory input analyzers. There are innate shape analyzers, phoneme detectors, color detectors, motion detectors, and so forth. That representations of red have the content red is ensured by evolution, by how color vision works. Second, perceptual representations have very little in the way of inferential role. Almost nothing else follows from the fact that something is red. Third, and related to the above two points, perceptual representations are inferentially close to the output of sensory analyzers. Consider the difference between the representation of red or loud, on the one hand, and the representation of electron or life, on the other. Although we certainly can sometimes identify electrons or living things from perceptual evidence, there is a long inferential chain from a path in a cloud chamber to the presence of an electron, or from what a bacteria colony on a petri dish looks like to the fact that it contains living things.
Natural kind concepts, paradigm conceptual representations, are at the other end of the continuum, contrasting with perceptual representations in all three respects. There are no innate input analyzers for tigers or electrons, natural kind concepts have rich conceptual roles, and there is a long inferential chain between the perceptible properties of natural kinds and the content of concepts of natural kinds. According to the Kripke/Putnam (Kripke, 1972; Putnam, 1975) analyses of natural kind concepts, their extensions are fixed not by the mind but by some social process of ostensive definition and by the essential nature (a metaphysical matter, not an epistemological one) of the entities so dubbed. The discovery of the extension of gold or of wolf is a matter for science, not for philosophy, linguistics, or psychology. As for the psychology of natural kind concepts, they fall under the assumption of “psychological essentialism” (Medin and Ortony, 1989). It is a fact about our mind that we assume (usually correctly, as it turns out, but it needn’t be) that individuals of a given natural kind have hidden essences which determine both their kind and their surface properties. Often we have no fleshed-out guess as to a kind’s essential properties.
A natural kind concept’s features fall along a continuum from core to periphery, a continuum determined by explanatory depth (Ahn, Kim, Lassaline & Dennis, 2000; Keil, 1989). Its core, its essence, consists of its inferentially deepest features, and for natural kinds, these are its causally deepest features. Thus, the analysis of concepts of natural kinds is deeply intertwined with the analysis of the conceptual structures that represent causal/explanatory knowledge: intuitive theories.
Some writers deny a principled distinction between perceptual representations and conceptual representations, claiming that all mental representations are at root perceptual representations (e.g., Thelen, Schöner, Scheier & Smith, 2000). Others (e.g., Quine, 1977; Piaget, 1954) grant the distinction and believe that conceptual development in the first few years of life involves a transition from perceptual representations alone to a representational repertoire that contains both types. These positions are considered in Chapter 2.
This book’s first major thesis is that there is a third type of conceptual structure, called “core knowledge” by Spelke, that differs systematically both from perceptual domains of representation and from theoretical conceptual knowledge. I shall argue that core knowledge is the developmental foundation of human conceptual understanding. Like perceptual domains, the entities in core domains of knowledge are identified by modular innate perceptual input devices, but the representations are conceptual, not perceptual. Unlike perceptual representations, they have relatively rich inferential roles in thought, and there is a longer inferential chain between the output of sensory analyzers and the content of the representations that articulate core knowledge. However, the conceptual role of the concepts that articulate core knowledge is vastly less rich than that of the concepts embedded in intuitive theories, and the inferential depth between perceptual properties and the content of core knowledge is vastly less than in the case of intuitive theories. Finally, knowledge acquisition in core domains is supported by innate domain-specific learning devices, whereas that in intuitive theories is not. Chapters 3 and 4 characterize core knowledge more fully and summarize evidence for human core knowledge of objects, contact causality, intentional causality, number, and emotion.