This circumscribed complementary role for connectionism in a hybrid system seems to remedy the weaknesses of the two current competitors in their attempts to model the mind independently. In a pure symbolic model the crucial connection between the symbols and their referents is missing; an autonomous symbol system, though amenable to a systematic semantic interpretation, is ungrounded. In a pure connectionist model, names are connected to objects through invariant patterns in their sensory projections, learned through exposure and feedback, but the crucial compositional property is missing; a network of names, though grounded, is not yet amenable to a full systematic semantic interpretation. In the hybrid system proposed here, there is no longer any autonomous symbolic level at all; instead, there is an intrinsically dedicated symbol system, its elementary symbols (names) connected to nonsymbolic representations that can pick out the objects to which they refer, via connectionist networks that extract the invariant features of their analog sensory projections.
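To make the proposed hybrid architecture concrete, here is a minimal illustrative sketch (not part of the original proposal): a stand-in for a trained connectionist category detector grounds the elementary names by matching simulated sensory projections against learned invariants, and a symbolic layer then composes a higher-order symbol from the grounded names, as in "zebra" = "horse" with "stripes." All feature vectors, thresholds and names below are hypothetical.

    # Toy sketch (hypothetical vectors, thresholds and names): elementary
    # names are grounded by a stand-in for a trained connectionist category
    # detector; a higher-order symbol is composed from the grounded names.

    import math

    # Simulated invariants of the analog sensory projections for two
    # elementary categories (purely made-up feature vectors).
    PROTOTYPES = {
        "horse":   [1.0, 0.9, 0.1],
        "stripes": [0.1, 0.2, 1.0],
    }

    # Symbolic layer: a higher-order symbol defined from grounded names.
    DEFINITIONS = {"zebra": {"horse", "stripes"}}

    def ground(projection, threshold=0.8):
        """Return the elementary names whose invariants the projection
        matches (standing in for a network trained by exposure and
        feedback)."""
        return {name for name, proto in PROTOTYPES.items()
                if math.dist(projection, proto) < threshold}

    def identify(projection):
        """Ground the projection, then apply the symbolic definitions."""
        grounded = ground(projection)
        composed = [sym for sym, parts in DEFINITIONS.items()
                    if parts <= grounded]
        return grounded, composed

    # A projection carrying both the "horse" and the "stripes" invariants.
    print(identify([0.6, 0.6, 0.6]))   # grounded names plus ['zebra']

The point of the sketch is only that the composite symbol inherits its grounding from the nonsymbolic match between projections and invariants, not from any interpretation projected onto it from outside.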

In an intrinsically dedicated symbol system there are more constraints on the symbol tokens than merely syntactic ones. Symbols are manipulated not only on the basis of the arbitrary shape of their tokens, but also on the basis of the decidedly nonarbitrary "shape" of the iconic and categorical representations connected to the grounded elementary symbols out of which the higher-order symbols are composed. Of these two kinds of constraints, the iconic/categorical ones are primary. I am not aware of any formal analysis of such dedicated symbol systems,[21] but this may be because they are unique to cognitive and robotic modeling and their properties will depend on the specific kinds of robotic (i.e., behavioral) capacities they are designed to exhibit.

It is appropriate that the properties of dedicated symbol systems should turn out to depend on behavioral considerations. The present grounding scheme is still in the spirit of behaviorism in that the only tests proposed for whether a semantic interpretation will bear the semantic weight placed on it consist of one formal test (does it meet the eight criteria for being a symbol system?) and one behavioral test (can it discriminate, identify and describe all the objects and states of affairs to which its symbols refer?). If both tests are passed, then the semantic interpretation of its symbols is "fixed" by the behavioral capacity of the dedicated symbol system, as exercised on the objects and states of affairs in the world to which its symbols refer; the symbol meanings are accordingly not just parasitic on the meanings in the head of the interpreter, but intrinsic to the dedicated symbol system itself. This is still no guarantee that our model has captured subjective meaning, of course. But if the system's behavioral capacities are lifesize, it's as close as we can ever hope to get.

References

Catania, A. C. & Harnad, S. (eds.) (1988) The Selection of Behavior. The Operant Behaviorism of B. F. Skinner: Comments and Consequences. New York: Cambridge University Press.

Chomsky, N. (1980) Rules and representations. Behavioral and Brain Sciences 3: 1-61.

Davis, M. (1958) Computability and unsolvability. New York: McGraw-Hill.

Davis, M. (1965) The undecidable. New York: Raven.

Dennett, D. C. (1983) Intentional systems in cognitive ethology. Behavioral and Brain Sciences 6: 343 - 90.

Fodor, J. A. (1975) The language of thought. New York: Thomas Y. Crowell.

Fodor, J. A. (1980) Methodological solipsism considered as a research strategy in cognitive psychology. Behavioral and Brain Sciences 3: 63 - 109.

Fodor, J. A. (1985) Précis of "The Modularity of Mind." Behavioral and Brain Sciences 8: 1 - 42.

Fodor, J. A. (1987) Psychosemantics. Cambridge MA: MIT/Bradford.

Fodor, J. A. & Pylyshyn, Z. W. (1988) Connectionism and cognitive architecture: A critical appraisal. Cognition 28: 3 - 71.

Gibson, J. J. (1979) The ecological approach to visual perception. Boston: Houghton Mifflin.

Harnad, S. (1982) Metaphor and mental duality. In T. Simon & R. Scholes (Eds.) Language, mind and brain. Hillsdale, N. J.: Lawrence Erlbaum Associates.

Harnad, S. (1987a) Categorical perception: A critical overview. In S. Harnad (Ed.) Categorical perception: The groundwork of cognition. New York: Cambridge University Press.

Harnad, S. (1987b) Category induction and representation. In S. Harnad (Ed.) Categorical perception: The groundwork of cognition. New York: Cambridge University Press.

Harnad, S. (1989) Minds, Machines and Searle. Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25.

Harnad, S. (1990) Computational Hermeneutics. Social Epistemology in press.

Haugeland, J. (1978) The nature and plausibility of cognitivism. Behavioral and Brain Sciences 1: 215-260.

Kleene, S. C. (1969) Formalized recursive functionals and formalized realizability. Providence, RI: American Mathematical Society.

Kripke, S. A. (1980) Naming and Necessity. Cambridge MA: Harvard University Press.

Liberman, A. M. (1982) On the finding that speech is special. American Psychologist 37: 148-167.

Lucas, J. R. (1961) Minds, machines and Gödel. Philosophy 36: 112-117.

McCarthy, J. & Hayes, P. (1969) Some philosophical problems from the standpoint of artificial intelligence. In: Meltzer, B. & Michie, D. (Eds.) Machine Intelligence Volume 4. Edinburgh: Edinburgh University Press.

McDermott, D. (1976) Artificial intelligence meets natural stupidity. SIGART Newsletter 57: 4 - 9.

McClelland, J. L., Rumelhart, D. E., and the PDP Research Group (1986) Parallel distributed processing: Explorations in the microstructure of cognition, Volume 1. Cambridge MA: MIT/Bradford.

Miller, G. A. (1956) The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review 63: 81 - 97.

Minsky, M. (1974) A framework for representing knowledge. MIT Lab Memo # 306.

Minsky, M. & Papert, S. (1969) Perceptrons: An introduction to computational geometry. Cambridge MA: MIT Press (Reissued in an Expanded Edition, 1988).

Newell, A. (1980) Physical Symbol Systems. Cognitive Science 4: 135 - 83.

Neisser, U. (1967) Cognitive psychology. New York: Appleton-Century-Crofts.

Paivio, A. (1986) Mental representation: A dual coding approach. New York: Oxford University Press.

Penrose, R. (1989) The emperor's new mind. Oxford: Oxford University Press.

Pylyshyn, Z. W. (1980) Computation and cognition: Issues in the foundations of cognitive science. Behavioral and Brain Sciences 3: 111-169.

Pylyshyn, Z. W. (1984) Computation and cognition. Cambridge MA: MIT/Bradford.

Pylyshyn, Z. W. (Ed.) (1987) The robot's dilemma: The frame problem in artificial intelligence. Norwood NJ: Ablex.

Rosch, E. & Lloyd, B. B. (1978) Cognition and categorization. Hillsdale NJ: Erlbaum Associates.

Rosenblatt, F. (1962) Principles of neurodynamics. New York: Spartan Books.

Searle, J. R. (1980) Minds, brains and programs. Behavioral and Brain Sciences 3: 417-457.

Shepard, R. N. & Cooper, L. A. (1982) Mental images and their transformations. Cambridge: MIT Press/Bradford.

Smolensky, P. (1988) On the proper treatment of connectionism. Behavioral and Brain Sciences 11: 1 - 74.

Stabler, E. P. (1985) How are grammars represented? Behavioral and Brain Sciences 6: 391-421.

Terrace, H. (1979) Nim. NY: Random House.

Turkkan, J. (1989) Classical conditioning: The new hegemony. Behavioral and Brain Sciences 12: 121 - 79.

Turing, A. M. (1964) Computing machinery and intelligence. In: Minds and machines, A. R. Anderson (ed.), Englewood Cliffs NJ: Prentice Hall.

Ullman, S. (1980) Against direct perception. Behavioral and Brain Sciences 3: 373 - 415.

Wittgenstein, L. (1953) Philosophical investigations. New York: Macmillan.

Figure 1. This figure should consist of the Chinese characters for "zebra," "horse" and "stripes," formatted as a dictionary entry, thus:

"ZEBRA": "HORSE" with "STRIPES"

Table 1. Connectionism vs. Symbol Systems

Strengths of Connectionism:

(1) Nonsymbolic Function: As long as it does not aspire to be a symbol system, a connectionist network has the advantage of not being subject to the symbol grounding problem.

(2) Generality: Connectionism applies the same small family of algorithms to many problems, whereas symbolism, being a methodology rather than an algorithm, relies on endless problem-specific symbolic rules.

(3) "Neurosimilitude": Connectionist architecture seems more brain-like than a Turing machine or a digital computer.

(4) Pattern Learning: Connectionist networks are especially suited to the learning of patterns from data.

Weaknesses of Connectionism:

(1) Nonsymbolic Function: Connectionist networks, because they are not symbol systems, do not have the systematic semantic properties that many cognitive phenomena appear to have.

(2) Generality: Not every problem amounts to pattern learning. Some cognitive tasks may call for problem-specific rules, symbol manipulation, and standard computation.

(3) "Neurosimilitude": Connectionism's brain-likeness may be superficial and may (like toy models) camouflage deeper performance limitations.

Strengths of Symbol Systems:

(1) Symbolic Function: Symbols have the computing power of Turing Machines and the systematic properties of a formal syntax that is semantically interpretable.

(2) Generality: All computable functions (including all cognitive functions) are equivalent to a computational state in a Turing Machine.

(3) Practical Successes: Symbol systems' ability to generate intelligent behavior is demonstrated by the successes of Artificial Intelligence.

Weaknesses of Symbol Systems:

(1) Symbolic Function: Symbol systems are subject to the symbol grounding problem.

(2) Generality: Turing power is too general. The solutions to AI's many toy problems do not give rise to common principles of cognition but to a vast variety of ad hoc symbolic strategies.

Footnotes

1. Paul Kube (personal communication) has suggested that (2) and (3) may be too strong, excluding some kinds of Turing Machine and perhaps even leading to an infinite regress on levels of explicitness and systematicity.

2. Similar considerations apply to Chomsky's (1980) concept of "psychological reality" (i.e., whether Chomskian rules are really physically represented in the brain or whether they merely "fit" our performance regularities, without being what actually governs them). Another version of the distinction concerns explicitly represented rules versus hard-wired physical constraints (Stabler 1985). In each case, an explicit representation consisting of elements that can be recombined in systematic ways would be symbolic whereas an implicit physical constraint would not, although both would be semantically "interpretable" as a "rule" if construed in isolation rather than as part of a system.

3. Analogously, the mere fact that a behavior is interpretable as purposeful or conscious or meaningful does not mean that it really is purposeful or conscious. (For arguments to the contrary, see Dennett 1983).

4. It is not even clear yet that a "neural network" needs to be implemented as a net (i.e., a parallel system of interconnected units) in order to do what it can do; if symbolic simulations of nets have the same functional capacity as real nets, then a connectionist model is just a special kind of symbolic model, and connectionism is just a special family of symbolic algorithms.

5. There is some misunderstanding of this point because it is often conflated with a mere implementational issue: Connectionist networks can be simulated using symbol systems, and symbol systems can be implemented using a connectionist architecture, but that is independent of the question of what each can do qua symbol system or connectionist network, respectively. By way of analogy, silicon can be used to build a computer, and a computer can simulate the properties of silicon, but the functional properties of silicon are not those of computation, and the functional properties of computation are not those of silicon.

6. Symbolic AI abounds with symptoms of the symbol grounding problem. One well-known (though misdiagnosed) manifestation of it is the so-called "frame" problem (McCarthy & Hayes 1969; Minsky 1974; McDermott 1976; Pylyshyn 1987): It is a frustrating but familiar experience in writing "knowledge-based" programs that a system apparently behaving perfectly intelligently for a while can be foiled by an unexpected case that demonstrates its utter stupidity: A "scene-understanding" program will blithely describe the goings-on in a visual scene and answer questions demonstrating its comprehension (who did what, where, why?) and then suddenly reveal that it doesn't "know" that hanging up the phone and leaving the room does not make the phone disappear, or something like that. (It is important to note that these are not the kinds of lapses and gaps in knowledge that people are prone to; rather, they are such howlers as to cast serious doubt on whether the system has anything like "knowledge" at all.)

The "frame" problem has been optimistically defined as the problem of formally specifying ("framing") what varies and what stays constant in a particular "knowledge domain," but in reality it's the problem of second-guessing all the contingencies the programmer has not anticipated in symbolizing the knowledge he is attempting to symbolize. These contingencies are probably unbounded, for practical purposes, because purely symbolic "knowledge" is ungrounded. Merely adding on more symbolic contingencies is like taking a few more turns in the Chinese/Chinese Dictionary-Go-Round. There is in reality no ground in sight: merely enough "intelligent" symbol-manipulation to lull the programmer into losing sight of the fact that its meaningfulness is just parasitic on the meanings he is projecting onto it from the grounded meanings in his own head. (I've called this effect the "hermeneutic hall of mirrors" [Harnad 1990]; it's the reverse side of the symbol grounding problem). Yet parasitism it is, as the next "frame problem" lurking around the corner is ready to confirm. (A similar form of over-interpretation has occurred in the ape "language" experiments [Terrace 1979]. Perhaps both apes and computers should be trained using Chinese code, to immunize their experimenters and programmers against spurious over-interpretations. But since the actual behavioral tasks in both domains are still so trivial, there's probably no way to prevent their being decrypted. In fact, there seems to be an irresistible tendency to overinterpret toy task performance itself, preemptively extrapolating and "scaling it up" conceptually to lifesize without any justification in practice.)

7. Cryptologists also use statistical information about word frequencies, inferences about what an ancient culture or an enemy government is likely to be writing about, decryption algorithms, etc.

8. There is of course no need to restrict the symbolic resources to a dictionary; the task would be just as impossible if one had access to the entire body of Chinese-language literature, including all of its computer programs and anything else that can be codified in symbols.

9. Even mathematicians, whether Platonist or formalist, point out that symbol manipulation (computation) itself cannot capture the notion of the intended interpretation of the symbols (Penrose 1989). The fact that formal symbol systems and their interpretations are not the same thing is hence evident independently of the Church-Turing thesis (Kleene 1969) or the Gödel results (Davis 1958, 1965), which have been zealously misapplied to the problem of mind-modeling (e.g., by Lucas 1961) -- to which they are largely irrelevant, in my view.

10. Note that, strictly speaking, symbol grounding is a problem only for cognitive modeling, not for AI in general. If symbol systems alone succeed in generating all the intelligent machine performance pure AI is interested in -- e.g., an automated dictionary -- then there is no reason whatsoever to demand that their symbols have intrinsic meaning. On the other hand, the fact that our own symbols do have intrinsic meaning whereas the computer's do not, and the fact that we can do things that the computer so far cannot, may be indications that even in AI there are performance gains to be made (especially in robotics and machine vision) from endeavouring to ground symbol systems.

11. The homuncular viewpoint inherent in this belief is quite apparent, as is the effect of the "hermeneutic hall of mirrors" (Harnad 1990).

12. Although they are no doubt as important as perceptual skills, motor skills will not be explicitly considered here. It is assumed that the relevant features of the sensory story (e.g., iconicity) will generalize to the motor story (e.g., in motor analogs; Liberman 1982). In addition, large parts of the motor story may not be cognitive, drawing instead upon innate motor patterns and sensorimotor feedback. Gibson's (1979) concept of "affordances" -- the invariant stimulus features that are detected by the motor possibilities they "afford" -- is relevant here too, though Gibson underestimates the processing problems involved in finding such invariants (Ullman 1980). In any case, motor and sensory-motor grounding will no doubt be as important as the sensory grounding that is being focused on here.

13. If a candidate model were to exhibit all these behavioral capacities, both linguistic (5-6) and robotic (i.e., sensorimotor) (1-3), it would pass the "Total Turing Test" (Harnad 1989). The standard Turing Test (Turing 1964) calls for linguistic performance capacity only: symbols in and symbols out. This makes it equivocal about the status, scope and limits of pure symbol manipulation, and hence subject to the symbol grounding problem. A model that could pass the Total Turing Test, however, would be grounded in the world.

14. There are many problems having to do with figure/ground discrimination, smoothing, size constancy, shape constancy, stereopsis, etc., that make the problem of discrimination much more complicated than what is described here, but these do not change the basic fact that iconic representations are a natural candidate substrate for our capacity to discriminate.

15. Elsewhere (Harnad 1987a,b) I have tried to show how the phenomenon of "categorical perception" could generate internal discontinuities where there is external continuity. There is evidence that our perceptual system is able to segment a continuum, such as the color spectrum, into relatively discrete, bounded regions or categories. Physical differences of equal magnitude are more discriminable across the boundaries between these categories than within them. This boundary effect, both innate and learned, may play an important role in the representation of the elementary perceptual categories out of which the higher-order ones are built.
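The boundary effect described in this footnote can be given a toy numerical rendering (not drawn from the original text): if the internal representation warps the physical continuum, compressing differences within a category and expanding them near the boundary, then equal physical steps yield unequal internal ones. The sigmoid warp, the boundary location and the numbers below are purely hypothetical.

    # Toy illustration (hypothetical) of the categorical perception boundary
    # effect: equal physical differences are more discriminable (larger
    # internal differences) across a category boundary than within a category.

    import math

    BOUNDARY = 0.5   # hypothetical category boundary on a 0-1 continuum
    GAIN = 12.0      # hypothetical steepness of the internal warping

    def internal(x):
        """Warped internal representation of physical value x."""
        return 1.0 / (1.0 + math.exp(-GAIN * (x - BOUNDARY)))

    def discriminability(x1, x2):
        """Internal difference produced by a physical difference."""
        return abs(internal(x1) - internal(x2))

    # Two pairs with the SAME physical separation (0.1):
    within = discriminability(0.10, 0.20)   # both well inside one category
    across = discriminability(0.45, 0.55)   # straddling the boundary
    print(f"within-category: {within:.3f}, across-boundary: {across:.3f}")

Running the sketch gives a much larger internal difference for the pair that straddles the boundary than for the within-category pair, which is all the boundary effect amounts to in this schematic form.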

16. On the other hand, the resemblance on which discrimination performance is based -- the degree of isomorphism between the icon and the sensory projection, and between the sensory projection and the distal object -- seems to be intrinsic, rather than just a matter of interpretation. The resemblance can be objectively characterized as the degree of invertibility of the physical transformation from object to icon (Harnad 1987b).
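One way to make the "degree of invertibility" mentioned in this footnote concrete, under assumptions that go beyond the text, is to treat the object-to-icon transformation as a linear projection and measure how faithfully the icon can be mapped back onto the object's features: the smaller the reconstruction error, the greater the objective resemblance. The matrices and dimensions below are hypothetical.

    # Toy sketch (hypothetical): degree of invertibility of an object-to-icon
    # transformation, measured by how well the icon can be mapped back onto
    # the object's feature vector via the pseudoinverse.

    import numpy as np

    rng = np.random.default_rng(0)
    obj = rng.normal(size=6)             # hypothetical distal-object features

    P_lossy = rng.normal(size=(3, 6))    # projection that discards dimensions
    P_rich = rng.normal(size=(6, 6))     # projection that preserves them

    def invertibility(P, obj):
        icon = P @ obj                        # analog sensory icon
        recon = np.linalg.pinv(P) @ icon      # best linear reconstruction
        return 1.0 - np.linalg.norm(obj - recon) / np.linalg.norm(obj)

    print("lossy projection:", round(invertibility(P_lossy, obj), 3))
    print("rich projection: ", round(invertibility(P_rich, obj), 3))

The richer projection is (nearly) fully invertible and so counts as a closer icon of the object; the lossy one is only partially invertible, giving a graded, interpretation-free measure of resemblance of the kind the footnote appeals to.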

17. Figure 1 is actually the Chinese dictionary entry for "zebra," which is "striped horse." Note that the character for "zebra" actually happens to be the character for "horse" plus the character for "striped." Although Chinese characters are iconic in structure, they function just like arbitrary alphabetic lexigrams at the level of syntax and semantics.

18. Some standard logical connectives and quantifiers are needed too, such as not, and, all, etc.

19. Note that it is not being claimed that "horse," "stripes," etc. are actually elementary symbols, with direct sensory grounding; the claim is only that some set of symbols must be directly grounded. Most sensory category representations are no doubt hybrid sensory/symbolic; and their features can change by bootstrapping: "Horse" can always be revised, both sensorily and symbolically, even if it was previously elementary. Kripke (1980) gives a good example of how "gold" might be baptized on the shiny yellow metal in question, used for trade, decoration and discourse, and then we might discover "fool's gold," which would make all the sensory features we had used until then inadequate, forcing us to find new ones. He points out that it is even possible in principle for "gold" to have been inadvertently baptized on "fool's gold"! Of interest here are not the ontological aspects of this possibility, but the epistemic ones: We could bootstrap successfully to real gold even if every prior case had been fool's gold. "Gold" would still be the right word for what we had been trying to pick out all along, and its original provisional features would still have provided a close enough approximation to ground it, even if later information were to pull the ground out from under it, so to speak.

20. Although it is beyond the scope of this paper to discuss it at length, it must be mentioned that this question has often been begged in the past, mainly on the grounds of "vanishing intersections." It has been claimed that one cannot find invariant features in the sensory projection because they simply do not exist: The intersection of all the projections of the members of a category such as "horse" is empty. The British empiricists have been criticized for thinking otherwise; for example, Wittgenstein's (1953) discussion of "games" and "family resemblances" has been taken to have discredited their view. And current research on human categorization (Rosch & Lloyd 1978) has been interpreted as confirming that intersections vanish and that hence categories are not represented in terms of invariant features. The problem of vanishing intersections (together with Chomsky's [1980] "poverty of the stimulus argument") has even been cited by thinkers such as Fodor (1985, 1987) as a justification for extreme nativism. The present paper is frankly empiricist. In my view, the reason intersections have not been found is that no one has yet looked for them properly. Introspection certainly isn't the way to look. And general pattern learning algorithms such as connectionism are relatively new; their inductive power remains to be tested. In addition, a careful distinction has not been made between pure sensory categories (which, I claim, must have invariants, otherwise we could not successfully identify them as we do) and higher-order categories that are grounded in sensory categories; these abstract representations may be symbolic rather than sensory, and hence not based directly on sensory invariants. For further discussion of this problem, see Harnad (1987b).
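The "vanishing intersections" worry can be stated in its simplest Boolean form as a toy illustration (with made-up features, making no empirical claim): whether a category's exemplars share any invariant features is just whether the intersection of their feature sets is non-empty; the wager made here is that for pure sensory categories a suitable learning procedure will find such invariants even when introspection does not.

    # Toy illustration (hypothetical features): in the simplest Boolean case,
    # the "vanishing intersection" question is whether a category's exemplars
    # share any invariant features at all.

    from functools import reduce

    horse_exemplars = [
        {"four-legged", "maned", "hoofed", "brown"},
        {"four-legged", "maned", "hoofed", "white"},
        {"four-legged", "maned", "hoofed", "black"},
    ]

    game_exemplars = [           # a Wittgenstein-style "family resemblance" case
        {"competitive", "rule-governed", "physical"},
        {"solitary", "rule-governed", "mental"},
        {"competitive", "improvised", "physical"},
    ]

    def invariants(exemplars):
        """Features shared by every exemplar (empty set = 'vanishing')."""
        return reduce(set.intersection, exemplars)

    print("horse invariants:", invariants(horse_exemplars))   # non-empty
    print("game invariants: ", invariants(game_exemplars))    # empty set

Of course real sensory invariants need not be shared Boolean features; they may be arbitrarily complex functions of the sensory projection, which is exactly why the footnote appeals to learning algorithms rather than introspection to find them.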

21. Although mathematicians investigate the formal properties of uninterpreted symbol systems, all of their motivations and intuitions clearly come from the intended interpretations of those systems (see Penrose 1989). Perhaps these too are grounded in the iconic and categorical representations in their heads.
