Computationalism-connectionism debate
The debate between computationalism and connectionism.
Wikipedia:1
As connectionism became increasingly popular in the late 1980s, there was a reaction against it by some researchers, including Jerry Fodor, Steven Pinker, and many others. These theorists argued that connectionism, as it was being developed at that time, was in danger of obliterating the progress made in the fields of cognitive science and cognitive psychology by the classical approach of computationalism. Computationalism is a specific form of cognitivism which argues that mental activity is computational, i.e. that the mind is essentially a Turing machine. Many researchers argued that the trend in connectionism was towards a reversion to associationism, and the abandonment of the idea of a language of thought, something they felt was mistaken. On the other hand, it was those very tendencies that made connectionism attractive for other researchers.
Connectionism and computationalism need not be at odds per se, but the debate as it was phrased in the late 1980s and early 1990s certainly led to opposition between the two approaches. However, throughout the debate some researchers have argued that connectionism and computationalism are fully compatible, though nothing like a consensus has ever been reached. The differences between the two approaches that are usually cited are the following:
- Computationalists posit symbolic models that do not resemble underlying brain structure at all, whereas connectionists engage in “low level” modeling, trying to ensure that their models resemble neurological structures.
- Computationalists generally focus on the structure of explicit symbols (mental models) and syntactical rules for their internal manipulation, whereas connectionists focus on learning from environmental stimuli and storing this information in a form of connections between neurons.
- Computationalists believe that internal mental activity consists of manipulation of explicit symbols, whereas connectionists believe that the manipulation of explicit symbols is a poor model of mental activity.
Though these differences do exist, they may not be necessary. For example, it is well known that connectionist models can actually implement symbol manipulation systems of the kind used in computationalist models. So, the differences might be a matter of the personal choices that some connectionist researchers make as opposed to anything fundamental to connectionism.
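As a rough, hedged illustration of that implementation point (not from the quoted article; the weights and helper names below are illustrative choices): a single threshold unit with hand-picked weights computes the boolean primitives out of which classical symbol-manipulating machinery is built, and a small two-layer arrangement handles a case like XOR that a single unit cannot.

```python
import numpy as np

def threshold_unit(weights, bias, inputs):
    """McCulloch-Pitts style unit: outputs 1 iff the weighted input plus bias exceeds 0."""
    return int(np.dot(weights, inputs) + bias > 0)

# Hand-picked weights turn single units into boolean gates,
# the building blocks of classical symbolic computation.
AND = lambda x, y: threshold_unit([1, 1], -1.5, [x, y])
OR  = lambda x, y: threshold_unit([1, 1], -0.5, [x, y])
NOT = lambda x:    threshold_unit([-1],    0.5, [x])

# XOR is not computable by one unit, but a two-layer net of such units does it.
def XOR(x, y):
    return AND(OR(x, y), NOT(AND(x, y)))

for x in (0, 1):
    for y in (0, 1):
        print(x, y, "AND:", AND(x, y), "OR:", OR(x, y), "XOR:", XOR(x, y))
```

Hand-wiring gates like this says nothing about whether minds or brains actually work that way; it only shows that nothing in the connectionist hardware rules out symbol manipulation.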
To make matters more complicated, the recent popularity of dynamical systems in philosophy of mind (due to the works of authors such as Tim van Gelder) has added a new perspective on the debate. Some authors now argue that any split between connectionism and computationalism is really just a split between computationalism and dynamical systems, suggesting that the original debate was wholly misguided.
All of these opposing views have led to a fair amount of discussion on the issue amongst researchers, and it is likely that the debates will continue.
[!memo] I fully agree with the position that connectionism and computationalism are compatible. Setting the compatibility question aside, though, I think computationalism is an overly simplified and inappropriately high-level model. Some of the logic implemented in artificial neural networks may well correspond to the symbol processing that computationalism posits, but the view that all mental activity is symbol processing strikes me as too rigid. —AK, 2005-08-28
From the SEP entry on the Computational theory of mind:2
One cornerstone of Fodor’s case for computational theory of mind was that some version of the theory was implied by cognitivist theories of phenomena like learning and language acquisition, and that these theories were the only contenders we have in those domains. Critics of computational theory of mind have since argued that there are now alternative accounts of most psychological phenomena that do not require rule-governed reasoning in a Language of thought, and indeed seem at odds with it.
Neural network models seek to model the dynamics of psychological processes, not directly at the level of intentional states, but at the level of the networks of neurons through which mental states are (presumably) implemented. In some cases, psychological phenomena that resisted algorithmic modeling at the cognitive level just seem to “fall out” of the architecture of network models, or of network models of particular design. Several types of learning in particular seem to come naturally to network architectures, and more recently researchers such as Paul Smolensky have produced results suggesting that at least some features of language acquisition can be simulated by his models as well.
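A minimal sketch of the kind of learning that "falls out" of a network architecture (a generic Hebbian associator, assumed here purely for illustration; not one of the specific models or results cited above): stimulus-response pairs are stored entirely in connection weights, accumulated as outer products, and presenting a stored stimulus reinstates the paired response.

```python
import numpy as np

# Toy stimulus-response pairs (components in {-1, +1}); the two stimuli are orthogonal.
stimuli   = np.array([[ 1, -1,  1, -1],
                      [-1,  1,  1, -1]], dtype=float)
responses = np.array([[ 1,  1, -1],
                      [-1, -1,  1]], dtype=float)

# Hebbian learning: the associations live entirely in the weights,
# summed up as outer products of each response with its stimulus.
weights = sum(np.outer(r, s) for r, s in zip(responses, stimuli))

# Recall: a stored stimulus (or a similar pattern) reinstates the paired response.
print(np.sign(weights @ stimuli[0]))  # -> [ 1.  1. -1.], the first response
print(np.sign(weights @ stimuli[1]))  # -> [-1. -1.  1.], the second response
```

No rule for the mapping is represented anywhere; the regularity is just the weight matrix.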
During the late 1980s and 1990s there was a great deal of philosophical discussion of the relation between network and computational models of the mind. Connectionist architectures were contrasted with “classical” or “GOFAI” (“Good old-fashioned AI”) architectures employing rules and symbolic representations. Advocates of connectionism, such as Smolensky (1987), argued that connectionist models were importantly distinct from classical computational models in that the processing involved took place (and hence the relevant level of causal explanation must be cast) at a sub-symbolic level, such as Smolensky’s tensor-product encoding. Unlike processing in a conventional computer, the process is distributed rather than serial, there is no explicit representation of the rules, and the representations are not concatenative.
There is some general agreement that some of these differences do not matter. Both sides are agreed, for example, that processes in the brain are highly parallel and distributed. Likewise, even in production-model computers, it is only in stored programs that rules are themselves represented; the rules hard-wired into the CPU are not. (The concatenative character is argued by some — e.g., van Gelder 1991, Aydede 1997 — to be significant.)
The most important “classicist” response is that of Fodor and Pylyshyn (1988). (See also Fodor and McLaughlin 1990.) They argue that any connectionist system that could guarantee systematicity and productivity would simply be an implementation of a classical (LOT) architecture. Much turns, however, on exactly what features are constitutive of a classical architecture. Tim van Gelder (1991), for example, claims that classicists are committed to specifically “concatenative compositionality” (Van Gelder 1991, p. 365) — i.e., compositionality in a linear sequence like a sentence rather than in a multi-dimensional space like Smolensky’s tensor-product model — and that this means that they explain cognitive features without being merely “implementational” and hence provide a significantly different alternative to classicism. In response, Aydede (1997), while recognizing the tendency of classicists to make assumptions that the LOT is concatenative, argues that the LOT need not be held to this stronger criterion. (Compare Loewer and Rey, 1991.) However, if one allows non-concatenative systems like Smolensky’s tensor space or Pollack’s Recursive Auto-Associative Memory to count as examples or implementations of LOT, more attention is needed to how the notion of a “language” of thought places constraints upon what types of “representations” are included in and excluded from the family of LOT models. There has been no generally-agreed-to resolution of this particular dispute, and while it has ceased to generate a steady stream of articles, it should be classified as an “open question”.
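To make the "non-concatenative" idea concrete, here is a rough sketch of tensor-product binding in the spirit of Smolensky's scheme (the role and filler vectors below are arbitrary illustrative choices): a filler is bound to a role by an outer product, bindings are superposed by addition, and a filler is recovered by multiplying the combined structure with its role vector, exactly so when the roles are orthonormal.

```python
import numpy as np

# Orthonormal role vectors, e.g. "agent" and "patient" slots in a proposition.
agent   = np.array([1.0, 0.0])
patient = np.array([0.0, 1.0])

# Arbitrary filler vectors standing in for the symbols "John" and "Mary".
john = np.array([0.9, 0.1, 0.3])
mary = np.array([0.2, 0.8, 0.5])

# Bind fillers to roles with outer products and superpose by addition.
# The whole structure is one distributed matrix: no constituent occupies its own
# slot in a linear string, which is what makes the encoding non-concatenative.
structure = np.outer(john, agent) + np.outer(mary, patient)

# Unbinding: multiplying by a role vector recovers the bound filler
# (exactly, because the roles are orthonormal).
print(structure @ agent)    # -> [0.9 0.1 0.3], i.e. "John"
print(structure @ patient)  # -> [0.2 0.8 0.5], i.e. "Mary"
```

Whether a system built on such representations merely implements a language of thought or genuinely bypasses it is precisely the open question the entry describes.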
In terms of the case for CTM, recognition of alternative network models (and other alternative models, such as the dynamical systems approach of van Gelder) has at least undercut the “only game in town” argument. In the present dialectical situation, advocates of CTM must additionally clarify the relationship of their models to network models, and argue that their models are better as accounts of how things are done in the human mind and brain in particular problem domains. Some of the particulars of this project of clarifying the relations between classical and connectionist computational architectures are also discussed in the entry “connectionism”.
From the SEP entry on Connectionism:3
… For example, (Pinker & Prince 1988) point out that the model does a poor job of generalizing to some novel regular verbs. They believe that this is a sign of a basic failing in connectionist models. Nets may be good at making associations and matching patterns, but they have fundamental limitations in mastering general rules such as the formation of the regular past tense. These complaints raise an important issue for connectionist modelers, namely whether nets can generalize properly to master cognitive tasks involving rules. Despite Pinker and Prince’s objections, many connectionists believe that generalization of the right kind is still possible (Niklasson and van Gelder, 1994). …
The Shape of the Controversy between Connectionists and Classicists
The last thirty years have been dominated by the classical view that (at least higher) human cognition is analogous to symbolic computation in digital computers. On the classical account, information is represented by strings of symbols, just as we represent data in computer memory or on pieces of paper. The connectionist claims, on the other hand, that information is stored non-symbolically in the weights, or connection strengths, between the units of a neural net. The classicist believes that cognition resembles digital processing, where strings are produced in sequence according to the instructions of a (symbolic) program. The connectionist views mental processing as the dynamic and graded evolution of activity in a neural net, each unit’s activation depending on the connection strengths and activity of its neighbors, according to the activation function.
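To make the contrast concrete (a hedged sketch, not a model from the quoted entry; the sizes and weights are arbitrary): on the connectionist picture, a "processing step" is every unit simultaneously recomputing a graded activation from its neighbors' activity and the connection strengths, rather than a program rewriting a symbol string.

```python
import numpy as np

def activation(x):
    """A smooth squashing function, so unit activity is graded rather than all-or-none."""
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_units  = 5
weights  = rng.normal(scale=0.5, size=(n_units, n_units))  # connection strengths
activity = rng.uniform(size=n_units)                        # initial activations

# Processing is the evolution of the whole activity vector over time:
# each unit updates in parallel from the weighted activity of its neighbors.
for step in range(10):
    activity = activation(weights @ activity)
    print(step, np.round(activity, 3))
```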
On the face of it, these views seem very different. However, many connectionists do not view their work as a challenge to classicism and some overtly support the classical picture. So-called implementational connectionists seek an accommodation between the two paradigms. They hold that the brain’s net implements a symbolic processor. True, the mind is a neural net; but it is also a symbolic processor at a higher and more abstract level of description. So the role for connectionist research according to the implementationalist is to discover how the machinery needed for symbolic processing can be forged from neural network materials, so that classical processing can be reduced to the neural network account.
However, many connectionists resist the implementational point of view. Such radical connectionists claim that symbolic processing was a bad guess about how the mind works. They complain that classical theory does a poor job of explaining graceful degradation of function, holistic representation of data, spontaneous generalization, appreciation of context, and many other features of human intelligence which are captured in their models. The failure of classical programming to match the flexibility and efficiency of human cognition is by their lights a symptom of the need for a new paradigm in cognitive science. So radical connectionists would eliminate symbolic processing from cognitive science forever.
[!memo] I agree with the passage marked in bold. —AK, 2005-08-25
See also
- Steven Pinker's sustained criticism: The Language Instinct, How the Mind Works, The Blank Slate