By Cleeremans, Axel; Timmermans, Bert; Pasquali, Antoine
Reference: Neural Networks, 20(9), pp. 1032-1039
Publication: Published, 2007-11
Peer-reviewed article
Abstract: When one is conscious of something, one is also conscious that one is conscious. Higher-Order Thought Theory [Rosenthal, D. (1997). A theory of consciousness. In N. Block, O. Flanagan, & G. Güzeldere (Eds.), The nature of consciousness: Philosophical debates. Cambridge, MA: MIT Press] takes it that it is in virtue of the fact that one is conscious of being conscious that one is conscious. Here, we ask what the computational mechanisms may be that implement this intuition. Our starting point is Clark and Karmiloff-Smith's [Clark, A., & Karmiloff-Smith, A. (1993). The cognizer's innards: A psychological and philosophical perspective on the development of thought. Mind and Language, 8, 487-519] point that knowledge acquired by a connectionist network always remains "knowledge in the network rather than knowledge for the network". That is, while connectionist networks may become exquisitely sensitive to regularities contained in their input-output environment, they never exhibit the ability to access and manipulate this knowledge as knowledge: the knowledge can only be expressed through performing the task upon which the network was trained; it remains forever embedded in the causal pathways that developed as a result of training. To address this issue, we present simulations in which two networks interact. The states of a first-order network trained to perform a simple categorization task become input to a second-order network trained either as an encoder or on another categorization task. Thus, the second-order network "observes" the states of the first-order network and has, in the first case, to reproduce these states on its output units, and in the second case, to use the states as cues in order to solve the secondary task. This implements a limited form of metarepresentation, to the extent that the second-order network's internal representations become re-representations of the first-order network's internal states. We conclude that this mechanism provides the beginnings of a computational account of mental attitudes, that is, an understanding by a cognitive system of the manner in which its first-order knowledge is held (belief, hope, fear, etc.). Consciousness, in this light, thus involves knowledge of the geography of one's own internal representations - a geography that is itself learned over time as a result of an agent's attributing value to the various experiences it enjoys through interaction with itself, the world, and others.
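A minimal sketch of the two-network setup described in the abstract may help make it concrete. The code below is an illustrative reconstruction, not the authors' implementation: the PyTorch framework, network sizes, the toy task, and all identifiers (FirstOrderNet, SecondOrderNet, etc.) are assumptions. It shows the first of the two variants mentioned above: a first-order network learns a simple categorization task, and a second-order network, trained as an encoder, learns to reproduce the first-order network's hidden states on its own output units.

```python
# Illustrative sketch only; all names, sizes, and tasks are assumptions,
# not the parameters of the authors' actual simulations.

import torch
import torch.nn as nn

class FirstOrderNet(nn.Module):
    """First-order network: a simple categorization MLP whose hidden
    states are 'observed' by the second-order network."""
    def __init__(self, n_in=8, n_hidden=16, n_classes=4):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.out = nn.Linear(n_hidden, n_classes)

    def forward(self, x):
        h = self.hidden(x)      # internal states exposed to the observer net
        return self.out(h), h

class SecondOrderNet(nn.Module):
    """Second-order network trained as an encoder: it must reproduce the
    first-order hidden states on its output units, so its own internal
    representations become re-representations of those states."""
    def __init__(self, n_states=16, n_bottleneck=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_states, n_bottleneck), nn.Sigmoid(),
            nn.Linear(n_bottleneck, n_states))

    def forward(self, h):
        return self.net(h)

first = FirstOrderNet()
second = SecondOrderNet()
opt1 = torch.optim.SGD(first.parameters(), lr=0.1)
opt2 = torch.optim.SGD(second.parameters(), lr=0.1)
task_loss = nn.CrossEntropyLoss()
meta_loss = nn.MSELoss()

x = torch.rand(32, 8)           # toy input batch (assumed sizes)
y = torch.randint(0, 4, (32,))  # toy category labels

for step in range(1000):
    # First-order training step: ordinary categorization.
    logits, h = first(x)
    loss1 = task_loss(logits, y)
    opt1.zero_grad(); loss1.backward(); opt1.step()

    # Second-order training step: reconstruct the first-order states.
    # Detaching h is an assumption: it keeps second-order learning from
    # reshaping the first-order representations it is meant to observe.
    recon = second(h.detach())
    loss2 = meta_loss(recon, h.detach())
    opt2.zero_grad(); loss2.backward(); opt2.step()
```

The second variant described in the abstract would differ only in the second-order network's target: instead of reconstructing the observed states, it would output labels for a secondary categorization task, using the first-order states as its cues.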