VOLUNTARY CONTROL

IN NATURAL AND ARTIFICIAL ARCHITECTURES

Henrique Schützer Del Nero

 

ABSTRACT

Consciousness has been largely omitted from the Cognitive Science literature, despite being the mark of the mental. The reasons for this lie in the fact that if one grants primacy to mental categories one is forced to accept anti-reductionist arguments. This comes from the fact that consciousness supposedly warrants the existence of objects, objects that belong to the phenomenological screen. These objects, and any analysis that takes mental semantics as prior, must give way to a view of consciousness as an evolutionary functional gain. Voluntary control is one of the main predicates of consciousness and it must be modeled. If one does not cope with this dichotomy, voluntary x automatic, one cannot claim that one's models display real cognitive power. Talking about voluntary control, and showing how a physicalistic approach to neuron oscillations can illuminate a syntax that scaffolds a reductionist project, avoids the difficulties of treating volition through "free will", an eternal philosophical conundrum. If the sensation of freedom usually accompanies the voluntary mode of operation, this may be because the process by which the brain switches from the automatic to the voluntary mode is based, in my model, on the presence of bifurcations that enrich the system's information entropy, losing, however, the predictability that holds at ordinary parameter values, where the behavior is quasi-linear.

1. Consciousness: semantics and syntax, content and function

Cognitive Science must face the problem of consciousness if any solution to the mind-brain dilemma is expected. But consciousness is at once so vague and so obvious a concept that many authors argue that: a) it lacks the systematicity to be treated informationally; b) it has to wait until more basic data become available.

My aim here is to show that there are two ways of seeing consciousness: as a class of objects or as an adaptive function. Moreover, considering the role and position that syntax and semantics play in the models, one can envisage some branches in which to allocate consciousness as a way some architectures process information.

a. Syntax and semantics

One of the common mistakes that may obscure the attempt to give a model of consciousness is failing to consider the priority one gives to the syntactical and to the semantical aspects of the problem. Whenever one takes entities as known and rules, be they strict (logical rules) or functional (e.g. differential equations), as what is to be sought, one adopts semantical priority. Arguments that doubt the existence of mental entities (e.g. Churchland, 1984), or their status as natural kinds, are abandoned in favor of a view that precludes reducibility and that holds that science must always elect a level for the primitives of the theory. Traditional Artificial Intelligence models, and Connectionist ones that interpret entities as objects translatable into a natural language, are semantical nets, despite having different ways of processing or connecting the elements (syntax). Concepts such as distributed representation (PDP), sparse encoding, etc. only shadow the semantical priority that categories play; the way they are connected afterwards is a syntactical discussion with two major supposed rivals: traditional rule seekers and connectionist statistical graspers. When one accepts a concept expressible in natural language as the minimal entity or predicate, i.e. as the lowest level of interpretation for an architecture, the rules that connect concepts being a second stage of the problem, the resulting model will always be a semantical net.

The tradition of building semantical nets, be they governed by logical rules, hence highly dissociated from neural plausibility (traditional AI), or by dynamical connections, hence neurally inspired (Connectionist AI), precludes a reductionistic solution of the mind-brain dilemma, because it accepts the arguments that mental objects displaying intentionality cannot be radically translated into physical laws (the strong argument of Fodor 1975 in The Language of Thought). Connectionist models in this tradition may mislead, because the supposed neural inspiration is a matter of how the elements are processed (syntax) and not of warranting their appearance and existence (semantics and ontology). Semantical priority is closed to any solution that considers the way entities appear (not to say emerge, avoiding conceptual confusion), while having two different ways of considering how these entities are manipulated: through rules (a second neural dissociation), or through invariant dynamical and statistical properties (neurally inspired, through the large assembly of processing elements and wide ignorance of the strict rules that play the game).

Reduction of the mental to the physical can be seen as full of difficulties, but if one looks at the very nature of several disciplines it cannot be discarded on arguments alone. The disciplines that urgently await a "solution" to the problem are: Psychiatry, where anthropological, cultural, neurological, dynamical and biochemical aspects all play vital roles; Neuropsychology, where the correlations between function and deviance in the face of lesions are to be considered in an informational way, as defended by Shallice (1988), but where something must warrant such correlations to exist: if the semantical aspect is prior and syntactical rules are the only place where some models can allude to the underlying neural structure, one may ask whether the correlations between entities and neural structures are legitimate; and Artificial Intelligence, because if intentionality is the mark of the mental, and if certain aspects of cognitive systems are merely putatively ascribed to the behavior of the machine, as if the machine had them, leaving Searle's Chinese Room argument among others without solution, then artificial cognitive architectures must face the problems that natural architectures solve quite well. This presupposes a correct comprehension of the mechanisms that process information in the Central Nervous System, allowing entities to exist and functions to be properly carried out.

Reduction is a word with many senses. One can talk about reduction of theories, of entities and of rules. Each is different and will not be treated here. Accepting that entities like beliefs and fears exist, and that memory and attention do too, are positions that presuppose semantical priority. Reduction can be seen instead as a double process that tries to constitute objects through the natural rules that allow an architecture to codify, retrieve, categorize and functionalize these very objects in legitimate behaviors. Double process here means that this effort does not occur without any prior knowledge, but that it builds the semantical blocks of cognition and of the mind from the syntactical rules of the brain (embedded in the semantics of physicalism). One cannot state anything without an ontology, but accepting the primacy of mental knowledge over physical knowledge, with a secondary trial at reduction, is a project doomed to failure, for the reasons quoted above (cf. Fodor and all other critics of this kind of project). More radical approaches to reduction might posit the brain as the primary place of information processing, grasping from its style of computation the boundaries of the categories that can be well formed. This is a kind of reduction that privileges brain syntax over mental semantics, adopting semantical objects that are not mental (in a certain sense): depolarizations, oscillations, action potentials, neurotransmitters, etc.

Modeling through the primacy of syntax is more akin to anti-representationalism if one takes representations to be objects that fill intentional blanks, but not if one considers representation to be a map from an environment that re-presents objects and relations through codification and topography (see Pribram 1991).

Of course, the objects that may appear in this kind of syntactical-priority model are not cats and dogs, fears and hopes, but functions that have been selected through evolution to allow mental entities and processes to exist.

b. Content and function

Discussions that assume the mental to exist, as if it were an empirical given, are vicious if one pursues reduction and identification of the underlying mechanisms of the brain. Mental objects might be the result of careful theoretical building over brain codification, as Paul Churchland (1984) defends: a theoretical realm, despite its apparent nature of prior privileged knowledge. This priority comes exactly from the fact that our inner sensations, perceptions, ideas, etc. are tailored according to a mind-language vocabulary. This is where phenomenal consciousness plays its role as the source of first knowledge. Whatever one feels or senses is untouchable. It may be a mistake according to the correspondence theory of truth, but it is not ill-built from the propositional point of view. In other words, the inner feelings, sensations, perceptions, etc. that appear on our conscious phenomenological screen are propositions, be they false or true, i.e. they are well-formed strings that are legitimate in spite of the possibility that they are mistaken.

Assuming that the objects present on our conscious screen are prior "knowledge" is a mistake that ruins all efforts to correlate mind and brain. The semantical priority of these conscious objects, once described in language, precludes the brain from playing an interesting role. The brain can at most play the token-identity role, and the weak contingent lesion role that registers certain coincidences: whenever a given part is lesioned, some sort of problem appears, and so on. One must pay attention to the fact that these allowances in the semantical-priority style only hide the impossibility of a final identification of the mental with the physical.

Phenomenological consciousness, with its first and indubitable objects, is part of the problem of consciousness and must be abandoned, because it is stated in a first-person language and cannot be translated into a third-person objective language, as science requires. (This is Searle's suggestion in 1992, but the core of the implications is different: while Searle points to this fact to redeem consciousness, I think it is a pernicious aspect that has to be acknowledged and abandoned, searching instead for other aspects of consciousness to be modeled.)

Consciousness as the place where conscious objects appear must be abandoned, and the function of consciousness in a cognitive-evolutionary perspective must be considered. How can one consider the functions of consciousness? I assume this is done through the consideration of its functional-behavioral predicates.

What are the marks of consciousness as a process, a class of states or an entity? Besides phenomenological experience, already discarded, one must mention: a) a decision-making device associated with justification and responsibility (a decision-maker architecture); b) judgment and evaluation of counter-intuitive and counter-factual situations, e.g. scientific laws and normative statements; c) "free" will; d) a sense of unity called personhood, responsible for one's identity and the center of determination of one's rights and obligations.

Maybe this list is not complete and may contain some intersections. I am going to focus on the problem of will, and on its relative, volition, to launch a tentative hypothesis: voluntary control, as opposed to automatic control over actions, be the actions motor ones or thoughts, is what characterizes a legitimate natural architecture. If one seeks to replicate consciousness in one's models, one may discard its phenomenological first-person aspect, assuming that the problem of consciousness as a privileged locus of control (e.g. the Supervisory Attentional System of Shallice 1988) must be accompanied by considerations tied to voluntary and non-voluntary control over cognitive actions.

The inversion is to privilege function and, among the functions of consciousness, to select free will as quasi-equivalent to voluntary control. First: while free will is a terrible issue from the philosophical point of view, voluntary control has a large body of psychological, neurological and psychiatric data already compiled. Second: if the routes that preclude reduction rest on the priority of mental semantics over syntax during modeling, then syntactical priority must use some kind of formalism that will not illuminate the boundaries between subtle linguistic and mental categories, but may illuminate the formal basis from which a "voluntary controller" might appear.

We therefore abandon semantical priority, suggesting that neural syntax can illuminate the process by which functions, and not objects, are tailored so as to allow the mental, and particularly conscious voluntary control, to appear in human cognitive systems.

2. Three basic formalisms for the problem of cognition

Science must pursue the quantification and formalization of phenomena. The more law-like its sentences, with a predictive and explicative character, the more scientific the field becomes. Cognitive Science may reach the status of a mature science if it has a formalism and an explanation for the ways cognition emerges in complex, e.g. brain-like, systems.

There are three basic formalisms, and/or their respective "physical" theories, that are candidates for this role. (One need not defend the natural explanation in order to defend the use of a certain sort of formalism; e.g. one may suggest that quantum formalisms capture the problems of brain-like computation well without defending that quantum processes are realized in the brain.)

a) Logic: neurons can be treated as logic gates, hence formal systems, e.g. the predicate calculus, are good tools to model cognition. Entities will be whatever can substitute for bound variables in quantified sentences (Quine's 1969 notion of being), so mental entities are candidates to substitute for bound variables in a formal system, it being irrelevant whether they exist or not, whether they are natural kinds or not, etc. This is a style that, first, from Nature's point of view, rests on a very narrow notion of the neuron and of its likeness to von Neumann architectures, and second, from the pragmatic point of view, supposes rules to govern every operation in the cognitive realm. Complexity, another important feature of cognitive systems, when considered through the lenses of natural and artificial models, here means algorithmic complexity. The measure of information here is obscure; it is preferable to say simply that these systems are information processors.
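
As a minimal illustration of this neuron-as-logic-gate view, a McCulloch-Pitts style threshold unit realizes simple gates; the weights and thresholds below are illustrative choices of mine, not parameters proposed in the text:

    # A McCulloch-Pitts style threshold unit: it fires (outputs 1) iff the
    # weighted sum of its binary inputs reaches the threshold.
    def threshold_neuron(inputs, weights, threshold):
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total >= threshold else 0

    # AND and OR gates as special cases (illustrative parameter choices).
    def AND(x1, x2):
        return threshold_neuron([x1, x2], [1, 1], threshold=2)

    def OR(x1, x2):
        return threshold_neuron([x1, x2], [1, 1], threshold=1)

    for x1 in (0, 1):
        for x2 in (0, 1):
            print(x1, x2, "AND:", AND(x1, x2), "OR:", OR(x1, x2))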

b) Dynamical Systems: from Nature's point of view, causality and determinism are held to exist. Cognitive architectures do not break the chain of causality and determinism, but may exhibit complex patterns of behavior over time. Complex behaviors appear in these classical systems, described by differential or difference equations, due to the presence of non-linearities and, at the extreme, to sensitivity to initial conditions. In a certain way this is the nature of Connectionist models, which use dynamics to underlie learning, retrieval, relaxation, etc. Concepts like basin of attraction, transients, non-linear connection, digital or analogical processing are all tied to this class of formalisms. It is difficult to assume that one might use these formalisms without embracing the classical physics that subserves them. Complexity here means, progressively, non-linearities and, at the limit, "chaos". The measure of information in these systems may be Shannon's usual relation between individual information and probability.
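
A one-line non-linear difference equation already displays the extreme case. The logistic map below is a standard textbook example (my illustration, not the text's): two trajectories starting 10^-9 apart end up macroscopically separated:

    # Logistic map x(t+1) = r * x(t) * (1 - x(t)): a minimal non-linear
    # difference equation, chaotic at r = 4.
    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    # Two trajectories that differ by 1e-9 at t = 0 diverge to an
    # order-one separation: sensitivity to initial conditions.
    a, b = 0.2, 0.2 + 1e-9
    for _ in range(60):
        a, b = logistic(a), logistic(b)
    print(abs(a - b))  # of order 1, despite the 1e-9 initial difference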

c) Quantum mechanics may play a role in cognitive models that is highly controversial: from one point of view it may furnish the diffusion Schrödinger-like equations that could describe fields of dipole oscillations in dendritic branches (cf. Pribram 1991); from another perspective quantum aspects may play a real role, because the quantal probability of neurotransmitter release rests on microscopic events at the quantum level (cf. Eccles 1993). Be these only formalisms, without commitment to the very nature of the neural process, or the very essence of the biological process, mainly in the neocortex, complexity here means breaking causality and allowing probability to play a vital role. Information here must be measured as a product of frequency and time, it being impossible to be precise on both axes, since Heisenberg-like principles preclude it. Dynamical Systems and broad Connectionist models are recruited below to scaffold my attempt to launch an information-dynamical way of modeling cognitive conscious controllers.
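
The time-frequency trade-off alluded to here is usually written as a Gabor-Heisenberg uncertainty relation for signals; in one standard form (a gloss of mine, not a formula from the text):

    \[ \Delta t \,\Delta f \;\ge\; \frac{1}{4\pi} \]

where \Delta t and \Delta f are the spreads (standard deviations) of a signal in time and in frequency: sharpening the measurement on one axis necessarily broadens it on the other.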

3. Biological and Formal Data

W. Freeman (1992) has shown that chaos may play an important role in the Central Nervous System. This is not the place to stress the misuses of "deterministic chaos", nor its abuses. It is enough to say that if one seeks the basis for a kind of brain syntax, neurons will have to be treated as oscillators, according to the Hodgkin-Huxley model and some of its derivations (cf. Glass and Mackey 1988). An oscillator can be described by the equation d²X/dt² + b·dX/dt + c·X = 0. The term b is a friction coefficient that can be: a) positive: the system goes (asymptotically) to a stable equilibrium point; b) negative: the system amplifies its oscillation amplitude without bound; c) zero: the system oscillates forever with the same amplitude (e.g. a frictionless pendulum).
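
A minimal numerical sketch of these three regimes (the integration scheme, step size and coefficient values are illustrative assumptions of mine, not taken from the text):

    # Oscillator d2X/dt2 + b*dX/dt + c*X = 0, integrated with
    # semi-implicit Euler (velocity first, then position).
    def final_position(b, c=1.0, x0=1.0, v0=0.0, dt=0.001, steps=20000):
        x, v = x0, v0
        for _ in range(steps):
            v += (-b * v - c * x) * dt  # acceleration from the equation
            x += v * dt
        return x

    print(final_position(b=0.5))   # positive friction: decays toward 0
    print(final_position(b=-0.5))  # negative friction: grows without bound
    print(final_position(b=0.0))   # zero friction: sustained oscillation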

The friction coefficient is a parameter that, for positive and negative values, leaves the system structurally stable, i.e. for a small perturbation of b's value the system behaves topologically in almost the same way (cf. Abraham and Shaw 1992). At b = 0 the system is structurally unstable: small perturbations dramatically change the system's behavior from the topological point of view. This is why values of b in the negative and positive domains are called ordinary parameter values, while b = 0 is called a bifurcation parameter value: there a small perturbation enriches and multiplies the system's possible states. Chaos is a cascade of bifurcations with successive period doublings; but, in my view, chaos is not necessary for the kind of rich behavior that the concept of structural stability captures. Stability, from the structural point of view, tracks qualitative changes in the system's behavior (henceforth topological variability). From the informational point of view, suppose one tries to forecast the state of a deterministic system at t+1, given the system's structure and its state at t. For ordinary parameter values the probability of the state at t+1 is 100%, hence 0 bits of information. For bifurcation parameter values the possible states at t+1 number at least two (for period 2), the individual measure of information in Shannon's terms being 1 bit. Bifurcations thus interrupt structural stability and, at the same time, enrich information entropy.
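
The 0-bit versus 1-bit contrast can be checked numerically. Below, the asymptotic states of the logistic map (a standard example, my choice rather than the text's) are collected at an ordinary parameter value (period 1: one state, 0 bits) and just past the first period-doubling bifurcation (period 2: two equiprobable states, 1 bit):

    import math

    def attractor_states(r, x0=0.3, transient=2000, sample=64):
        # Discard the transient, then collect the distinct states visited
        # (rounded to suppress floating-point noise).
        x = x0
        for _ in range(transient):
            x = r * x * (1.0 - x)
        states = set()
        for _ in range(sample):
            x = r * x * (1.0 - x)
            states.add(round(x, 6))
        return states

    # r = 2.8: ordinary value, period 1; r = 3.2: past the first bifurcation, period 2.
    for r in (2.8, 3.2):
        n = len(attractor_states(r))
        print(r, n, "state(s) ->", math.log2(n), "bits")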

In a different class of arguments, Eccles (1993) defends that consciousness is related to the phylogenetic history of the neocortex. One knows that the frontal lobes are good candidates to be the center of complex voluntary control, planning and monitoring. One also knows that the frontal areas receive information from the hippocampus and are sometimes unable to reinterpret it, the case in which "knowledge acts happen without awareness" (cf. Moscovitch and Umilta 1991). The way the hippocampus transfers information to the frontal lobes in these cases is a shallow output: enough to trigger a coherent action, but unable to trigger consciousness of, or mastery over, the action. Automatically controlled behavior might be processed in the hippocampus and in other subcortical areas, but may not reach the informational status or load needed to be transferred to the frontal, neocortical, areas.

In another interesting work, Ito (1993) proposes that voluntary control is conducted by the frontal lobes in a feedback mode, with the intense labor of learning and training. The more an action becomes routine, the more it is transferred to the cerebellum, be the action a motor one or a thought. In the cerebellum the action follows a kind of feedforward control until something happens that obliges the switch back to the frontal, voluntary mode.

Here, then, we have a way to propose a model over three brain structures that might underlie the voluntary x automatic dichotomy, and hence illuminate one of the cores of consciousness. But the feedback and feedforward modes are not enough to characterize the conscious and the non-conscious, nor is the bare identification with the frontal areas and the cerebellum. The really interesting suggestion comes with the notion of bifurcation, with the dichotomy between the structurally stable and the structurally unstable, and with the measurements of information at both extremes of the parameter space.

The hippocampus and other subcortical areas process information that comes from the environment and from past experience. They can allocate modules, or schemata in Shallice's terms. If these allocations are trivial, it is not necessary to recruit the frontal lobes, which are more tied to decisions under risk and to novelty: the cerebellum plays the role when the information is trivial. If something happens that makes the information complex, the frontal lobes are recruited. Consider now the signaling criteria that underlie this switching from the hippocampus to the cerebellum or to the frontal lobes. Remember: the more complex, the more non-linear, the more bifurcations one may find; and the more complex, the more apt to display voluntary control, a job tied to the neocortex. Then the more complex the environment and its manipulations, the more bifurcations will occur, making information grow and switching the mode to the frontal lobes or to the cerebellum. If something is new, informational entropy (in dynamical terms, the parameter values in the differential equations that govern assemblies of oscillators) is high, sending the problem to the attentional and volitional frontal modes of operation. There, through learning and careful analysis, things become more trivial, i.e. training allows bifurcation parameters to move to ordinary values, where the behavior of the system is predictable, structural stability is back and informational entropy is 0. The cerebellum can handle this situation until something disturbs the parameter values, switching the problem back to the frontal lobes.
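
A toy sketch of this switching criterion (the entropy threshold, the probability estimates and the module names are illustrative assumptions of mine, not the model itself):

    import math

    ENTROPY_THRESHOLD = 0.5  # bits; an illustrative switching criterion

    def entropy_bits(probs):
        # Shannon entropy of the predicted next-state distribution.
        return -sum(p * math.log2(p) for p in probs if p > 0)

    def route(next_state_probs):
        # Low entropy (ordinary parameter values, predictable behavior):
        # stay in the automatic, feedforward cerebellar mode.
        # High entropy (bifurcations, novelty): recruit the voluntary,
        # feedback frontal mode.
        h = entropy_bits(next_state_probs)
        return "cerebellar (automatic)" if h < ENTROPY_THRESHOLD else "frontal (voluntary)"

    print(route([1.0]))       # routine: fully predictable -> automatic
    print(route([0.5, 0.5]))  # bifurcation, 1 bit of novelty -> voluntary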

Something, however, remains problematic: why are voluntary acts accompanied by the sensation of freedom? If the quantum explanation is true, then freedom could be a correlate of inner chance. But if the system is interpreted through classical formalisms, then causality and determinism have to share a place with the sensation of freedom the subject experiences. The explanation can lie in the well-known readiness potential (Libet et al. 1983) that occurs some 300 ms before a voluntary action appears on the conscious screen: the system has a basal rhythm, and even for apparently non-sequitur, "free" actions, there must have been something that preceded them. If the value that preceded is a bifurcation value, or something that led to chaos, then examination of t-1 will not be capable of rescuing the causal chain. But causality is there: prediction failed, yet determinism still underlies the will and conscious control. Freedom stands for phenomenological sensation and unpredictability; causality and determinism, for the structure of the system.

REFERENCES

Abraham, R. and Shaw, C. (1992) Dynamics: The Geometry of Behavior. Addison-Wesley.

Churchland, P. (1984) Matter and Consciousness. MIT Press.

Eccles, J. (1993) Evolution of Complexity of the Brain with the Emergence of Consciousness. In Pribram, K. (ed.) Rethinking Neural Networks: Quantum Fields and Biological Data. Lawrence Erlbaum Associates.

Fodor, J. (1975) The Language of Thought. Harvard University Press.

Glass, L. and Mackey, M. (1988) From Clocks to Chaos: The Rhythms of Life. Princeton University Press.

Ito, M. (1993) How Does the Cerebellum Facilitate Thought? In Ono, T. et al. (eds) Brain Mechanisms of Perception and Memory. Oxford University Press.

Libet, B., Gleason, C., Wright, E. and Pearl, D. (1983) Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential): The unconscious initiation of a freely voluntary act. Brain, 106, 623-642.

Moscovitch, M. and Umilta, C. (1991) Conscious and Nonconscious Aspects of Memory: A Neuropsychological Framework of Modules and Central Systems. In Lister, R. and Weingartner, H. (eds) Perspectives on Cognitive Neuroscience. Oxford University Press.

Pribram, K. (1991) Brain and Perception: Holonomy and Structure in Figural Processing. Lawrence Erlbaum Associates.

Quine, W. (1969) Ontological Relativity and Other Essays. Columbia University Press.

Searle, J. (1992) The Rediscovery of the Mind. MIT Press.

Shallice, T. (1988) From Neuropsychology to Mental Structure. Cambridge University Press.