TOPOLOGICAL COMPUTATION AND VOLUNTARY CONTROL:

the possible role of the neocortex and the very aim of cognitive science

Henrique Schützer Del Nero, José Roberto Castilho Piqueira, Alfredo Portinari Maranca

ABSTRACT

Cognitive Science, a reaction against behaviorism, proposed the revival of the notions of mental representation and of intentional systems. However, there was a price to pay: it embraced a dissociationist position regarding mind and brain, positing mental primitives that exist independently of implementational brain mechanisms. Symbolic Artificial Intelligence and Connectionist Artificial Intelligence (Neural Networks) fail, in our view, to model mental phenomena from the brain perspective, because mental properties such as consciousness are either imputed or omitted. Mental semantics is still the class of primitives that guides the interpretation of both kinds of model, in spite of the fact that symbolic architectures take advantage of one style of processing element and Neural Networks of another. Starting from this point we propose a third class of models based upon the following assumptions:

a) a bottom-up strategy that does not take mental phenomena as primitives but as results of the partitions of the topological space that complex systems can produce;

b) redefining the class of mental contents, particularly consciousness, as a class of functions;

c) defining consciousness as a function, i.e. as something that performs voluntary control as opposed to automatic control;

d) analyzing objects that can be subject to voluntary and to automatic control as a class of oscillations that can be modelled through differential equations;

e) using the notions of structural stability, bifurcation parameter and chaos, so that an oscillation can be topologically analyzed, allowing the system to recognize which mode is to be recruited: the voluntary or the automatic;

f) defining consciousness as synchronization, according to the hypothesis of the 40 Hz oscillation, and analyzing the relation between Phase-Locked Loops (well-known circuits that display oscillatory behavior) and neuron assemblies;

g) defining the class of events that might occur when one PLL locks, the "automatic mode", and when a second locks through synchronization with the first, the "conscious mode";

h) defining each pair of two PLLs, one of second order and the other of third order, as the minimal atom that might perform the process of conscious vs. automatic computation through an analysis of the topological aspect of the signal.

Consciousness, as a content, would be non-analyzable in third-person language (typical of science, according to Searle), but as a function, explicitly tied to voluntary control, it could be considered the result of a topological analysis that happens at the preprocessing levels, consciousness being a function that demodulates information whenever that information presents structural instability, i.e. a bifurcation in its parameter space.

INTRODUCTION

There is an eternal battle that has regained vigor with the appearance of computer science: must semantics and ontology be kept separate from syntax? Or, are the rules of processing (syntax) different from the rules, if there are any, of meaning and existence (semantics)?

Artificial Intelligence has developed many methods to control, identify and manipulate artificial systems, in areas such as petroleum exploration, signal identification, expert systems, etc. But Artificial Intelligence went further, presuming that the mind could also be the subject of models, or at least that models could gain the status of minds, were they able to be intelligent or to behave intelligently. Then the dissent began, because Psychology, at that time an orphan of the behaviorist dream, caught on immediately to the revival of intermediate agencies, mixing information, mental content and mental rules in the same strange concept of representation. Intermediate agencies were recruited to simplify the problem, being the links that can interpret internal phenomena. Is it legitimate to recruit entities when the existence of these same entities is in doubt?

Artificial Intelligence, with programs that have a psychological flavor, and Psychology, with the computational flavor of a symbol-processing mind, began a new era that should be better understood.

The aim of this article is to show that Cognitive Systems will not enrich our knowledge of the human cognitive system if one does not pay attention to the nature of the presumed mental, or intelligent, objects they handle and the results they allow. Cognitive Science in a very broad sense is devoted to studying both the processes and the nature of the objects that can be recruited as internal-mental agencies. In this sense, it has to pay attention to both sides of the coin: how human cognition can enrich our artificial models for different purposes, and how artificial models and formalisms can enrich our understanding of the human mind, laying the basis of a scientific-formal Psychology. Besides being a question of interpretation, it is an attempt at establishing the limits and boundaries that a set must have to be called the set of mental and intelligent operations. There might be objects that are intelligent and operations that, in spite of being complex and unpredictable, are not intelligent. Careful conceptual research must be undertaken in order to clarify the field. Cognitive Systems will never be cognitive if one does not define with precision what it is to be cognitive and to be a system. The search for these concepts pushes one to do both science and metascience before labeling a result or an inquiry 'cognitive'. Engineering without scientific and conceptual care will be mere problem-solving. And if it is problem-solving, with an acute sense of purpose and pragmatically oriented work, it is better to abandon contaminated terms like intelligence, cognition and mind.

My aim in this work is to propose an alternative method of ascribing intelligence and cognition to an architecture. If one does not pay attention to consciousness, one will never be modeling cognitive-intelligent behavior; but consciousness is a very complicated concept, full of traps and fallacies. Let us try to face the enemy, guarding against its opaque sides.

I. Consciousness

The trick behind all the discussions about consciousness, and all the failures they have led to, is the confusion between form and content.

Consciousness can be considered a class of contents, i.e. all the objects like feelings, sensations, ideas, etc. that populate our inner subjective lives. It is bad strategy to conceive the problem from the content's point of view, because all conscious contents are expressed in a first-person language, 'I feel...', 'I think...', etc., and the translation of these languages into third-person language is often difficult. Without translation, however, the mark of subjectivity survives on consciousness, in spite of its having a great deal of objectiveness, qua directly acquainted, preventing any scientific statement from being made about the subject. [1]

Consciousness can be considered as a form, or as a class of forms, that underlies a certain functional feature. As a biological trait, it is reasonable to think that it has been selected and that it represents a gain over environmental challenges. The human mind has been considered something that enabled animals to better manipulate food resources and to acquire social relationships.[2]

Consciousness is the main predicate and the very essence of mind. When one talks about intelligent or cognitive behavior one is necessarily talking about mindful behavior. Then, if one talks about cognitive systems, one is talking about mindful systems, hence conscious ones.

Consciousness as a class of objects can be considered as the phenomenological flow of inner experiences one has: experiences of the world, of the inner body, of the past, of the future, of imagination, of reality, of bizarreness, of properness, of reasonability, etc. Phenomenological descriptions are the contents that play the continuous conscious role of our inner life.

Conscious forms are the neural functions that allow consciousness to manipulate its objects in a particular way. These neural functions are highly circumscribed to certain structures of the Central Nervous System (CNS), particularly in the neocortex.

The adaptive functions of consciousness are the abilities to master novelties, imagination, things to be learned, dangerous or risky situations, creation, reflection, inquiry, justification, responsibility, moral values, normative-contingent rules, etc.

Language, in spite of being so rich and noble, can be largely handled through automatic behavior: when one wants to discuss something, only the nature of the arguments is in focus, almost all the rest being automatically driven.

If we consider among the objects of consciousness all the inner experiences, including self-reflection, awareness, attention, etc., we must at the same time consider that a particular structure renders all of these possible. There is a whole body of arguments tied to the structure, function and form that render consciousness possible. It is one thing to consider an object of will; it is another to consider the will as a function that renders it possible for objects to fulfill this mode. I think of A. A is the object of my thinking, but there is something happening that makes thinking possible. Is it the nature of the object A (be it a value, a set of values or a stochastic distribution of values), or the nature of the process that enables A to be the subject of my thinking?

Consciousness as a class of contents is every phenomenological object one experiences; as a form, it is the way those objects are computed in order to be conscious.

The conscious mode is tied to phenomenological inner experience; hence phenomenology is almost immediately recognized as the science of conscious contents. If there is no future for a science based upon phenomenological contents, then one should pay attention to the functional traits that underlie this mode of operation.

There is a class of sub-routines that enables an architecture to perform esthetic and ethical judgments, voluntary control, justification, etc. Observing the functional roles and the form that underlie these operations, one can access consciousness from the form point of view.

Suppose someone is learning how to drive a car, where a sequence ABC must be mastered. First, it is done slowly and with full attention. As it is learned, it becomes quicker, less conscious and more automatic. The ABC sequence seems to be the same as it was in the beginning, but something changed in our brains that enabled the structure to handle the situation in a faster and more automatic mode. The brain structures that manipulate these events are different: while conscious novelties are largely handled by frontal structures, the automatic mode is largely circumscribed to cerebellar structures. [3] [4]

The ABC sequence can be the subject of conscious experience, i.e. phenomenal experience, and can also be, after learning, the subject of automatic manipulation. It must be stressed that ABC is not a triplet of single values: it can be strictly deterministic or stochastic, continuous or discrete. This explains the fact that learning enables a large range of different actions, all subsumed under a class of values. Within these values many computations can be performed, be they 'if...then' rules or statistical correlations of numerical series. All these computations are made within the limits defined by the possible values of A's, B's and C's for the automatic mode. Whenever something new, risky, anomalous, etc. happens, the sequence is gated back to the conscious-phenomenological-attentional mode. ABC is an object that can be manipulated both by the conscious mode and by the automatic mode. There are several algorithms that may perform corrections during the automatic mode, but in certain situations ABC will be pushed back to the conscious mode.

If ABC is an object, there must be something tied to it that enables the mutual gating mechanisms to go from the conscious to the automatic mode, and vice versa. Moreover, if ABC is not a single triplet but a class of values within a certain range, allowing the system to perform subtle and complicated corrections, what is the essential feature of ABC, qua object, that makes it phenomenologically relevant?

It is not phenomenological accessibility alone that makes ABC important. That is the result from the contents' point of view, since all the objects that populate our consciousness enhance phenomenological experience. It is being driven to a special mode, containing a certain class of functions that are not present in the automatic mode, that makes ABC conscious.

If consciousness is a mode and a class of contents, if there is always phenomenological experience (from the content point of view), what else can characterize consciousness from the structural point of view?

Voluntary control, the counterpart of will and freedom as conscious contents, is one of the major structural traits that differentiate the conscious mode from the automatic one. During automatic computation there is not the slightest appearance of voluntary control, something that can be inferred from the fact that there are no wishes on the phenomenological-conscious screen.

Then, we can hypothesize that phenomenological experience is common to all conscious objects, and so is voluntary control. But this is not trivially true, because there are two classes of objects of conscious experience that seem detached from voluntary control: psychoses and dreams.

Voluntary control is an important feature of conscious manipulation, in opposition to the automatic mode. Even if the automatic mode performs complicated calculations, rule-based or statistically driven, these do not include voluntary modes of operation, nor phenomenological experience. But if phenomenological experience is common to all forms of consciousness, the same might not be true of voluntary control, which seems absent during psychoses and dreams.

Consciousness as content is phenomenological, as opposed to automatic, 'blind' operation. Whatever structure mimics mind powers must have phenomenological experience or voluntary control.

Voluntary control is a kind of special operation tied to wakefulness, creativity, justifications and actions, but it seems absent during other common happenings of cognitive architectures: dreams and psychoses. During both there is a certain degree of phenomenological experience and a certain lack of control. Does this render voluntary control a bad functional equivalent of conscious contents?

We will try to examine the object ABC, which can be learned voluntarily, handled by automatic modes, and present during dreams and during psychoses, trying to grasp the intricacies that enable this object, or class of objects, to be subject to four different modes of cognitive operation: wakefulness, dreams, psychoses and automatic operation (see Table 1).

Wakefulness = phenomenological + voluntary

Dreams = phenomenological + partially automatic

Psychoses = phenomenological + partially automatic

Automatic = non-phenomenological + non-voluntary

TABLE 1: four "cognitive" modes

II. Wakefulness, dreams and psychoses

Let me quickly summarize what was said above about cognitive objects and modes of operation upon them. Every object can in principle be conscious or automatic. It may be the same object that is gated from one mode to the other. Among the cognitive, qua phenomenological, objects one must include awake-type objects (objects one perceives during wakefulness), psychotic-type objects (objects one perceives while psychotic) and dream-type objects (objects one perceives while dreaming). All of the last three have phenomenal experience underneath.

If voluntary control is opposed to an automatic one (the fourth cognitive mode of computation), and if there is phenomenological experience in dreams and psychoses without explicit or effective voluntary control, then the automatic would not simply mean the absence of voluntary control, nor would the voluntary mean conscious.

This is the story of all the failures to explain consciousness in a reasonable way: there is something that characterizes it, phenomenological experience, that cannot, at least until now, be modeled or putatively ascribed.

What is the interest of this scheme for the problem of cognitive architectures? And what is its relation to engineering, from the models' point of view and from the conceptual point of view? If one does not pay attention to consciousness one never builds cognitive architectures (at least those that can help one to better understand the human mind, giving new elements for a better Psychology and a better Psychiatry); but if one pays attention to consciousness, every practical project and model seems doomed to failure, because consciousness seems foreign to these realms. It is better, then, to examine the above table carefully, searching for the traps that must be hidden in it. Voluntary control might be a good candidate for a functional synonym of phenomenological experience, and the existence of the two defective representatives -- dreams and psychoses -- might be a source for strengthening the formalisms that shall be proposed to underlie conscious control.

III. Environmental feeding and short-term memories

Table 1 contains many simplifications and was conceived as a way to help one turn from the common way of seeing consciousness, the phenomenological point of view (content), to a functional mode (form). What may be the trap hidden in the table? Sensorial information about the environment and about the system (be it a body or a machine) is continuously fed to short-term memories, mainly represented by the hippocampus [5], except during dreams, when sensorial feeding is almost interrupted, leaving the system exposed autonomously to its natural-endogenous frequency (a deep continuous oscillation that pervades the whole brain). [6]

Sensorial information does not map onto any of the above categories: phenomenological vs. non-phenomenological (from the content point of view) or voluntary vs. automatic (from the form point of view). The input signal to the system that performs the four cognitive modes is the result of environmental information plus the inner frequencies that pervade the system. This could explain why dreams mimic certain features of consciousness. The Central Nervous System can be considered a closed system where representations take the form of 'a priori' constraints. During wakefulness only a small part of the sensorial feeding reaches the cortex, resetting parameters. During dreams the system resembles wakefulness in terms of the class of representations, in spite of the fact that coherence among the units is usually loosened.[7]

                        wakefulness   dream   psychoses   automatic mode
phen. experience        yes           yes     yes         no
voluntary control       yes           no      no          no
sensorial information   yes           no      yes         yes

TABLE 2

If one wants to model cognition one has to deal with phenomenological aspects, but why not choose voluntary control as the source of difference that must underlie the cognitive style of computation? The only objection would be that there is no equivalence between voluntary and conscious, because there are dreams and psychoses, in which phenomenological experience holds and voluntary control is absent. But maybe voluntary control is still present in these two phenomenological-cognitive classes, only in a deviant form. This may enrich a model instead of representing an obstacle.

During dreams there is no environmental feeding, but there can be a certain amount of control regarding contents and the interruption of sleep.[8] And during psychoses? In principle there is voluntary control over a large range of behaviors during psychoses, except over the very subject of the diagnosis. Schizophrenia will be considered here as the prototype: a condition in which many disturbances of the frontal lobe occur [9], with loss of command over will, isolation (autism) and progressive lack of purpose and goal-oriented behavior.

One might then suggest that voluntary control is present in wakefulness in its purest and fullest form, in dreams as a quasi-absent mechanism, and in psychoses as a pathological combination of the automatic and the ill-voluntary.

Voluntary control (in its pure and normal form) could in principle be ascribed to each of the three cognitive phenomenological states -- wakefulness, dreams and psychoses -- despite being almost absent in the last two. Instead of eliminating voluntary control from dreams and psychoses, the very examination of the nature of volition can enable us to understand the normal and the pathological aspects of the concept, which could be a good candidate to substitute for phenomenological experience from the content point of view.

IV. Algorithms and cognition

The very core of science is to produce models of knowledge that simplify, abstract, explain and predict phenomena. The idiosyncratic aspect of each object must be removed in order to find affinities and classes, and to find rules that connect classes in lawful forms.

Cognitive Science [10] [11] appeared as a revival of the old idea of intermediate levels of information processing. Behaviorism failed to explain behavioral phenomena based upon inputs and outputs, and internal representations were recruited to stabilize knowledge. But the enterprise committed two mistakes, from my point of view:

a) it overlooked conscious phenomena as the real mark of mental phenomena, and

b) it adopted a 'content' (as opposed to form-functional) way of seeing mental categories.

The doctrine stating that objects and rules can be abstracted while preserving the nomic rules that govern the formation of these objects, and their rules of connection, led to strong Artificial Intelligence positions that dislodge neurology as the very substratum of cognition: "mind is software and brain is hardware".[12] [13]

The brain as a Universal Turing Machine was a mere implementer that could be dismissed, leaving to researchers the goal of finding the classes of objects and the classes of rules. The first would be a matter of semantics and ontology and the second a matter of syntax. As there were no rules of strict equivalence between the syntax of mental phenomena and of brain phenomena, the brain as a general implementer could be omitted. [14]

The criticism of adopting a 'content' point of view comes from the omission of consciousness and from the strange way some authors define representations. When representations took the form of ideas, beliefs, etc., they became contentful, in spite of appearing as the form of mind phenomena. If one considers the sentence 'Paul believes that P', then P is the object and belief is the mode. This seems a way of building a system where the opposition between content and form is preserved. But beliefs are already mentally interpreted objects; they could thus be called structures from the mental point of view, but they would be contents of another order regarding brain operations. If there is no way to translate 'beliefs' radically into the brain vocabulary, then either belief is a functional-emergent predicate or it is a kind of special mind-content that seems as if it were a mode, regarding the object of belief, but is indeed a content too.

Adopting a kind of dissociationist position, regarding cognition as a kind of algorithm but leaving the primitives of interpretation at the mental level, cognitive science precludes the distinction between voluntary algorithms and automatic algorithms, the only one, in my view, that grasps the very nature of cognitive phenomena. In doing so, cognitive science modeled only automatic modes of operation, leaving the doors open to a large class of criticisms that saw in the computer metaphor of mind a mere syntax processor without any real semantic power. [15]

If cognition is not defined with regard to the phenomenological aspects of consciousness, or their equivalents -- voluntary modes of operation, be they normal or deviant -- cognition is only automatic operation. We know that the automatic computations that take part in our life are very complicated. Maybe some are rule-based (more tied to a software way of considering the mind) and others are more shadowed and statistical, more tied to a neural-net way of considering the mind. But both traditional AI (Symbolic Artificial Intelligence) and Neural Network (Connectionist Artificial Intelligence) models are unable to grasp the essence of cognition because they do not face the problem of voluntary control, or of phenomenological experience. They are always driven by complicated formalisms and data analysis, but fail to answer simple questions: does the architecture understand what is going on? Does the architecture have control over its acts?

There is a very common way to escape from these questions by giving complicated philosophical explanations: it is a matter of imputation whether something possesses consciousness or not.[16] This is the well-known Turing test. If the computer is able to lie, pretending to be what it is not, then it has cognitive abilities.

The answer to the questions may be twofold:

a) one may say that it is a matter of time until we reach enough computational power and memory for a computer to pass the test;

b) one may say that it is a mistake to consider this, because the very essence of cognition is tied to non-algorithmic operations. [17]

I think one should not abandon the algorithmic metaphor for the mind, but the nature of the algorithms that are to mimic cognition must be retailored around the very opposition between voluntary and automatic control.

Algorithmic treatments of mind can be seen as legitimate science regarding the abstractive nature of the enterprise. The rules of cognition must be abstracted while preserving their nomic form. This is science and determinism. Without a kind of determinism of structure, regularities do not hold, and science becomes impossible. There must be an equivalence between a computational-algorithmic credo and deterministic, causal theories: both seek regularities that can render contingencies explainable and predictable.

When, then, does Cognitive Science go wrong, producing only cognitive systems in a very degraded sense of 'cognitive'? When it takes the objects of mental experience, hence conscious objects, and transforms them into mental forms, as in the case of beliefs, or into mental blocks, as in the case of genuine mental objects such as rooms and tables.

We do not have the faintest idea of how the brain computes the frequencies of neurons to codify mental objects and rules. But we can suspect that the complete dissociation of brain and mind was an extreme misunderstanding of what a science of the mind must pursue. The objects and the rules in a cognitive architecture must be abstractions, but they have to respect all the potential computations that a cognitive architecture can perform. Comparing a cognitive architecture with a Turing machine risks the mistake of reducing all the laws to a kind of predicate calculus (a branch of Mathematical Logic): were there laws that connect things in some manner not describable by such rules, Turing machines would not grasp the very nature of cognition. Comparing a cognitive architecture with a neural network commits the mistake of adopting as legitimate categories that are under suspicion: mental categories. Even a very complicated neural architecture has to have an interpretation, and the categories recruited to interpret nodes, attractors and convergent solutions will be the chosen categories: if one interprets a neural network as a system identifier, it may be closer to the brain but far from the mind; if one interprets it with mental blocks, one is farther from the brain than the traditional AI proposer was.

Considering a Turing-machine model of cognition is a rather syntactical prejudice that may lead to a false understanding of the mind. Considering a neural-net model of the mind is a semantical prejudice, because the blocks that serve as interpreters will in the end be the mind's contents. What is difficult is to choose the primitives, or state variables. Identifying a system is sometimes much more complicated than stating the rules for processing the chosen elements. [18]

There always remains the question whether a Universal Turing machine would be able to compute all the functions a neural net does. Of course, if one answers no, then Penrose could be right, but the nature of the problem would not be the quantum problems sometimes alluded to, but only an impossibility of translating rules of the differential calculus into the predicate calculus.

Adopting a more cautious and humble position, it is difficult to ascertain whether the very nature of mind phenomena is classical or quantal. In a certain sense, regarding models, there is only a problem of having structure (a kind of nomic relation between the elements) and a matter of measure. Both classical mechanics and quantum physics have deterministic structures [19] [20], and the problem of quantum physics is that there is no strict causal relation between isolated elements, only probabilistic distributions. In a very broad sense of algorithm one can consider the connection of A and B, be this connection a strict or a stochastic link. Abstractions exist exactly to affirm that the distribution of A's has a certain relation to the distribution of B's. Determinism is not derogated: stochastic determinism is still determinism, not the mere chance and randomness that would prevent science from existing.

The discussion about the nature of the physical event that underlies cognition is therefore not a very problematic one. Taking algorithms in the broad sense of necessary stochastic connection, one can hold that a science of cognition is algorithmic and computational. But why has it failed? Because it has overlooked the four aspects of a cognitive system: the voluntary-conscious mode, the dream mode, the psychotic mode and the automatic mode.

All the algorithms that have been tailored so far might have problems because they carry syntactical prejudices and semantical prejudices, and because they do not face the problem of mind as the structure that enables genuine cognition.

V. Semantics and syntax

One of the cues that might underlie cognition is that the entities that appear on our conscious mental screen are not the best candidates for a semantical interpretation of the world. This position is called eliminative materialism and holds that a genuine science of cognition will be a neurophysiological analysis, full of neurophysiological terms. [21] [22] The hard dissociation between mind and brain would be a mistake, and the future would be to consider cognitive what is explainable in brain terms ('brainly' syntactical). Representation, in this sense, would be a mere topography of the brain sub-systems. This extreme version of the reaction against 'strong AI' commits a kind of simplification, because it considers all our phenomenological experience a mere result of knowledge acquired through ordinary language. Voluntary control could, in this sense, be considered only as a trait that happens whenever the locus of computation is the neocortex. Mind has more to do with contents, and if we believe in our contents this must have adaptive significance. If, however, it is impossible to map contents from the mind realm onto physical entities at the brain level, why not ask whether the separation between brain syntax, brain semantics, mind syntax and mind semantics is not a trap?

Brain syntax means firing action potentials, modifying the strengths of synaptic connections, converting pulse to frequency codes at the level of neuron assemblies, etc.[24] (prototype discipline: Neurophysiology)

Brain semantics means a localizationist style of recognizing structures and objects, like olfaction, vision, emotion, etc. (prototype discipline: Neuropsychology)

Mind syntax means finding rules of connection and of formation of significant sentences and actions. It can be a predicate calculus, in the traditional AI version, or a differential calculus (or Statistical Mechanics) analysis, in neural networks. (prototype discipline: Cognitive Science with strong emphasis on computational models)

Mind semantics means the objects we perceive, sense and feel, and the modes in which our conscious experience stands toward them (in the case of intentional objects like beliefs, fears, hopes, etc.). (prototype discipline: Cognitive Science with strong emphasis on Psychology and Philosophy)

To admit that there might be a trap, or an anomaly, in the relation between mind and brain is only another way of saying that what we need is a way of connecting brain syntax and mind semantics. But in such a crude version this seems dull. The inversion from conscious phenomenological contents to voluntary modes can partially enlighten the problems because:

a) it preserves the brain syntax since it can propose a mechanism of recognizing categories based upon syntactical features.

b) it preserves brain semantics because it preserves the gating structures, particularly frontal lobes, cerebellum, thalamus and hippocampus.

c) it preserves mind syntax because it doesn't deny that there are rules of connection between elements that mimic brain connections in a quasi-homologous form.

d) it preserves mental semantics because it doesn't deny that freedom exists qua conscious object, but it encapsulates freedom in a functional mode, with deep brain reasons underneath, called voluntary control.

In other words, the opposition between semantics and syntax precludes our understanding of the very nature of cognition, leading one to model either the brain or the mind, but never their connection. A model of the connection might surpass this dichotomy and propose a formalism that treats the four modes of cognitive computation.

VI. Topological computation

Let us return to the object ABC that must be learned. Consider ABC an object, a sequence of operations, a rule connecting A to B and then to C, or a transformation of one into the other, etc. Consider A, B and C as having one value or a set of values, as being state variables, probabilistic distributions, etc.; it does not matter. What counts is to ask what particular trait in the object ABC makes it subject to voluntary-conscious computation or to automatic computation. Is it the same object ABC that is computed in the conscious-voluntary mode, in the dream mode, in the psychotic mode and in the automatic mode? If it is not the same object, there must be something one level above, i.e. some kind of designation that makes the ABC sequence subject to the performance of the frontal lobe or of the cerebellum. The final products, the motor acts that command the car, are (almost!) indistinguishable. As we do not want to proliferate entities, it is not recommended to suppose the existence of a supervisory system that qualifies the ABC objects, because the problem would be transferred to this supervisory system in an infinite chain. [24]

If it is the same object that is in the frontal lobe (voluntary) or in the cerebellum (automatic), then there must be a kind of label on it that enables the system to recognize when it has to be gated from one mode to the other.

The formal concept that suggests itself when one deals with slight variations in a certain variable, leading or not to dramatic changes in the solutions, is that of structural stability.

Consider a pendulum whose friction coefficient, as a parameter, may be positive, negative or zero. In the domain of positive and negative values of this parameter, a slight perturbation ε does not lead to topological variations in the state space (see Table 3): there is no topological variation, the qualitative solution is the same, and the system shows structural stability. For the zero value of this parameter, a slight perturbation will change the system's behavior dramatically from the topological point of view. This is structural instability, and this value of the parameter is called a bifurcation value.

                 structural stability             structural non-stability
parameter + ε    nothing happens topologically    changes dramatically topologically
parameter        ordinary values (OPV)            bifurcation value (BPV)

TABLE 3

Bifurcations are topological variations in a system's behavior that occur when parameters take critical values (called bifurcation values), at which a slight perturbation leads to dramatic changes in the space of states.
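As a minimal numerical sketch of the pendulum example (assuming the linearized equation x'' + c x' + x = 0, with c the friction coefficient; the thresholds, step size and integrator are illustrative choices of ours, not taken from the text), the following Python fragment integrates the system for c = +0.1, 0 and -0.1 and classifies the qualitative outcome. The class changes only when c crosses the bifurcation value c = 0:

    import numpy as np

    def simulate(c, x0=1.0, v0=0.0, dt=0.001, steps=200_000):
        """Integrate x'' + c*x' + x = 0 with semi-implicit Euler."""
        x, v = x0, v0
        for _ in range(steps):
            v += (-c * v - x) * dt   # velocity update (friction + restoring force)
            x += v * dt              # position update
        return np.hypot(x, v)        # final distance from the equilibrium

    for c in (+0.1, 0.0, -0.1):
        r = simulate(c)
        kind = "sink (decays)" if r < 0.5 else ("center (persists)" if r < 2.0 else "source (grows)")
        print(f"c = {c:+.1f}: final radius {r:10.3f} -> {kind}")

For c = +0.1 and c = -0.1, any small further perturbation of c yields a homeomorphic phase portrait (ordinary parameter values); only at c = 0 does an arbitrarily small perturbation switch the portrait between sink and source (bifurcation value).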

If one is dealing with values in a distribution, then of course each perturbation of a parameter in a branch of structural stability will still lead to a variation in the system's behavior. However, the perturbed and unperturbed solutions are homeomorphic, i.e. both solutions share common traits from the qualitative-topological point of view.

Chaos, a phenomenon that may happen in deterministic non-linear dynamical systems and is characterized by sensitivity to initial conditions, is structurally stable: in spite of there being an enormous quantity of states in the state space, all unpredictable, small perturbations of the parameter values do not lead to topological variation. Chaos can occur after successive bifurcations.[25][26] Walter Freeman and other researchers have proposed that there is chaotic behavior in the Central Nervous System and that it might be the source of the richness and variability some systems present.[27] Moreover, the school of this author and others is strongly neurophysiological, far from strong AI, etc.
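To illustrate 'chaos after successive bifurcations' and sensitivity to initial conditions, here is a standard textbook example in Python (the logistic map, our illustrative choice; it is not discussed in the text):

    def attractor(r, x0=0.2, transient=1000, keep=8):
        """Iterate the logistic map x -> r*x*(1-x) and return the long-run states."""
        x = x0
        for _ in range(transient):
            x = r * x * (1.0 - x)
        tail = []
        for _ in range(keep):
            x = r * x * (1.0 - x)
            tail.append(round(x, 4))
        return sorted(set(tail))

    # Successive bifurcations: fixed point (r=2.8) -> period 2 (r=3.2) -> chaos (r=3.9).
    for r in (2.8, 3.2, 3.9):
        print(f"r = {r}: long-run states {attractor(r)}")

    # Sensitivity to initial conditions in the chaotic regime:
    a, b = 0.2, 0.2 + 1e-9
    for _ in range(60):
        a, b = 3.9 * a * (1 - a), 3.9 * b * (1 - b)
    print(f"a 1e-9 difference grows to {abs(a - b):.3f} after 60 steps")

Individual chaotic orbits are unpredictable, yet the map itself remains a fully deterministic rule, which is the point the text makes about chaos and determinism.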

The concept of structural stability might be a very rich one for explaining the nature of the computation that the CNS performs in the four modes: voluntary, automatic, psychotic and dream.

Suppose one had a complete description of the system (which is impossible, given the codimension). Novelties and voluntary control might represent structurally non-stable modes. As soon as the system parameters are set so as to obtain structural stability, the mode switches to the automatic. The automatic mode covers a large set of values, all with qualitatively homeomorphic performances. Robust chaos could be a kind of mixed state of consciousness and automatism, as occurs in psychoses. Weak or non-robust chaos would be the state of dreams, when the system may perform some kind of calculation in order to reset variables, eliminating spurious attractors or reinforcing memories. [28] [29]

From the point of view of contents, of the system's flow (the temporal function that describes the system's dynamics) [25], there is always a difference when one alters the value of a parameter. Consciousness as phenomenological experience sees only states in the state space. The computations that perform the gating between voluntary-frontal modes and automatic-cerebellar modes must analyze homeomorphisms and topological similarities. Determinism still holds, but predictability, for a certain range of bifurcation values, is severely affected, which leads to the dissociation between explanation and predictability.

When, given the state A at t, we can predict and explain the state B at t + 1, we have both explanation and predictability. [30]

When we have only the connection of A to a set of values of B, we have explanation without predictability. When we have both A and B as probability distributions there is still determinism, but the extreme of chance-like behavior is reached. Even then, there are explainable things happening that maintain A's and B's in a deterministic relation. That is why there is, in a certain sense, no interest in examining whether the nature of the problem is classical or quantal.[31]
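A small sketch to make the distinction concrete (the states and probabilities are invented for illustration): a deterministic link gives explanation plus predictability, while a fixed stochastic law still explains, because the distribution is lawful and recoverable, without predicting the individual outcome.

    import random

    # Deterministic link: given A at t, B at t+1 is certain.
    deterministic = {"A": "B"}
    print(deterministic["A"])  # always "B": explanation + predictability

    # Stochastic determinism: the distribution over next states is fixed and
    # lawful, but the individual outcome at t+1 is not predictable.
    law = {"A": [("B1", 0.7), ("B2", 0.3)]}

    def step(state):
        r, acc = random.random(), 0.0
        for nxt, p in law[state]:
            acc += p
            if r < acc:
                return nxt
        return nxt

    samples = [step("A") for _ in range(10_000)]
    print(samples.count("B1") / len(samples))  # close to 0.7: the law is recoverable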

VII. Information and Brain Syntax

The relevance of considering the computational way the brain gates from the voluntary to the automatic mode, based upon the notions of structural stability and dynamics, is that it might allow us to understand phenomena that are considered information-driven in the CNS.

The major concept that inspired Cybernetics and later Cognitive Science was that of information. However, information as content doesn't say much and rests on a misunderstanding of Shannon's original work.[32]

There are basically two formalisms that describe a theory of communication. D. Gabor's [33] uses formalisms of quantum mechanics and is very popular in Quantum Neurodynamics. But there is a strong equivalence between Shannon's and Gabor's formalisms, as shown in a recent article [34]. Both are measures over a probability density function, and hence the limits of interpretation are the same for both. Probability is tied to a state and not to a content. One may then understand informationally why the hippocampus or the thalamus gates states to the voluntary mode or to the automatic mode. Whenever, given one state at t, the next state at t + 1 is predictable, its probability is one, and hence the information measured in bits is zero. Structural non-stability means that there is more than one possible state at t + 1, hence the informational entropy increases. Richness of states could mean 'voluntary' and trigger a gating mechanism to the frontal lobe. This might be compatible with certain neuropsychological syndromes in which a certain degree of semantic understanding of information without consciousness (priming effects, etc.) happens to occur. These problems are interpreted under the label of shallow outputs from the hippocampus and other structures tied to short-term memories.[5]
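A minimal sketch of this information measure (Shannon entropy of the predicted next-state distribution; the distributions below are illustrative assumptions, not data):

    import math

    def bits(p):
        """Shannon entropy, in bits, of a discrete next-state distribution."""
        h = -sum(q * math.log2(q) for q in p if q > 0)
        return abs(h)  # guard against -0.0

    print(bits([1.0]))       # structural stability: one predictable state -> 0.0 bits
    print(bits([0.5, 0.5]))  # bifurcation: two equiprobable states -> 1.0 bits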

Information based on forms and states, as proposed by the scheme above, is able to explain these problems, in spite of being totally theoretical and speculative. We cannot yet say whether these ideas will open new directions of research, but we suggest that they represent a change in the way one sees cognitive systems and legitimate cognitive science.

VIII. Sketch of a model

I propose that the gating mechanism that renders topological computation possible is the presence of bifurcation parameter values (BPV) or ordinary parameter values (OPV) (Fig. 1).

When information reaches the CNS through short-term memories, mainly in the hippocampus, an evaluation of stability from the structural point of view might take place. If the information is unstable, it goes to the frontal lobes; as it becomes stable, it goes back to the cerebellum, the main site of automatic behavior.

Information would be fed to both the automatic and the voluntary systems all the time, allowing a continuous evaluation of the problem of stability. The effectors are the same, but the structures that trigger the process are different. From the dynamical point of view, each of these three structures could be considered as an oscillator (van der Pol) or a Phase-Locked Loop (PLL). To show the qualitative aspects we want, it is enough to remember that large assemblies of neurons, treated as dynamical systems, will have bifurcations and even chaos in the space of frequencies.

FIG. 1 (graphic not available)

The strongest assumption is that there is a map from stability, from the syntactical point of view, to ambiguity, from the semantical point of view; or that dynamical systems can describe the intimacies of the mind's structure through an examination of aspects of the signals that scaffold the mental blocks. This is speculative in this sketched model, but it might be plausible. In order to understand the implications, one must:

a) give an artificial model that performs some topological computations.

b) analyze the correlation between this model and data that come from the Central Nervous System.

First we shall deepen the analysis of the nature of a candidate model that could in principle perform topological computation.

IX. An artificial model of topological computation

Cognitive Science appeared in the late fifties as a reaction against behaviorism. The importance of intermediate agents between input and output was to be considered. Mental representations were the new concept in the arena of this super-discipline that would aggregate the neurosciences, psychology, linguistics, anthropology, artificial intelligence and philosophy.[35]

Cognitive Science seeks the nature of the processes that render intelligent behavior possible. Mental objects are supposed to exist, being the primitives of rule-based manipulations. With neural network models, the role of these primitives began to collapse. Why? Because rule-based accounts of intelligent behavior supposed that there was always a logic (in the sense of inferential rules) underneath. It is common in Cognitive Systems that one knows plenty of data but does not know the structure of the system that manipulates these data. Rule-based reasoning, traditional AI, expert systems, etc., are not very adequate to cope with these situations. Neural Networks (or connectionist architectures, or Parallel Distributed Processing) have the advantage of dealing with highly complicated spaces of composite functions. (Whether a neural network may mimic and surpass a Universal Turing machine is one of the cornerstones of the field, from the conceptual point of view.)

Neural Networks represented a revival of a tendency that defends a deeper knowledge of brain physiology in order to grasp the way intelligent-cognitive architectures operate. But this revival was only partial, because the way one feeds the network -- the vector, the state variables, etc. -- still consists of mental blocks. In terms of processing, neural networks represented a step forward, simplistically mimicking the way brains process information. Some authors have proposed a third generation of networks (after perceptrons, with two layers, and hidden-unit networks, with multiple layers): genuinely neurally inspired networks that can face the problems of long-term potentiation, neuromodulators, second messengers, etc. [36]

The problem of the entities that are to be handled is seldom addressed in the literature. Entities like colors, shapes, etc. are supposed to exist prior or subsequent to mental transformations. There is always the old debate whether theoretical terms can be radically translated into physical observables. Rudolf Carnap and the logical positivists tried, without success, to purge theoretical terms from the vocabulary of science. There are no operational definitions that can radically eliminate the very existence of non-observable terms.

One can model images composing colors, depth, texture, etc. without being worried by these considerations: these are all observable qualities and present no problems for the question. Whether X is a conscious object or a non-conscious object, however, touches very deep issues. If one considers qualia, inner sensations, direct evidence, etc. as undeniable, they are the most observable of objects, yet they depend strongly on language to be publicly communicated. Mental objects like fear, plans, intentions, disappointment, etc. may be only partially inferred from observable traits. If there were a perfect actor, all the evidence coming from physical measurements, e.g. grimaces, galvanometric potentials, etc., could be under suspicion. Even with these tools one cannot settle the nature of the object that is being processed. The electroencephalogram (EEG) and evoked potentials are poor methods to evaluate the nature of the object, and even the nature of some intricate mental operations, although they have been refined recently through sophisticated formalisms of signal analysis.

The discussion about the nature of the theoretical inner objects that crowd our rich inner lives is absolutely fundamental to launching a real science of cognition.

I prefer here to abandon the intricacies of seeking powerful algorithms that process information, scaffolding "intelligent-cognitive" systems. On the contrary, my aim is to investigate whether there are "a priori" conditions in the CNS that allow peculiar ways of processing information. In other words, is it possible to reveal the nature of the objects that can be processed, departing from the laws that govern the way neurons make their connections?

Instead of assuming prior knowledge of the objects, qua primitives, or making complicated algorithms to manipulate objects, I propose that there might be an order in the CNS that has allowed high-order functions to appear, mainly consciousness and language. Seeking the peculiarities of the way brains compute information, one might find a way to understand the switching mode that processes information consciously and automatically (non-consciously).

Briefly, there are two issues that cognitive science may address: the issue of the object and the issue of the process.

Addressing the question of objects, one can assume they exist, be they sensorial or theoretical (directly evident to introspection, but communicable only through language), or one can:

a) deny that there are objects that can be treated as primitives, mainly in the inner experiences.

b) deny that there is a way to identify specific objects and specific happenings in the brain.

c) transform objects into a very large class that privileges functions, such as 'X is a conscious object' and 'Y is an automatic object'.

d) state that evolution might have rendered possible for the CNS to manipulate the same object in the conscious mode and in the automatic mode.

e) posit that there are primitives in the CNS that might come from the way the brain processes information and not from the semantically interpreted nature of the object.

Statement 1: There is a brain syntax and a brain semantics.

Statement 2: There is a mental syntax and mental semantics.

Definition 1: Syntax is every rule that connects elements. Ex: 'A implies B' is a way of connecting A to B. 'The rate of variation of A is a function of B' is another rule of connection.

Definition 2: Semantics is the meaning and existence of an object. "The King of France" is an object that has a meaning and an existence, despite lacking a referent in the case of the present King of France, etc.

Statement 3: Cognitive manipulation is whichever manipulation an architecture performs that is apt to be conscious.

Statement 4: Consciousness is the core of cognition.

Definition 3: Consciousness is the class of all objects that are conscious and of the operations that render them possible.

Statement 5: Conscious objects are objects that a) show phenomenological inner experience; b) are partially translatable unto ordinary language; c) are able to be subject of voluntary manipulation, or are under voluntary control.

Statement 6: Phenomenological experience cannot be the subject of scientific inquiry.

Statement 7: Ordinary language is polysemous and was designed to communicate, not to be the subject and tool of scientific statements, except for its role as an evolutionary trait.

Definition 4: X is a conscious object iff it is able to be manipulated voluntarily.

Definition 5: X is a non-conscious object iff it is not able to be manipulated voluntarily. It is totally automatic if it is never able to be manipulated voluntarily. It is partially automatic if at a given instant t it is not under voluntary control.

Statement 8: Volition is a phenomenological inner experience, but voluntary control is a process that occupies certain areas of the neocortex, mainly the frontal lobes that can be, in conjunction with inner reports, considered physically measurable.

Statement 9: Automatic control, despite being highly phenomenological in its ultimate nature, also displays, for a certain class of functions, a strong degree of cerebellization [4], which renders the concept, in conjunction with inner reports, physically measurable.

Let us consider the object A:

It can be conscious iff it is subject of voluntary control.

It is completely automatic if it is never under voluntary control. It is partially automatic if it is not at this moment under voluntary control.

Examples: Conscious object: an intention, a plan. Partially automatic: driving a car. Completely automatic: control over the bowel rhythm, velocity of the joint during a walk.

One can, of course, define the main direction of action: Paul wishes to run. Running here would be a conscious object, in spite of the fact that within 'running' there exist objects that are partially automatic, e.g. the way one runs, techniques, and completely automatic, e.g. the degree of impulses that counterbalances a torque applied to the knee joint.

Definition 6: X is an object whenever an oscillation represents this object in the CNS.

IX.1. Neurons and oscillations

Neurons are very complicated signal-processing devices. They combine local potentials over the dendritic tree, summing them up according to the temporal and spatial constants that define their cable properties, until the sum of local potentials reaches the axon hillock, where an action potential is triggered or not. If the local potential exceeds the threshold, an action potential of essentially constant amplitude is triggered. This is the so-called wave-to-pulse conversion, or local-analog to digital conversion. The action potential might seem like a digitized signal, were it not for the fact that the train of action potentials that may be elicited reconverts the digital character into an analog one via frequency coding. The signal that reaches the next neuron, releasing an amount of neurotransmitter, depends on the rate of action potentials, i.e. on their frequency.

The early tendency to compare the neuron with a Boolean device is wrong, because the process passes through the following steps:

local potentials (variable amplitudes) → action potential (same amplitude) → rate of action potentials (same amplitude but analog character of frequency)

It can be said that information is encoded in analog form via a frequency code. This is true for the single neuron, and it is true for assemblies, where there is a distribution of frequencies over the whole field. The slope of the sigmoid that describes the pulse-wave relation is variable, and the relation is treatable via ordinary differential equations.[23]
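A hedged sketch of this wave-to-pulse conversion (the sigmoid form is standard in the pulse-density literature cited in [23], but the numerical parameters here are arbitrary illustrations of ours):

    import math

    def pulse_rate(wave_mV, q_max=300.0, slope=0.04, threshold=0.0):
        """Map a dendritic wave amplitude (mV) to an axonal pulse rate
        (pulses/s) through a saturating sigmoid with an adjustable slope."""
        return q_max / (1.0 + math.exp(-slope * (wave_mV - threshold)))

    for v in (-60.0, -20.0, 0.0, 20.0, 60.0):
        print(f"wave {v:+6.1f} mV -> {pulse_rate(v):7.2f} pulses/s")

Variable local potentials are thus re-expressed, after the pulse train, as a continuously graded firing rate, which is the analog frequency code referred to above.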

If neurons and assemblies of neurons can be treated as oscillators with strong non-linearities in their equations, then the theory that might scaffold this analysis is the theory of dynamical systems for a moderate number of neurons, and statistical mechanics and field equations for larger numbers. [37]

Considering that there are non-linearities and that the code seems to be analog, based upon phase (or frequency) detection [31], one can posit that the topologies that systems like this allow to be formed are essential to understanding the partitions in the functions of the system.

Consider that an object is codified in the system as a phase, and that there is recognition whenever a certain class of neurons oscillates in this phase. Of course there might be a topography, i.e. a certain degree of spatial specificity in the system, in spite of the fact that the higher the representation, the less strictly it is tied to a peculiar topography.

A is an object that can be represented through a class of ordinary differential equations describing the oscillations of the neurons that represent it. Suppose that A is an object that can be handled consciously or automatically. How does the system recognize an object that can be handled through consciousness or automatically, switching the mode and even the spatial location, from frontal poles to posterior poles?

If A is defined by a class of differential equations, and if the underlying process is strongly non-linear, one may state that there are parameters that are ordinary, preserving topologies in the state space, and others that are bifurcation values, which change topologies.

The object A can be defined as the class of equations, but there are subtle differences in parameter space that change the behavior dramatically in topological terms. Object A may show structural stability (ordinary parameter values) and preserve its topology under a small perturbation of a parameter. If, however, the system shows structural instability (bifurcation parameter value), a small perturbation changes the topology in the state space dramatically. This can be called a dynamical phase transition, or a topological gate.

The same object A, here defined via a purely syntactical brain mechanism, can thus be analyzed in terms of its topological properties, which depend on the ordinary or bifurcation nature of its parameters.

An object that has to be preprocessed and then switched to the automatic or to the conscious mode can receive an analysis of this sort: whenever it shows structural stability, it is handled by the automatic mode; whenever it shows structural instability, it is handled by the conscious mode.

Preprocessing mechanisms like short-term or working memories, mainly located at the hippocampal level, analyze the features of the oscillations that represent the object, grasping their topological variability. If there is structural stability, the number of future states will be one, and the quantity of information measured in Shannon's terms will be 0 bit. If there is a bifurcation, the number of possible states at t + 1 is at least two, and information, as a measure of uncertainty, is 1 bit. The preprocessing analysis is thus a topological consideration based upon structural stability and instability in the neural representation of the object.
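This count can be stated in one line, assuming, as above, that the possible future states are equiprobable:

    import math

    def information_bits(n_future_states):
        """Shannon information of the next state, assuming the n possible
        future states are equiprobable: H = log2(n)."""
        return math.log2(n_future_states)

    print(information_bits(1))  # structural stability: one future state -> 0 bit
    print(information_bits(2))  # bifurcation: two future states -> 1 bit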

Consciousness would in this way be a tool that, through learning, practice, decision, creativity, responsibility, etc., resets the parameter values, or even the structure, of the system that represents the object A, allowing further stability from the structural point of view.

Some authors have recently argued that the presence of "chaos" in the receptive fields would be a source of variability and adaptiveness for the system. [27]

In this model, the cognitive status of the brain is reached whenever the nature of an object is analyzed with regard to its functional role: whether it is to be handled by conscious-frontal modes or through automatic modes. One way the system could solve this rich dichotomy is through topological evaluation.

Preprocessing elements would grasp the intricacies of the object. If it is structurally stable, it would be handled automatically; if it is not, the error would be sent to the conscious modes.

A very simplistic way to put these ideas is that there is a preprocessing mechanism in the hippocampus and surrounding structures responsible for short-term memories. Information is analyzed in this region and processed automatically if there is no risk, novelty, etc. involved. If, however, something happens that the preprocessing elements cannot "solve", other regions, mainly the frontal lobes of the neocortex, are recruited. Goal-oriented, voluntary-driven and decision-laden behaviors are mainly situated in the frontal lobes, and there appears to be a relation with conscious experience.

IV.2. Phase-Locked-Loops as topological graspers

The model of two interconnected PLLs departs from a very schematic idea of the process that switches information from the automatic mode to the conscious mode. The first PLL, which receives information from the environment, be it internal (ideas, drives) or external (sensorial), analyzes topological features of the oscillation representing the object being processed. If the error that results from the comparison between the input phase and the internal phase is zero, there is no feeding to the second PLL. If there is an error, the second PLL will try to handle it. The first PLL corresponds roughly to the preprocessing mechanism and to the automatic mode; the second corresponds to a degree towards "consciousness". Consciousness, in this way, would be a means of analyzing the errors that come from the automatic modes. Of course, there is a strong supposition that all novelties lead to large errors, because there are no internal representations, or internal phases, to compare with the input. Another strong supposition is that the very nature of the object, i.e. the intimacy that allows the system to analyze it, is the existence of structural stability or instability in the parameter space.

If an object can be represented as an internal phase linked to an external phase, one can use a phase-locked loop (PLL) to analyze the problem.

PLLs are well-known devices that keep an internal phase that is compared with an external phase, and a filter that measures the error and corrects it, trying to lock the loop.

Second-order autonomous PLLs always lock. Third-order autonomous PLLs may show bifurcations (a Hopf bifurcation and a sequence to chaos), generating aperiodic behavior. [38]
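A minimal numerical sketch of a second-order loop follows (all gains and time constants are illustrative; the frequency offset is kept inside the capture range, so the loop frequency-locks with a constant static phase error):

    import numpy as np

    def run_pll(omega_in=2.0, omega0=1.8, kd=1.0, kv=1.0,
                tau=0.5, dt=1e-3, t_end=60.0):
        """Euler simulation of a baseband second-order PLL.
        A sinusoidal phase detector feeds a one-pole low-pass filter,
        whose output steers a voltage-controlled oscillator (VCO)."""
        steps = int(t_end / dt)
        theta_in = theta_vco = v = 0.0
        for _ in range(steps):
            theta_in += omega_in * dt                 # environmental phase ramp
            err = kd * np.sin(theta_in - theta_vco)   # phase-detector output
            v += dt * (err - v) / tau                 # loop filter (one pole)
            theta_vco += dt * (omega0 + kv * v)       # VCO frequency = omega0 + kv*v
        return np.sin(theta_in - theta_vco)           # residual error signal

    # Settles near 0.2 = (omega_in - omega0)/(kd*kv): the loop is
    # frequency-locked, with a fixed static phase offset.
    print(f"final phase-detector output: {run_pll():.4f}")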

If an environmental phase feeds a third-order PLL and is compared with its internal phase, it may happen that:

a) in a situation of structural stability, the first PLL will lock and the error transmitted to the second PLL will be zero;

b) in a situation of structural instability measured by the first PLL, i.e. a bifurcation value in the parameter space, the second PLL can synchronize with the first if the error is periodic, or it can reach chaotic behavior if the error is aperiodic.

[Figure: block diagram of a single PLL. PD: phase detector (error detection); F: filter; VCO: voltage-controlled oscillator.]

[Figure: two interconnected PLLs. θe: environmental phase; θa: "automatic" phase (third-order PLL); θc: feeding phase to the second-order PLL; θ'c: "conscious" phase (second-order PLL); μ1: filter constant (third-order PLL); μ2: filter constant (second-order PLL); e: resulting error (θa − θe); e': resulting error (θ'c − θc).]

The system of two interconnected PLLs can, in my view, perform a topological evaluation of the information feeding the system. The first PLL, a third-order one, grasps all the intimacies of the object (the phase that "represents" the object). If it is a well-known object, the error will be zero. All the conditions under which locking occurs correspond to structural stability in the parameter domain (given that the external phase is within the capture range of that PLL).
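The following fragment is a schematic coupling of two loops in this spirit, not the exact third-order/second-order pair analyzed in [38]: both loops here are second-order, so the bifurcating, chaotic regime of the true first loop is absent. The point is only to show the residual of the first ("automatic") loop being handed to, and absorbed by, the second ("conscious") loop:

    import numpy as np

    def two_pll_atom(omega_env=2.0, dt=1e-3, t_end=80.0, mu1=0.4, mu2=0.5):
        """Schematic 'atom': a first loop tracks the environmental phase;
        its residual is passed on and a second loop tries to synchronize
        with that residual. Illustrative parameters throughout."""
        steps = int(t_end / dt)
        th_e = th_a = th_c = v1 = v2 = 0.0
        out = np.empty(steps)
        for i in range(steps):
            th_e += omega_env * dt
            # first ("automatic") loop
            e = np.sin(th_e - th_a)
            v1 += dt * (e - v1) / mu1
            th_a += dt * (1.8 + v1)        # VCO of the first loop
            # second ("conscious") loop fed by the first loop's residual
            th_feed = th_e - th_a          # residual phase handed over
            e2 = np.sin(th_feed - th_c)
            v2 += dt * (e2 - v2) / mu2
            th_c += dt * v2                # VCO of the second loop
            out[i] = e2
        return out

    err2 = two_pll_atom()
    print(f"late second-loop error: {err2[-1000:].mean():+.4f}")  # ~0: absorbed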

One can build in this way the following table of possible states of the two PLLs, set against common macro behavioral and psychological facts:

Macro neuropsychological fact | First PLL (third order) | Second PLL (second order)
----------------------------- | ----------------------- | -------------------------
novelty                       | doesn't lock            | locks after learning
decision under risk           | doesn't lock            | locks
totally automatic             | locks                   | nothing happens
psychoses                     | doesn't lock            | doesn't lock
dreams                        | doesn't lock            | locks
voluntary control             | doesn't lock            | locks
partially automatic           | locks                   | may show phase jitter
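Read as a lookup, the table collapses to the following toy function; note that several rows share the same locking pattern, so the inverse mapping from lock states to psychological facts is not unique:

    def atom_state(first_locks: bool, second_locks: bool) -> str:
        """Restatement of the table above, ambiguities intact."""
        if first_locks:
            return "automatic processing (fully or partially)"
        if second_locks:
            return "conscious handling: novelty / decision / dream / voluntary control"
        return "neither loop locks: psychosis-like regime"

    print(atom_state(True, False))
    print(atom_state(False, True))
    print(atom_state(False, False))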

Of course, it is highly speculative to state that there might be a relation between stability and non-locking behavior in PLLs on one side and macro neuropsychological facts on the other. However, functional approaches that deny importance to the implementational level, i.e. to the way the brain processes information, are in my view wrong. If one wants to launch tentative ideas about the way the brain processes and encodes information, one must either wait for more neurophysiology or attempt some "thought-experiments".

Dreams and psychoses represent intermediate states between the conscious voluntary mode and the automatic mode. Maybe they represent states where a sequence of PLLs fails to lock, leading to a cascade of bifurcations or even chaos. Dreams, however, as autonomous states, are not as robust as psychoses, which represents a large difference in the degree of perturbation needed to relock the system as a whole.

IV.3. Synchronization and consciousness

Recently, it has been shown that there is a strong correlation among awareness, attention and the synchronization of some cells. Awareness, a less loaded name for consciousness, would be the result of a combination of attention and short-term memories [39]. Attention would be a resetting mechanism allowing a certain class of neurons to oscillate together at the same frequency, representing the object of perception (in the case of vision, for example). Through short-term memories these representations could be maintained for a brief period of time. Perceiving, or "knowing that", would be the result of a process that binds topographical information (coming from the various predicative modalities) through synchronization. Over and above the rhythms that pervade the whole CNS at different frequencies, depending on the problem in focus, an aggregate of neurons appears oscillating synchronously in the range of 40 Hz. This could be the very substratum of human consciousness. It would not be a case of topography but of topology, I'd say. Synchronization would be the correlate of consciousness at the implementational level.
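As a stand-in illustration of this hypothesis (the Kuramoto model is invoked here only as the simplest textbook synchronization model, not as the circuit we propose), a population of oscillators with natural frequencies scattered around 40 Hz pulls into a common phase once the coupling exceeds a critical strength:

    import numpy as np

    rng = np.random.default_rng(1)
    n, dt, steps = 100, 1e-4, 50_000
    omega = 2 * np.pi * (40.0 + rng.normal(0.0, 1.0, n))  # ~40 Hz, some spread
    theta = rng.uniform(0.0, 2 * np.pi, n)
    K = 30.0                                              # coupling strength

    for _ in range(steps):
        mean_field = np.mean(np.exp(1j * theta))          # order parameter r*e^{i psi}
        r, psi = np.abs(mean_field), np.angle(mean_field)
        theta += dt * (omega + K * r * np.sin(psi - theta))

    # r near 1 means the assembly oscillates synchronously; near 0, incoherence.
    print(f"order parameter r = {np.abs(np.mean(np.exp(1j * theta))):.3f}")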

Dreams, as shown in another article [7], would present the same phenomenon of synchronization among populations of neurons, though lasting longer. That is, what would differentiate dream-objects from awake-objects is that dream-objects are more persistent, because the synchronous events beneath them are not fed by external resetting drives.

Briefly, dreams and wakefulness would be equivalent, as said and quoted above. The very substratum of the phenomenological character one experiences during wakefulness, dreams and psychoses would be the synchronization of the parts that compose the "object". Predicates of an object are represented in different areas, depending on the topographical maps that process the information. The whole object, as the gestaltists used to emphasize, would emerge as the synchronization among parts. Phenomenological aspects are not accessible to a legitimate science, but the condition that renders synchronization possible can be modeled through PLLs.

It is obvious that the application of the theory of dynamical systems and the broad concept of topological computation cannot answer the question of how a system performs "mentation" from the point of view of contents. But as to form, it can be said that, if there is no central integrator sending an order to some subsets of neurons to keep them oscillating synchronously, there must be a class of parameters underlying the synchronizing function, i.e. consciousness as a form. But there is a problem, because the synchronous events are not very large or spread out, being most of the time highly circumscribed to a certain region. How could one answer this problem with the two interconnected PLLs?

I think the class of computations these two PLLs perform is rich enough, but they are not able to support such a macroscopic model as the one alluded to before. I cannot say that the whole brain is represented through these two PLLs.

To grasp all the mathematical intricacies of the two PLLs one must rely on them alone. But the synchronization that occurs whenever the first sends a periodic error to the second might be an approximation of the synchronization phenomenon seen in the brain.

Advances in the model would come when one considers that:

a) the output of each unit composed of two PLLs can belong to three different classes: zero activity, periodic signals and aperiodic signals (a crude classification is sketched after this list);

b) by linking multiple units of two PLLs (let us call such a unit the atom of cognition) one could build networks with three classes of outputs, strongly non-linear, etc. The property of being "conscious" would come with the degree of synchrony across the network. But, of course, if the input layer is able to lock, this would be the quickest route and might be considered the core of automaticity.
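A crude way to perform the three-way classification of item (a) is sketched below; the autocorrelation heuristic is our arbitrary choice, adopted only so the three classes can be told apart in simulation:

    import numpy as np

    def classify_output(signal, tol=1e-3):
        """Three-way classification of an atom's output:
        near-zero activity, periodic signal, or aperiodic signal."""
        x = np.asarray(signal, dtype=float)
        if np.max(np.abs(x)) < tol:
            return "zero"
        x = x - x.mean()
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]
        ac /= ac[0]                       # normalized autocorrelation
        below = np.where(ac < 0.5)[0]     # wait until the signal decorrelates...
        if below.size == 0:
            return "periodic"             # never decorrelates: (quasi)periodic
        # ...then look for a strong recurrence at some later lag
        return "periodic" if ac[below[0]:].max() > 0.9 else "aperiodic"

    t = np.linspace(0.0, 20.0, 4000)
    print(classify_output(np.zeros_like(t)))                 # zero
    print(classify_output(np.sin(2 * np.pi * t)))            # periodic
    rng = np.random.default_rng(0)
    print(classify_output(rng.standard_normal(t.size)))      # aperiodic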

The remainder of this work will present the mathematical analysis of the possible behaviors of each atom. Simulations will show other interesting properties of networks of these atoms. These nets will be strong demodulator models, and if synchrony and non-linear, even chaotic, behavior explains certain aspects of the brain's style of computation, then:

a) networks of PLLs could be analyzed through dynamical systems theory at a meta-level. If there are three classes of states, with continuous values, that can be outputs of each atom, then a second analysis can be made regarding roughly three different state variables: zero output, periodic output and aperiodic output.

This hierarchical way of analyzing the problem could clarify, through mathematical analysis and simulation, the topological constraints tied to consciousness, now renamed synchronization.

The problem of how such systems learn could be solved by connecting feedback links that reset parameters and state variables. These would subserve the fast dynamics (in the space of variables) and the slow dynamics (in the space of parameters). Mixing the two dynamics could be an interesting move, avoiding the stereotypical division between a dynamics of training and a dynamics of recall.
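Schematically, the mixed dynamics could look like the following, where the state relaxes quickly under the current parameter while the parameter itself drifts slowly with the squared error; the specific update rules are placeholders, not a proposal:

    def two_timescale_step(x, p, target, dt=1e-3, eps=0.01):
        """One step of a mixed dynamics: the state x relaxes quickly under
        the current parameter p, while p drifts slowly (rate eps << 1)
        as long as a residual error persists."""
        err = target - x
        x_new = x + dt * (p * err)          # fast dynamics in the state space
        p_new = p + dt * eps * err * err    # slow dynamics in the parameter space
        return x_new, p_new

    x, p = 0.0, 0.5
    for _ in range(200_000):
        x, p = two_timescale_step(x, p, target=1.0)
    print(f"x = {x:.3f}, gain drifted to p = {p:.3f}")  # x tracks; p barely moves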

V. CONCLUSIONS

This is a very simplistic model of how the system could perform a pre-analysis of the topological character of incoming information. The first, third-order PLL is a simple representation of the preprocessing elements and of the automatic mode. The second represents the conscious mode.

This could answer works that see consciousness as a time-dependent process, reached whenever information lasts long enough and branches out enough to trigger consciousness.

Each atom A, a class of objects categorized as the source of a discrete action in the Central Nervous System, would suffer the action of successive topological graspers. Each PLL could represent a degree in the hierarchy towards consciousness. Consciousness would arise whenever topological analysis failed to grasp coherence (zero error) in the comparison between the input and the local representation of the object A, or of the class of objects A.

The sequence of processing elements would refine the error analysis, and one can guess that the steps towards consciousness represent an analysis of the flow in the parameter space. Short-term dynamics would be a feedback from the second-order PLL to the third-order PLL, changing its parameters. Long-term dynamics would be a change in the structure of the system as a whole, mainly through a change of the local phase that previously represented the object.

Consciousness is a way the system controls information that shows a peculiar character. Topological aspects that prevent structural stability from holding in the first preprocessing elements might be a syntactical way the system manages to analyze information in its purest neuronal-oscillatory nature.

The body of evidence needed to deny or endorse this position might come from analyses in two domains:

1) in the neurophysiological domain, showing that structural stability plays an essential role in the switching of modes. This could be done with a combination of PET-scan techniques, sophisticated EEG analysis (as done by Freeman and colleagues) and subjects' reports;

2) in the psychological domain, through a careful analysis of problem-solving strategies, trying to separate the conscious and non-conscious parts of a task.

A correlation between oscillatory modes, structural stability and conscious versus non-conscious aspects of a problem could launch a new partition of the everyday psychological blocks: those that need to be processed consciously, those that are never processed consciously, and those that can be processed in both modes.

These analyses could enlighten research on the very nature and function of consciousness as the core of our inner personal life, what makes us individuals and not robots. Maybe the difference lies in a cascade of errors that are analyzed via non-linearities. Consciousness would seem, in this complex structure, detached from the brain, but its nature would rest in the peculiar way the system processes and refines the comparison between input and internal states.

Cognition is, after all, computation in a broad sense, defined as calculation. But consciousness is a class of states that might emerge from a very special sort of topological computation the system performs at each level of analysis.

REFERENCES

[1] J. R. Searle, The Rediscovery of the Mind. MIT Press, 1992.

[2] J. Fischman, "New Clues Surface About the Making of the Mind", Science, vol. 262, p. 1517, Dec. 1993.

[3] J. Horgan, "Fractured Functions: Does the Brain Have a Supreme Integrator?", Scientific American, Dec. 1993.

[4] M. Ito, "How Does the Cerebellum Facilitate Thought?", in T. Ono, L. Squire, M. Raichle, D. Perrett, M. Fukuda (eds), Brain Mechanisms of Perception and Memory. Oxford University Press, 1993.

[5] M. Moscovitch, C. Umiltà, "Conscious and Nonconscious Aspects of Memory: A Neuropsychological Framework of Modules and Central Systems", in R. Lister, H. Weingartner (eds), Perspectives on Cognitive Neuroscience. Oxford University Press, 1991.

[6] R. Llinás, D. Paré, "Commentary: Of Dreaming and Wakefulness", Neuroscience, vol. 44, no. 3, 1991.

[7] R. Llinás, U. Ribary, "Perception as an Oneiric-like State Modulated by the Senses", in C. Koch, J. Davis (eds), Large-Scale Neuronal Theories of the Brain. MIT Press, 1994.

[8] J. Hobson, The Dreaming Brain. Basic Books, 1988.

[9] J. Gold, D. Weinberger, "Frontal Lobe Structure, Function, and Connectivity in Schizophrenia", in R. Kerwin (ed), Neurobiology and Psychiatry. Cambridge Medical Reviews. Cambridge University Press, 1991.

[10] M. Posner (ed), Foundations of Cognitive Science. MIT Press, 1989.

[11] D. Osherson et al. (eds), An Invitation to Cognitive Science (3 volumes). MIT Press, 1990.

[12] J. Fodor, The Language of Thought. Harvard University Press, 1975.

[13] Z. Pylyshyn, Computation and Cognition. MIT Press, 1986.

[14] A. Anderson (ed), Minds and Machines. Prentice-Hall, 1964.

[15] J. Searle, Minds, Brains and Science. Harvard University Press, 1984.

[16] P. Churchland, T. Sejnowski, The Computational Brain. MIT Press, 1992.

[17] R. Penrose, The Emperor's New Mind. Penguin Books, 1991.

[18] D. Hammerstrom, "Working with Neural Networks", IEEE Spectrum, July 1993.

[19] E. Nagel, The Structure of Science. Harcourt, Brace & World, 1961.

[20] M. Bunge, La investigación científica. Ariel Methodos, Barcelona, 1985.

[21] S. Stich, From Folk Psychology to Cognitive Science. MIT Press, 1983.

[22] P. Churchland, Matter and Consciousness. MIT Press, 1988.

[23] W. Freeman, "Tutorial on Neurobiology: From Single Neurons to Brain Chaos", International Journal of Bifurcation and Chaos, vol. 2, no. 3, 1992.

[24] T. Shallice, From Neuropsychology to Mental Structure. Cambridge University Press, 1991.

[25] J. Guckenheimer, P. Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields. Springer-Verlag, 1983.

[26] R. Abraham, C. Shaw, Dynamics: The Geometry of Behavior. Addison-Wesley, 1992.

[27] C. Skarda, W. Freeman, "How Brains Make Chaos in Order to Make Sense of the World", Behavioral and Brain Sciences, 10, 161-195, 1987.

[28] F. Crick, G. Mitchison, "The Function of Dream Sleep", Nature, vol. 304, July 1983.

[29] J. Winson, "The Meaning of Dreams", Scientific American, November 1990.

[30] C. Hempel, Aspects of Scientific Explanation and Other Essays in the Philosophy of Science. The Free Press, 1965.

[31] K. Pribram, Brain and Perception. Lawrence Erlbaum Associates, 1991.

[32] H. Atlan, L'organisation biologique et la théorie de l'information. Hermann Éditeurs, 1992.

[33] K. Pribram (ed), Rethinking Neural Networks: Quantum Fields and Biological Data. INNS Press / Lawrence Erlbaum Associates, 1993.

[34] J. Piqueira, "Information and Complexity" (submitted).

[35] H. Gardner, The Mind's New Science: A History of the Cognitive Revolution. Basic Books, 1987.

[36] D. Gardner (ed), The Neurobiology of Neural Networks. MIT Press, 1993.

[37] A. Maranca, "Connectionism and Field Theory" (submitted).

[38] J. Piqueira, A. Maranca, "Hopf Bifurcation on Third Order PLLs" (submitted to IEEE Transactions on Circuits and Systems).

[39] C. Koch, F. Crick, "Some Further Ideas Regarding the Neuronal Basis of Awareness", in C. Koch, J. Davis (eds), Large-Scale Neuronal Theories of the Brain. MIT Press, 1994.