Cognitive Science is an effort to bring several kinds of specialists together in order to pursue an understanding of mental phenomena. To reach this goal one must be acquainted with at least two or three of the following disciplines: Artificial Intelligence, Neuroscience, Psychology, Linguistics, Anthropology and Philosophy of Science.

The present authors come from computation, electronic engineering, psychiatry and philosophy.

Many reasons lie behind this work. We invite the reader to follow our steps schematically; each item deserves a long discussion, and there is no space to analyze each in detail.

1. Introduction

We shall assume that the mind is an attribute that emerged along the evolutionary scale in order to solve a large class of problems. What kind of problems?

Suppose there is a very rudimentary animal that only has to move one arm away when something burns it. Burning is a pre-delimited category, i.e. the class of all events that burn. The response is also a very well delimited category: all the ways an arm, in this case the animal's arm, can be moved away. Nothing about this situation needs to be learned because, for the animal's purposes, all situations that involve burning the arm must also evoke that "automatic" behavior. Automatic means: a) pre-wired, or b) well learned after training.

Were we very simple animals, we would not have to learn, and responses could be pre-wired in our system. This is the tradition of reflexes, which in a certain way motivated behaviorism to grow.

In another vein, Psychology as an independent effort tried to pursue the nature of mental representations and the laws that organize them.

Of course, as Behaviorism failed in its attempt to explain behavior in a mechanistic way, and the insights of the twentieth century precluded dualistic projects for science, Cognitive Science emerged as an integrative way to address the problems tied to mental operations and to the very nature of mental categories.

We can list three problems related to the failure of behaviorism: quantum aspects, tied to the problem of measurement, which preclude strict determinism; classical aspects, tied to non-linear and chaotic systems, which modify the conception of determinism; and mental aspects, tied to the problem of the internal manipulation of thoughts as intentional representations.

It is due to these three aspects that behaviorism could not win the battle and had to be replaced by another strategy, one that did not mean a revival of the old doctrines of the nineteenth century. Cognitive Science and the computational paradigm appeared in the arena in the late fifties, in the works of Lashley, Newell and Simon, Chomsky, and others.

Far from being an ecumenical field, Cognitive Science has its own rival models, particularly traditional AI models (von Neumann) and connectionist models (neural networks), to face the challenge of simulating the mind's abilities.

It is not the aim of this article to analyze these two rival paradigms in detail, but rather to suggest that both are tied to what we call the "semantic" tradition of mind models, far from the "syntactical" tradition of genuine dynamical models.

AI models suppose that, as determinism is a "dead horse", there is functional emergence and multiple implementation of the same class of rules. According to this tradition, quantum and classical indeterminacies are sidestepped by assuming that there are three levels in a cognitive architecture: implementation, algorithm and computation [Marr (1982)].

Thanks to a long line of logical traditions that goes back to Frege and Russell, culminating in the functionalist paradigm, one must not confuse logical rules and physical rules. Putnam states, as an AI credo, that there are multiple architectures that can run the same program; hence the "physical" rules that govern brain implementation must not be taken as equivalent to the logical rules that manipulate mental primitives. Thinking would be a kind of result of logical operations over primitive mental objects. This can be considered the hyper-semantical paradigm.

Neural networks, instead, appeared in order to account for rules that should inspire a cognitive architecture and that are closer to brain constraints. But, in spite of offering metaphors tied to "neurons" and connections, and of extracting statistical properties from the data that feed the net, these architectures are still "semantic" in the sense that one always has to choose the categories that will represent nodes, basins of attraction, output vectors, etc.

If one tries to extract from the computational properties of the net the rule that allows the mental to emerge from the physical, one is back to the old problems of determinism that undermined behaviorism. In this sense, neural networks can answer the problem of quantum and classical indeterminacy by calculating in a non-deterministic way, and can avoid representational indeterminacies only by assuming that mental representations, be they complete or fragmentary, exist. In this sense, neural networks are semantical architectures just as traditional AI models are.

The only way to cope with the syntax of mental representations, in our view, is to define the mental as an evolutionary tool, to define mental attributes, to investigate the invariant properties of the brain as a complex architecture, and to propose an architecture based upon these invariant dynamical properties. In other words, perhaps an architecture that has genuine dynamics can inspire the process by which semantic categories appear in the syntactical-computational realm.

As we saw, pre-wired behavior is very adaptive but very brittle. Suppose that the animal that had to remove its arm when it was burned then had to evaluate: a) how hot does something have to be to represent danger? b) when is it time to withstand pain? (our animal could be in a situation of torture!) c) when is the risk of burning one's arm better than being honest (the virtues of blackmail)? d) when is burning a therapy (e.g. treating a skin lesion without anesthesia)? and so on.

A very quick look at the examples above shows that:

a) the more complex the animal, the more sets of "burning" situations there are (ambiguity, fuzziness, etc.);

b) there are different ways to respond to the risks of being burned (learning and dynamical developmental differences);

c) there are risks that cannot be avoided: one has to decide no matter what happens (the adaptive necessity of assuming risks, of gambling and deciding without absolute certainty);

d) there are decisions that are counter-intuitive: in spite of every similar situation in the past (the training set), the individual chooses, or creates, a new pattern of response (the possibility of a network that has been trained to perform a task and suddenly modifies its behavior; the determinism of systems with dynamical properties according to the parameters that feed the system; the impossibility of forecasting one's acts in certain limit situations).

Briefly, we state that determinism was undermined by classical phenomena, non-linear and sensitive to initial conditions (without even having to allude to the quantum impossibility tied to measurement), and it is in this class of systems that one can find interesting suggestions about the syntax that allows semantic divisions. Mathematics, from one point of view, is a way to handle data: it does not deal with the nature of the elements but only with their possible and valid relations. This is a semantical interpretation of mathematics. Others, however, see peculiar structural concepts that can have substantive applications in the realm of categories. Notions like bifurcation, structural stability and "maybe" chaos can give us a way to see mental categories through the lenses of syntactical constraints, i.e. those that are inspired by mathematical notions.

This may seem strange to many readers. Almost everyone is accustomed to proceeding as follows: first one elects classes of things, then one detects a signal that has a deep structure and some parameters. This is called system identification. In our view the mental is under suspicion: all its categories (e.g. thoughts, ideas, feelings, affects, judgments, will, memory, self, consciousness, dream contents, illusions, hallucinations, delusions, etc.) may be pseudo-sets from the natural point of view, something that, if true, precludes laws regarding mental phenomena from existing [Fodor 1983].

We are not denying that mental categories exist and that they subserve interesting generalizations that can be drawn inside Psychology or even speculative Philosophy. We are stating that Artificial Intelligence in its broad sense, i.e. formalization and quantification, can have two roles in a project called Cognitive Science:

1. it can be a tool to manipulate mental primitives (in this case the mental is assumed a priori to exist): the semantical tradition, with two branches, hyper-semantical (traditional AI) and soft semantical (neural nets);

2. it can be a source of inspiration, looking at the operations the Central Nervous System performs, analyzing them through the right mathematical model, and then guessing what kind of categories would be well located in such a dynamical and complex system: the syntactical or reductionist tradition.

In the syntactical tradition there is no place for a priori or unquestioned individuals. An object-oriented language is impossible at the outset because it is the very nature of the objects that is under suspicion. This may seem elliptical, even circular, but it must be remembered that while "cognitive" architectures have been successful in controlling plants, circuits or the stock market, AI has not yet furnished us with a good definition of what it is or of what the mental is. We suspect that the proliferation of entities in the "cognitive" realm is instrumental and useful, but that the boundaries of "the mental" may be built for the layman. They are not a natural but a cultural taxonomy, and as such they are bad data when one tries to extract a structure and parameters from them. The generalizations that common-sense reasoning can draw are the best AI can do with presumed mental categories. If the semantics of the mental is supposed to be true, the only models one can pursue are semantical ones, hyper or soft.

If one seeks the nature of the data, the enterprise will be twofold: one has to ask what kind of mathematical model is suited to treat the CNS and, at the same time, try to envisage what kind of individuals could be candidates for state variables and parameters in such an architecture. Here the semantics of the mental emerges after the syntax has dictated its limits and constraints. This, we suppose, is a syntactic dynamical model with some semantical side-effects. These side-effects, and the bypasses one can make with such ideas, will constitute a syntactic hypothesis for consciousness and for decision-making devices with a human flavor.

2. Ambiguity as a mark of culture

Let us return to our example of a very simple animal that can be programmed to execute only one task, e.g. removing its arm when it is burned.

As we saw, the classes are relatively well defined and the responses are much the same. There is a many-to-one relation: all the events that burn will elicit the same response. What can one say about multiple relations, like many-to-many? There, ambiguity forces responses to vary, from the quantitative to the qualitative point of view.

Suppose that we have a class of events that has three subsets. Let us call them Class A and subsets A1, A2 and A3. Suppose now we have a class B of responses to events of A. Class B also has three subsets, B1, B2 and B3. Let us build an interpretation of this model.

In the case of the simple animal, every A elicits B. Here too, but A has three different subsets, not necessarily with empty intersections, and so does B. Suppose A is the string of a violin and B is a sound. Every input in A will elicit a sound in B. Suppose now that we have three classifications: A1, which elicits good music (B1); A3, which elicits bad music (B3); and A2, which elicits B2, where small perturbations of A2 elicit either B1 or B3.

Let us call A1 → B1 ordinary parameter value

A3 → B3 ordinary parameter value

A2 → B2 bifurcation parameter value

If small perturbations on A2 lead to B1 or to B3, then we have a decision. If instead we have a perturbation on A2 that leads to B2 (which is also a bifurcation parameter value with respect to a class C, itself subdivided into C1, C2 and C3), then we can have a cascade of "bifurcations" and undecidability. This will force the series to progress until it reaches a mapping such that a small perturbation on a bifurcation parameter value leads to an ordinary value in the subsequent set. The more promptly we have a mapping from A to B, the more automatic and deterministic the system is. The more slowly we have to progress from A to B to C, etc., in order to reach decidability, the more spread out the system becomes, the more "voluntary" and conscious it resembles, and the more "chaos", as a cascade of bifurcations, may appear.
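A purely schematic sketch in Python may help fix ideas (the single critical point per class, the threshold and the perturbation size below are hypothetical simplifications of the scheme just described, not parts of any concrete model): an input that falls on an ordinary value is resolved at once, while one that falls on a bifurcation value recruits further stages, and the number of recruited stages measures how "automatic" or how "deliberative" the processing of that input is.

import random

# Hypothetical illustration of a cascade of classifications.
# Each stage maps a value either to an "ordinary" outcome (decision reached)
# or to a "bifurcation" value (the next class is recruited).

BIFURCATION_WIDTH = 0.05      # how close to the critical point counts as "bifurcation"

def classify(value, critical_point=0.5):
    """Return 'ordinary' far from the critical point, 'bifurcation' near it."""
    return "bifurcation" if abs(value - critical_point) < BIFURCATION_WIDTH else "ordinary"

def cascade(value, max_stages=10):
    """Propagate a value through successive classes (A, B, C, ...) until an
    ordinary value is reached; return the number of extra stages recruited."""
    stages = 0
    while classify(value) == "bifurcation" and stages < max_stages:
        value = (value + random.uniform(-0.1, 0.1)) % 1.0   # small perturbation into the next class
        stages += 1
    return stages

print("unambiguous input:", cascade(0.9), "extra stages")    # resolves at once
print("ambiguous input:  ", cascade(0.5), "extra stages")    # may cascade through several classes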

Our brief proposal is that bifurcation and structural stability are cues to understanding the partition of classes that shows invariant behavior from the structural point of view but qualitative variations internally, depending on parameters.

There is always a relation between A's and B's. This is automatic and pre-wired; it is the general form of an equation that relates A to B. However, the richer the system is, the more elements may exist inside A and inside B, and there will be slight variations of behavior that connect subsets of A and subsets of B.

Imagine a dog that has to respond to a sound, say the word "chicken". We may have a pre-wired response or a deterministic learned response: whenever the dog hears "chicken" it salivates (a variant of stimulus-response conditioning).

Suppose now an English-speaking man who hears the word "chicken". This will elicit three different "behaviors", or at least three different representations in the mental chain: the speaker may be referring to a bird, he may be offering an insult, or doubt may be evoked. Which subset of responses in B must take part when one hears "chicken", a word with a double sense depending on cultural background? A small perturbation in the "chicken" input can eliminate the ambiguity and solve the problem. But it can also lead to another value in B that is itself a bifurcation value, and so on. This cascade may:

a) force the system to recruit more sets A, B, C, etc. to reach an ordinary parameter value, with the following consequences: more "computational" time will be needed to interpret and decide, and the system may become more complex (due to the cascade of bifurcations);

b) lead to undecidability at a certain moment, depending on the parameters and the risks involved.

We suggest that ambiguous inputs in the semantical realm of culture use the structural stability and instability of the processing system (in this case the CNS and its dynamical behavior) to allow qualitative interpretations that depend on quantitative variations at the syntactical level (the concept of bifurcation). In other words: culture proliferates the quantity of elements in sets that could be pre-wired, and in certain macro-aspects are (e.g. A's always relate to B's). But if A's, B's and C's have internal quantities that feed the dynamics, and these quantities allow internal qualitative deviations of behavior, this is a good source of creativity and of richness of behavior; but it pays the cost of recruiting more elements to compute the ambiguous problems and to decide. This is the emergence of the mental and the spread of consciousness, which on an EEG requires a very large part of the cortex to be processed.

Schematically, take the system described by the equation:

d²x/dt² + a·dx/dt + c·x = 0 (free oscillator)

If a = 0 we have closed orbits (a center) in the phase plane;

if a < 0 the trajectories in the state space go to infinity;

if a > 0 the trajectories go to the origin.

If one perturbs a to a + ε, then in the cases a < 0 and a > 0 the system behaves, topologically, in the same way. This is structural stability, and the a's in these cases are ordinary parameter values.

If instead a = 0, a small perturbation ε changes the behavior of the system dramatically from the topological point of view: this is structural instability.
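A minimal numerical sketch (our own illustration; the stiffness c = 1 and the values of a are arbitrary) makes the three regimes concrete by integrating the equation above for a < 0, a = 0 and a > 0:

import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the free oscillator  d2x/dt2 + a*dx/dt + c*x = 0
# in the three qualitative regimes discussed in the text.

C = 1.0   # arbitrary stiffness

def run(a):
    rhs = lambda t, s: [s[1], -a * s[1] - C * s[0]]          # s = (x, dx/dt)
    sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0], max_step=0.01)
    return np.abs(sol.y[0]).max(), np.abs(sol.y[0][-500:]).max()

for a in (-0.2, 0.0, 0.2):
    overall, late = run(a)
    print(f"a = {a:+.1f}:  overall peak = {overall:8.2f},  late-time peak = {late:8.2f}")
# a < 0 -> amplitude grows without bound (trajectories escape to infinity)
# a = 0 -> closed orbits, amplitude conserved
# a > 0 -> amplitude decays toward the origin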

Given the concepts of structural stability and bifurcation, we propose:

1. The CNS has a dynamical structure, i.e. it can be described by dynamical differential equations.

2. This system is deterministic, but near bifurcation parameter values predictability becomes difficult.

3. Pre-wired systems and learned automatic behaviors (like driving a car) are dynamical systems that relate A's to B's far from bifurcation values.

Things yet to be learned, novelties, ambiguities and decisions under risk are marks of a very complex and rich system. They exploit, at the syntactical level, bifurcation values, allowing richness to exist and qualities to emerge from quantities, but they pay the cost that the system may behave in an unpredictable way and may even assume "chaotic" behavior (cascades of bifurcations, period 3, etc.).
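As a purely illustrative aside, consider the logistic map, which is not part of our architecture but is the textbook example of a cascade of period-doubling bifurcations with a period-3 window inside the chaotic regime. A few lines show how small changes of a single parameter carry one and the same rule from a fixed point to period 2, period 4, period 3 and chaos:

# Illustrative only: the logistic map x -> r*x*(1-x) as the standard example
# of a cascade of bifurcations ending in chaos, with a period-3 window.

def orbit(r, x=0.2, transient=1000, keep=8):
    """Iterate past the transient and return the next few values of the orbit."""
    for _ in range(transient):
        x = r * x * (1.0 - x)
    out = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        out.append(round(x, 4))
    return out

for r in (2.8, 3.2, 3.5, 3.83, 3.9):
    print(f"r = {r}: {orbit(r)}")
# r = 2.8 -> fixed point; r = 3.2 -> period 2; r = 3.5 -> period 4;
# r = 3.83 -> period-3 window; r = 3.9 -> chaotic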

Thus, when Nature wants to be robust it sets parameters far from bifurcations, and when it wants to be complex and rich it creates internal divisions (new representations) in old set domains in order to exploit bifurcation values at the level of implementation. The system is so complex that the cascade may give the impression that there is a genuine dissociation between the syntactical and the semantical level.

Let us now present some further ideas for modeling this problem in a biologically and dynamically inspired way.

3. Coupling oscillators and the problem of real dynamics

Logic starts from the notion of atomic symbols, Geometry from primitives, and so on. The main idea that can inspire a project of constraining the semantical level through the invariant properties of the syntactical level must take account of the following requisites:

a) it should have bottom-up reliability in terms of the operations the CNS performs;

b) it has to do with signal processing and not with sentence processing.

c) it has to do with pulses, frequencies and the spread of "current". This may seem too reductionistic, and it is.

We suppose that a minimal atom of computational architecture may perform many rich dynamical calculations and have, at the same time, very rich dynamical properties from the structural and the local point of view. This architecture, which will provisionally be constituted of seven coupled non-linear (van der Pol) oscillators and called the PSYCHON, will be considered the atom of computation at the syntactic-dynamical level of signal processing. If there is genuine emergence this project is doomed to failure; but if emergence is a kind of ignorance that waits until better "mechanistic" explanations appear, then a CNS will be constituted of multiple atoms of dynamical computation, psychons, which can be treated at an intermediate level through statistical mechanics and neural networks, and at upper levels through semantical models (traditional logical AI and semantic networks). As with the metaphor of Newtonian and Relativistic mechanics, dealing with complicated problems in cognitive endeavors is simpler if one simplifies the problem by assuming semantical primitives to exist, but more deeply there is always a syntactical dynamics that allows semantical partitions to be mapped onto topological partitions of the space.

Supposing that there could be an equivalence class between semantical bifurcations and syntactical bifurcations, we now show an architecture of seven coupled oscillators that exhibits rich dynamics, that can assume different qualitative behaviors from the topological point of view, and that can be considered a kind of network of oscillators with "learning" rules implemented through the modification of parameters, bifurcations tied to bifurcations, and a decision rule tied to the power spectra of two rival outputs (Fourier series).
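Such a decision rule can be sketched as follows (a minimal illustration: the criterion of comparing total spectral power, the sampling step and the synthetic signals standing in for the outputs of oscillators 6 and 7 are our own assumptions):

import numpy as np

# Sketch of a decision rule based on the power spectra of two rival outputs.

def power_spectrum(signal, dt):
    """Return frequencies and the power spectrum of a sampled signal."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), dt)
    return freqs, spectrum

def decide(output6, output7, dt=0.01):
    """Pick the rival output with the larger total spectral power."""
    _, p6 = power_spectrum(output6, dt)
    _, p7 = power_spectrum(output7, dt)
    return "output 6" if p6.sum() > p7.sum() else "output 7"

t = np.arange(0.0, 10.0, 0.01)
out6 = 0.5 * np.sin(2 * np.pi * 1.0 * t)    # weaker synthetic oscillation
out7 = 1.0 * np.sin(2 * np.pi * 3.0 * t)    # stronger synthetic oscillation
print("winner:", decide(out6, out7))        # -> output 7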

The motivation for using coupled oscillators comes from the fact that one of the best models of the neuron, regarding the triggering of action potentials, is the Hodgkin-Huxley set of equations.

There are transformations of the H-H model that lead to the van der Pol oscillator, described by

d²φ/dt² + a·(1 − φ²)·dφ/dt + ω·φ = 0

A one-way arrow means master-slave synchronism and a two-way arrow means mutual synchronization (Lyndsey 1987).

The architecture represents seven neurons. Number 1 is the input and 6 and 7 are the outputs; 2, 3, 4 and 5 are the "intermediate" processors. The weights of the connections, and even the connections themselves, could be handled by a kind of Hebbian rule. That is to say, slight changes in experience can modify the connections among the oscillators, leading to dramatic differences in the set of equations that describes the architecture analytically.

We show the architecture, called the psychon, in one of its configurations, and show one bifurcation that happens in the "linear" system of equations, i.e. with all a's = 0.

For different values of the input ω₁ we obtain a bifurcation of behavior in oscillator 6.
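One way to explore this numerically can be sketched as follows (the coupling topology, the coupling strengths and the parameter values are hypothetical, since the original figure is not reproduced here; we adopt the conventional van der Pol sign so that each isolated unit has a stable limit cycle, take the input to be the natural frequency ω₁ of oscillator 1, and record how the late-time amplitude of oscillator 6 changes with it):

import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the "psychon": seven coupled van der Pol-type oscillators,
#   phi_i'' - a_i*(1 - phi_i**2)*phi_i' + w_i*phi_i = coupling_i,
# with oscillator 1 as input and oscillators 6 and 7 as outputs.
# The coupling matrix K below is hypothetical; a Hebbian-like rule could modify it.

N = 7
A = np.full(N, 0.5)                                    # nonlinearity parameters a_i
W = np.array([1.0, 1.1, 0.9, 1.2, 0.8, 1.0, 1.3])      # natural frequencies w_i
K = np.zeros((N, N))                                   # K[i, j]: influence of j on i
for i, j in [(1, 0), (2, 0), (3, 1), (4, 2), (5, 3), (6, 4)]:
    K[i, j] = 0.3                                      # one-way (master-slave) links
K[3, 4] = K[4, 3] = 0.3                                # one two-way (mutual) link

def rhs(t, state, w1):
    phi, dphi = state[:N], state[N:]
    w = W.copy()
    w[0] = w1                                          # the input sets oscillator 1's frequency
    coupling = K @ phi - K.sum(axis=1) * phi           # diffusive coupling sum_j K_ij*(phi_j - phi_i)
    ddphi = A * (1.0 - phi**2) * dphi - w * phi + coupling
    return np.concatenate([dphi, ddphi])

def output6_amplitude(w1):
    """Late-time amplitude of oscillator 6 for a given input frequency w1."""
    sol = solve_ivp(rhs, (0.0, 300.0), 0.1 * np.ones(2 * N), args=(w1,), max_step=0.05)
    return np.abs(sol.y[5, sol.t > 200.0]).max()

for w1 in (0.5, 1.0, 2.0):
    print(f"w1 = {w1}: oscillator 6 late-time amplitude ~ {output6_amplitude(w1):.2f}")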

Other bifurcations can be shown with respect to the system with friction, a ≠ 0, as the parameters vary. This system, being non-linear, can even exhibit "chaotic" behavior, as suggested by Freeman as a way to "make sense of the world" (Freeman 1987 and 1992).

4. Architecture and preliminary simulations

5. Final remarks

The possibility of mapping the ambiguities of semantics through the notions of bifurcation and stability at the processing level, using the inspiration of neurons as oscillators and of neuron assemblies as atoms that perform the minimal set of cognitive computation, is the main suggestion of our work. Maybe it is wrong, functionalism is true, and the gap between semantics and the brain is absolute. Maybe not.