TEXT APPENDIX FOR CHAPTER I

 

I.1.

"But an article in physics is not reducible to its mathematical expression alone. The physicist often uses hypotheses that he does not make precise and whose origin is what is called 'physical intuition'. They allow him, in particular, to neglect certain quantities or to make approximations that the mathematician would be hard put to guess. For example, it took some twenty years, between 1930 and 1950, for physicists to work out the method of renormalization in field theory. It consists in carrying out a perturbative calculation all of whose terms, from the second order on, yield divergent integrals. Physicists, motivated by the extraordinary precision of the experimental results of spectroscopy at the end of the 1940s (the fine structure of the lines of the emission spectrum of atoms), desperately sought to extract a finite result from these divergent integrals. To do so, they restricted the domain of integration to energies of the order of MC², where M is the mass of the electron and C the speed of light. By means of unjustified subtractions, they obtained a finite result very close to the experimental result. This technique was progressively improved, by Tomonaga, Schwinger, Feynman and Dyson, up to an agreement with the experimental results corresponding to the thickness of a hair over the distance from Paris to New York. What was the role of physical intuition in their reasoning? The mechanism of renormalization consists, in the course of the calculations, in changing the mass of the electron and replacing it by a quantity that depends on the order of magnitude of the energies considered, but diverges as that order of magnitude tends to infinity. To take a very simple comparison, if a balloon filled with helium leaves the ground at an instant T=0, the calculation of its acceleration from the Archimedean buoyant force will not give the result that is observed experimentally.
Indeed, the presence of a field, the surrounding air, is equivalent to replacing, in the calculations, the real mass of the balloon by a much larger effective mass. Relying on this comparison, one can understand that the electron, placed in the electromagnetic field, possesses an effective mass quite different from its 'real' mass, that is, from the one that enters into the mathematical equation. Thanks to this intuition, physicists were able to develop a method, that of renormalization, which, of course, is formulated in mathematical language, but which mathematicians, confronted with the same problem, would have been hard pressed to discover. This physical intuition, moreover, authorizes them to take liberties with mathematical rigor. For example, the Feynman integral does not at present correspond to any precise mathematical object. It is nevertheless the daily bread of theoretical physicists.

Yet it would be wrong to believe that mathematics plays, with respect to physics, only the role of a language expressing results. When one models a theory that is still at a rather primitive stage, mathematics does indeed have that function. But at a later stage of elaboration, as in the case of quantum mechanics, the generative character of mathematics ends up playing a crucial role. How can one fail to be troubled by the possibility of recovering Mendeleev's periodic table of the elements from the Schrödinger equation and the Pauli exclusion principle? That is why the mathematician may believe he can reduce physics to a certain number of equations. Very often, however, it is the physicist's intuition that allows him to understand these equations." (in CHANGEUX, J.P. and CONNES, A. 1989 pg. 21-23)
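The balloon analogy in the passage above can be put in numbers. The sketch below is not from the quoted text: the densities and radius are illustrative values, and the added-mass coefficient 0.5 is the standard ideal-fluid value for a sphere (the envelope mass and air drag of a real balloon are ignored).

```python
# Naive vs. effective-mass acceleration of a helium balloon at release (T = 0).
# Illustrative numbers; the 0.5 added-mass coefficient is the standard
# ideal-fluid value for a sphere.
RHO_AIR = 1.2      # kg/m^3, surrounding air
RHO_HE = 0.18      # kg/m^3, helium
G = 9.81           # m/s^2

def accelerations(radius_m):
    volume = (4.0 / 3.0) * 3.141592653589793 * radius_m ** 3
    m_balloon = RHO_HE * volume                      # envelope mass neglected
    buoyant_net = (RHO_AIR - RHO_HE) * volume * G    # net upward force
    a_naive = buoyant_net / m_balloon                # Archimedes alone: far too large
    m_eff = m_balloon + 0.5 * RHO_AIR * volume       # 'renormalized' effective mass
    a_effective = buoyant_net / m_eff
    return a_naive, a_effective

a_naive, a_eff = accelerations(0.15)
```

The naive calculation predicts an acceleration several times larger than the effective-mass one, which is the point of the analogy: the field (the ambient air) replaces the bare mass by a much larger effective mass.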

I.2.

"Armstrong has recently argued that a viable theory of contingent property identity requires a fundamental distinction between predicates and properties and a realistic - as opposed to nominalistic or conceptualist - theory of properties. I think he is correct and I want first to develop this important argument...Predicates are linguistic affairs, in the now common usage of the term, and their identity conditions are widely held to be semantically fixed. Equally widespread is the view that properties are directly individuated by predicates. If these two assumptions are combined there results an identity criterion of this sort: predicates P and Q designate the same property if and only if P is synonymous with Q." (in HOOKER, C. 1981 pg. 215)

I.3.

"What someone means by an utterance depends on the related beliefs that he has, and in turn the content of his beliefs is a function of what he means by certain expressions, in one big ball of wax (in Quine's 'Word and Object'). The meaning of an expression for an individual is a function of the role that expression plays in his internal representational economy - that is, of how it is related to sensory input and behavioral output and of its inferential/computational role within the internal economy. Sparing the niceties, this is the network theory of meaning, otherwise known as the holistic theory or the conceptual-role theory...Translation is accordingly a matter of finding a mapping between alien representations and one's own such that the network of normal and material inferences holding among the alien representations closely mirrors the same network holding among our own. It is possible that representational economies may be so different that translations are not possible.

Meaning is therefore relational in the sense that what an expression means is a function of its inferential/computational role in the person’s internal system of representations, his cognitive economy. This is not to say that an expression has meaning only if someone interprets or translates it as having a particular meaning. However, it does imply that isolated expressions do not somehow sheerly have meaning and that mentality cannot somehow magically endow an utterance with intrinsic meaning. What it does deny is that meaning is an intrinsic feature of mental states and that a state has the meaning it has regardless of the wider representational system. Moreover, it contrasts with a theory of meaning that says that the meaning of a word is the set of objects it is true of and that the meaning of a sentence is to be identified with the state of affairs that makes it true" (CHURCHLAND,P 1986 pg.344).

I.4.

"For instance, it seems (to many) that we cannot account for perception unless we suppose it provides us with an internal image (or model or map) of the external world, and yet what good would that image do us unless we have an inner eye to perceive it, and how are we to explain its capacity for perception? It also seems (to many) that understanding a heard sentence must be somehow translating it into some internal message, but how will this message in turn be understood: by translating it into something else? The problem is an old one, and let's call it Hume's problem, for while he did not state it explicitly, he appreciated its force and strove mightily to escape its clutches. Hume's internal representations were impressions and ideas, and he wisely shunned the notion of an inner self that would intelligently manipulate these items, but this left him with the necessity of getting the ideas and impressions to 'think for themselves'. The result was his theory of the self as a 'bundle' of (nothing but) impressions and ideas. He attempted to set these impressions and ideas into dynamic interaction by positing various associationistic links, so that each succeeding idea in the stream of consciousness dragged its successor onto the stage according to one or another principle, all without benefit of intelligent supervision. It didn't work, of course. It couldn't conceivably work, and Hume's failure is plausibly viewed as the harbinger of doom for any remotely analogous enterprise. On the one hand, how could any theory of psychology make sense of representations that understand themselves, and on the other, how could any theory of psychology avoid regress or circularity if it posits at least one representation-understander in addition to the representations?

Now no doubt some philosophers and psychologists who have appealed to internal representations over the years have believed in their hearts that somehow the force of this argument could be blunted, that Hume’s problem could be solved, but I am sure no one had the slightest idea how to do this until Artificial Intelligence and the notion of data-structures came along. Data structures may or may not be biologically or psychologically realistic representations, but they are, if not living, breathing examples, at least clanking, functioning examples of representations that can be said in the requisite sense to understand themselves.

How this is accomplished can be metaphorically described (and any talk about internal representations is bound to have a large element of metaphor in it) by elaborating on descriptions of AI as top-down theoretical inquiry. One starts, in AI, with a specification of a whole person or cognitive organism - what I call, more neutrally, an intentional system - or some artificial segment of that person's abilities (e.g. chess-playing, answering questions about baseball) and then breaks that largest intentional system into an organization of subsystems, each of which could itself be viewed as an intentional system (with its own specialized beliefs and desires) and hence as formally a homunculus. In fact, homunculus talk is ubiquitous in AI, and almost always illuminating. AI homunculi talk to each other, wrest control from each other, volunteer, sub-contract, supervise, and even kill. There seems no better way of describing what is going on. Homunculi are bogeymen only if they duplicate entire the talents they are rung in to explain. If one can get a team or committee of relatively ignorant, narrow-minded, blind homunculi to produce the intelligent behavior of the whole, this is progress. A flow chart is typically the organizational chart of a committee of homunculi (investigators, librarians, accountants, executives); each box specifies a homunculus by prescribing a function without saying how it is to be accomplished (one says, in effect: put a little man in there to do the job). If we then look closer at the individual boxes we see that the function of each is accomplished by subdividing it via another flow chart into still smaller, more stupid homunculi. Eventually this nesting of boxes within boxes lands you with homunculi so stupid (all they have to do is remember whether to say yes or no when asked) that they can be, as one says, 'replaced by a machine'. One discharges fancy homunculi from one's scheme by organizing armies of such idiots to do the work.

When homunculi at a level interact, they do so by sending messages, and each homunculus has representations that it uses to execute its functions. Thus typical AI discussions do draw a distinction between representation and representation-user: they take the first step of the threatened infinite regress, but as many writers in AI have observed, it has gradually emerged from the tinkerings of AI that there is a trade-off between sophistication in the representation and sophistication in the user. The more raw and uninterpreted the representation - e.g., the mosaic of retinal stimulation - the more sophisticated the interpreter or user of the representation. The more interpreted a representation - the more procedural information is embodied in it, for instance - the less fancy the interpreter need be. It is this fact that permits one to get away with lesser homunculi at high levels, by getting their earlier or lower brethren to do some of the work. One never quite gets completely self-understanding representations (unless one stands back and views all representations in the system from a global vantage point), but all homunculi are ultimately discharged. One gets the advantage of the trade-off only by sacrificing versatility and universality in one's subsystems and their representations, so one's homunculi cannot be too versatile nor can the messages they send and receive have the full flavor of normal human linguistic interaction." (in DENNETT 1986 pg. 122-124)

 

TEXT APPENDIX FOR CHAPTER II

 

 

 

II.1.

"Essentially this account of reduction can be characterized as direct reduction - in which the basic terms (and entities) of one theory are related to the basic terms (and entities) of the other (assuming that the reduced theory is an adequate one), and the axioms and laws of the reduced theory are derivable from the reducing theory. The last assertion must be expanded somewhat, for quite often in intertheoretic explanation terms appear in the reduced theory which are not part of the reducing theory, e.g., the term 'gene' does not appear in organic chemistry. Thus we have to conjoin additional sentences to the reducing theory which associate these terms of the reduced or secondary theory with combinations of terms from the vocabulary of the reducing or primary theory." (in SCHAFFNER 1967 pg. 138)

II.2.

"We must therefore not be misled by the convenient habit of thinking of the various atomic theories as representing a progress in our knowledge concerning a fixed set of submicroscopic objects. This way of describing the historical succession of atomic theories easily generates the belief that atoms can be said to exist and to have ascertainable 'inherent natures', independent of any particular theory that postulates the existence of atoms and prescribes what properties they possess. In point of fact, however, to maintain that there are atoms having some definite set of characteristics is to claim that a certain theory about the constitution of physical objects is warranted by experimental evidence. The succession of atomic theories propounded in the history of science may indeed represent not only advances in knowledge concerning the order and connection of macroscopic phenomena, but also a progressively more adequate understanding of the atomic constitution of physical things. It nevertheless does not follow that, apart from some particular atomic theory, it is possible to assert just what can or cannot be predicted from the 'natures' of atomic particles." (NAGEL, E. 1961 pg. 370)

II.3.

"This third condition is that we be able to specify what I shall call a proxy function: a function which assigns one of the new things, in this example a pure number, to each of the old things - each of the impure numbers of temperature. In this example the proxy function is the function 'how many degrees centigrade' - the function f such that f(n°C) = n. It is not required that such a function be expressible in the original theory θ to which 'H' belonged, much less that it be available in the final theory θ' to which 'Hc' belongs. It is required rather of us, out in the metatheory where we are explaining and justifying the discontinuance of θ in favor of θ', that we have some means of expressing a proxy function. Only upon us, who explain 'H(x, α)' away by 'Hc(x, n)', does it devolve to show that every α that was intended in the old θ determines an n of the new θ'." (QUINE, W. 1964 pg. 204-205)
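Quine's proxy function can be written out directly. A minimal sketch, with illustrative names; the tuple encoding of an 'impure number' (a magnitude paired with a unit) is an assumption of the example, not Quine's notation.

```python
# Quine's example: the proxy function maps each 'impure number of temperature'
# (n degrees centigrade) to the pure number n.
def proxy(impure):
    """'How many degrees centigrade': f(n C) = n."""
    value, unit = impure
    assert unit == "C"          # only impure numbers of temperature are mapped
    return value

# Old-theory predicate H(x, alpha): x has temperature alpha (an impure number).
# New-theory predicate Hc(x, n): x has a temperature of n degrees centigrade.
old_fact = ("water_sample", (100, "C"))   # H(water_sample, 100 C)
x, alpha = old_fact
n = proxy(alpha)                          # the n of the new theory determined by alpha
new_fact = (x, n)                         # Hc(water_sample, 100)
```

The point of the sketch is only that the proxy function lives in the metatheory: it is we who map each old value α to a new value n; neither theory need be able to express `proxy` itself.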

 

II.4.

"I have sought to establish the following main points: I. A crucial aspect of mind is captured in the notion of an 'internal model', and such a model is not adequately representable in colloquial language. A more adequate - but still inadequate - representation would use a programming language that uses intentional descriptions - even in describing a robot - but does so in a sufficiently detailed way to allow causal explanations that possess extensional translations. II. 'Utterances' are but samples of the states of such an 'internal model', and thus share with all samples the risk of being unrepresentative. Some aspects of 'mental states' are sufficiently germane to action to merit 'awareness', and the ability to express these verbally is then to be seen as a secondary property. III. The slide-box metaphor suggests that the internal representation of an activity within the brain is correlated with the spatial extent of the object it represents, and that this correlation demolishes the problem of the secondary qualities." (in ARBIB, M. 1972 pg. 590-591)

II.6.

"This is due to the work of J. Kemeny and P. Oppenheim on reduction, and might be termed a paradigm of indirect reduction, since one does not obtain a theory T2 from T1 as in the usual case of reduction, so reducing T2 by T1; rather one obtains identical observable predictions from both theories (though T1 may predict more). An example of this type of reduction might be the explanation, by Lavoisier's oxidation theory, of all the observable facts which the phlogiston theory explained. Notice in this case we would not be able to define 'phlogiston' in terms of the oxidation theory." (in SCHAFFNER, K. 1967 pg. 138)

II.7.

"Anything that we want to say about actual observations can be said without theoretical terms, but their introduction allows a much more highly systematised treatment of our total knowledge. Nevertheless, since theoretical terms are in a sense a luxury, we want to know if we can get along without some of them. It is, then, of considerable interest to know that a set of theoretical terms is superfluous since we can replace the theories using these by others in which they do not occur, without sacrificing the degree of systematization achieved by science to this day." (KEMENY, J. and OPPENHEIM, P. 1956 pg. 12)

II.8.

"A reduction will be termed a Kemeny/Oppenheim reduction instance if and only if: (1) T2 has among its primitive terms, terms which are not in T1. (2) Any part of the observational data associated with T2 is explainable by T1. (3) T1 is at least as well systematized as T2. (The notion of 'systematized' is that of a measure which combines strength with simplicity. A theory which is more complex, but which is more powerful than a comparable theory, is said to be better systematized. The notion is apparently an intuitive one.)" (in SCHAFFNER, K. 1967 pg. 140)

II.9.

"I do believe that there are good reasons for considering it a different approach to the problems of reduction, and an approach which is not totally negative.

The claim made in this paradigm is not that T2 is derivable from T1 in any formal sense of derivable, or even that T2 can have its primitive terms expressed in the language of T1; rather T1 is able to explain why T2 'worked', and also to 'correct' T2. The relation between the theories is not one of strict deduction of T2 from T1. Nevertheless in certain cases one can obtain T2 from T1 deductively: if one conjoins to T1 certain contrary-to-fact premises which would in certain experimental contexts (relative to the state of a science) not be experimentally falsifiable, one can obtain T2.

A relatively uncomplicated example of such a 'reduction' is the reduction of the Galilean law of free fall - that the distance an object has fallen is proportional to the square of the time of its descent - by the axioms of Newtonian mechanics plus the law of universal gravitation. The Galilean law is not exactly derivable - rather a more complicated law is derivable which gives experimental results which are quite close to the predictions of the Galilean law. The sentences expressing these laws are still different, however, and could only be said to be formally identical if the earth's radius were infinitely large, which it is not. Consequently the reduced theory is only derivable approximately from the reducing theory - and 'approximation' introduces serious complications for any general formal analysis of reduction." (in SCHAFFNER, K. 1967 pg. 138-139)
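The approximate character of this reduction is easy to exhibit numerically. The sketch below, with standard Earth constants and a crude Euler integration (both choices illustrative), compares the Galilean law s = ½gt² with the distance obtained from inverse-square gravitation; over a short fall the two agree to well under one percent, yet the laws remain formally distinct.

```python
# Galileo's law s = (1/2) g t^2 vs. the Newtonian law with inverse-square
# gravity g(r) = G M / r^2, integrated numerically by simple Euler stepping.
G = 6.674e-11        # m^3 kg^-1 s^-2, gravitational constant
M = 5.972e24         # kg, Earth mass
R = 6.371e6          # m, Earth radius

def fall_distance_newton(t_total, dt=1e-3):
    """Distance fallen in t_total seconds when dropped from (near) the surface."""
    r, v, t = R, 0.0, 0.0
    while t < t_total:
        v += (G * M / r ** 2) * dt   # gravity weakens (here: strengthens) with r
        r -= v * dt
        t += dt
    return R - r

t = 3.0
s_galileo = 0.5 * (G * M / R ** 2) * t ** 2   # constant-g approximation
s_newton = fall_distance_newton(t)
rel_diff = abs(s_newton - s_galileo) / s_galileo
```

The relative difference is tiny precisely because the fall is short compared with the earth's radius; the two laws would coincide exactly only in the limit of an infinitely large radius, as the quoted passage says.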

II.10.

"How do we stand now with respect to Fries's trilemma? The basic statements at which we stop, which we decide to accept as satisfactory and as sufficiently tested, admittedly have the character of dogmas, but only in so far as we desist from justifying them by further arguments (or by further tests). This kind of dogmatism is, however, innocuous, since, should the need arise, these statements can easily be tested further. I admit that this in principle makes the chain of deduction infinite. But this kind of 'infinite regress' is also innocuous, since in our theory there is no question of trying to prove any statement by means of it. Finally, as to psychologism: I admit that the decision to accept a basic statement, and to be satisfied with it, is causally connected with our experiences - especially our perceptual experiences. But we do not attempt to justify basic statements by these experiences. Experiences can motivate a decision, and hence the acceptance or rejection of a statement, but a basic statement cannot be justified by them - no more than by thumping the table... It seems to me that the view upheld here is closer to that of the 'critical' (or Kantian) school - perhaps as represented by Fries - than to that of positivism. Fries, in his theory of our 'predilection for proofs', emphasizes that the (logical) relations holding between statements are quite different from the relations holding between statements and sense experiences. Positivism, for its part, tries to abolish the distinction: either all science is made part of my knowing, of 'my' sense experience (monism of sense data); or sense experiences are made part of the objective scientific network of arguments - in the form of protocol statements (monism of statements)." (in POPPER, K. 1974 pg. 112-113)

II.11.

"A reduction will be termed a PFK reduction instance if and only if: (1) Of the primitive terms q1...qn of T2 there is at least one qi which cannot be identified or correlated with a pi of T1, or any combination of p's of T1, without asserting a self-contradiction or a false statement. (2) Nevertheless theory T2 can be 'explained' by T1 in the nonformal sense (not the Hempel-Oppenheim sense) that T1 can yield a deductive consequence T2* which may result in predictions numerically 'very close' to the predictions of T2. (3) T2* should 'correct' T2 in the sense of providing more accurate experimentally verifiable predictions than T2; it should also point out why T2 was incorrect (e.g., that it ignored a crucial variable), and T2* (or T1 for that matter) should indicate why T2 worked as well as it did." (in SCHAFFNER, K. op.cit. pg. 140)

II.12.

"Many of the problems formulated in connection with the question of reducing one science to another may be formulated as a series of problems using the notion of a representation theorem for the models of a theory. For instance, the thesis that psychology may be reduced to physiology would be for many people appropriately established if one could show that for any model of a psychological theory, it was possible to construct an isomorphic model within physiological theory...To show in a sharp sense that thermodynamics may be reduced to statistical mechanics, we would need to axiomatize both disciplines by defining appropriate set-theoretical predicates, and then show that given any model T of thermodynamics we may find a model of statistical mechanics on the basis of which we may construct a model isomorphic to T." (SUPPES, P. quoted in SCHAFFNER, K. op.cit. pg. 139)

II.13.

"A second meaning of 'model' for mathematical economists is closely related: the model for a theory is the set of all models for the theory in the logicians' sense. What the logicians call a model is labeled a structure...The third meaning of 'model', the one most popular with empirical science, is what we have meant by 'theory' in preceding pages. In this sense, to give a mathematical model for some branch of empirical science is to state an exact mathematical theory. In such empirical contexts the word 'theory' is often reserved for non-mathematical, relatively inexact statements about the fundamental ideas of a given domain of science. The important difference between the first two senses of model and the third is that only in the third sense are models linguistic entities. When theories are formalized in first-order logic, theorems relating different models for the theory are necessarily metamathematical in their statement and proof. In contrast, theorems comparing various models for a theory may be stated in direct mathematical fashion, when the theory is axiomatized by defining a set-theoretical predicate." (in SUPPES, P. 1969 pg. 253-254)

II.14.

"A satisfactory general definition of isomorphism for two set-theoretical entities of any kind is difficult if not impossible to formulate. The standard mathematical practice is to formulate a separate definition for each general kind of ordered n-tuple. When the n-tuples are complicated, as in the case of models for the theory of particle mechanics, it is sometimes difficult to decide exactly what is to be meant by two isomorphic models; but for algebras or simple relation structures the choice of the appropriate conditions is clearer...The notion of isomorphism has important applications in all domains of modern mathematics. When the special situation obtains that any two models for a theory are isomorphic, then the theory is said to be categorical...When a theory is not categorical, an important problem is to discover if an interesting subset of models for the theory may be found such that any model for the theory is isomorphic to some member of this subset. To find such a distinguished subset of models for a theory and show that it has the property indicated is to prove a representation theorem for the theory. Such a theorem may be proved for groups; namely, every group is isomorphic to a group of transformations." (SUPPES, P. op.cit. pg. 262-263)
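Suppes's closing example is Cayley's theorem. A minimal sketch for the three-element group Z3 under addition mod 3 (function names are illustrative): each element is represented by the transformation it induces on the carrier set, and the map is checked to be one-to-one and operation-preserving.

```python
# Cayley's theorem in miniature: Z3 under addition mod 3 is isomorphic to a
# group of transformations (permutations) of its own carrier set.
Z3 = [0, 1, 2]
add = lambda a, b: (a + b) % 3

def to_permutation(g):
    """Send element g to the transformation x -> g + x, encoded as a tuple."""
    return tuple(add(g, x) for x in Z3)

perms = {g: to_permutation(g) for g in Z3}

def compose(p, q):
    """Composition of permutations (p after q), permutations as tuples."""
    return tuple(p[q[x]] for x in Z3)

# Homomorphism: the permutation of a+b equals the composition of the
# permutations of a and b, for all pairs.
homomorphic = all(
    perms[add(a, b)] == compose(perms[a], perms[b]) for a in Z3 for b in Z3
)
# One-to-one: distinct elements induce distinct transformations.
injective = len(set(perms.values())) == len(Z3)
```

Together the two checks exhibit the distinguished subset of models (permutation groups) to which an arbitrary model of the group axioms is shown isomorphic.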

II.15.

"A reduction will be termed a Suppes type reduction instance if and only if: For any model M2 of the reduced theory, we can find a model M1 of the reducing theory such that one can construct a model M1* (M1* may be M1) such that M1* is isomorphic to M2" (SCHAFFNER, K. op.cit. pg. 141). For a definition of isomorphism, SCHAFFNER draws on CHURCH's definition: "Two models of a system of postulates are said to be isomorphic if there is a one-to-one correspondence between the two domains of individuals used in the two models such that the values given in the two models to any particular free variable occurring in the representing forms of the postulates always correspond to each other according to this one-to-one correspondence. I.e., if in the first model the value a is given to an individual variable, and in the second model the value a' is given to the same variable, then a must correspond to a' in the two domains of individuals; and if in the first model the value Φ is given to an n-ary function variable f, while in the second model the value Φ' is given to f, then the propositional functions Φ and Φ' must be so related that, whenever the individuals a1, a2, ... an of the first domain of individuals correspond in order to the individuals a1', a2', ... an' of the second domain, the value Φ(a1, a2, ... an) is the same as the value Φ'(a1', a2', ... an')." (CHURCH, A. quoted in SCHAFFNER, K. op.cit. pg. 141)
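Church's condition can be instantiated on a toy pair of models. The sketch below (illustrative, not from Schaffner or Church) takes Z4 under addition mod 4 and {1, i, -1, -i} under complex multiplication, with the correspondence n ↦ iⁿ, and checks that corresponding arguments always yield corresponding values.

```python
# Church's isomorphism condition checked for two models of the group axioms:
# Z4 under addition mod 4, and {1, i, -1, -i} under complex multiplication.
# The one-to-one correspondence between the domains is n |-> i^n.
Z4 = [0, 1, 2, 3]

def h(n):
    """The correspondence n |-> i^n, built by exact complex multiplication."""
    value = 1 + 0j
    for _ in range(n):
        value *= 1j          # products of Gaussian units are exact in Python
    return value

# The correspondence is one-to-one between the two domains of individuals.
one_to_one = len({h(n) for n in Z4}) == len(Z4)

# Values correspond: the value of a+b (mod 4) in the first model maps to the
# product of the corresponding values in the second model, for every pair.
operations_correspond = all(
    h((a + b) % 4) == h(a) * h(b) for a in Z4 for b in Z4
)
```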

II.16.

"Reduction occurs if and only if: (1) All the primitive terms q1...qn appearing in the corrected secondary theory T2* appear in the primary theory T1 (in the case of homogeneous reduction) or are associated with one or more of T1's terms such that: (a) it is possible to set up a one-to-one correspondence representing synthetic identity between individuals or groups of individuals of T1 and T2*, or between individuals of one theory and a subclass of the groups of the other, in such a way that a reduction function can be specified whose values exhaust the universe of T2* for arguments in the universe of T1; (b) all the primitive predicates of T2*, i.e., any Fni, are effectively associated with an open sentence of T1 in n free variables in such a way that Fni is fulfilled by an n-tuple of values of the reduction function always and only when the open sentence is fulfilled by the corresponding n-tuple of arguments; (c) all reduction functions cited in (a) and (b) above be specifiable, have empirical support, and in general be interpretable as expressing referential identity. (2) Given the fulfillment of condition (1), that T2* be derivable from T1 when T1 is conjoined with the reduction functions mentioned above. (3) T2* corrects T2 in the sense of providing more accurate experimentally verifiable predictions than T2 in almost all cases (identical results cannot be ruled out, however), and should also indicate why T2 was incorrect (e.g., a crucial variable was ignored), and why it worked as well as it did. (4) T2 should be explicable by T1 in the non-formal sense that T1 yields a deductive consequence (when supplemented by reduction functions) T2* which bears a close similarity to T2 and produces numerical predictions which are 'very close' to T2's. Finally, (5) the relation between T2 and T2* should be one of strong analogy - that is (in current jargon), they possess a large 'positive analogy'." (in SCHAFFNER, K. op.cit. pg. 144)

II.17.

"Consider a set of electrical generators G, each of which produces alternating current electrical power at 60 Hz but with fluctuations in frequency of 10% around some average value. Taken singly, the frequency variability of the generators is 10%. Taken joined together in a suitable network, their collective frequency variability is only a fraction of that figure because, statistically, generators momentarily fluctuating behind the average output in phase are compensated for by the remaining generators, and conversely generators momentarily ahead in phase have their energy absorbed by the remainder. The entire system functions, from an input/output point of view, as a single generator with a greatly increased frequency reliability, or, as control engineers express it, with a single, more powerful, 'virtual governor'. The property 'has a virtual governor of reliability f' is a property of the system as a whole but of none of its components. Does this render it irreducible? Yes, and no. For, once the mechanism of the system operation is understood, it is seen that this property's being a property of the system is entailed by the conjunction of laws for the individual generators plus specification of system structure. Its being so is not, however, entailed by the laws of the component generators alone. In this sense the property is irreducibly a property of the whole system - the system structure obtaining is essential to its obtaining. But from an ontological point of view there is nothing to the system over and above its components and the physical relations between them (however physically realised). This latter point is driven home by inquiring 'what thing is the virtual governor?' (a real governor, one on each component generator, is an actual physical device), 'where is the virtual governor?' and so on.
The answers must be that, in the sense of the questions, there is no thing which is the virtual governor, so 'it' isn't anywhere, and even the property of being virtually governed cannot be localised more closely than the system as a whole. (This is why engineers refer to it as a virtual governor.)" (in HOOKER, C. 1981 pg. 508-509)
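The statistical half of Hooker's point can be sketched with a toy simulation (illustrative numbers; real coupled generators also exchange energy dynamically, which simple averaging ignores): the collective frequency of many independently fluctuating generators varies far less than that of any single one, roughly as 1/√N.

```python
# Toy model of the 'virtual governor': N generators each fluctuate by up to
# +/-10% around 60 Hz; the network frequency is modeled as their average.
import random

random.seed(0)
BASE_HZ = 60.0
SPREAD = 0.10 * BASE_HZ   # +/-10% individual fluctuation

def network_samples(n_generators, n_samples=2000):
    """Collective (averaged) frequency, sampled n_samples times."""
    out = []
    for _ in range(n_samples):
        freqs = [BASE_HZ + random.uniform(-SPREAD, SPREAD)
                 for _ in range(n_generators)]
        out.append(sum(freqs) / n_generators)
    return out

def spread(samples):
    """Observed variability as the max-min range of the samples."""
    return max(samples) - min(samples)

single = spread(network_samples(1))      # one generator: ~12 Hz range
network = spread(network_samples(100))   # 100 generators: a small fraction
```

No component has the low-variability property; it is entailed only by the component laws plus the network structure, which is the quoted passage's point.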

II.18.

"Now, if we happen to know what would be the effect of each cause when acting separately from the other, we are often able to arrive deductively, or a priori, at a correct prediction of what will arise from their conjunct agency...This condition is realized in the extensive and important class of phenomena commonly called mechanical, namely, the phenomena of the communication of motion from one body to another...I shall give the name of the composition of causes to the principle which is exemplified in all cases in which the joint effect of several causes is identical with the sum of their separate effects...This principle, however, by no means prevails in all departments of the field of nature. The chemical combination of two substances produces, as is well known, a third substance, with properties different from those of either of the two substances separately or of both of them taken together. Not a trace of the properties of hydrogen or of oxygen is observable in those of their compound, water...This explains why mechanics is a deductive or demonstrative science, and chemistry not...If this be true of chemical combinations, it is still more true of those far more complex combinations of elements which constitute organized bodies, and in which those extraordinary new uniformities arise which are called the laws of life. All organized bodies are composed of parts similar to those composing inorganic nature, and which have even themselves existed in an inorganic state, but the phenomena of life, which result from the juxtaposition of those parts in a certain manner, bear no analogy to any of the effects which would be produced by the action of the component substances considered as mere physical agents.
To whatever degree we might imagine our knowledge of the properties of the several ingredients of a living body to be extended and perfected, it is certain that no mere summing up of the separate actions of those elements will ever amount to the action of the living body itself." (in MILL, J.S. 1950 pg. 204-206)

II.19.

"Corresponding to the procedure in physics, the attempt has been made in biology to resolve the phenomena of life into parts and processes which could be investigated in isolation. This procedure is essentially the same in the various branches of biology. The organism is considered to be an aggregate of cells as elementary life-units, its activities are resolved into functions of isolated organs and finally physico-chemical processes, its behaviour into reflexes, the material substratum of heredity into genes, acting independently of each other, phylogenetic evolution into single fortuitous mutations, and so on. As opposed to the analytical, summative and machine-theoretical viewpoints, organismic conceptions have evolved in all branches of modern biology which assert the necessity of investigating not only parts but also relations of organisation resulting from a dynamic interaction and manifesting themselves by the difference in behaviour of parts in isolation and in the whole organism...Gestalt psychology has demonstrated the existence and primacy of psychological entities, which are not a simple summation of elementary units, and are governed by dynamical laws...Behaviour as a whole and summative behaviour, unitary and elementalistic conceptions, are usually regarded as being antitheses. But it is frequently found that there is no opposition between them, but a gradual transition from behaviour as a whole to summative behaviour... The consideration of open systems is more general in comparison with that of closed systems; for it is always possible to come from open to closed systems by equating the transport terms to zero, but not vice versa. In physics, the theory of open systems leads to basically new, and partly revolutionary, consequences and principles. In biology it accounts, first, for many characteristics of living systems which have appeared to be in contradiction with the laws of physics, and have been considered hitherto as vitalistic features. 
Secondly, the consideration of organisms as open systems yields quantitative laws of basic biological phenomena, such as metabolism and growth, form development, excitation, etc.

In the case in which the variations in time disappear, systems become stationary...Open systems may, provided certain conditions are given, attain a stationary state...This is called steady state...Living systems are the most important examples of open systems and steady states... In most physical systems the final state is determined by the initial conditions...Vital phenomena show a different behaviour. Here, to a wide extent, the final state may be reached from different initial conditions and in different ways. Such behaviour we call equifinal." (in BERTALANFFY,L. 1950 pg.134-165)
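The steady-state and equifinality claims in the quotation can be illustrated with the simplest possible open system. The sketch below is our own illustration with arbitrarily chosen constants (`inflow`, `k` and the integration step are hypothetical, not drawn from Bertalanffy): a quantity with constant inflow and outflow proportional to its amount reaches the same steady state from any initial condition.

```python
# Illustrative sketch (hypothetical values) of equifinality in an open
# system: dx/dt = inflow - k*x has the steady state x* = inflow/k and
# reaches it from ANY initial condition -- unlike a closed system, whose
# final state is determined by its initial conditions.
inflow, k, dt = 2.0, 0.5, 0.01   # assumed constants, for illustration only

def final_state(x0, steps=5000):
    """Euler-integrate dx/dt = inflow - k*x from x0 and return the end value."""
    x = x0
    for _ in range(steps):
        x += (inflow - k * x) * dt
    return x

# Very different starting points converge on the same steady state x* = 4.0.
print(round(final_state(0.0), 6), round(final_state(10.0), 6))  # → 4.0 4.0
```

The same final state is reached from x0 = 0 and x0 = 10, which is exactly the equifinal behaviour the quotation contrasts with closed physical systems.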

II.20.

"If to say that quality Q (or relation R) is absolutely emergent is to say that the law correlating Q (or R) with quantitative physical conditions is a priori unpredictable, it follows that absolute emergence is relative to a system of semantic rules. In this respect the concept of absolute emergence turns out to be surprisingly analogous to the concepts of indefinability and indemonstrability.

Is the relativist, then, wrong in denying the existence of absolutely emergent qualities? He is wrong if he denies the semantic truism that some descriptive terms must be given meaning by ostensive definition if it is to be possible to give meaning to any descriptive terms by verbal definition. Perhaps he is right, on the other hand, in his claim that no descriptive term is, by some obscure kind of necessity, definable by ostension only." (in PAP,A. 1952 pg.310)

II.21.

"Gresham’s law says something about what will happen in monetary exchanges under certain conditions. I am willing to believe that physics is general in the sense that it implies that any event which consists of a monetary exchange (hence any event which falls under Gresham’s law) has a true description in the vocabulary of physics and in virtue of which it falls under the laws of physics. But banal considerations suggest that a physical description which covers all such events must be wildly disjunctive. Some monetary exchanges involve strings of wampum. Some involve dollar bills. And some involve signing one’s name to a check. What are the chances that a disjunction of physical predicates which covers all these events (i.e., a disjunctive predicate which can form the right-hand side of a bridge law of the form ‘x is a monetary exchange <=> (if and only if) ...’) expresses a physical kind? In particular, what are the chances that such a predicate forms the antecedent or consequent of some proper law of physics? The point is that monetary exchanges have interesting things in common; Gresham’s law, if true, says what one of these interesting things is. But what is interesting about monetary exchanges is surely not their commonalities under physical description. A kind like a monetary exchange could turn out to be coextensive with a physical kind; but if it did, that would be an accident on a cosmic scale.

In fact, reductionism claims not only that all kinds are coextensive with physical kinds, but that the coextensions are nomologically necessary: bridge laws are laws. So, if Gresham’s law is true, it follows that there is a (bridge) law of nature such that ‘x is a monetary exchange <=> x is P’ is true for every value of x, and such that P is a term for a physical kind. But surely there is no such law. If there were, then P would have to cover not only all the systems of monetary exchange that there are, but also the systems of monetary exchange that there could be; a law must succeed with the counterfactuals. What physical predicate is a candidate for P in ‘x is a nomologically possible monetary exchange iff Px’?" (in FODOR,J. 1975 pg.15-16)

II.22.

"For example, we can now characterize the respect in which reductionism is too strong a construal of the doctrine of the unity of science. If reductionism is true, then every kind is, or is coextensive with, a physical kind. (Every kind is a physical kind if bridge statements express nomologically necessary property identities, and every kind is coextensive with a physical kind if bridge statements express nomologically necessary event identities.)...I now want to suggest some reasons for believing that this consequence is intolerable. These are not supposed to be knock-down reasons; they couldn’t be, given that the question of whether reductionism is true is finally an empirical question. (The world could turn out to be such that every kind corresponds to a physical kind, just as it could turn out to be such that the property of being transported to a distance of less than three miles from the Eiffel Tower determines a kind in, say, hydrodynamics. It’s just that, as things stand, it seems very unlikely that the world will turn out to be either of these ways.)

The reason it is unlikely that every kind corresponds to a physical kind is just that (a) interesting generalizations (e.g., counterfactual supporting generalizations) can often be made about events whose physical descriptions have nothing in common; (b) it is often the case that whether the physical descriptions of the events subsumed by such generalizations have anything in common is, in an obvious sense, entirely irrelevant to the truth of the generalizations, or to their interestingness, or to their degree of confirmation, or, indeed, to any of their epistemologically important properties; and (c) the special sciences are very much in the business of formulating generalizations of this kind." (in FODOR,J. op.cit. pg.14-15).

II.23.

"A further consideration differentiates our models...(connectionist)...from those inspired by the computer metaphor - that is, the constraint that all the knowledge is in the connections. From conventional programmable computers we are used to thinking of knowledge as being stored in the state of certain units in the system. In our systems we assume that only very short-term storage can occur in the states of units; long-term storage takes place in the connections among units. Indeed it is the connections - or perhaps the rules for forming them through experience - that primarily differentiate one model from another. This is a profound difference between our approach and other more conventional approaches, for it means that almost all knowledge is implicit in the structure of the device that carries out the task rather than explicit in the state of units themselves. Knowledge is not directly accessible to interpretation by some separate processor, but it is built into the processor itself and directly determines the course of processing. It is acquired through tuning of connections as these are used in processing, rather than formulated and stored as declarative facts...A second class of constraints arises from our beliefs about the nature of human information processing considered at a more abstract, computational level of analysis. We see the kinds of phenomena we have been studying as products of a kind of constraint-satisfaction procedure in which a very large number of constraints act simultaneously to produce the behavior. Thus we see most behavior not as the product of a single, separate component of the cognitive system but as the product of a large set of interacting components, each mutually constraining the others and contributing in its own way to the globally observable behavior of the system." (RUMELHART,D. in POSNER,M.1989 pg.135-136)

II.24.

"As far as public visibility goes, the modern era in neural networks dates from the publication of this paper...The standard approach to a neural network is to propose a learning rule, usually based on synaptic modification, and then to show that a number of interesting effects arise from it. Hopfield starts by saying that the function of the nervous system is to develop a number of locally stable points in state space. Other points in state space flow into the stable points (called attractors). This allows a mechanism for correcting errors, since deviations from the stable points disappear. It can also reconstruct missing information, since the stable point will appropriately complete missing parts of an incomplete initial state vector...The dynamics of evolution of the system state follows a simple rule and is asynchronous. An element, chosen at random, looks at its inputs, and changes state, depending on whether or not the sum of its inputs is above or below threshold. It can be seen from the form of the energy term that a state change leads either to a decrease in energy or to the energy remaining the same. The updating rule is, therefore, an energy minimizing rule. Modifications of element activities continue until a stable state is reached, that is, an energy minimum is reached." (in ANDERSON,J. and ROSENFELD,E. 1989 pg.457-458)
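The asynchronous, energy-minimizing dynamics described in the quotation can be sketched in a few lines. This is a minimal illustration, not Hopfield's own code: the stored pattern, network size and variable names are arbitrary choices of ours, and the weights are formed with the standard outer-product (Hebbian) rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store one pattern with the outer-product (Hebbian) rule: symmetric
# weights with zero diagonal.
pattern = np.array([1, -1, 1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

def energy(s):
    # Hopfield's energy term, E = -1/2 * s^T W s.
    return -0.5 * s @ W @ s

# An incomplete/corrupted initial state vector: one unit flipped.
state = pattern.copy()
state[0] = -state[0]
e_start = energy(state)

# Asynchronous dynamics: a randomly chosen element looks at the sum of
# its inputs and changes state by thresholding at zero.  Each such
# update either lowers the energy or leaves it unchanged.
for _ in range(100):
    i = rng.integers(len(state))
    state[i] = 1 if W[i] @ state >= 0 else -1

# The state has flowed into the stored attractor (an energy minimum),
# completing the missing information.
print(np.array_equal(state, pattern), energy(state) <= e_start)
```

Running the sketch shows the corrupted state being pulled back to the stored pattern while the energy never increases, which is exactly the error-correcting attractor behaviour the quotation describes.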

II.25.

(a) "A stronger, but still epistemic, sense of indefiniteness of the class of instances is given by any case where the reducing theory cannot provide explicit analytic descriptions of its system behaviour. If the mechanical n-body system is unsolvable in this sense, most micro-mechanical reductions will be examples of this kind of indefiniteness. But these situations provide no case for refusing to reduce the behaviour of a system to the behaviour of its constituents; it is rather simply a case of the logical inaccessibility of certain kinds of description of that behaviour. Where analytic functions are required as reduction functions, this reduction must rest content with relatively more determinable property identities."

(b) "There is a sense in which explanation of the most specific events is limited, for the specific events which constitute the actual history of our world are not uniquely determined by general laws, in present theory, but by these in conjunction with unexplained initial conditions. In this sense, events are only explainable relatively to other events. This limitation perhaps explains the feeling that explanations are not truly general in the relevant sense, which in turn may underlie the feeling that there are only specific, as opposed to (not instances of) general identities. Not until a categorial theory is developed, i.e. one which determines a particular condition of the world as the only nomically possible one, will this limitation be escaped. This of course may never happen (it is not even obviously internally coherent). What should be clear however is that there is a matching limitation on both sides of the reduction and that it is irrelevant to the nature of the reductive identity relation." (in HOOKER,C. op.cit. pg.521)