
Chaos, Disorder, and Mixing:

A New “Fin-de-Siècle” Image of Science?


Amy Dahan Dalmedico.*

["Chaos, disorder, and mixing: a new fin-de-siècle image of science?" in M. Norton Wise (ed.), Growing Explanations: Historical Perspectives on Recent Science (Durham and London: Duke University Press, 2004) 67-94.]

0. INTRODUCTION
Chaos and the sciences of disorder or mixing concern particular disciplinary sectors (e.g., mathematics, fluid mechanics, physics, engineering science, etc.) but they also form disciplinary crossroads, which symbolize a new mode of conceiving and practicing science. Moreover, they go hand in hand with a discourse proclaiming a new episteme or a new paradigm—the idea of a third scientific revolution has even been mentioned. To characterize this new area of science is a delicate matter since it involves both specific scientific practices and theories and also diffuse representations of them among audiences more remote from practice. Such a representation of science is always partial; even when it has a hegemonic ambition, it competes with other representations, built by other scientific subgroups, who regard it as partly ideological. But since my aim is to reflect on a certain “air du temps” in contemporary science rather than to establish a Kuhnian paradigm or a Foucauldian episteme, I will deliberately use somewhat vague terms such as images and representations of science.
A few historical remarks on representations seen as characteristic of an “air du temps” may be useful. Our historiographical tradition has constructed an image of 17th C. science as being about a world ruled by clockwork, whose harmony was mathematical. Nature was an automaton exhibiting repetitive phenomena ordained by a God who was likened to an architect (or a clockmaker). Several recent historical studies, however, have shown that this representation did not exclude others coexisting with it, even within one individual (Newton for example). Moreover, this representation differed significantly between different individuals and groups, for example, between continental mechanical philosophers (Descartes, Pascal, or Huygens) and English natural philosophers (Boyle and Newton). Perforce, it varied considerably among groups with more remote interests and practices: there was little in common between, on the one hand, the astronomers of the Academy of Sciences in Paris and the Royal Society in London, and, on the other, Baconian naturalists or the practitioners who experimented with thermometers and barometers, or tested the stiffness of wood and glass. In spite of this heterogeneous variety, however, the mechanical image retains some validity; it no doubt conveys something essential about the 17th century.
As a second example, consider the common representation of Laplacian science at the end of the 18th century. In a recent study, I have shown how Laplace’s vision has been demonized, and turned into a Manichean opposition between a Laplacian totalitarian universe and the freedom authorized by contemporary theories of chaos. In fact, Laplace merely presupposed in principle the possibility of total prescience and complete predictability, but he was perfectly conscious that their practical possibility was out of reach. For this precise reason, he judged it necessary to forge the tools of probability theory, allowing for a statistical description of some processes. We can, nevertheless, claim once again that Laplacian determinism expressed a “general conviction of the time,” i.e. a belief in the causal structure of the world, a faith in the mathematical intelligibility of laws which the scientist has to discover or to approach as nearly as possible.
In both of these cases, we meet constructed, but incomplete, images expressing some important part of the air du temps. Let me now summarize some key elements that characterize the representation of contemporary science:
1 the logic of scientific reasoning, i.e. the relation between causes and effects, has changed: in brief, "the world is nonlinear," and a small detail can bring forth a catastrophe;
2 complexity, formerly believed to be decomposable into elementary units, can in fact be grasped only by a global approach;
3 sand heaps and water droplets reveal the profound physical nature of matter’s behavior, which cannot be revealed by atomic physics;
4 reductionism has reached its limits and its end has been announced for the near future;
5 the scientific method associated with the names of Galileo and Newton and once believed immutable has now been supplanted by an historical point of view which has become dominant in the geosciences, and in the life and social sciences.
Finally, in the representation following from the above theses, the hierarchy of the disciplines has been upset. In particular, mathematics and theoretical physics have lost their dominant position, and the biological and earth sciences have been promoted into the new Pantheon of science.
Within the confines of this article, it will not be possible to discuss in detail either the many texts that have adopted these theses (and sometimes loosely extended them to economic, political, or social domains) or the history of their development over the past 20 or 30 years (even 50 years in the case of cybernetics). Instead, I will review a few recurrent themes constituting this new representation of science and, for each, examine what is at stake—in general, to show that everything is more complicated and subtle than is often supposed. Since the themes are intimately linked, to separate them as if their evolution were independent from one another is rather artificial, but it will enhance the intelligibility of the text. Finally, concerning intelligibility, we should recognize that the subtlety of the themes lies in the fact that most of the expressions involved, such as determinism, predictability, randomness, and order, assume a precise meaning only in the framework of a mathematical formalism. But in this remark, far from aiming at discouraging unavoidable, and probably desirable, non-formal discussions, I intend to underscore the difficulties of conducting a wide debate on the issues without exhibiting an excessive scientistic authoritarianism.
1. ORDER AND DISORDER
When people have talked about the science of disorder (revenge of the God Chaos) it has been to underscore an opposition to classical science, construed as the discovery of order and regularities in nature and the restitution of the world’s transparent intelligibility beyond confused, opaque appearances. In this opposition, two aspects may be distinguished, which I will treat in succession: (1) nonlinearity and the butterfly effect, and (2) determinism and randomness.
Nonlinearity and the Butterfly Effect
Associated with the classical conception of science was the principle of proportionality of causes and effects, which played a paradigmatic role for (linear) causality. In several 17th-century propositions, the idea was explicit: e.g., Newton’s law of cooling, Hooke’s law for springs, and Mariotte’s (and Boyle’s) pressure-volume law. In mechanics, Varignon invoked this principle as an argument in favor of a law of proportionality between force and velocity, on which he wished to rely. In the 18th century, the same principle underlay the controversy about the foundations of mechanics, involving Euler, d’Alembert, and Maupertuis, among others. At the turn of the 20th century, although scientists were by then beyond the proportionality of causes and effects, the linear harmonic oscillator remained an omnipresent model in the domain of electromagnetism. And the founders of quantum mechanics came back to it whenever they faced important difficulties, as evidenced by the Bohr-Kramers-Slater theory in 1924.
Now one often hears, “The world is nonlinear.” This affirmation is older than is often said. As early as the 1930s, for Soviet mathematicians and physicists at Moscow, Kiev, and Gorki (Mandelstam, Andronov, Bogoliubov, Krylov, Pontryagin, etc.), the world was already nonlinear, and what they then called “self-oscillations” (or self-sustained oscillations) played the role of a universal model replacing the harmonic oscillator. More generally, conceptual tools popularized by the “sciences of disorder,” whether mathematical (dynamical systems theory), or information-theoretical (Shannon and Kolmogorov), are rather old.
What is relatively recent and has stamped the development we are witnessing is the following:
1 a new credo and interest, not so much for disorder, but rather for a renewed appreciation of very particular, subtle combinations of order and disorder, which lends specificity to the science of chaos.
2 a different technological environment, which concerns both tools and social demands. New tools have appeared: above all the computer (an omnipresent tool for calculation, algorithms, images, construction of attractors, simulations, etc.) but also lasers and ultrasound devices to probe matter, its defects, irregularities, and asymmetries, and to exhibit transitions to turbulence (for example in the experiments of Gollub and Swinney in the 1970s). At the same time there has been a strong social demand for new technologies and materials, and for expertise on problems of vibration and stability, transmission, signal theory, etc.
Ever since the 19th century, disorder and its quantification via entropy have attracted attention, but this was statistical disorder, which, using the law of large numbers, can produce average values that obey simple laws. Although results obtained by Poincaré and Birkhoff, among others, showed that this opposition between order and disorder was not so clear-cut, until the late 1950s only statistical methods seemed adequate to deal with “disordered” phenomena. These were the methods privileged by physics, meteorology, hydrodynamics, and mechanics. A significant example of a departure is the numerical experimentation begun at Los Alamos in the 1940s and taken up again at the beginning of the 1950s by Fermi, Pasta, and Ulam with their chain of nonlinearly coupled oscillators. While ergodicity (which was expected) would have justified a statistical treatment, the semi-order revealed by the experiment resisted any mathematical formulation and remained very hard to interpret.
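To give a concrete feel for that experiment, here is a minimal sketch (a reconstruction in Python, not the original MANIAC computation) of a Fermi-Pasta-Ulam “alpha” chain; the chain length, nonlinearity, time step, and initial amplitude are illustrative choices, and the printout simply tracks how much energy sits in the first few normal modes.
```python
import numpy as np

# A sketch of an FPU "alpha" chain (illustrative parameters): N oscillators
# with fixed ends, harmonic coupling plus a weak quadratic nonlinearity in the
# force, started in the lowest normal mode. The printout tracks the harmonic
# energy of the first four normal modes; on these time scales the energy stays
# among the lowest modes instead of spreading over all of them.
N, alpha, dt, steps = 32, 0.25, 0.05, 160000

i = np.arange(1, N + 1)
k = np.arange(1, N + 1)
modes = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * np.outer(k, i) / (N + 1))
omega = 2.0 * np.sin(np.pi * k / (2.0 * (N + 1)))

def accel(u):
    up = np.concatenate(([0.0], u, [0.0]))              # fixed ends
    d_right, d_left = up[2:] - up[1:-1], up[1:-1] - up[:-2]
    return (d_right - d_left) + alpha * (d_right ** 2 - d_left ** 2)

u = np.sin(np.pi * i / (N + 1))                          # all energy in mode 1
v = np.zeros(N)
a = accel(u)
for n in range(steps):
    v_half = v + 0.5 * dt * a                            # velocity-Verlet step
    u = u + dt * v_half
    a = accel(u)
    v = v_half + 0.5 * dt * a
    if n % 20000 == 0:
        A, Adot = modes @ u, modes @ v
        E = 0.5 * (Adot ** 2 + (omega * A) ** 2)
        print(f"t = {n * dt:7.0f}   energy of modes 1-4: {np.round(E[:4], 4)}")
```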
The same happened when Kolmogorov addressed the International Congress of Mathematicians in 1954 in Amsterdam and introduced what would soon be famously known as the Kolmogorov-Arnold-Moser (KAM) theorem. At least for mechanics and probably elsewhere too, this theorem contributed to a change in the conception of the order/disorder relation. The theorem showed that order was much more powerful than usually assumed, that it resisted perturbations, and that disorder, conceived in terms of ergodicity, contradicted the stability observed in several physical systems. In his work on celestial mechanics, Poincaré had exhibited the homoclinic tangle, a mesh of intertwined curves picturing the “chaotic” nature of a multitude of possible solutions and the infinity of allowed scenarios. Afterwards, mathematicians concluded that no solution to the problem of perturbed Hamiltonian systems was a smooth curve, or that stable orbits did not exist. But this was shown to be erroneous. When perturbations are sufficiently small, the KAM theorem shows that a majority of orbits are stable; although nonperiodic, they never stray far from the periodic orbits of the unperturbed system. Such orbits are said to be quasiperiodic. Others are chaotic and unpredictable. Still others are caught in islets of stability in an ocean of chaos. Thus it could be said that the “KAM theorem was order without law”! For that matter, the methods of the KAM theorem have become “paradigmatic,” and are today applied in contexts far from their origins. The succession of the names given to this domain is indicative of the progression in successive understandings of order, disorder, and law. While at first the terms disorder and chaos were used almost interchangeably, by the end of the 1970s the habit developed of associating order and chaos to designate the domain. In 1986, the publication of Bergé, Pomeau, and Vidal’s L’Ordre dans le chaos exhibited, at least in France and in the scientific community, the desire of specialists to distance themselves from confusions about the meaning of chaos evident among a few intellectuals and writers. Henceforth, the terminology deterministic chaos was established.
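The coexistence of confined quasiperiodic orbits and chaotic ones can be conveyed with the Chirikov standard map, a textbook stand-in for a perturbed Hamiltonian system (an illustrative choice of mine, not an example discussed in the text); the coupling values and orbit counts below are arbitrary.
```python
import math
import random

# Chirikov standard map: p' = p + K*sin(theta), theta' = theta + p' (mod 2*pi).
# Below the critical coupling (K ~ 0.97) surviving KAM curves confine the
# momentum of every orbit; above it they break up and the momentum wanders
# without bound -- "order resists perturbation" until it does not.
def max_momentum_spread(K, steps=5000, n_orbits=20, seed=1):
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(n_orbits):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        p = rng.uniform(0.0, 2.0 * math.pi)
        lo = hi = p
        for _ in range(steps):
            p = p + K * math.sin(theta)
            theta = (theta + p) % (2.0 * math.pi)
            lo, hi = min(lo, p), max(hi, p)
        worst = max(worst, hi - lo)
    return worst

for K in (0.3, 0.8, 1.2, 2.5):
    print(f"K = {K:3.1f}   largest momentum spread over 20 orbits = "
          f"{max_momentum_spread(K):8.2f}")
```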
Concerning the question of causality, in the 1950s the theme of feedback and circular causality became very important, especially as a result of Wiener’s work (much preoccupied with holistic conceptions, where interactions among a whole set of elements were present) and the first wave of cybernetics. The first conference on cybernetics was titled: “Conference on Circular Causal and Feed-Back Mechanisms in Biological and Social Systems.” At that time, simple feedback action was already familiar to control engineers and was later developed in automatic control, but, with cybernetics, more was envisioned, namely, a retroactive action on the very model of action.
From this point of view, the work of meteorologist Edward Lorenz in the 1960s constituted a crucial moment. If indeed sensitivity to initial conditions and the existence of systems whose trajectories diverge exponentially were already known to Poincaré and Hadamard, the impact of Lorenz’s work would be crucial on two counts. On the one hand, Lorenz discovered that only three variables sufficed to give rise to chaotic behavior: a nonlinear system with three degrees of freedom—a very simple formal system indeed—could exhibit very complex dynamical behavior. This result would profoundly upset the usual understanding of the relation between the simple and the complex. On the other hand, the intervention of the computer in Lorenz’s work is in itself crucial at two levels: (1) the sensitivity to initial conditions—later called the butterfly effect—was revealed through numerical instability; (2) on his computer screen Lorenz also exhibited a surprising image of the “Lorenz attractor”, which he succeeded in representing graphically as a two-dimensional projection. While Poincaré had long before, in a few complicated sentences, imagined and described the homoclinic tangle, Lorenz could explain his construction by means of iterative processes and, moreover, could visualize it.
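A minimal numerical sketch of both points: the classical Lorenz equations, integrated here with a simple Runge-Kutta scheme (the step size, duration, and the 10⁻⁸ initial offset are illustrative choices), show two nearly identical initial conditions separating until their distance is of the order of the attractor itself.
```python
import numpy as np

# The Lorenz system with its classic parameters, integrated by a hand-rolled
# fourth-order Runge-Kutta scheme; two trajectories start almost identically.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz(s):
    x, y, z = s
    return np.array([SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z])

def rk4_step(f, s, dt):
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

dt, steps = 0.01, 4000
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])       # almost the same initial condition

for n in range(steps):
    a, b = rk4_step(lorenz, a, dt), rk4_step(lorenz, b, dt)
    if n % 500 == 0:
        print(f"t = {n * dt:5.1f}   separation = {np.linalg.norm(a - b):.3e}")
```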
Some fifteen years—roughly from 1963 to 1976—would be needed for these results to be assimilated by various groups of scientists, from meteorologists to mathematicians, from astronomers to physicists and population biologists. Through many kinds of numerical explorations, each group could then reclaim the domain as its own: logistic equations, Hénon’s mapping, Metropolis, Stein, and Stein’s iterations, Feigenbaum’s bifurcations, etc. Initially, this exploration was not performed on computers, but above all on simple calculators, which were much more common than computers in the late 1970s. Interestingly, in their attempt to counter the craze for chaos by underscoring the rather old age of most fundamental results, pure mathematicians specializing in the abstract topological methods of dynamical systems and focused on their great classification programs often missed this striking new computational dimension.
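That kind of calculator experiment is easy to reproduce. The sketch below iterates the logistic map for a few parameter values (my choices) and prints the long-run cycle, making the period-doubling route from a fixed point to chaos visible in a few lines.
```python
# Long-run behaviour of the logistic map x -> r x (1 - x) for a few values of
# the control parameter r (illustrative choices): a fixed point, then cycles
# of period 2, 4, 8, then chaos.
def attractor_points(r, x0=0.2, transient=1000, keep=32):
    x = x0
    for _ in range(transient):
        x = r * x * (1.0 - x)          # let the transient die out
    pts = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        pts.append(round(x, 4))
    return sorted(set(pts))

for r in (2.8, 3.2, 3.5, 3.55, 3.9):
    print(f"r = {r:4.2f}   long-run values: {attractor_points(r)}")
```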
This explains the great a posteriori theoretical importance of the demonstration of the “shadowing lemma.” When the trajectory of a dynamical system is computed for a given initial position, numerical instabilities make it probable that this trajectory is false due to the noise resulting from an accumulation of rounding errors. The shadowing lemma, however, ensures that there exists a “true” trajectory—i.e. an “exact” trajectory corresponding to an initial position close to the original—which follows the computed trajectory with any desired degree of precision. This means that the accumulation of rounding errors is counterbalanced by a simple translation of the initial condition. Thus, the property of structural stability legitimates the computation of trajectories for dynamical systems: computers indeed plot “real” trajectories. For some fifteen years, however, a few mathematicians had done without this justification in order to explore chaotic systems numerically. One may add that the mathematical properties of the Lorenz attractor were only established in 1998, according to the traditional requirements for rigor of the discipline. This underscores the specific, experimental character of the science of chaos in which mathematics partly intervenes in an a posteriori manner, that is, as a justification for methods, results, or processes explored by means of numerical simulations on the computer.
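The idea can be illustrated on the simplest expanding map, x → 2x (mod 1)—a toy case chosen here because the shadowing orbit can be constructed explicitly by backward iteration; this is only an illustration under that special assumption, not the general lemma.
```python
import random

# A noisy "computed" pseudo-orbit of the doubling map is polluted by a small
# error at every step (standing in for round-off). Pulling the final point back
# through the inverse branches, always choosing the preimage closest to the
# pseudo-orbit, yields an exact initial condition whose true orbit shadows the
# noisy one to within roughly the per-step error.
rng = random.Random(0)
EPS, N = 1e-4, 30

def doubling(x):
    return (2.0 * x) % 1.0

def circle_dist(a, b):
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

# noisy pseudo-orbit
p = [rng.random()]
for _ in range(N):
    p.append((doubling(p[-1]) + rng.uniform(-EPS, EPS)) % 1.0)

# backward construction of the shadowing orbit (errors halve at each pull-back)
y = [0.0] * (N + 1)
y[N] = p[N]
for n in range(N - 1, -1, -1):
    pre_a, pre_b = y[n + 1] / 2.0, (y[n + 1] + 1.0) / 2.0
    y[n] = pre_a if circle_dist(pre_a, p[n]) < circle_dist(pre_b, p[n]) else pre_b

# the exact orbit of y[0] stays close to the pseudo-orbit at every step ...
orbit = [y[0]]
for _ in range(N):
    orbit.append(doubling(orbit[-1]))
print("shadowing distance:", max(circle_dist(a, b) for a, b in zip(orbit, p)))

# ... whereas two exact orbits whose starting points differ by EPS separate fast
x1, x2, n = p[0], (p[0] + EPS) % 1.0, 0
while circle_dist(x1, x2) < 0.25:
    x1, x2, n = doubling(x1), doubling(x2), n + 1
print("steps for an initial difference of EPS to reach 0.25:", n)
```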
The New Articulation of Randomness/Non-randomness and Speculations about Determinism
As a starting point, it must be recognized that, at least in France, the status of randomness and probability theory has considerably changed in the course of the last decades. Until the 1970s, probability theory sat at the bottom of the hierarchy of mathematical branches. Outside pure mathematics, it was not even judged useful enough to be taught in preparatory classes. Even though the situation was not the same everywhere and even though statistics and probability received considerable development in the 20th century, the field often remained outside the official boundary inscribed in the institutions of the mathematical community. Only Kolmogorov’s work in the Soviet Union, as early as the 1930s, unfolded over two domains: mathematics and mechanics on the one hand; and probability and information theories on the other, with his supreme objective being the understanding of the essence of the concepts of order and chaos. In its ambition—but not in spirit, which remained radically different—this monumental program can only be compared to Hilbert’s and to the Bourbakist movement.
What is characteristic of this domain is a blurring of borders—between order and disorder, as we saw above, but also between randomness and non-randomness, between what is deterministic and what is not. In an experimental system, even one with a small number of degrees of freedom, such as Edward Lorenz’s model, when the parameters are unknown, one may be facing either a chaotic dynamical system for some values of the parameters, or a stochastic perturbation, a noise, for other values of the parameters. A choice is allowed between two models, one that is deterministic and the other stochastic. If one assumes a priori that the system is deterministic, measurements themselves must enable the observer to distinguish deterministic systems from a whole set of systems, not all of which may be deterministic. Moreover, even if deterministic, a system about which information is partial may appear stochastic. The baker’s transformation, whose initial condition is “hidden,” provides an example of this situation. Lastly, the fact that, following Kolmogorov’s work in particular, an algorithmic definition of randomness is now available has appreciably changed the understanding of the notion of randomness and its mathematical handling through processes generated by computers (random sequences of numbers, random walks, etc.). To say that the trajectory of a system is random amounts to saying that it cannot be defined by an algorithm simpler than one that runs through the trajectory itself, and that information concerning this trajectory cannot be compressed (by means of a formula, a process, etc.).
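A small sketch of the hidden-initial-condition situation: the stretching coordinate of the baker's transformation behaves like the shift x → 2x (mod 1), and reading off which half of the interval the point visits reproduces the binary digits of the seed; to an observer who never sees the initial condition, the output is indistinguishable from coin flips. The seed size and values below are arbitrary choices for the illustration.
```python
import random

# The doubling/baker shift acting on a seed the observer never sees; each
# output bit reports which half of [0, 1) the hidden point currently occupies.
def hidden_coin_flips(n, seed_bits=256):
    secret = random.Random(12345)                # stands in for the hidden x0
    x = secret.getrandbits(seed_bits)            # x0 represented over 2**seed_bits
    one = 1 << seed_bits
    flips = []
    for _ in range(n):
        flips.append(1 if 2 * x >= one else 0)   # upper or lower half of [0, 1)?
        x = (2 * x) % one                        # exact shift step
    return flips

bits = hidden_coin_flips(64)
print("".join(map(str, bits)))
print("fraction of ones:", sum(bits) / len(bits))
```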
In a chaotic dynamical system, after an evolution that is long with respect to its Lyapunov time—which locally measures the exponential divergence of trajectories—there is a loss of memory about the initial state. Consequently, the trajectory no longer seems a pertinent idealization. One needs a statistical description of a probabilistic type that is incompatible with the notion of a trajectory. This is what Sinai has termed the “randomness of the non-random,” while showing that a true dialectics is established between the instability of a chaotic dynamical system and its structural stability (in this case, one is on an attractor that exhibits a large degree of structural stability). Sensitivity to initial conditions, memory loss, and algorithmic complexity become organically linked with each other and refer back to local instability. This is a leitmotiv of the work of Prigogine especially, who claims that the dynamical description of chaotic systems should be made in terms of probability distributions. While for trajectories the Lyapunov time is indeed an element of instability, in terms of probability distributions it becomes an element of stability: the shorter the Lyapunov time (that is, the larger the Lyapunov exponent), the faster the damping and convergence to uniformity. In this view, irreversibility is due to the very formulation of the dynamics of unstable dynamical systems, and one has: instability (or chaos) ⇒ probability distributions ⇒ irreversibility. For Prigogine, probability distributions and irreversibility are intimately linked with one another and therefore lead to the introduction of the arrow of time.
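For a concrete sense of the Lyapunov time, the sketch below estimates the Lyapunov exponent of the logistic map as the orbit average of log|f′(x)| (parameter values and orbit lengths are illustrative choices); a negative exponent signals an ordered regime, while a positive one signals chaos, and its inverse sets the horizon over which memory of the initial state is lost.
```python
import math

# Lyapunov exponent of the logistic map x -> r x (1 - x), estimated as the
# long-run average of log|f'(x)| along an orbit; for r = 4 the exact value
# is log 2. Its inverse is the "Lyapunov time" discussed above.
def lyapunov_exponent(r, x0=0.2, transient=1000, n=100000):
    x = x0
    for _ in range(transient):
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))   # log|f'(x)| at the current point
        x = r * x * (1.0 - x)
    return total / n

for r in (3.5, 3.9, 4.0):
    lam = lyapunov_exponent(r)
    horizon = 1.0 / lam if lam > 0 else float("inf")
    print(f"r = {r:3.1f}   lambda = {lam:+.3f}   Lyapunov time ~ {horizon:5.1f} steps")
```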
In the 1980s, deterministic chaos and this new dialectics between randomness and non-randomness brought back fantastic speculations about determinism. The need to revert to statistical methods for the study of deterministic chaotic systems raised a profound question: did the description in terms of probability distributions have an intrinsic character, or did it derive principally from our ignorance? Symbolized by the figures of René Thom and Ilya Prigogine, a polemic arose (now slightly losing steam), which showed that the problem belonged to metaphysics: nature’s global, ontological determinism is neither falsifiable nor provable, but that does not imply that the philosophical options have no consequence for heuristic choices. On this matter, Sinai’s demonstration (at the end of the 1960s) of the mixing character of plane billiards (i.e. the stochastic character of a billiard ball’s behavior) bestowed legitimacy on the application of deterministic chaos to every conception of the physical universe. A few specialists (Ruelle, Ekeland, etc.) even believed they could spot, in deterministic chaos, a scientific, existential virtue for reconciling human free will and determinism. Again, one is facing the tension mentioned above between the precision implied by the use of specific terminology and extreme idealizations.
2. IS EVERYTHING CHAOS?
By raising this question, we also touch upon the legitimacy of analogies derived from chaos theory and their audacious (or aggressive) transfer to other fields. As understood by practitioners, chaos theory is restricted to a specific framework: the theory of deterministic (generally nonlinear) dynamical systems whose behaviors are studied in phase space with particular attention to final states and their global aspects (hence the attempts at classifying final states and attractors). A priori, the theory does not deal with anything outside this frame. This offers the advantage of abstraction, since the theory gives no information about phase-space coordinates, nor any interpretation of them. As a consequence, the universality of the theory is greatly enhanced. But this hardly means that everything is chaos, nor that the theory is everywhere applicable.
The quantitative study of chaos in a system requires the quantitative understanding of its dynamics; if time-evolution equations are well known, they can be integrated on the computer, which is the case for solar-system astronomy, hydrodynamics, and even meteorology. In the case of oscillating chemical reactions, evolution equations are not known, but long, precise recordings of experimental data, called time series, can be obtained and, since the dynamics are rather simple, dynamical equations can be reconstructed with their help. But this is far from being the case in other areas. There is indeed an important problem: when one observes a physical, econometric, or biological system whose differential equations are not known and only measured information is available, what can be done? Sometimes even the number of variables in the system is unknown and only one is measured.
Practitioners often try to reconstruct the attractor of the system, and the question is which observables are accessible through such reconstructions. It is regularly emphasized that possible artifacts should be avoided by checking the independence of results from the choices made during a reconstruction. Here, the property of ergodicity becomes decisive (but very hard to show rigorously), since an average over a large number of initial conditions would then essentially reproduce the same image with the same density as a single typical orbit observed over a long period. Several observables are sought. The most widespread is (roughly speaking) the dimension of the attractor, i.e. the number of coordinates necessary to locate a point on the attractor (the Grassberger-Procaccia algorithm then proves very useful), or else the Kolmogorov-Sinai entropy, i.e. the number of information bits gained by extending the duration of the observation by one unit of time. There also exist algorithmic procedures to obtain the Lyapunov exponent (which measures local instability), but with the express condition that the properties of the system remain unchanged during the course of the measurement. Hence there are great difficulties in applying chaos theory to the biological or social sciences. In these domains, not only is it difficult to obtain long time series with good precision, and not only is the dynamics generally quite complicated, but often the system “learns” over time and its nature changes. Thus formulated, the problem no longer belongs to dynamical systems theory. Ruelle writes: “For such systems (ecology, econometrics, the social sciences), the impact of chaos stays at the level of scientific philosophy rather than at that of quantitative science.”
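A bare-bones version of the reconstruction recipe just described, under simplifying assumptions of my own (a clean, noise-free series from the Hénon map standing in for measured data): delay-coordinate vectors are built from a single variable, and the correlation dimension is read off from the scaling of the Grassberger-Procaccia correlation sum C(r) ~ r^D.
```python
import numpy as np

# Delay embedding plus a two-radius estimate of the correlation dimension.
# The Henon attractor has dimension of roughly 1.2; a real time series would
# add noise, drift, and all the caveats listed in the text.
def henon_x_series(n, a=1.4, b=0.3):
    x, y, out = 0.1, 0.1, []
    for _ in range(n + 1000):
        x, y = 1.0 - a * x * x + b * y, x
        out.append(x)
    return np.array(out[1000:])                      # discard the transient

def correlation_dimension(series, dim=2, delay=1, r_small=0.01, r_large=0.1):
    m = len(series) - (dim - 1) * delay
    emb = np.column_stack([series[j * delay: j * delay + m] for j in range(dim)])
    diff = emb[:, None, :] - emb[None, :, :]         # all pairwise differences
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    pairs = dist[np.triu_indices(m, k=1)]
    c_small = np.mean(pairs < r_small)               # correlation sums C(r)
    c_large = np.mean(pairs < r_large)
    return (np.log(c_large) - np.log(c_small)) / (np.log(r_large) - np.log(r_small))

series = henon_x_series(1500)
print("estimated correlation dimension:",
      round(float(correlation_dimension(series)), 2))
```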
Controlling Chaos
In engineering science, one starts from the observation of a finite series of discrete states. To characterize a chaotic behavior and to distinguish it from noisy effects then constitutes a difficult, fundamental problem. Large classes of discrete dynamical systems give rise to models given in terms of non-one-to-one functions: this is the case in control engineering, where systems use sampled data, pulsed modulations, neural networks, etc.; in nonlinear electronics and radiophysics, where various feedback devices are used; and in numerical simulations, signal theory, and other areas. As P. Bergé and M. Dubois wrote, the notion of chaos is inseparable from the phenomenon of transition to chaos; various “roads” to chaos and several “scenarios”—by period doubling, by intermittency, or by quasiperiodicity—were studied, exhibited, and explored as early as the 1970s.
Despite those who lamented a technical, utilitarian orientation in contemporary science and dreamed of having found their revenge in a mainly qualitative, morphological mode of understanding with no possibility of action on reality, engineering science today seeks to control and master chaos. The extreme sensitivity of chaotic systems to subtle perturbations can be used both to stabilize irregular dynamical behaviors (stabilization of lasers or of oscillatory states) and to direct chaotic evolution towards a desired state. The application of a small, correctly chosen perturbation to an available parameter in a system can enhance its flexibility and performance. Various applications have been studied in signal theory (synchronization of chaotic signals), in cryptography, and even in machine vision (with the help of a method of pattern recognition inspired by Lyapunov’s theory). In brief, applications in the engineering sciences are highly promising.
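A deliberately minimal, OGY-flavored sketch of such control (a toy example of mine, not one from the text): the logistic map at r = 3.9 is chaotic, yet tiny parameter corrections, applied only when the orbit happens to wander near the unstable fixed point, suffice to pin the dynamics onto it.
```python
# Stabilizing the unstable fixed point of the chaotic logistic map with small
# parameter tweaks (illustrative thresholds and run length).
r0 = 3.9
xstar = 1.0 - 1.0 / r0            # unstable fixed point of x -> r x (1 - x)
slope = 2.0 - r0                  # f'(xstar) = r (1 - 2 xstar) = 2 - r
dfdr = xstar * (1.0 - xstar)      # sensitivity of the map to the parameter r

x = 0.3
for n in range(2000):
    dr = 0.0
    if abs(x - xstar) < 0.005:                 # wait until the orbit comes close
        dr = -slope * (x - xstar) / dfdr       # linearized correction, |dr| < 0.05
    x = (r0 + dr) * x * (1.0 - x)
    if n % 200 == 0:
        print(f"n = {n:4d}   x = {x:.6f}   parameter tweak = {dr:+.6f}")
```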
Chaos and the Social Sciences
For the social sciences, consider briefly the case of economics. Here chaos broadens the spectrum of available models mainly by introducing models with internal mechanisms explaining change in activity cycles (irregularities, disorders, etc.), which earlier were associated with external random shocks. Having expressed the classic theory of general equilibrium in terms of equilibrium cycles, neoclassical econometric theory integrated motion, first under the rather simple form of growth, then through fluctuation mechanisms associated with external shocks and perturbations. For economists defending evolutionist approaches there is a need to go beyond this fundamentally static, ahistorical viewpoint—in the sense that it produces an essentially stable view of the economy with only exogenous stochastic perturbations. In this view, changes, if they occur at all, would happen according to universal mechanisms and within structurally invariable frameworks. According to common epistemologies, history is indeed dialectically opposed to science, the idiographic mode of the former being incompatible with the nomological ambition of the latter. In a way, the structuralist episode has shown how the social sciences internalized this principle, according to which scientificity is constructed by excluding history.
For the evolutionist movement, the essential idea is to use dynamical systems theory—which according to S. Smale’s definition truly constitutes the “mathematics of time”—in order to introduce a time dimension in its starkest possible form: history. In particular, the concept of bifurcation could be used to introduce structural changes and transformations of capitalism in a particularly radical form, that is, structural historicity, which more or less corresponds to the regulationist position (insisting on structural transformations of capitalism). I shall simply list here various means that have been mobilized to introduce history into economic dynamics: multiple equilibrium situations with different basins of attraction, path-dependence, the concept of hysteresis (by analogy with physics), the role of historical contingency, modeling with positive feedback (Polya’s urns), bifurcations (passages through critical points) as the phenomenology associated with great crises, the articulation of dynamics with different time scales, etc. This effort in economics does not seek to free itself from the quantitative substrate of chaos theory in order to retain only a qualitative rhetoric; on the contrary, following a long tradition of mathematical modeling in economics, it mobilizes sophisticated, properly mathematical results from dynamical systems theory.
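Among the mechanisms listed above, Polya's urn is the easiest to exhibit directly: each draw reinforces the colour drawn, and the long-run share "locks in" to a value that depends on the accidents of early history, differing from one run to the next (the run lengths and seeds below are arbitrary choices).
```python
import random

# Polya's urn, the standard toy model of positive feedback and path dependence:
# drawing a colour adds one more ball of that colour, so early luck shapes the
# limiting share, which varies from run to run.
def polya_share(draws, seed):
    rng = random.Random(seed)
    red, blue = 1, 1
    for _ in range(draws):
        if rng.random() < red / (red + blue):
            red += 1                   # success breeds success
        else:
            blue += 1
    return red / (red + blue)

print("long-run red shares over six runs:",
      [round(polya_share(100000, seed), 3) for seed in range(6)])
```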
The transposition of chaos to the social and political domains has also gone on freely. In these cases, however, it has mostly been confined to a metaphorical point of view, even in the discourses of some technical specialists. Starting from Sinai’s billiards or the butterfly effect, philosophers and politicians have speculated on the profound difficulties of practical rationality or the conflict between rationality and democracy. Concerning the conduct of complex, evolving, and potentially chaotic systems, David Ruelle has, with a slight irony, suggested that politicians’ random action might be for the best. But is it possible to accept this transposition of modern society taken as a whole into a chaotic dynamical system that should be directed? Whatever the answer, these tropes of discourse themselves are indicative of a contemporary air du temps.
3. COMPLEXITY, REDUCTIONISM, AND MIXING
As noted above, the discovery of the Lorenz system and the recognition of its stakes challenged the traditional understanding of the relation between simplicity and complexity. One can find complexity in simple systems and simplicity in complex ones. Richard Feynman’s classic image—the world as a gigantic chess game in which each move taken in isolation is simple and in which complexity (and irreversibility) only comes from the very large number of elements in the game—is no longer valid. While there was, formerly, a tendency to associate complexity with an extrinsic, accidental character linked to a multiplicity of causes, complexity can now appear as an intrinsic property of a system.
The Erosion of the Reductionist Program
Underlying the development of physics up until the beginning of the 20th century, the reductionist program found early formulations in Galileo’s Assayer (1623) and in a famous passage of Locke’s Essay Concerning Human Understanding (1690): to discover the ultimate elements of reality, the “atoms” of matter bearing primary qualities (solidity, form, extension, motion, and number), and to show how their combinations and interactions with our senses explain secondary qualities (color, flavor, sound, and diverse other effects). Nearly two centuries later, chemistry would reduce the apparent variety of substances to combinations of a restricted number of atoms, and physicists would quite naturally exalt this program as a norm. No matter what difficulty it might face, from then on they would seek to resolve the problems of reductionism by reiterating the same process: atomic physics, then nuclear physics, then (elementary?) particle physics.
Long before the advent of mixing physics, some physicists stressed the limitations of reductionism, as well as some of the failures of its attempts at synthetic ascent. On balance, the type of explanation that accounts for phenomena at one level in terms of properties of constituents belonging to the next lower level is far from satisfactory. It seems impossible, for example, to predict the very complex properties of water on the sole basis of the composition of its molecule H2O. Similarly, the discovery of superconductivity was a total surprise, which for decades could not be explained theoretically. Recent investigations of the impenetrability of solids, density questions, sand-heap physics, etc. raise enormous difficulties for the explanation of macroscopic properties, without mentioning the fact that the “reduction of complicated visible things to simple invisible ones,” as Locke said, supposes a conception of simplicity which is now being challenged. At the quantum level, the “simplicity” of invisible things seems to rely on the introduction of new ideas that remain counterintuitive.
Driven by the reductionist program and the search for a grand unification, fundamental physics probably witnessed the apex of its heroic age with the “standard model,” which despite its 19 parameters represents, according to Feynman, an unprecedented success of the human intellect in the unified description of the forces of nature. But the luster of this golden age may now have faded. Ironically, the very concepts on which the model was built have eroded the bases of the program whose accomplishment it was, namely the concept of symmetry breaking and the renormalization group.
Concerning critical phenomena, as well as the understanding of the elementary world, Kenneth Wilson’s works have great importance. But above all, they seem to mark a decisive epistemological turn. A particle physicist, and a Gell-Mann student who was also close to Hans Bethe, Wilson nevertheless promoted a conception of physics very different from the dominant one, which sought a theory a priori valid for all scales, from the astronomically large to the inconceivably small, meaning at unattainably high energies. Wilson thought that at energies higher than the accessible, unknown phenomena must intervene (quantum fluctuations, unification of forces, etc.). Starting from the unknown at short distances and, by successive iterations, eliminating unobservable fluctuations with too short a wavelength, he sought to construct a theory at accessible scales. Ultimately, Wilson ended up with a theory coinciding with the one constructed by the founders of quantum electrodynamics. The lasting image of his accomplishment was of “a theory valid not because it is susceptible of being applied down to the shortest distances but because it is an effective theory at great distances, a universal one resulting from a hidden world that was even more microscopic.” The same program would be developed in the domain of critical phenomena and condensed matter, where the “physics of mixing” emerged.
Experimental as well as theoretical tools contributed to the physics of mixing. In the 1960s, new experimental means served as a way both to get closer to critical points and to explore the nature of local order and its fluctuations. The scattering of neutrons—sensitive to magnetic order—and that of laser light changed the experimental landscape. For the 1991 Nobel laureate in physics P.-G. de Gennes, for example, the same tool—neutrons and their interactions—served both for the exploration of condensed matter and the study of critical phenomena and phase transitions. His earliest work on disordered, mixed materials (in particular, alloys of magnetic and non-magnetic materials, and then superconductors) led him to the concept of “percolation” and to consideration of characteristic length scales, large with respect to inter-atomic distances. K. Wilson’s works on phase transitions, involving geometrical studies of self-similar systems, showed that scaling explained their universal characteristics, and in 1972 led to the famous theorem permitting the application of renormalization-group methods to the study of polymers. Once more, the cancellation of short-scale quantum fluctuations led to a simple large-scale image in which scaling laws and phenomena are said to be universal, independent of specific systems. In short, even in complex situations, simple effective models can capture large-scale physics. De Gennes then used this method for tangled polymer chains, dealt with liquid crystals, and tackled percolation in heterogeneous media, wetting, colloidal suspensions, etc.
Interested in the analogy between phase and hydrodynamic transitions, theoreticians (F. Dyson or D. Ruelle in statistical physics for example) as well as experimenters (Ahlers at Bell Labs, Gollub and Swinney at the City College of New York, Bergé and Dubois at the Commissariat à l’énergie atomique, Saclay) went from the study of critical fluctuations to that of instabilities. The idea that deep connections existed between a large variety of disciplines, like statistical mechanics, information theory, biology, and a few others was already present in Wiener’s cybernetics. But having forged the tools necessary for studying the manner in which systems pass from a mechanical or thermodynamic equilibrium to an unstable state with emergent phenomena (phase transitions, critical phenomena, onset of turbulence, crystal growth, solid breaking, etc.), a group of physicists in the 1970s (K. Wilson, Kadanoff, P.-G. de Gennes, I. Prigogine, and others) resonated with those who dealt with complex or chaotic systems (Ruelle, Feigenbaum, Libchaber, Pomeau). For example, M. Feigenbaum’s observations about the appearance of chaos in simple deterministic systems by period-doubling cascades owed much to the fact that he had been introduced to K. Wilson’s ideas at Los Alamos. Perhaps the idea of analyzing the appearance of a singular threshold in terms of scaling laws and of showing its universality would never have occurred to pure mathematicians specializing in the study of dynamical systems.
In this interaction between critical phenomena and complex chaotic systems, relatively old conceptual tools, like entropy, Brownian motion, random walks (for diffusion phenomena), and Ising models, were revived along with the new tools that played a crucial role, such as fractals and self-similarity, multi-scale processes, and percolation (anomalous random walks on a badly connected disordered network). Geometry, on the one hand, and statistical methods, on the other, increasingly entered this “physics of disorder.”
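As a small illustration of such anomalous walks (a toy setup of mine, with arbitrary lattice size, occupation probabilities, and walk lengths): a "blind-ant" random walk on a site-diluted square lattice slows down dramatically as the occupation probability approaches the site-percolation threshold (about 0.593 for this lattice), compared with ordinary diffusion on the full lattice.
```python
import random

# A "blind ant" on a site-diluted square lattice: each site is open with
# probability p, and the walker only steps onto open sites; blocked moves are
# simply lost. The mean squared displacement shrinks as p approaches the
# percolation threshold.
def mean_square_displacement(p, steps=1000, trials=100, size=101, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        open_site = [[rng.random() < p for _ in range(size)] for _ in range(size)]
        x = y = size // 2
        open_site[y][x] = True                       # start on an open site
        for _ in range(steps):
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            nx, ny = x + dx, y + dy
            if 0 <= nx < size and 0 <= ny < size and open_site[ny][nx]:
                x, y = nx, ny
        total += (x - size // 2) ** 2 + (y - size // 2) ** 2
    return total / trials

for p in (1.0, 0.7, 0.6):
    print(f"occupation p = {p:3.1f}   mean squared displacement after 1000 steps = "
          f"{mean_square_displacement(p):7.1f}")
```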
Scaling laws truly constituted an epistemological turn, namely, the indispensable joint consideration of atomic, molecular, mesoscopic, and macroscopic levels, with no level being regarded as less fundamental than others, nor reducible to the properties of the others. This “solidarity among scales,” which is characteristic of mixing physics, is opposed not only to the former reductionist faith but also to the basic epistemology of theoretical physics developed over the 19th century. The new physical theories of the 19th century—kinetic theory of gases, electromagnetic theory, thermodynamics—were as distinct from classical mechanics as they were from each other and were accompanied by the view that the scientific method consisted in separating out relatively distinct systems in the universe while leaving room for the unknown. Since these systems were characterized by qualitatively different levels of organization or pertained to different phenomenological classes, distinct physical theories, with limited domains of validity, had to be conjugated. Two further conceptual outcomes of the classic scientific method appeared inescapable: (1) the concept of scale, characterized by the smallest volume inside of which everything remained uniform and by the shortest time interval within which everything remained constant, and (2) the concept of observation domain, characterized by the largest volume and the longest duration to which investigations were extended. In contrast, the theoretical study of chaotic or complex systems as well as the experimental study of phase transitions and of disordered and mixing media have installed a potent new dialectic of the local and the global, hence breaking away from the former classic epistemology of bounded domains.
For a Mundane Science
The spectacular rise of mixing physics over the last 20 years has been associated above all with two elements.
1 a cultural change became perceptible as early as the late 1970s among various groups of physicists: on the one hand, an aspiration to a more mundane science and a return to the concrete and, on the other, a saturation, indeed a rejection, of both highly theoretical, abstract particle physics and the rigid forms of organization imposed by the needs of large, high-energy physics laboratories. Today, research programs in elementary particle physics invoke length scales below 10⁻³⁰ cm, requiring energy levels several billion times higher than those reached by contemporary particle accelerators. Conceivable experiments are exceptionally expensive as well as dangerous. Concerning the horizon of usable applications, this scale has so little relation with the human scale that many people seem to envisage either the abandonment of this line of research, or at least the recognition of its necessarily marginal character. Meanwhile, mixing physics deals with mundane objects and natural phenomena encountered in everyday life.
2 The rise of increasingly effective and increasingly omnipresent technologies and tools has favored the study of mixing. First, lasers and ultrasound instruments transformed mixtures into “transparent boxes” whose local concentrations or velocity fields could be observed and measured without being destroyed or perturbed. Second, numerical modeling and simulations became determinative as a means of exploring how various media (fluids, colloids, granular media) could be mixed, separated, and operated upon, even before delicate physical experiments could be undertaken.
As recalled by E. Guyon and J.-P. Hulin, the Hebrew verb in the Holy Scriptures that is translated as “to create” literally means to separate. To recount Creation, the Bible tells the process of separating or sorting out all things, which was also a process of organization: “God said: ‘Let there be light,’ and there was light… God then separated the light from the darkness. […] Then God said: ‘Let there be a dome in the middle of the waters, to separate one body of water from the other.’ And so it happened. God made the dome, and it separated the water above the dome from the water below […] Then God separated the great sea monsters and all kinds of swimming creatures with which the water teems…” This text was written some thousands of years ago, but since the start of the modern era, the metaphysical quest for the pure and the simple has gone hand in hand with a degraded representation of mixing. This historical relation now seems to be undergoing inversion. In fact, the apparently opposed notions of separation and mixing often are quite close to one another and a slight evolution of the environment or small changes in experimental parameters can blur their border and invert the processes (which does not mean that they are reversible). This is the case, for example, when the constituents of a mixture maintain their identity in the form of droplets or particles (unstable emulsions, granular mixtures with aggregates and sediments, etc.).
Concerning mixing, the focus has recently moved from the static study of composite materials to that of dynamical processes leading to mixing. In this change, mixing physics has become a key element of process engineering. Indeed, if we call process engineering the whole set of industrial processes in chemistry, farming, civil engineering, etc., which involve manipulations, conditionings, moldings, and other transformations, we realize that mixing steps in these processes are often crucial for efficiency and cost. From this point of view, mixing physics constitutes a fundamentally interdisciplinary domain that remains close to the engineering sciences. But mixing physics also concerns geophysics. All soils and, more generally, the terrestrial crust, are particular static mixtures—called “fixed mixtures”—built out of the porous piling up of small-size grains, whose mixing efficiency, one most frequently hopes (especially in the case of industrial pollution), should be small. Finally, the study of mechanisms linking mixing with convective motions—be they of a thermal origin or due to composition or density variations—is essential for understanding motion in the atmosphere and in the ocean, as well as in the terrestrial mantle.
True, mixing physics and the study of complex systems employ theoretical tools that predate them: mathematical techniques coming from dynamical systems theory, results from turbulent-fluid studies, and concepts from the physics of critical phenomena. As a new domain at the disciplinary crossroads, however, mixing physics was not constituted as a field where a priori theoretical research predominated and applications followed; on the contrary, it started from the need to resolve “real” problems, in which the goal was to obtain precise mixtures (homogeneity or carefully balanced heterogeneity), to stabilize mixtures, to carry out anti-mixing processes (preventing pollution or diffusion in soils), to master the construction of complex systems, etc. Favored by the return to the macroscopic character of phenomena, the interaction between theory and application very quickly came to run in both directions.
4. HISTORICITY AND NARRATIVE IN THE SCIENCES
Consider now the question of historicity and of the arrow of time with respect to physics. The 19th century has bequeathed a dual heritage. On the one hand, we have the classical laws of microscopic physics, epitomized by Newton’s laws and their successors. They are deterministic and ahistorical, and they deal with certainties, in the sense that they univocally link one physical magnitude to another. Moreover, they are symmetric with respect to time; past and future play the same role in their formulation. On the other hand, through the second law of thermodynamics, expressing the increase of entropy over time and thereby introducing the arrow of time, we have the vision of an evolving universe. These two points of view (symmetrical microscopic descriptions and macroscopic irreversibility) have been reconciled by considering irreversibility as a result of statistical approximations introduced in the application of fundamental laws to highly complicated systems consisting of many particles. According to Prigogine, the existence of chaos (unstable dynamical systems) makes this interpretation through approximation untenable; in particular, one cannot rely on complicated systems since chaos can occur in very simple systems with a few degrees of freedom. From this fact, Prigogine concludes that instability and irreversibility are part of a fundamental, intrinsic description of nature. In this (controversial) interpretation, chaos would impose a revision of the very conception of laws of nature, which would express what is possible—not certain—in a manner analogous to the more purely natural-historical disciplines like geology, climatology, and evolutionary biology.
In all phenomena perceptible at the human scale, either in physics, chemistry, biology, or the human and social sciences, past and future play different roles. While classical physics dealt with repeatable phenomena, today’s physics is much more concerned with phenomena that cannot be identically repeated and with singular processes. In molecular diffusion, dispersion in porous media, or in turbulent chaotic mixtures, irreversibility of mixing is crucial. History, it can be said, is entering the physical sciences in these unique “narratives”. Especially through bifurcations, history enters the systems of chemistry, hydrodynamics, and engineering science: at each bifurcation, a “choice” governed by probabilistic processes emerges between solutions, and in this sense, with its series of bifurcations, each chaotic evolution is truly a “singular” history. Hence the emphasis in the domain of chaos on the notion of scenario, in the sense of a possible road to chaos. Titling his book Of Clouds and Clocks in 1965, K. Popper had already used vivid terms to express this opposition between the two kinds of physics.
In physics as well as in biology, an extreme sensitivity to perturbations and parameter variations seems to be a specific trait in the formation of complex mixing systems and in the spontaneous emergence of complex structures. For example, the role of small intrinsic effects or microscopic thermal fluctuations in crystal anisotropy is well known, though this does not usually make possible the prediction or control of equilibrium forms, as in the cases of dendrite growth (snowflakes), microstructure formation (alloys), or fracture dynamics (solid or terrestrial crust). Confronting this extraordinary proliferation of emergent forms and dynamics, the theorist J. S. Langer writes that no certainty exists that complexity physics can be successfully reduced to a small number of universality classes. The prospects for complexity science, nevertheless, seem excellent, he concludes, even though we may have to accept both the infinite variety of phenomena and the idea that we may never find simple unifying principles.
Observed, described, and simulated on computers, the evolving behavior of unstable complex systems enriches our knowledge of what is possible, as well as our understanding of emergence and the mechanisms of self-organization. Without necessarily leading to the formulation of general laws, it can also contribute to our understanding of why complexity emerges so easily in nature.
In summary, not only does chaotic and complex systems science install a new dialectics between the local and the global, it also forces us to rethink the relationship between the individual and the universal, and between the singular and the generic. In an article published in 1980, Carlo Ginzburg distinguished two great modes of exploring and interpreting phenomena in the social sciences. Inspired by the Galilean natural sciences, physics and astronomy, and aiming at conquering universality, the first mode is Ginzburg’s Galilean paradigm. The second mode, the paradigm of clues, is concerned with the barely visible detail, the trace, the revealing symptom of a hidden reality. Associated with these two paradigms are two distinct methodologies: the former hypothetico-deductive, the latter inductive. Even if we still do not know exactly how, the sciences of complex systems, including biology and physics, will soon have to come to terms with this duality for themselves.
5. AN IMAGE SHIFT IN MATHEMATICS
The promotion of what I have called a “fin-de-siècle” image of science sometimes goes hand in hand with an aggressive, confused dispute over the place of mathematics in the general configuration of knowledge. From various sources we have heard critiques aiming either to put an end to a status for mathematics deemed too prestigious, or to characterize its role as much less important for disciplines on the rise today, or to contest its overly abstract, overly theoretical representation. In these critiques, several levels intersect: epistemological, conceptual, political, institutional (notably with respect to questions concerning teaching and training). Here again, the polemic raging in France is perhaps more lively and more “overdetermined” than elsewhere. It would be superfluous to show how ridiculous is the claim that mathematics is not important in computer science, macroscopic mixing physics, modeling or any such scientific practice. I want instead to discuss the context and stakes of contemporary controversies about mathematics. These debates cannot be understood without considering what has been a true “image war” about what mathematics is, what it deals with, and how. Triggered by the end of World War II, the conflict went on with great intensity up until the 1980s. It was decisively stirred up by the French mathematical school, and especially the prestigious Bourbaki group. Spurred by the mathematical community itself, the battle mainly focused on the dichotomy between pure and applied mathematics. Let us look back a little to see more precisely what this was about.
Recent scholarship has emphasized that until the 19th century the dichotomy between pure science and applied science did not exist in present-day terms, with the former exclusively motivated by a disinterested pursuit of knowledge of the laws of nature and the latter understood as a mode of knowledge production aimed at the technological mastery of things and processes in a context of markets, powers, and applications. In fact, it now seems that these two types of activities were intimately interwoven through complex links and through overlapping networks of actors.
For example, 16th- and 17th-century mathematicians were concerned with problems of artillery, fortification, surveying, astronomy, cartography, navigation, and instrumentation, which also left room for purely philosophical debates, especially about the “certainty” of mixed mathematics. In the 18th century, men such as the Bernoullis, Euler, Lagrange, Monge, and Laplace were still concerned with a variety of problems, mathematical as well as mechanical, navigational, astronomical, and engineering. They proposed laws for the whole corpus of analysis (Euler, Lagrange), profound conceptual reorganizations and new foundations (Lagrange), and developed important abstract research in number theory. But all this was done without constructing a value hierarchy over the whole collection of studies. Moreover, these men were invested with various political and institutional responsibilities; during the revolutionary period, for example, Monge and Laplace appeared as true scientific coordinators.
Around the mid-nineteenth century, the center of gravity of mathematics clearly moved from Paris to Berlin, where many factors contributed to the demarcation of that part of mathematics concerned with rigor (Weierstrass), generality, and proof from that part mainly occupied with providing tools for physics and engineering. The rise of the German university system, with its ideal of pure research, the practice of seminars which was then established, and the development of disciplines such as number theory and abstract algebra (Kummer, Kronecker) favored the figure of the academic mathematician, rather isolated from other disciplines and the rest of society, and engaged in research motivated above all by the internal dynamics of mathematical problems. But this image of the mathematician, represented in the early twentieth century by David Hilbert at Göttingen, coexisted with clearly distinct images, such as that of Felix Klein, who played the role of a true “Wissenschaftspolitiker” at Göttingen from 1893 to 1914. Diverging in their mathematical inclinations and styles, Klein and Hilbert jointly established Göttingen as the world’s mathematical center. Only gradually did the Hilbertian ideal of conceiving mathematics axiomatically impose an ordered, hierarchical architecture on the whole mathematical corpus. Originally developed by students of Hilbert like E. Noether and B. van der Waerden for the sole domain of algebra, the ideal of an axiomatic, structural, abstract mathematics would become the single privileged image of modern mathematics, especially in the hands of the group called Bourbaki. By the audacity of their joint enterprise of rewriting all of mathematics as well as by their individual research, Bourbaki’s founding members (H. Cartan, Chevalley, Dieudonné, Weil) systematically promoted the structural ideal. On a personal level, they projected an image of strong, elitist, and multi-gifted virtuosi (notably in music and ancient languages), mathematicians above the common lot who needed only their brains to rework the edifice of mathematical knowledge. Their imperial objective was immense in its ambitions.
World War II played a complex and ambivalent role in the evolution of mathematics. To start with, the Göttingen school was brutally destroyed by Hitler’s seizure of power. Then, applied mathematics went through major developments in the United States, redefining disciplinary boundaries and reshaping the figure of the mathematician. Benefiting from intense cooperation with the military in the collective war effort, the new domains included the study of partial differential equations associated with wave propagation, the theory of explosions and shock waves, and probability theory and statistics (prediction theory, Monte Carlo methods, etc.). One should also mention the birth of game theory, of the mathematics of decision-making, and of operations research, which would soon become systems analysis. Although not alone in his efforts, John von Neumann is a highly symbolic figure for this mutation. Socially engaged and intervening in the technological and political choices of the United States, von Neumann was led by his interests to blur the borders between pure and applied mathematics, between what concerned mathematics and what had previously come under various other disciplinary domains (mechanics, engineering science, physics). In particular, he strongly associated hydrodynamics with computing and numerical analysis.
Despite these considerable developments, however, one is forced to acknowledge that the international mathematical community did not care much about applied mathematics. Roughly from 1950 to 1970, pure mathematicians succeeded in maintaining a cultural hegemony over their discipline. Clearly, they privileged problems stemming from internal interfaces between branches of mathematics. The more valued branches have consistently been the more structural, more abstract ones: algebraic and differential geometry, algebraic topology, number theory. These areas constituted the profound part of mathematics, to which the best students were directed. Simultaneously, more applied branches (such as differential equations, probability theory, statistics, and numerical analysis) were devalued in higher education and research as well as in the institutions of the professional community. Thirty years later, Peter Lax commented on the American situation in the 1950s: “the predominant view in American mathematical circles was the same as Bourbaki’s: mathematics is an autonomous subject, with no need of any input from the real world, with its own criteria of depth and beauty, and with an internal compass for guiding further growth. Applications come later by accident; mathematical ideas filter down to the sciences and engineering.” The philosophy of mathematics that informed this conception is clearly expressed in a famous text signed by Bourbaki: “In the axiomatic conception, mathematics appears as a reservoir of abstract forms, the mathematical structures; and it happens—without one knowing quite why—that certain aspects of experimental reality are cast in certain of these forms, as though by a kind of preadaptation.” Thus did mathematicians find a legitimacy for neglecting the world. Their view, it must be said, was not independent of the role of structuralism as a dominant mode of thought in the 1960s: structures of language, mental structures, structures of kinship, structures of matter, structures of society. The ambition of virtually all disciplines was to discover fundamental structures and mathematics was the science of structure par excellence, providing a universal key for the intelligibility of all knowledge.
In brief, two rival images of mathematics were clashing. On the one side was pure mathematics developed “for the honor of the human spirit,” whose paradigmatic methodology was axiomatic and structural. It progressed through internal dynamics at the interfaces of several branches of mathematics and sought, through set theory, to reduce mathematics to a structurally unified corpus, to which, as to works of art, one applied a rhetoric of esthetics and elegance. On the other side was the image of applied mathematics stemming from the study of nature, from technological problems, and from human affairs (numerical analysis, approximations, modeling). It was less noble and less universal, because dependent on material interests and societal conflicts. Pure mathematicians, clearly more prestigious until the end of the 1970s, had put their stamp on this opposition.
In the course of the 1980s, within the new economic, technological, and cultural contexts of contemporary societies, the general landscape of mathematics was progressively modified. Domains left dormant for decades were rejuvenated and new domains opened, linked in particular with the computer and experimental mathematics. Distressed by their isolation and concerned to improve their image in society, mathematicians now promoted an open ideal of mathematics, mathematics in interaction with other disciplines, the world, and human needs. What P. Lax called “the tide of purity” receded. Ideological representations of the purity of mathematics now share the stage with other representations which promote different values: the pragmatic and operational character of results, links with state power and corporate wealth, and entrepreneurial dynamism.
If structure was the emblematic term of the 1960s, model is that of the 1990s. The practice of mathematical model building (in the physical sciences and climatology, in engineering science, in economics) has progressively been extended over the last decades. Today its range is immense, and it is almost always accompanied by experimentation and numerical simulation. In some parts of the mathematical community, it also produces distress. What theorems have been precisely and clearly demonstrated by the mathematicians who study, with the help of the computer, supersonic fluid dynamics, plasmas in fusion, or shock waves, or by those who model a nuclear reaction or a human heart in order to test, respectively, an explosion velocity or the viability of an artificial heart? Do they share the same profession as traditional mathematicians? In August 1998, at the International Congress of Mathematicians in Berlin, the old opposition between pure and applied mathematics was expressed differently: “mathematicians making models versus those proving theorems.” But the respect formerly enjoyed by the theorem provers is now generally shared by the modelers.
The applied, concrete, procedural, and useful versus the pure, abstract, structural, and fundamental: this set of oppositions, expressing the shift in values and hierarchies that has occurred in mathematics, resonates with that described in the earlier parts of this article: disordered, complex, mixed, macroscopic, and narrative versus ordered, simple, elementary, microscopic, structural. It appears that the shift in mathematics is contributing strongly to the formation of a general “fin-de-siècle” image of science.
