3.4.1 Arguments for the insufficiency of reductionism
One of the earliest contributions here was by philosopher of biology Donald Campbell in 1974. His account of downward causation in biology relies on the fact that environmental selection largely accounts for the genetic blueprint of the remarkably well ‘designed’ jaw structures of worker ants (Campbell 1974: 179–186).
Van Gulick strengthened Campbell’s account of selection, writing that most lower-level systems or entities have a variety of causal capacities. Their incorporation into a more complex whole at a higher level in the hierarchy allows for selective activation of the simpler system’s causal capacities (Van Gulick 1995: 251). Ants, again, provide an example. An individual ant dropped on a table might run helter-skelter, or search for a food source. Steven Johnson (2001: 29–33), drawing on work by scholars such as E. O. Wilson, describes the organization of ant colonies. All of the ants except the queen are genetically identical, yet they are able to perform a number of tasks such as foraging for food, guarding the queen, and caring for ant pupae. An example of selection is the fact that the relative density of foragers triggers some to return to the nest. They are born with something we might call innate ‘ant rules’: ‘A foraging ant might expect to meet three other foragers per minute — if she encounters more, she might follow a rule that has her return to the nest’ (Johnson 2001: 76–77). But note that this rule-governed behaviour is something quite different from the laws of the basic sciences, and following the rules does not in any way violate or override basic laws.
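Johnson’s ‘ant rule’ can be read as a simple threshold rule operating on purely local information. A minimal sketch, assuming the threshold of three encounters per minute from the passage above (the function and parameter names are illustrative, not drawn from any actual model of ant behaviour):

```python
def forager_decision(encounters_per_minute, threshold=3):
    """Local rule: head home when forager density, sensed as an
    encounter rate, exceeds the innate threshold."""
    return "return_to_nest" if encounters_per_minute > threshold else "keep_foraging"

# Each ant consults only its own encounter count; colony-level
# regulation of forager numbers emerges from many such decisions.
decisions = [forager_decision(n) for n in [1, 2, 5, 4, 0]]
```

Note that nothing in the rule refers to the colony as a whole; the higher-level regulation selects among the causal capacities each ant already has.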
3.4.2 Nonreducible circumstances
It was noted above (section 3.2.3) that if one’s definition of supervenience requires consideration of circumstances that co-constitute a supervenient property, and those circumstances cannot be reduced to the supervenience base level, then supervenience does not entail reducibility of the supervenient. This subsection gives an example of relevant circumstances that cannot be reduced.
Consider two physical objects – say, two small copper-coloured discs, identical to the naked eye, with ‘United States of America’ and ‘ONE CENT’ stamped on one side. They differ only in that on the other side one is stamped ‘1962’ and the other ‘1990’. They are, of course, US pennies. They have identical uses as currency in stores and are worth exactly one cent each, despite the fact that the earlier one would be worth approximately 1.5 cents if melted down, and despite the fact that the later one cost approximately 2.1 cents to make. Nonetheless, their economic value remains one cent. What constitutes them as legal tender in the US? Their history is crucial: that history determines whether each is a (real) penny or a counterfeit, which has no such value. Explaining their value qua currency requires appeal to ever higher and more complex systems: the legitimacy of US government authority, monetary policy, international practices of economic exchange, and so forth.
Note that the two objects’ physical constituents play no role whatsoever. The 1962 penny was made of 95% copper; since 1983, pennies have been made of 97.5% zinc, with only a thin coating of copper; and briefly in 1943 they were made of steel with zinc coating.
Despite examples such as this, some philosophers and scientists still question the possibility of causally effective emergent entities or properties. The problem may involve word choices. Campbell himself was uncomfortable speaking of downward ‘causation’, saying that selection over time is only a ‘back-handed’ editing of products of direct physical causation (Campbell 1974: 181–182). Because it has become so common to think of causation on the model of physical, push-pull sequences, it may be more helpful to speak here of ‘whole-part constraint’. This term leads directly to the next topic.
3.4.3 Complex systems theory
It may be fair to say that from the advent of modern philosophy, with Descartes, the dualist (d. 1650), and Thomas Hobbes, the reductionist monist (d. 1679), there was not, until around the turn of the present century, any widely agreed-upon philosophical underpinning for either of the most common theories of the metaphysical composition of humans. For dualism, beginning with Descartes’s pineal gland, the problem of mind–body interaction has never been answered satisfactorily. Nor has there been any widely accepted account of how a physically monist position could avoid reduction of the aspects of human nature required for religiosity. In fact, if higher-order reasoning is reducible to neurobiology, then there can be no rational arguments for reductionism itself; all of the conceptual ingredients proposed in aid of a physicalist theology may well be mere scratchings on paper. Yet in 1997 Ilya Prigogine wrote:
I believe that we are at an important turning point in the history of science. We have come to the end of the road paved by Galileo and Newton, which presented us with an image of a time-reversible, deterministic universe. We now see the erosion of determinism and the emergence of a new formulation of the laws of physics. (Prigogine 1997: viii)
Prigogine is joined by Peacocke (2007: 267–283), Alicia Juarrero (1999), Nancey Murphy and Warren Brown (Murphy and Brown 2007), Paul Cilliers (1998), Alwyn Scott (2007: 173–197), and numerous others in suggesting that we may be at a point of change as significant as that from the medieval to the modern era.
This change can be seen in the development of ‘systems thinking’, largely derived from mid-twentieth-century general systems theory (associated with Ludwig von Bertalanffy) and from cybernetics (associated with Norbert Wiener). Both of these are concerned with systems that run on information as well as energy. Current contributors include information theory, nonlinear mathematics, the study of chaotic and self-organizing systems, and non-equilibrium thermodynamics. Examples of the systems of interest range from autocatalytic processes, at the most basic, to weather patterns, insect colonies, social organizations, and, of course, human brains. Alwyn Scott, a specialist in nonlinear mathematics, states that a paradigm change (in Thomas Kuhn’s sense) has occurred in science beginning in the 1970s. He describes nonlinear science as a meta-science, based on recognition of patterns in kinds of phenomena in diverse fields. This paradigm shift amounts to a new conception of the very nature of causality (Scott 2004: 2).
Several authors call for what might be called a shift in ontological emphases. Alicia Juarrero says that one has to give up the traditional Western philosophical bias towards things and their intrinsic properties in favour of an appreciation of processes and relations (Juarrero 1999: 124). Systems have permeable boundaries, allowing for the transport of materials, energy, and information. The components of complex systems are not things but processes. So, for example, from a systems perspective a mammal is composed of a circulatory system, a reproductive system, and so forth, not of carbon, hydrogen, and calcium. The organismic level of description is decoupled from the atomic level. Systems are different from both mechanisms and aggregates in that the properties of the components themselves are dependent on their being parts of the system.
Systems range from those exhibiting great stability to those that fluctuate wildly. This is because complex systems are nonlinear: effects are not simply proportional to causes, and the system’s current state feeds back into the development of each future state. The difference in stability depends on the extent to which the system is sensitive to slight variations in initial conditions, and also on the extent to which feedback processes do or do not dampen out fluctuations. Chaotic systems are now widely familiar. They result from extreme sensitivity to initial conditions: although their behaviour remains confined within a bounded range of states, its detailed course is unpredictable.
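Sensitive dependence on initial conditions can be illustrated with the logistic map, a standard toy model from nonlinear dynamics (the model and its parameter values are illustrative additions, not drawn from the text). Two trajectories beginning a billionth apart soon diverge completely, yet both remain confined to the interval [0, 1]:

```python
def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate x -> r * x * (1 - x); r = 4 is the chaotic regime."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # a nearly identical start
# The trajectories separate by orders of magnitude more than the
# initial difference, while each stays within the bounded range [0, 1].
max_gap = max(abs(x - y) for x, y in zip(a, b))
```

The behaviour is bounded (a ‘predictable range of states’) but unpredictable in detail, which is exactly the combination the paragraph above describes.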
More interesting are those at the edge of chaos. These systems have freedom to explore new possibilities and may ‘jump’ to new and higher forms of organization. Understanding how this can happen in terms of physics comes from the study of far-from-equilibrium thermodynamics. Such systems are called complex adaptive (self-organizing, autopoietic, or dynamical) systems. They are characterized by goal-directedness, at least insofar as they operate in order to maintain themselves. In this process they may create their own components. For example, in an autocatalytic reaction, molecule A catalyses the production of molecule B, which in turn catalyses the production of more A. The process will stabilize at some point unless additional materials are introduced into the system.
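The autocatalytic loop just described can be sketched as a toy simulation (the rate constant, time step, and initial amounts are all assumed purely for illustration): A and B each catalyse the production of the other from a shared substrate, and the process stabilizes once the substrate is exhausted, just as the text says it will unless new materials are introduced.

```python
def autocatalysis(a=1.0, b=1.0, substrate=100.0, k=0.01, dt=0.1, steps=500):
    """Euler-step sketch of a two-species autocatalytic loop.

    Growth of A is catalysed by B and vice versa, each drawing on a
    shared substrate; total mass is conserved throughout."""
    total = a + b + substrate
    for _ in range(steps):
        da = k * b * substrate * dt   # B catalyses production of A
        db = k * a * substrate * dt   # A catalyses production of B
        demand = da + db
        if demand > substrate:        # cannot consume more than remains
            da *= substrate / demand
            db *= substrate / demand
        a, b = a + da, b + db
        substrate -= da + db
    return a, b, substrate, total

a, b, s, total = autocatalysis()  # s is driven toward zero; a + b + s == total
```

The system ‘creates its own components’ in the modest sense that the amounts of A and B are produced by the loop itself, and the dynamics halt of their own accord when the substrate runs out.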
Complex adaptive systems theory has dramatic consequences for understanding causation. While ordinary efficient causation is presupposed, such causation is inadequate to describe complex systems. This is in part because complex systems operate on information as much as on energy and matter. More important is the fact that the relations among the components of a system need to be thought of in terms of constraints. An efficient cause makes something happen; a constraint reduces the number of things that can happen, because the components are so related to one another that a change in one automatically changes the others. Juarrero says that the concept of a constraint in science suggests ‘not an external force that pushes, but a thing’s connections to something else […] as well as to the setting in which the object is situated’ (1999: 132). For example, in successive throws of dice, the numbers that have come up previously do not constrain the probabilities for the current throw. In contrast, in a card game the constraints are ‘context-sensitive’: the chances of, say, drawing an ace at any point in the game are sensitive to history because the rules of the game, the number of cards in the deck, and so forth create relations among the possible outcomes such that the probability of one occurrence is related to all of the others.
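The contrast can be made exact with elementary probability (the specific figures are a standard illustration, not taken from the text): the chance of rolling a six is untouched by any history of previous throws, whereas the chance of drawing an ace shifts with every card already out of the deck.

```python
from fractions import Fraction

def p_six_next_roll(previous_rolls):
    """A fair die has no memory: history places no constraint
    on the next throw, whatever came before."""
    return Fraction(1, 6)

def p_ace_next_draw(aces_left, cards_left):
    """In a card game the constraint is context-sensitive:
    every draw reshapes the space of remaining possibilities."""
    return Fraction(aces_left, cards_left)

p_die = p_six_next_roll([6, 6, 6])   # still 1/6 after three sixes
p_fresh = p_ace_next_draw(4, 52)     # full deck: 1/13
p_later = p_ace_next_draw(4, 50)     # two non-aces already drawn: 2/25
```

The die’s probability is a fixed property of the object; the card probabilities are properties of the object-in-its-setting, which is precisely Juarrero’s point about constraints.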
Due to the role of probability in complex systems, it is necessary to do away with the sharp distinction between determinism and indeterminism. The appropriate middle term is ‘propensity’, coined by Karl Popper to mean ‘an irregular or non-necessitating causal disposition of an object or system to produce some result or effect’ (Sapire 1995: 657, referring to Popper 1990).
Murphy and Brown (2007) argue that this set of new concepts, particularly that of context-sensitive constraints, gives us the conceptual tools to explain how downward ‘causes’ cause without violating the causal closure of the physical and without postulating causal overdetermination (see Kim above, section 3.2.3).