Do theistic metaphysical systems such as Thomism or Scotism have any stakes in the findings of the empirical sciences? A discussion of formal causes in science and challenges to the principle of causality.
According to the positivists, the very existence and success of these sciences is a refutation of religious and metaphysical ways of thinking; these are thereby proved either wrong or meaningless according to whether they have any empirical content. This view is notoriously self-refuting, and I trust few of my readers have much sympathy for it. Those who see value in both metaphysics and science will therefore claim that the two address different questions and different aspects of being.
For some, this conviction leads to a very sharp line drawn between the empirical sciences and the philosophy of nature: the former addresses the phenomenal world, the latter ontological reality, and the findings of one have very little significance for the other. The existential Thomist Jacques Maritain often seemed to take this view. Maritain thought he could intuit metaphysical principles, not by abstraction from the observed world, but directly through an “intuition of being” that he was convinced he enjoyed (see especially A Preface to Metaphysics). Using this “intuition”, things that seem logically perfectly possible (e.g. infinite space, or a contingent being existing for an infinite time; cf. his defense of the Third Way in Approaches to God) can be judged actually impossible. Thus, Thomists often assert that the emergence of life, and sometimes of every new species, requires direct divine intervention, even though the observed laws of nature would seem to have no trouble accommodating such things. For an example of Maritain’s intuition at work, consider his treatment of general relativity in The Degrees of Knowledge. He considered the question of whether spacetime is really curved or whether gravity merely mimics such an effect, a reasonable philosophical question. Maritain’s response, as I recall, is that when we imagine space, we must think of it as Euclidean; therefore space is really flat. What kind of an argument is that?
I am unsatisfied with this overly strict separation of philosophy of nature and science. It reduces science to a matter of data fitting with no connection to underlying truth. As Stephen Barr and I have argued, it is also false to say that science ignores formal causes. Emergent phenomena appear quite explicitly in solid-state physics (e.g. band structure, phonons) and thermodynamics. One can hardly imagine doing biology at all without invoking form and function. Aristotelians should be gratified that hylomorphic composition has proven inescapable even in the hard sciences. Since forms figure in science, science can meaningfully contribute to discussions of what things are, not just how they are observed to behave. For example, physics at least strongly suggests that heat is random motion of constituent particles and that light is an electromagnetic wave, a pattern of motion in an electric and magnetic field. One needn’t conclude from this that light is “really” a colorless mechanical oscillation; that is a Cartesian prejudice long made obsolete by field theory. Electromagnetic waves obviously do have color. That these discoveries about heat and light, although empirically based, are indeed formal is attested by their certainty. We may well learn much more about the behavior of nonideal gases, or the behavior of light at high energies or in nonlinear media, but it’s impossible to imagine this affecting the overall identifications with random motion and electromagnetism, just as future refinements in our understanding of human physiology can’t possibly shake our recognition of human beings as distinct biological organisms. The role of philosophers, then, is to identify the distinctly formal element of scientific discoveries: they point out when a distinct pattern has been identified, one that can be recognized and understood independently of refinements in our knowledge of the underlying matter.
It is often said that every scientific theory is one experiment away from refutation. However, these nuggets of formal knowledge, obtained by scientific-philosophical cooperation, are more solid.
What of the more general principles of the philosophy of nature and metaphysics, such as the principle of causality (for the Thomist) or of sufficient reason (for the followers of Leibniz)? Do the sciences address these at all? I admit that I have never experienced Maritain’s “intuition of being”, and I am not privy to its secrets. I also have no way of knowing whether it is anything other than his private fantasy. Thus, I prefer to build metaphysical principles on abstraction from the sensible world, just as Aristotle himself did. General principles having to do with identity and causality should be thought of as the general requirements that any understanding of nature must obey if it is to describe a coherent, intelligible universe. Logical consistency is one obvious such prerequisite, and hence the laws of identity and noncontradiction, as the metaphysical bases of logical consistency, are seen as metaphysical truths, assumed rather than tested by science. However, logical consistency may not be the only prerequisite for an intelligible universe. Many metaphysical systems assert that some laws governing the operation of causality are also needed.
Restrictions on the operation of causality are the lynchpin of any cosmological argument for the existence of God, the necessary self-subsistent Being who holds all contingent beings in existence. The argument must take as one of its premises some statement that contingent/finite/composite beings can’t come into existence or maintain themselves in existence “by themselves”. Why not? How is it logically or mathematically impossible to say, as a brute law of nature, that elephants come into existence out of nothing at a rate/probability of one per year per cubic light-year? It isn’t. One could make a consistent mathematical model of a universe in which this is true. But would it be coherent?
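Indeed, the brute-law scenario is trivially easy to make mathematically precise: spontaneous creation “at a rate per year per cubic light-year” is just a Poisson process. A minimal sketch, with all numbers hypothetical and chosen only for illustration:

```python
import random

# Toy model (hypothetical numbers): spontaneous "elephant creation"
# as a Poisson process with a fixed rate per unit time per unit volume.
RATE = 1.0     # creations per year per cubic light-year (the brute law)
VOLUME = 10.0  # cubic light-years under observation
YEARS = 5.0    # observation time

def sample_creations(rate, volume, years, rng):
    """Sample the number of creation events in the region over the time span.

    The expected count is rate * volume * years; a Poisson variate is
    drawn by counting unit-rate exponential inter-arrival times.
    """
    mean = rate * volume * years
    count, t = 0, rng.expovariate(1.0)
    while t < mean:
        count += 1
        t += rng.expovariate(1.0)
    return count

rng = random.Random(0)
samples = [sample_creations(RATE, VOLUME, YEARS, rng) for _ in range(10_000)]
print(sum(samples) / len(samples))  # should hover near 50 = 1.0 * 10.0 * 5.0
```

Nothing in the mathematics protests; the model is perfectly consistent. The question is whether it is metaphysically coherent.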
In a universe where finite beings pop into existence out of metaphysical nothing at a certain rate, we must ask where this rate for each type of object comes from. What is its ontological ground? There are three possibilities that I can think of.
1. The rate is grounded in some background reality into which the created being emerges. Then we don’t really have creation from nothing; we have creation from this background object by an exercise of the potencies of this existing object.
2. The rate is grounded in “the laws of nature”. That is, these laws are regarded not as descriptions of the natures of existing objects, but as causally active entities (or their enforcer, whatever it is) in their own right. Physicists talking to the public about “the laws of physics” allowing creation from nothing often sound as if this is what they believe. However, once we reconceptualize the laws of physics as actual beings, position 2 really becomes a version of position 1. In fact, it’s a version of position 1 that suggests a Platonic Demiurge, although I doubt the New Atheists realize this.
3. The rate is grounded in the created object itself. Part of the nature of each object is its probability for self-creation.
Only position 3 presents a threat to the cosmological argument. Thus, it is sufficient for natural theology to prove that position 3 is incoherent. This can be argued as follows. Take an object A with self-creation rate p. Now imagine another object B whose nature is identical to A’s except that its self-creation rate is y*p, where y is some arbitrarily large number, large enough that the universe should immediately fill up with Bs. Why doesn’t this happen? The only response is that the hypothetical universe is one containing only As and no Bs. However, before they self-create, every type of object is equally non-existent. There is no way of saying that only certain types of objects can self-create if position 3 is true, i.e. if creation is grounded in the emerging object. Thus, every logically conceivable object must self-create at every rate, and an intelligible universe is impossible.
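The arithmetic behind “fill up with Bs” is simple: in any rate-based model the expected number of self-created objects grows linearly with the rate, so an arbitrarily large multiplier y yields an arbitrarily large expected count in any finite region and time span. A toy calculation (the rate p and the multipliers are made-up numbers, purely illustrative):

```python
# Expected number of self-created objects in a rate-based (Poisson) model.
# All numbers are made up for illustration.
def expected_count(rate, volume, years):
    return rate * volume * years

p = 1e-30  # hypothetical self-creation rate for object A (per yr per ly^3)
V, T = 1.0, 1.0

# Object B is identical except its rate carries an arbitrary multiplier y;
# the expected count grows without bound as y grows.
for y in (1.0, 1e30, 1e60):
    print(y, expected_count(y * p, V, T))
```

Since nothing in position 3 caps y, no finite expected count is privileged, and the universe is flooded.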
It has been objected that creation from nothing happens all the time in particle physics, in the form of particle-antiparticle pairs spontaneously popping out of the vacuum. This is an objection that philosophers should take very seriously. The phenomenon arises in several contexts. In the presence of strong electric or gravitational fields, real particles can be created spontaneously (in the sense that the effect is probabilistic), although the mass-energy, and hence presumably the causal agency, of the new particles comes from the background field. The temporary creation of virtual particles also appears in perturbative calculations of various scattering rates, decay rates, and correlations. As an aside, I think the ontological status of virtual particles is far from clear. One can use the same Feynman diagram methods to solve certain classical problems (e.g. the harmonic oscillator), and the virtual states that appear in the calculation are pretty obviously artifacts of the perturbation expansion. So in QED, one can “correct” the noninteracting photon propagator by adding contributions from the photon temporarily splitting into an electron-positron pair. Does that mean photons really spend part of their time as pairs, or only that the noninteracting photon is just an approximation to the “true” photon of the full nonlinear theory, and that the perturbative expansion shows us how to build the true propagator from analytically tractable simpler ones? In any event, even these virtual pair creations are not “from nothing”, since their diagrams always attach (eventually) to real particles; in the example above, the pair comes from a photon. In any case, the identification of the vacuum in quantum field theory with metaphysical nothing is the most egregious misstep of all. The vacuum can have a nonzero energy, and it is possible that cosmologists have already measured it (the so-called “cosmological constant” or “dark energy”).
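The idea of building the “true” propagator out of simpler perturbative pieces can be caricatured with scalars in place of operators, as in Dyson resummation of self-energy insertions. (G0 and Sigma below are arbitrary made-up values, not drawn from any real theory.)

```python
# Scalar caricature of Dyson resummation (all values arbitrary):
#   G = G0 + G0*Sigma*G0 + G0*(Sigma*G0)**2 + ... = G0 / (1 - Sigma*G0)
# G0 plays the "free propagator", Sigma the "self-energy insertion".
G0 = 2.0
Sigma = 0.1

def dressed_propagator(g0, sigma, n_terms):
    """Partial sum of the perturbative series for the full propagator."""
    return sum(g0 * (sigma * g0) ** n for n in range(n_terms))

exact = G0 / (1 - Sigma * G0)          # the resummed ("true") propagator
approx = dressed_propagator(G0, Sigma, 50)
print(exact, approx)  # the partial sums converge to the exact value
```

Whether the intermediate terms of such a series correspond to anything real, or are mere bookkeeping on the way to the resummed answer, is exactly the interpretive question about virtual particles.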
Indeed, the existence of a vacuum is not even a generally covariant fact, since an accelerating observer will see particles where an inertial observer sees none. Actually, the vacuum has a great deal of structure loaded into it: the Hilbert space we erect describes it as one state among many, and the Lagrangian or Hamiltonian governing its evolution encodes information about all possible particles and their energies. What grounds these expressions? Why can’t I stick energy terms for non-existent particles into the Lagrangian? If I do, my answers will be wrong. If one attaches quantum field theory to position 3, it becomes vulnerable to all of the objections to that position. It therefore seems that modern particle physics, in spite of its ability to “create” particles, must be wed to position 1 or 2. Position 1 seems to me more natural, given the mathematical similarity of particle physics to the theory of collective excitations in solids; in the latter case it is clear that the particles are oscillations of an underlying lattice, which provides the metaphysical ground for all of the particles’ properties. I repeat what I said about light: asserting that elementary particles are excitations of something doesn’t mean that they’re mechanical oscillations in some sort of ether.
A more serious challenge comes from quantum cosmology. In particular, Vilenkin, and Hartle and Hawking, have proposed models (based on reasonable semiclassical and minisuperspace approximations to a full quantum treatment of the metric of the universe) in which a closed spacetime manifold, representing the whole universe, apparently tunnels into being out of nothing. These models evade many of my objections above. There is no background space or field to provide the obvious “something”. Since spacetime manifolds themselves are not (or needn’t be) embedded in any background space, this scenario doesn’t fit into the case, disproved above, of a creation rate per time per volume. One might assert that there is no way to identify points on different manifolds, and that therefore universes can’t interact, so there is no empirical problem with saying that an infinite number of different universes (every possible one) can and do pop into being out of nothing.
Note that it is irrelevant whether any particular published scenario describes our actual universe. Their authors themselves regard them as simplified models, but even if nothing like this ever happened, the damage to the cosmological argument would be the same, since the argument depends only on what is coherently possible. A more pertinent inquiry is whether these universe-creation models, e.g. the Hawking-Moss instanton, have been interpreted correctly. My impression is that discerning reality is more difficult when working with instantons than when working with the full quantum theory. Tunneling “from nothing” is an obscure and problematic notion. Again, something along the lines of position 1 would probably be more natural: the spacetime metric is an excitation of some background entity, and the “nothing” from which it emerged was that entity’s ground state. (“Third quantization” models of universe creation lend themselves very easily to such a reading.) A position 2 interpretation could also work, and is at least rhetorically the one preferred by most cosmologists; to it, my earlier points on position 2 would apply.
However, even if I could argue that a position 1 interpretation is more natural, the causality requirement needed by the cosmological argument is still overthrown if a position 3 interpretation remains tenable. To genuinely prove the principle of causality, we must grant the alternative every leeway and show that it still can’t work. The proponents of self-creation, in turn, must grant their opponents the right to take the proposed self-creation principle, apply it categorically to every possibility that the principle allows, and wreak as much havoc as they can. Is it really true that universes can’t interact? If the terms in the action are grounded in self-creating metrics, then I can stick anything into them that I like, even if I have to introduce arbitrary mappings between manifolds to make interactions between universes work. And I’ll bet I can do this in ways that make two interacting universes distinct from one bimetric universe. This is only the first idea that comes to mind for how to try to wreck a cosmological model that espouses a position 3 interpretation. One senses the opportunity for an arms race between theist and atheist theoretical physicists. Probably this would not be decisive in itself, but onlooking philosophers would have their imaginations stretched, and they would have a wider sense of possibility than everyday experience provides when formulating their putatively necessary principles.
In my defense of religion, I presented something like the no-popping argument above. I then proposed an explanation of this principle, namely that beings for whom multiple instantiation is possible must be receiving their existence from outside. This allowed me to disregard possibilities such as that contingent beings have perpetually existed on existential inertia or that there is a force that suppresses self-creation in an already occupied universe (what I call the “crowding out” possibility) without having to find a particular inconsistency in any of them. This has always struck me as the weakest part of my argument, because I’ve never proven that there is no rival metaphysical principle that could explain no-popping without ruling out the other atheist alternatives. An argument that took direct aim at a stronger alternative, the best theoretical physicists could muster, would have had a stronger effect and made the leap to metaphysical principle smaller. Thus, I have entitled this post part 1 of a series, hoping the implicit promise will prompt me to address this issue properly.