Friday, May 06, 2005

Inferences to Design (I.D. XI)

In The Search for Extra-Natural Intelligence? (I.D. X) I criticized William A. Dembski's Intelligent Design for ruling out "theistic evolution" — the idea that Darwinian evolution theory is right on the money but there's still a God behind it all — because it lacks empirical verifiability. And in my recent post about Spanish philosopher Miguel de Unamuno, over at my A World of Doubt blog, I said that I feel Dembski's approach "empties faith and hyperinflates reason."

I'd like to try to make my objection clearer: I see no a priori reason to accept that "suitable patterns" of "information content" which we may be able to locate in biological beings such as ourselves necessarily allow us to infer that these beings were designed by some higher intelligence — i.e., God.

I say this, I should note, even though I believe in God as a matter of faith — thus making me, I admit, more sympathetic to theistic evolution than to intelligent design.

The reason I doubt Dembski is that his approach is really a kind of commonsense one, albeit glorified through sophisticated logical argumentation and an appeal to fields like probability theory and complexity theory. It is, at bottom, our common sense that tells us that if God made the world, there ought to be empirical evidence of his design in the world.

My understanding, though, is that religion is about faith — the kind of faith that moves mountains — and that kind of mountain-moving faith has nothing whatever to do with common sense.


Dembski doesn't really get down to brass tacks about his intelligent design theory until chapter 5. Prior to section 5.2, "Why Reinstate Design?", beginning on p. 124, he has spent a great deal of time showing us how design — overarching purpose or meaning, goal-directedness, Aristotelian "final" causes, teleology, all ideas verboten to modern science — came to be verboten.

Basically, the reasoning of the pure naturalists whom Dembski abhors boils down to this: for something to have purpose, there must be a purposer, a goal-setter, a designer. A purposeful designer is necessarily a subjective being whose providential will imperils science's foundational assumption, that nature is objectively real, independent of any person's subjective wishes.

Questioning that foundational assumption is in my opinion meta-scientific argumentation, whichever way you decide it. So it was with a sigh of relief that I finally arrived at something that at least resembles real science. Potential readers of the book ought to consider starting at p. 124, finding out what design theory is all about, and then going back to read the first 120+ pages.


Dembski says that seeking the wherewithal to infer design is like what the scientists did in Contact, the movie about SETI, the search for extra-terrestrial intelligence (see pp. 127ff.). The investigators received what might be a genuine signal from space — assuming it was not the natural emission of a quasar or pulsar. The "signal" consisted of strings of beats (series of 1's) separated by single pauses (0's). In all there were 1,126 beats and pauses. Once the scientists saw what the pattern was, they knew that the numbers of beats in each string progressed thus: 2, 3, 5, 7, 11, ... . In other words, all the prime numbers from 2 to 101 were presented in their correct order.
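Just to make the encoding concrete, here's a little Python sketch of my own — not anything from Dembski's book. I've assumed a single 0 as the pause between strings of beats; Dembski reports 1,126 bits in all, and the exact total the sketch prints will depend on which pause convention you assume.

```python
# A sketch of the Contact signal: runs of 1's whose lengths are the
# primes from 2 through 101, separated by single 0's. (The single-0
# pause convention is my assumption; the exact bit count depends on it.)

def is_prime(n):
    """Trial division -- fine for numbers this small."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

primes = [n for n in range(2, 102) if is_prime(n)]   # 2, 3, 5, ..., 101

signal = "0".join("1" * p for p in primes)

print(len(primes))     # 26 primes in all
print(len(signal))     # total beats plus pauses under this convention
print(signal[:20])     # 11011101111101111111 ...
```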

To know it was not a natural phenomenon, they had to establish that it was (these are Dembski's terms) contingent, complex, and specified.

Contingency: it was not the automatic product of natural laws. None of the laws of physics could account for it. (Contrast that with the pattern in which carbon atoms are arranged in diamond. This pattern results directly from the relevant underlying physics.)

Complexity: Had the sequence been just the first three primes, there could have been many other explanations, including chance. But because there were 1,126 "bits" of data, the odds against this (or any other) one particular combination of 1's and 0's happening by chance were simply astronomical. High complexity means low probability of happening purely by chance.
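The arithmetic behind "astronomical" is easy enough to check. If we model the channel as a fair coin — my simplification, not necessarily Dembski's exact chance hypothesis — each bit is a 50/50 event, and any one particular 1,126-bit string has a probability of 2 to the power -1126:

```python
from math import log10

bits = 1126
# Probability of one specific 1,126-bit string under a uniform
# ("fair coin") chance hypothesis is 2 ** -bits; work in logs to
# avoid floating-point underflow.
log_p = -bits * log10(2)
print(f"probability ~ 10^{log_p:.0f}")   # probability ~ 10^-339
```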

Specificity: Any other particular combination of 1,126 "bits" of data would face the same astronomical odds against arising by chance. But it wouldn't mean anything. To mean something, it would have to be a "suitable pattern" (see p. 131). A list of primes from 2 through 101 is just such a suitable pattern. Presumably, only an intelligence could have generated it.

Per Dembski, if you can show contingency, sufficient complexity, and suitable specificity in anything, you can infer intelligent design.
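Dembski arranges these three tests into what he calls an "explanatory filter." Here's a schematic of that decision procedure in Python — a caricature, to be sure, since it takes the three judgments as given inputs, and operationalizing each judgment is where all the real work lies:

```python
def explanatory_filter(is_contingent, log10_prob, fits_suitable_pattern,
                       log10_bound=-150):
    """A schematic of Dembski's three-stage explanatory filter.

    Probabilities are passed as base-10 logs, since numbers like
    10^-339 underflow ordinary floats. The inputs are taken as given;
    deciding them for a real phenomenon is the hard part. The default
    bound is his universal probability bound (more on that below).
    """
    if not is_contingent:
        return "necessity"      # natural law fully accounts for it
    if log10_prob > log10_bound:
        return "chance"         # not improbable enough to exclude chance
    if not fits_suitable_pattern:
        return "chance"         # complex, but not specified
    return "design"

# The Contact signal: contingent, chance probability ~ 10^-339, and it
# matches an independently given pattern (the primes from 2 to 101).
print(explanatory_filter(True, -339, True))   # design
```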


I doubt, though, that you can always infer design under the conditions Dembski suggests. I don't just mean that there's a (vanishingly small) probability that a "designed" phenomenon arose by chance alone. I mean that I question the epistemological and ontological basis for inferring design when the "designer" is supernatural.

The inference to design makes good sense when the putative "designer" is worldly — as in the case of far-off extra-terrestrials. But when the "designer" is above this world, not in it, does the same logic apply?

Epistemologically, a "yes" to that question presupposes that Dembski's putative divine designer works in ways we can detect and know. Because this commonsense assumption may not be compatible with the kind of faith that moves mountains, I question it.

Ontologically, a "yes" implies that the very being of the world can have been instituted by God in such a way as to allow it to leave, along the paths of its evolutionary history, empirical "bread crumbs" by which we can infer divine design. But what if such a world could only have produced beings that lack free will? Then (assuming we humans have free will) these so-called "bread crumbs" must really mean something else entirely.

Or, they might mean nothing at all. Dembski mentions this possibility in endnote 15, pp. 290-291. The note's purpose is to show how the modern scientific community "sanitizes design" right out of the realm of explanatory options. He quotes biologist Julian Huxley, writing in 1945, to the effect that

In any case, if we repudiate creationism, divine or vitalistic guidance, and the extremer forms of orthogenesis, as originators of adaptation, we must (unless we confess total ignorance and abandon for the time any attempts at explanation) invoke natural selection.

By adaptation Huxley means evolutionary change. Orthogenesis and vitalistic guidance are old-fashioned ideas which modern science has also rejected. What interests me, however, is Huxley's "total ignorance" option. Naturally science, which seeks explanatory models of objective reality, rejects it out of hand. But on what a priori grounds, I wonder?

These epistemological and ontological objections of mine are philosophical. Specifically, they concern the philosophy of science itself, but more generally, they concern the branch of philosophy called epistemology and, within the branch called metaphysics, the sub-branch called ontology.


Dembski's methodology itself seems to me to be sound enough (but see a possible objection related to theories of self-organized complexity below).

Basically, what Dembski does to establish design is show that it is much, much easier to produce a contingent event according to some (not necessarily known) "side information" than it is to produce it by pure chance. Knowledge of prime numbers is the "side information" in the Contact example. This knowledge has nothing to do with our knowledge of the signal itself, the 1's and 0's, but (once we twig to the "side information") it explains the signal fully.

In most cases, we don't have any idea what the "side information" may be. So Dembski cleverly measures the probability that such "side information" exists by applying complexity theory itself.

Dembski also rules out certain events on their own merits — the ones that aren't freakish enough to impute design. In another of his books (The Design Inference) he justifies taking a "universal probability bound" of 10 to the power -150 to be the threshold of probability above which intelligent design ought to be ruled out (see pp. 142-144). A possibly non-coincidental event with a (much higher) probability of 10 to the power -8, for example, is not freakish enough to be attributed to design.
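For the record, Dembski derives the bound by multiplying three deliberately generous cosmic limits: roughly 10 to the 80 elementary particles in the observable universe, at most 10 to the 45 state changes per particle per second (the Planck-time limit), and 10 to the 25 seconds of available time. The product, 10 to the 150, caps the number of chance "trials" the universe could ever have run. A quick check of that arithmetic and of the threshold logic:

```python
# Dembski's universal probability bound: three generous cosmic limits,
# multiplied together, cap the number of chance "trials" the universe
# could ever have run.
particles      = 10 ** 80   # elementary particles in the observable universe
events_per_sec = 10 ** 45   # max state changes per second (Planck-time limit)
seconds        = 10 ** 25   # generous upper bound on time available

max_trials = particles * events_per_sec * seconds
print(f"bound = 10^-{len(str(max_trials)) - 1}")   # bound = 10^-150

# An event at 10 to the power -8 clears the bound by a mile, so chance
# stays on the table and design is not inferred.
print(10 ** -8 > 10 ** -150)   # True
```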

Additionally, Dembski insists that the probability of "side information" as the explanation of an event in question be so much higher than the probability that the event happened purely by chance that false positives — erroneous decisions in favor of design — are effectively impossible. False negatives — wrongly attributing what is actually a designed phenomenon instead to chance — remain quite possible.
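That asymmetry is easy to see by reusing the explanatory_filter sketch from above. Take a short message — say, eight bits deliberately composed by a mind. Its chance probability is 2 to the power -8, nowhere near the bound, so the filter shrugs and says "chance" even though design is the truth. A false negative, tolerated by construction:

```python
from math import log10

# An 8-bit string someone deliberately composed: genuinely designed,
# but its chance probability (2^-8, about 0.004) is nowhere near the
# bound, so the filter sketched above settles for "chance".
log10_prob = 8 * log10(0.5)                        # ~ -2.4
print(explanatory_filter(True, log10_prob, True))  # chance -- a false negative
```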


Methodologically speaking, I'm not quite sure how Stuart Kauffman's theories of self-organization (see his book At Home in the Universe) fit in. Do they challenge Dembski's measure of contingency (which they might if emergent properties are guaranteed to emerge, given the right initial conditions and enough time) or that of complexity (which they might if they are only probabilistic, not certain)?

Or do they require a whole separate category of investigation — call it "emergence"? It would possibly work in the same yes/no way as contingency. If the answer is yes — if the "information" in a contingent biological system which Dembski's tests have shown to be "designed" could instead be explained by laws of emergent self-organization — then that fourth causal category (the other three being necessity, chance, and design) would be imputed in place of design.

Why is emergence a whole fourth category? Contingency is assumed by Dembski to apply when no reductionistic explanation necessitates a particular phenomenon. But self-organization is a holistic explanation. And I don't really think its champion, Stuart Kauffman, can say for sure whether the laws of self-organization make us "we the expected" or "we the guaranteed to appear" — again, given the right starting conditions and enough time. If Kauffman is right, though, and we are only the expected, not guaranteed, results of evolutionary history, then (unless emergence can be subsumed under Dembski's probabilistic complexity-specification test) it belongs in a separate category.


As far as I can tell, then, Dembski's theory of intelligent design is a perfectly sound one — even a clever one — as long as on meta-scientific or theological grounds you exclude supernatural intelligence from the range of consideration, and as long as you come to some well-justified accommodation with the theories of emergent, self-organized complexity, which may force reevaluation of the contingency and complexity criteria or may require their own separate test.

With those caveats, I can't see any reason not to call Dembski's theory "good science."

Still and all, I can't agree with Dembski that he has (in the first 120+ pages of Intelligent Design) given sufficient reason to believe his theory of empirically detectable causation from above even applies when the putative designer is God. Maybe a God who envisioned free-willed creatures depended not on his own blueprints but upon the ability of living systems to self-organize as a basis for their evolution.
