Wednesday, May 11, 2005

The Argument Thus Far (I.D. XV)

To date, I have read up through "Generating Information via Law and Chance," section 6.4 in William Dembski's Intelligent Design: The Bridge Between Science & Theology. This is my 15th post about the book; post number 14 was The Handmaiden of Design (I.D. XIV). Herein, I want to try to summarize Dembski's argument thus far.

Dembski says that there is in nature, particularly biological nature, that which can be shown to be "complex specified information," or CSI. CSI and "design" are, mathematically speaking, one and the same. As information, CSI is something capable of reducing uncertainty. More than that, it is both complex and specified.

• That CSI is complex means that the probability of its happening by chance falls below a "universal complexity bound" of 10 to the -150 power, which equates to 500 bits (0's and 1's) of information (see p. 166; a quick arithmetic check of that equivalence follows the quotation below).

Dembski sets the "probabilistic cutoff" for CSI-implying complexity at that conservative level for good and sufficient reasons. However, he also admits that

... just where the probabilistic cutoff is can be debated, but that there is a probabilistic cutoff beyond which chance becomes an unacceptable explanation [for the existence of CSI] is clear. The universe will experience heat death before random typing at a keyboard produces a Shakespearian sonnet. (p. 166)
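
As a quick arithmetic check (mine, not Dembski's), the 500-bit figure is simply the 10-to-the-minus-150 bound re-expressed in binary digits:

    import math

    # Converting the 10^-150 probability bound into bits of information:
    # -log2(10^-150) = 150 * log2(10), which is just under 500.
    bits = 150 * math.log2(10)
    print(round(bits, 1))   # 498.3, which Dembski treats as 500 bits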

• That CSI is specified means that there is an independent body of "side" information by which its internal pattern of information can be constructed, without reference to the CSI itself. The piece of CSI is, in the language of probability theory, an "event." It might be a sequence of 100 coin flips, either heads or tails. This multi-flip event can be converted into a pattern of 1's for heads and 0's for tails. If there is no separate, independent pattern from which this pattern can be derived, then it is not specified. Even though the event exhibits a pattern in and of itself, it is not CSI, because there is no side information by which that pattern could be constructed independently of the event.

Dembski gives this example (pp. 135-138): an event consisting of 100 coin flips looks entirely random, as if really generated by flipping a "fair" coin 100 times. It has exactly 50 alternations between heads and tails, as expected. There are 49 heads in total and 51 tails, as expected. There is also one run in which one of the two possible results (tails) comes up 7 times in a row, just as statisticians know "ought to happen" in a random 100-flip event.

But when the heads-tails sequence is converted into 1's and 0's and then chopped up into appropriate subsequences, the progression 0, 1, 00, 01, 10, 11, 000-to-111, 0000-to-1111 reveals itself. This arrangement is the ordered set of binary numbers, ranked low to high, consisting first of one digit, then two digits, then three digits, and so on. This set is obviously the side information by which the original event could be constructed, if one had no knowledge of the event per se.
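
For concreteness, here is a minimal sketch (my own Python, not anything from the book) that rebuilds the sequence exactly as just described, assuming the listing of binary numbers simply runs on into the five-digit numbers until 100 flips are reached, and then tallies the statistics mentioned above:

    from itertools import count

    def counting_sequence(n_bits: int) -> str:
        """Concatenate 0, 1, 00, 01, 10, 11, 000, ... and truncate at n_bits."""
        out = ""
        for width in count(1):                  # one-digit numbers, then two-digit, then three-digit, ...
            for value in range(2 ** width):
                out += format(value, f"0{width}b")
                if len(out) >= n_bits:
                    return out[:n_bits]

    seq = counting_sequence(100)                # '1' = heads, '0' = tails
    heads = seq.count("1")
    tails = seq.count("0")
    alternations = sum(a != b for a, b in zip(seq, seq[1:]))
    longest_tail_run = max(len(run) for run in seq.split("1"))
    print(f"heads={heads} tails={tails} alternations={alternations} longest tail run={longest_tail_run}")
    # Built this way, the string reproduces the statistics reported above:
    # 49 heads, 51 tails, 50 alternations, and one run of 7 tails.

That the tallies come out exactly as reported is some reassurance that this really is the construction Dembski has in mind. The sketch also anticipates the "tractability" point discussed below, since the side information lets us write the event down without ever having observed it.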


The side information is, in Dembski's terms, the specification of the probabilistic event which is, or contains, the CSI. Dembski is at pains to make certain that this side information is truly an independent specification, not an ad hoc fabrication. He gives an example on pp. 131-133, which I will modify slightly.

Suppose an archer fires 100 arrows at a large wall and asks us to determine how skillful he is. The arrows wind up in various spots on the wall, so after shooting them, he simply goes up to the wall and paints circular targets around each arrow. Then he says, "What a wonderful archer am I!"

That's no good, Dembski says. It's an ad hoc fabrication. The supposed pattern is not an independent specification at all; it is simply "read off" an event which has already transpired.

Let's take a different case. Suppose the archer took an event set like the "coin flips" of the previous example — which was actually generated by performing some simple binary arithmetic, not by tossing a coin — and extended it until the string contained enough digits to supply eight bits for an X coordinate and the next eight bits for a Y coordinate, yielding as many coordinate pairs as he has arrows: 100. The X and Y coordinate pairs would determine the points at which to paint small targets on the wall.

Then, suppose he shot his arrows and hit every target dead center.

In this case, the specification of the targets would be independent of the arrow-shooting event. If the event of hitting all the targets were sufficiently complex (which it would be if there were enough targets), then the event would represent complex specified information. This particular CSI would prove the archer didn't hit the targets by combining blind-chance arrow shooting with post-facto ad hoc target painting.
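
To make the modified example concrete, here is a small sketch of my own (the 0-to-255 coordinate range is my assumption, purely for illustration) showing how the same binary-counting string, extended to 1,600 digits, could be read off as one hundred X-Y target positions:

    from itertools import count

    def counting_sequence(n_bits: int) -> str:
        """Same construction as before: 0, 1, 00, 01, 10, 11, ..., truncated at n_bits."""
        out = ""
        for width in count(1):
            for value in range(2 ** width):
                out += format(value, f"0{width}b")
                if len(out) >= n_bits:
                    return out[:n_bits]

    coord_bits = counting_sequence(100 * 16)          # 8 bits for X plus 8 bits for Y, per arrow
    targets = [(int(coord_bits[i:i + 8], 2),          # X coordinate, 0 to 255
                int(coord_bits[i + 8:i + 16], 2))     # Y coordinate, 0 to 255
               for i in range(0, len(coord_bits), 16)]
    print(len(targets), "targets, for example:", targets[:3])

The point is only that the target positions are fixed by the side information before a single arrow flies, which is what makes the specification independent rather than ad hoc.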

In other words, the fact that the event passes Dembski's complexity-specification criterion proves that the archer is a master of his craft, not a cheater or merely incredibly lucky. This criterion is another way of determining that an event equates to CSI. Importantly, what probability theory calls an event and what information theory calls a message or a piece of information are basically the same thing.


The side information which serves as the specification of an event E, which is indeed a piece of CSI, may not be known, oddly enough, even though the event is in fact specified. Accordingly, we need some way of showing that the side information exists, despite its being unknown and unguessable. The method Dembski devises comes from probability theory and complexity theory taken in tandem (see pp. 138-139; see also his book The Design Inference).

First, probability theory is used to confirm a "conditional independence condition" ensuring that the side information (assuming it exists) must be "conditionally independent" of the event itself: our ignorance of the event would in no way affect our ability to know the side information, and vice versa. If someone happens to reveal to us what the side information is, we can reconstruct the event perfectly even though we have as yet been given no knowledge of the event itself.

Put another way, learning at some point what the side information is — and thus gaining sure knowledge that the side information exists — makes no difference to our original estimate of the probability of event E happening. Nor does our knowledge of E's probability change the likelihood that E is specified by some side information as well.
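
A tiny numerical gloss of my own on that point: under the fair-coin chance hypothesis, any particular sequence of 100 flips has probability one-half raised to the 100th power, and being told the binary-counting side information does not budge that estimate one way or the other; it only lets us write the sequence down.

    # Probability of one specific 100-flip outcome under the fair-coin hypothesis.
    # Knowing or not knowing the side information leaves this estimate unchanged.
    p_event = 0.5 ** 100
    print(p_event)   # about 7.9e-31, still far larger than the 10^-150 bound,
                     # so the 100-flip example illustrates specification rather
                     # than complexity in Dembski's sense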

The conditional independence condition is what ensures that the supposed pattern behind the event was not fabricated after the fact, like painting targets where the arrows happen to go.

Second, complexity theory allows us to impose a "tractability condition" which makes sure that the side information would indeed, if learned, "provide the resources necessary for constructing the pattern in question" (p. 130). If so, the ability to construct the pattern of information associated with event E is assured, assuming the side information which is the event's specification is somehow discovered.

The underlying assumption here is apparently that if complexity theory can determine that the pattern associated with event E is "intractable," then there must be no side information which could tamely generate E.

I must admit that I find both of these rules for determining that a candidate event's information content is in fact specified, by Dembski's definition, a bit murky. To a certain extent, then, I have to take Dembski at his word when he says:

Taken jointly, the tractability and conditional independence conditions mean that side information enables us to construct the pattern to which an event conforms, yet without recourse to the actual event. (p. 139)

I consider the fact that Dembski doesn't lay out the actual methodology for applying the conditional independence criterion and the tractability criterion to be a flaw in his presentation. I assume he remedies this in his more technical book, The Design Inference. But as things stand, I have no way of knowing whether, for example, the math used to demonstrate tractability is independent of the math used to calculate the complexity of the event in question. If not, then who knows for sure whether Dembski's argument for intelligent design is sufficiently rigorous.

At any rate, an event which has the requisite side information, known or unknown, represents specified information. If it is sufficiently complex, it is complex specified information, or CSI. In other words, any event which satisfies the complexity-specification criterion is CSI.


And CSI must be freely and intentionally caused, i.e., intelligently designed, Dembski says. For there are only two other causal options: chance and necessity. CSI presumably does not result from chance if its probability is less than 10 to the -150 power, the "universal complexity bound." And, for mathematical reasons Dembski explores in section 6.2, "Generating Information via Law," on pp. 160-165, CSI cannot originate in natural laws, computer-style algorithms, or math functions, the various vessels of necessity — so it cannot be a product of necessity, period. CSI must be contingent, if only because all information is contingent. If it were not contingent, it could not reduce our uncertainty.

Dembski, again using a mathematical argument, goes so far as to show that law (i.e., necessity) working in tandem with chance cannot produce CSI either (see section 6.4, "Generating Information via Law and Chance," pp. 167-170). In fact, he says that the type of mathematical function known as a "stochastic process" can stand in for any conceivable combination of chance and necessity, from all chance to all necessity to chance-necessity blends. Stochastic processes can be shown in general to be powerless to originate CSI. That leaves only intelligent design as an explanation for CSI.
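
To keep the overall logic straight, here is a schematic sketch, my own paraphrase of the argument as summarized above rather than anything Dembski supplies, of how the complexity-specification criterion sorts an event into chance-or-necessity versus design:

    UNIVERSAL_BOUND = 1e-150    # Dembski's probabilistic cutoff (p. 166)

    def causal_verdict(probability: float, has_independent_specification: bool) -> str:
        """Schematic restatement of the eliminative argument summarized above.

        `probability` is the event's probability under the best chance-plus-necessity
        (stochastic-process) explanation; `has_independent_specification` records
        whether detachable side information exists. Both are assumed to have been
        established by prior analysis.
        """
        if probability >= UNIVERSAL_BOUND:
            return "chance (possibly mixed with necessity) remains a live explanation"
        if not has_independent_specification:
            return "complex but unspecified: still attributable to chance"
        return "complex specified information: attribute it to intelligent design"

    print(causal_verdict(1e-10, True))      # not complex enough
    print(causal_verdict(1e-200, False))    # complex but not specified
    print(causal_verdict(1e-200, True))     # CSI, hence design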

There are only three categories of designing intelligence that can fill the bill of originating CSI: us, extraterrestrials (if they exist), and God (if he exists). We are fairly certain E.T. didn't make earthly aardvarks, for example, and we know very well we didn't, so there must be a God!

Or at least there must be an intelligent designer of the universe. Dembski leaves it up to theologians to say what this designer is like.


That, in a nutshell, is Dembski's argument thus far. If the mathematics of information theory, probability theory, and complexity theory can be combined to rule out chance and necessity as causes of a worldly event, then the event must represent complex specified information that vouchsafes intelligent design. If the designer is not human, then (barring extraterrestrials) it must be divine.
