
Of Frauds and Fingerprints

Over on his weblog, William Dembski has a post making reference to an article on a means of "fingerprinting" textured surfaces, like paper. It is an interesting article. But look what Dembski has to say about it:

The Logic of Fingerprinting

Check out the following article in the July 28th, 2005 issue of Nature, which clearly indicates how improbability arguments can be used to eliminate randomness and infer design: “‘Fingerprinting’ documents and packaging: Unique surface imperfections serve as an easily identifiable feature in the fight against fraud.” I run through the logic here in the first two chapters of The Design Inference.

Well, it is a little hard to know how to proceed from this point. Did Dembski fail to read the article? Is Dembski simply spouting something that ID cheerleaders can nod sagely about, without regard to whether it happens to accord with reality? Whatever the excuse, the plain fact of the matter is that the procedure and principles described in the short PDF Dembski cites have nothing whatever to do with Dembski's "design inference", and cannot be forced into the framework Dembski claims.

The problem here is one pointed out by critics before -- Dembski claims certain things demonstrate "specified complexity" (SC) or "complex specified information" (CSI) without making the slightest effort to show that the claim has any support. I'm going to go a bit further than has been done in the past and note this as an example from a class that Dembski has particular difficulty with. Dembski has often claimed that certain events exemplify CSI without explaining how, and a significant set of them concern what I will call assignment association, as does the present example. I will argue that Dembski's "design inference" framework is inherently unsuited to dealing with the class of assignment associations.

Take, for example, Dembski's claims that Visa credit card numbers or phone numbers represent CSI concerning the cardholder or phone owner (see Intelligent Design, p. 159). In general, phone numbers are simply assigned by the phone company to customers, with only a fraction of customers insisting on choosing particular numbers. In the case of simple assignment, the phone company makes an association that links a phone number with a customer. Likewise, with Visa credit cards, the issuer simply associates an available number with a particular user by assignment. No agent-applied rule determines a match between a customer and at least several digits of the number that is assigned.

Setting aside the issue that Dembski regularly uses to fend off critics -- that these sorts of things don't approach or exceed Dembski's "universal small probability" -- the problem here concerns Dembski's notion of "specification". For any of these things to be "specified", a subject S must be able to independently determine the relevant "pattern" from "side information". The patterns for Visa cards and phone numbers are the sequences of digits. The "side information" would be items of information that we know about the cardholder or the person we want to phone. And it is here that Dembski's framework founders on assignment associations. While "side information" can determine some of the digits in the sequences examined (the first four digits of Visa numbers must come from the Visa prefix pool; the area code and exchange parts of a phone number are usually determined by geographic information about the phone customer), a significant fraction of the digits in the sequence cannot be determined by "side information". "Side information" about the customer does not tell us their Visa card number. In fact, Visa would be very disappointed if someone were able to guess Visa card numbers given just customer information. Visa has a vested interest in keeping the assignment association between a particular customer and his card number obscure: it reduces credit card theft and other fraudulent activity. And similarly for assigned phone numbers, the "side information" about the phone owner is uninformative about the ten thousand possibilities in the last four digits of the phone number.
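
To make the arithmetic concrete, here is a minimal sketch of how little of a phone number "side information" actually determines. It assumes a ten-digit North American number in which geography fixes the area code and exchange (six digits) but not the four assigned digits; the figures are illustrative, not drawn from the Nature article or from Dembski.

    # Hypothetical sketch: how much of a phone number does "side information" pin down?
    # Assumes a ten-digit number where the area code and exchange (six digits) follow
    # from where the owner lives, while the remaining digits are simply assigned.

    known_digits = 6                       # determined by geographic "side information"
    assigned_digits = 10 - known_digits    # assigned by the phone company

    completions = 10 ** assigned_digits    # equally likely ways to fill the assigned digits
    print(f"Side information leaves {completions:,} possibilities")       # 10,000
    print(f"Chance of guessing the full number: {1 / completions:.4%}")   # 0.0100%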

In fact, what assignment association offers is specifically what Dembski's method stays away from, that category of patterns that Dembski refers to as "fabrications" or "read-off information". The Visa card example is clearly one of these, and the final four digits of a phone number also fall into that category. And now that we have that basis firmly in mind, let's examine the current instance Dembski is going on about.

Buchanan et al. 2005 describes how rough-textured surfaces have their own "fingerprint", made up of the highly contingent way that the fibers or other components of the textured surface come together. The authors used laser scanning to characterize a region on each of two pieces of white paper from the same batch. While a cross-correlation of the scans of the two pieces did not reveal any similarity between them, an auto-correlation of the scan of one piece of paper with itself showed a strong match at zero offset -- in other words, the pattern from the scan matched itself. They go on to show that this matching was robust after handling the paper and rescanning the same area, at least if they got within 1 mm and 2 degrees of matching the position and orientation of the item in the first scan.
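
For readers who want to see the cross-correlation versus auto-correlation point in miniature, here is a toy 1-D sketch. It is not the authors' actual processing pipeline; synthetic noise simply stands in for two independent laser-scan traces. A trace correlated against itself peaks sharply at zero offset, while two unrelated traces show no comparable match at any offset.

    # Toy illustration (not the authors' pipeline): auto-correlation peaks at zero offset,
    # cross-correlation of two independent "scans" does not.
    import numpy as np

    rng = np.random.default_rng(0)
    scan_a = rng.standard_normal(2048)   # stand-in for a laser scan of sheet A
    scan_b = rng.standard_normal(2048)   # stand-in for an independent scan of sheet B

    def normalized_peak(x, y):
        """Peak of the normalized cross-correlation between two traces."""
        x = (x - x.mean()) / x.std()
        y = (y - y.mean()) / y.std()
        return np.max(np.correlate(x, y, mode="full")) / len(x)

    print("sheet A vs itself :", normalized_peak(scan_a, scan_a))   # ~1.0 (match at zero offset)
    print("sheet A vs sheet B:", normalized_peak(scan_a, scan_b))   # small (no match at any offset)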

Now, Dembski says that "improbability arguments can be used to eliminate randomness and infer design" and points to this study. Here's where things go wrong for Dembski. "Design" is, in the very book Dembski cites, The Design Inference, the set-theoretic complement of laws and chance. But the patterns that Buchanan et al. 2005 are scanning and using to identify rough-surfaced items are due to chance. They note that the manufacturers don't control this and, in fact, cannot control it:

Most existing security validation schemes rely on a proprietary manufacturing process that would be difficult for a fraudster to reproduce (for example, holograms or security inks). Our findings open the way to a new approach to authentication and tracking — even the inventors would not be able to carry out a physical attack on this fingerprint as there is no known manufacturing process for copying surface imperfections at the required level of precision. There is no need to modify the protected item in any way through the addition of tags, chips or inks, so protection is covert, low-cost, simple to integrate into the manufacturing process, and immune to attacks directed against the security feature itself.

In fact, Buchanan et al. 2005 are pretty explicit concerning that "chance" thing:

Naturally occurring randomness in the physical properties of an attached tag or token is a means of ascribing a unique identifier to documents and objects [1–4]. We have investigated the possibility of using the intrinsic roughness present on all non-reflective surfaces as a source of physical randomness. This has the potential to provide strong, in-built, hidden security for a wide range of paper, plastic or cardboard objects.

Note that there are four references cited concerning randomness. Are any of those to Dembski's work? Of course not. Dembski's "design inferences" have nothing to do with what is going on here.

If Dembski insists that this is a validation of his "design inference", I submit that it actually instantiates one of those "false positives" that Dembski dislikes talking about. The pattern found by Buchanan et al. 2005 is precisely a "fabrication", a series of bits literally "read off" the surface imperfections of the object of interest. And it is precisely like fingerprints themselves, which develop in a way contingent enough, and diverse enough, that our legal system considers a good set of matching points a reliable means of uniquely identifying the person who bears them. Fingerprints are another form of assignment association, immune to Dembskian analysis of "side information" and the like.

Here's another point of departure between Dembskian "design inferences" and assignment association: Dembskian approaches to measuring "complexity" estimate the probability of the origination of an observed pattern, but assignment association occurs with probability one. A piece of paper, as scanned by Buchanan et al. 2005, always has its very own identifying set of surface imperfections. Buchanan et al. 2005 calculate the odds of two pieces of paper ending up with the same scanned result as less than 1e-72. The odds that a particular piece of paper gets a particular set of surface imperfections are less than 1e-72, but the odds that it gets some measurable set of surface imperfections are just 1. Nor can one somehow take the "side information" about the batch of paper being manufactured and derive the particular scan result for any given piece of paper, as Dembskian specification requires. No, this enterprise of Buchanan et al. 2005 is entirely outside of Dembskian probability juggling.
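
The "particular outcome versus some outcome" distinction is just elementary probability, and a minimal sketch makes it plain. The 240-bit string below is an illustrative stand-in, chosen only so that one pre-specified string has probability below 1e-72; it is not the paper's actual model of surface roughness.

    # Minimal sketch: any *particular* fingerprint is astronomically improbable,
    # but getting *some* fingerprint is certain. Figures are illustrative only.

    n_bits = 240                   # string length chosen so one specific outcome is < 1e-72
    p_particular = 0.5 ** n_bits   # chance of one pre-specified 240-bit fingerprint
    p_some = 1.0                   # every sheet ends up with *a* fingerprint, no matter what

    print(f"P(this exact fingerprint) ~ {p_particular:.2e}")   # ~5.7e-73
    print(f"P(some fingerprint)        = {p_some}")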

An essential distinction to make concerns using probability as Buchanan et al. 2005 do, to infer identity, and using probability as Dembski asserts, to find "design". Let's say someone in the not-too-distant future takes the "Where's George?" idea of tracking money one step further and does a Buchanan-style scan on every piece of paper currency that comes their way. They are victimized by a pickpocket, who by the time he is apprehended has ditched the victim's wallet and is carrying only the paper money from it. The victim offers the police his Buchanan-scans of his currency, and the police find several matches to those scans among the bills the pickpocket (who has loudly proclaimed his innocence) had in his possession. Haven't we used small probabilities to exclude the notion that the pickpocket just happens to be carrying around those particular bills by chance, and thereby inferred design on his part? We've made a Dembskian "design inference" here after all, some would claim.

That's not the correct way to look at this. The probability here is employed to decide upon the identity of each bill examined: whether the provided scan matches the bill being analyzed. Once that identity is established, we start another inferential chain, from substantiated possession of the bills by the victim (this is the part that those small probabilities established) to subsequent possession by the pickpocket, to infer that the pickpocket really did pick that pocket. But in establishing the identity of the bills themselves, at no time did we utilize Dembskian methods of inference. The assignment association of the surface texture of each bill determined the specific characteristics represented by the scan data. The matching did not depend upon any "side information" whatever; it was accomplished using -- and remembering -- "read-off" information.
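
To underline how the probability is actually being used in that scenario, here is a hedged sketch of the identity decision alone: each recovered bill is compared against the victim's stored scans, and "same bill" is declared only when a match score clears a threshold set high enough that a chance match is wildly improbable. The scan values, the score function, and the threshold are all hypothetical stand-ins, not anything from Buchanan et al. 2005.

    # Hypothetical sketch of the bill-identification step; nothing here infers "design".

    STORED_SCANS = {                           # the victim's recorded fingerprints, per bill
        "bill-001": [0.12, -0.55, 0.98],
        "bill-002": [0.80, 0.03, -0.41],
    }

    def match_score(scan_a, scan_b):
        """Toy cosine-similarity score; a real system would use normalized cross-correlation."""
        dot = sum(a * b for a, b in zip(scan_a, scan_b))
        norm = (sum(a * a for a in scan_a) * sum(b * b for b in scan_b)) ** 0.5
        return dot / norm

    def identify(recovered_scan, threshold=0.9):
        """Return the stored bill id whose scan matches, or None if nothing clears the bar."""
        for bill_id, stored in STORED_SCANS.items():
            if match_score(recovered_scan, stored) >= threshold:
                return bill_id
        return None

    print(identify([0.12, -0.55, 0.98]))   # "bill-001" -- identity established
    print(identify([0.33, 0.91, 0.10]))    # None -- not one of the victim's bills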

Dembski has a problem: any time someone mentions "probability", he seems to think the work must be related to his own in some non-trivial sense. But, in fact, work in probability went on before Dembski's odd notions were published, goes on now without reference to or reliance on those notions, and will continue into the future when Dembski's work is no more than a footnote in reviews of socio-political oddities. In the present instance, Dembski claims that his "design inference" in some way describes the work performed by Buchanan et al. 2005, but in fact their descriptions of procedures make it clear that they are dealing with unplanned contingent features as aids to identification, and not in any sense making guesses at some "designed" pattern. This makes it clear that even Dembski doesn't understand when and how to deploy his own notions of "design inference". Perhaps if he had taken up my suggestion back in 2001 of developing a workbook of examples, he'd have some idea of how to actually use his own "calculations".