Antievolution.org Discussion Board

 Topic: Thread for Cryptoguru, Evolution, Evolutionary Computing, etc
cryptoguru

Posts: 53
Joined: Jan. 2015

 Quote I don't think anyone is claiming a random distribution contains information that would make a useful target.

Yup that's exactly what the other contributors are claiming

 Quote A random distribution has maximum entropy.  It thus has maximum uncertainty about the next piece of data, and thus has maximum information.  It is incompressible.

Entropy is a measure of how compressible an information stream is: random noise is not compressible, just like high-frequency information. But that doesn't mean the signal contains meaningful information. We're not talking about wanting to losslessly transmit random noise here, we're talking about whether noise contains meaningful information. Shannon's theory isn't interested in the content of the signal ... it treats all signals as information. We're talking about the content of the signal: is it noise, or is it information?
You are using entropy theory, which is concerned with lossless compression of data, to fallaciously determine whether a noisy signal contains meaningful information or not. Do you not see the difference?

Maybe we should use a different word if you're finding it hard to understand what we're talking about (or deliberately trying to muddy the waters with equivocal terms). Let's call it a MESSAGE instead of information. So I can encrypt a message and make it highly entropic so that the message is hidden, this is not random noise ... it can appear random, but the message is hidden in the signal. Does a random signal contain a message? NO! Random noise does not contain ordered and meaningful messages. DNA is a message ... a message to the cell about how to build an organism. So let's stop pretending that we can get MESSAGES automatically from random data.
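The distinction being argued over here can be made concrete. Below is a minimal Python sketch (my illustration, not from the thread) that uses zlib's compressed size as a rough stand-in for Kolmogorov-style description length: random bytes have near-maximal empirical entropy and are incompressible, while a repeated "DANGER" message has low entropy and compresses heavily. Note that neither measure says anything about meaning.

```python
import os
import zlib
from collections import Counter
from math import log2

def entropy_bits_per_byte(data: bytes) -> float:
    """Empirical Shannon entropy of a byte string, in bits per byte."""
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in Counter(data).values())

noise = os.urandom(10_000)       # maximally entropic, no "meaning"
message = b"DANGER " * 1500      # highly ordered, low entropy

for label, data in (("noise", noise), ("message", message)):
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{label}: {entropy_bits_per_byte(data):.2f} bits/byte, "
          f"compresses to {ratio:.2%} of original size")
```

The random stream scores close to 8 bits/byte and does not compress; the repetitive message scores near log2(7) bits/byte and shrinks to a fraction of a percent. Both are "information" in Shannon's sense.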

OgreMkV

Posts: 3668
Joined: Oct. 2009

So is a signal of a message encrypted on a purely random one-time pad random noise or a meaningful message?

eta: I guess the real question... can you determine whether information is meaningful or not, just by looking at it? If you can, then you have done more than every ID proponent ever. But I don't think that it is even possible mathematically.
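The one-time-pad scenario can be sketched in a few lines (my illustration; `xor_bytes` is just a helper name). The ciphertext is produced by XOR with a uniformly random pad, so by itself it is statistically indistinguishable from noise, yet the intended recipient recovers the message exactly:

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

message = b"ATTACK AT DAWN"
pad = os.urandom(len(message))           # purely random one-time pad
ciphertext = xor_bytes(message, pad)     # looks exactly like random noise
recovered = xor_bytes(ciphertext, pad)   # XOR with the same pad inverts it
print(recovered)
```

Without the pad, no analysis of the ciphertext alone can reveal whether a "meaningful message" is present, which is the point of the question.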

Edited by OgreMkV on Feb. 20 2015,16:42

--------------
Ignored by those who can't provide evidence for their claims.

http://skepticink.com/smilodo....retreat

midwifetoad

Posts: 4003
Joined: Mar. 2008

 Quote Yup that's exactly what the other contributors are claiming

You keep aiming for the floor and missing. It's quite a talent.

Random noise (white noise) contains all frequencies. So you can apply filters and get pure frequencies. At least in theory.

So the random noise is how the new alleles arrive. Random mutation creates the new sequence.

Now apply selection (purifying or adaptive, or both). You get a "meaningful signal". Out of all the random mutations, a few survive and reproduce. (Bear in mind that, meanwhile, zillions of unmutated individuals also survive and reproduce.)

The selector is the properties of chemistry. Some sequences create folds or regulators that either don't change viability or improve it. This is not a designer. It's chemistry.

And it's not a target. It's just that some sequences are functionally equivalent to other sequences. That's why genomes can change while the phenotype remains static. That's why we have alleles and variants.

If a genome reaches one of Doug Axe's dreaded local maxima, it is not stuck, because every time a neutral mutation occurs, it opens new pathways, new dimensions in the search space. A new mutation might produce a breakthrough in functionality. See Lenski.

But viable alleles are not targets. They are not searched for.

--------------
Any version of ID consistent with all the evidence is indistinguishable from evolution.

N.Wells

Posts: 1834
Joined: Oct. 2005

 Quote (cryptoguru @ Feb. 20 2015,15:16) N Wells: Shannon would laugh in your face, the whole point of Shannon's theory is that a purely random distribution has maximal entropy (i.e. no information) ... he's saying the exact opposite of what you are trying to get him to say.

From Peter Grunwald and Paul Vitanyi, 2008, Shannon Information and Kolmogorov Complexity
http://homepages.cwi.nl/~paulv.....nfo.pdf

 Quote [with my emphasis] Both [Shannon information theory and Kolmogorov complexity theory] aim at providing a means for measuring ‘information’. They use the same unit to do this: the bit. In both cases, the amount of information in an object may be interpreted as the length of a description of the object. ***In the Shannon approach, however, the method of encoding objects is based on the presupposition that the objects to be encoded are outcomes of a known random source—it is only the characteristics of that random source that determine the encoding, not the characteristics of the objects that are its outcomes.*** In the Kolmogorov complexity approach we consider the individual objects themselves, in isolation so-to-speak, and the encoding of an object is a short computer program (compressed version of the object) that generates it and then halts.  In the Shannon approach we are interested in the minimum expected number of bits to transmit a message from a random source of known characteristics through an error-free channel. ... In Kolmogorov complexity we are interested in the minimum number of bits from which a particular message or file can effectively be reconstructed: the minimum number of bits that suffice to store the file in reproducible format. This is the basic question of the ultimate compression of given individual files. A little reflection reveals that this is a great difference: for every source emitting but two messages the Shannon information (entropy) is at most 1 bit, but we can choose both messages concerned of arbitrarily high Kolmogorov complexity. Shannon stresses in his founding article that his notion is only concerned with communication, while Kolmogorov stresses in his founding article that his notion aims at supplementing the gap left by Shannon theory concerning the information in individual objects.

The ellipsis is a quote from C.E. Shannon, 1948, The mathematical theory of communication,  Bell System Tech. J., 27:379–423, 623–656: “The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem.  The significant aspect is that the actual message is one selected from a set of possible messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design.”

Let's recap:
Cryptoguru:

 Quote 1) PURE UNDIRECTED randomness, which simply creates random output and not information

Me:

 Quote #1 creates a perfectly good form of information (Shannon information), but you don't like that definition.  You want information to have "intended meaning".

From the source above: "outcomes of a known random source", i.e. random output constitutes Shannon information.  Shannon wouldn't have said that messages frequently have meaning if he hadn't recognized that sometimes they don't.  Yes, random strings equate with maximum entropy and little "meaning", but they still comprise "Shannon information". Shannon information is not concerned with contained meaning, but with the number of bits required to transmit the message. Kolmogorov was concerned with how far an individual message could be compressed.  Different things.
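The Grunwald & Vitanyi point quoted above (a source emitting one of two messages carries at most 1 bit of Shannon information, no matter how complex the messages themselves are) can be checked with a few lines of Python (my sketch):

```python
from math import log2

def binary_entropy(p: float) -> float:
    """Shannon entropy (bits) of a source emitting one of two messages
    with probabilities p and 1 - p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1.0 - p) * log2(1.0 - p)

# The entropy depends only on the source's probabilities, never on what
# the two messages contain: each could be a gigabyte of arbitrarily
# complex data, and the source still carries at most 1 bit per emission.
for p in (0.5, 0.9, 0.99):
    print(f"p = {p}: H = {binary_entropy(p):.3f} bits")
```

The Kolmogorov complexity of either message, by contrast, is a property of the message itself and can be made as large as you like.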

You have it correct that "random noise is not compressible, just like high frequency information. But that doesn't mean the signal contains meaningful information.  .....  Shannon's theorem isn't interested in the content of the signal ... it treats all signals as information."  However, you also denied that random noise is information, which is untrue: for Shannon it is maximally entropic information (and it is incompressible), but it is still information by his definition, contrary to your statement #1.

 Quote Take a functioning gene.  Duplicate it (that happens all the time).  That is new Shannon information, because your compressed version has now been increased by the need to say "x2".  It probably doesn't do anything useful, although an extra copy of a gene can have its uses.  Now if one of the two copies suffers a mutation that disables its function, that's no longer a problem, and the duplicate can go on to suffer additional changes, some of which might develop other uses.  Actually, the first mutation probably didn't disable the primary function, but shifted it, thereby contributing to a different suite or range of capabilities.
I wrote my third sentence very poorly (ironically, I overcompressed what I meant to say :) ): The duplication comprises new information, both new Shannon information and new Kolmogorov information: in the case of the former the message has doubled in length, while in the case of the latter, you at least have to expand your previous message by saying "times 2".  New meaning comes in with the events in the second half of the paragraph: that this sort of thing happened is the most parsimonious interpretation of all the genes that aren't ORFans. You have yet to address this.
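The duplication example can be sketched numerically (my illustration, again using zlib's compressed size as a crude stand-in for the Kolmogorov-style description length):

```python
import os
import zlib

gene = os.urandom(1000)        # stand-in for an incompressible gene sequence
genome = gene                  # before duplication
duplicated = gene + gene       # after duplication

c_before = len(zlib.compress(genome))
c_after = len(zlib.compress(duplicated))

# The Shannon message length doubles (2000 vs 1000 bytes), but the
# compressed description grows only slightly: the compressor
# effectively just appends "that again".
print(c_before, c_after)
```

The compressed size of the duplicated genome exceeds the original by only a handful of bytes, which is the "times 2" being described.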

cryptoguru

Posts: 53
Joined: Jan. 2015

 Quote Sorry, Cryptoguru, but your Ph.D. apparently gave you little insight into algorithmic information theory (AIT). (Maybe it's from the one of the fine institutions that "Drs." Barnes, Baugh, Bliss, Burdick, and Hovind got their paper from? BTW, my Ph.D. is from TAMU, and that can be confirmed easily.)

I am still writing up my PhD part-time at the request of the department, after leaving part-way through the write-up years ago (due to being offered commercial opportunities that I would have been an imbecile to decline). It is for a reputable university in the UK.

Hehe ... you're the one who studied in the Bible-Belt  :D

midwifetoad

Posts: 4003
Joined: Mar. 2008

You are trying to reify a metaphor. In biology, information is a metaphor. It's an abstraction. Abstractions are maps, not territories.

You can't get away with arguing that because a feature isn't on the map, it doesn't exist.

If you encounter a glitch in your understanding based on information theory, you are mistaking the map for the territory. You are stretching a metaphor beyond applicability.

--------------
Any version of ID consistent with all the evidence is indistinguishable from evolution.

Wesley R. Elsberry

Posts: 4936
Joined: May 2002

ABD is not Ph.D.

And it isn't even certain that ABD is the status.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

cryptoguru

Posts: 53
Joined: Jan. 2015

N Wells: Amazing ... actually unbelievable, you've just equivocated over the word MESSAGES now instead of INFORMATION.

The "message" or "information" that Shannon is describing is something to be transmitted across a noisy channel. He is not interested in the content of the "message" or "information". He merely wants to recover the "message" or "information".

We are not talking about THAT kind of "message" or "information" ... we are talking about information (or a message ... call it what you like) that is the content of the transmission. Sure you can send random noise to a recipient, but that's NOT what we're talking about ... we're talking about something that is ordered with intent for the purpose of being understood by the recipient i.e. it has meaning.

A biological cell, when it reads DNA, understands the format of the information and deterministically processes it as instructions that it understands for building an organism. This is why I call DNA computer code (but everyone gets upset with that too), as it is processed like a Turing machine ... so you can't just say it's ANY kind of message or information ... it's a meaningful set of non-linear instructions (similar to AVIDA).
THIS kind of information (e.g. computer code) is only known to originate from an intelligence.
AVIDA attempts to refute this by trying to build a computer code through random mutation and natural selection. But it enforces a known target by measuring if sequences of commands produce a known logical function. The randomness isn't generating a useful set of instructions (meaningful code) by itself. The randomness is being filtered using the known target to eliminate functionally useless output and just leave the stuff we wanted.
YES you can say that all the unwanted random variations are information too if you like, but they're not the kind of information we're talking about, we're talking about information that solves a problem, not just randomly distributed instructions that have no use.

If I randomly throw a pile of sticks on the floor, it's a pile of sticks ... you could say that the arrangement of them is information, but it has no meaning if their orientations are randomly distributed. Then I arrange the sticks to spell out the word "DANGER". You understand the message, it is written in a language that you understand and you presume that an intelligent agent left the sticks in that arrangement to communicate that there is danger (this may not be the originator's intention, but you would not be stupid to assume the message could be intended for you). You would say this is a message or information. You would not claim the original pile of sticks is information (in the normal use of the word).

My argument (and it pains me to have had to explain to adults what the normal use of the word information or message means) is that the meaningful information isn't expected to arise randomly. Evolutionists also claim that it can't arise purely randomly, except that natural selection can work on random material, preserving the good stuff and filtering out the bad, until eventually we're left with meaningful information that solves a problem.

I put it to you that the only kind of process that would allow successive random arrangements of the sticks to result in the message is one that can test against correct stick positions and somehow preserve them. This is similar to what AVIDA is doing by testing against known logical functions, except that there are an infinite number of ways that Avidian commands can produce each logical function, so it becomes relatively easy to find combinations that work, and similarly to reach a final solution (e.g. EQU) and carry on optimising until it has the smallest and most efficient set of commands possible to perform the task.
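For reference, the process conceded here (test against correct stick positions and preserve the matches) is essentially Dawkins' "weasel" algorithm of cumulative selection. A toy Python sketch (mine; the target, alphabet, population size, and mutation rate are arbitrary illustrative choices, and this is not how Avida works):

```python
import random

random.seed(0)
TARGET = "DANGER"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s: str) -> int:
    # The fitness test: number of "stick positions" matching the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str, rate: float = 0.1) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

current = "".join(random.choice(ALPHABET) for _ in TARGET)
for generation in range(1, 10_001):
    pool = [mutate(current) for _ in range(100)] + [current]  # parent kept (elitism)
    current = max(pool, key=score)
    if current == TARGET:
        break
print(f"reached {current!r} after {generation} generations")
```

Single-step random search would take on the order of 27^6 tries; cumulative selection with preserved partial matches converges in tens of generations, which is exactly the difference the thread is arguing about.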

Anyway I've got a life and I think the current topic has run its course .... so I'm going to leave you all to argue amongst yourselves about whether a stick is information and leave you with this last almost off-topic thought.

... what came first: the genetic instruction set or the cell to run it in? If it was the genetic instructions, how would they even arise without the mechanism to run them? (i.e. they would die without replicating) ... If it was the cell that came first, how did it get built? And how then did it generate the instruction set to run on it? And how did it not die before it was able to generate the instruction set that would allow it to replicate, etc.?
Remember a minimal genome would need enough complexity to build, fold and assemble proteins, respire and replicate.
Is that not a good example of irreducible complexity?

OgreMkV

Posts: 3668
Joined: Oct. 2009

Quote (midwifetoad @ Feb. 20 2015,17:44)
 Quote Yup that's exactly what the other contributors are claiming

You keep aiming for the floor and missing. It's quite a talent.

Random noise (white noise) contains all frequencies. So you can apply filters and get pure frequencies. At least in theory.

So the random noise is how the new alleles arrive. Random mutation creates the new sequence.

Now apply selection (purifying or adaptive, or both). You get a "meaningful signal". Out of all the random mutations, a few survive and reproduce. (Bear in mind that, meanwhile, zillions of unmutated individuals also survive and reproduce.)

The selector is the properties of chemistry. Some sequences create folds or regulators that either don't change viability or improve it. This is not a designer. It's chemistry.

And it's not a target. It's just that some sequences are functionally equivalent to other sequences. That's why genomes can change while the phenotype remains static. That's why we have alleles and variants.

If a genome reaches one of Doug Axe's dreaded local maxima, it is not stuck, because every time a neutral mutation occurs, it opens new pathways, new dimensions in the search space. A new mutation might produce a breakthrough in functionality. See Lenski.

But viable alleles are not targets. They are not searched for.

I think it important to add that a local maximum in biology/DNA is only a maximum as long as the environment is totally static. Any minor change (even a few months of drought) can mean that what was the maximum one month is useless the next.

--------------
Ignored by those who can't provide evidence for their claims.

http://skepticink.com/smilodo....retreat

OgreMkV

Posts: 3668
Joined: Oct. 2009

Let me ask this again, since it's critically important

 Quote So is a signal of a message encrypted on a purely random one-time pad random noise or a meaningful message?

eta: I guess the real question... can you determine whether information is meaningful or not, just by looking at it? If you can, then you have done more than every ID proponent ever. But I don't think that it is even possible mathematically.

But since you have been soundly trounced, it's time to run away.

I give it a week before you're back on my blog, whining about something totally unrelated to all of this.

I really did want to talk about radiometric dating, but we already know you lost that battle... Mr. Never-Provide-the-Source-of-My-Claims.

--------------
Ignored by those who can't provide evidence for their claims.

http://skepticink.com/smilodo....retreat

OgreMkV

Posts: 3668
Joined: Oct. 2009

 Quote My argument (and it pains me to have had to explain to adults what the normal use of the word information or message means) is that the meaningful information isn't expected to arise randomly.

That must be a record, three conflations of terms in the same sentence. Even Joe couldn't accomplish that.

--------------
Ignored by those who can't provide evidence for their claims.

http://skepticink.com/smilodo....retreat

cryptoguru

Posts: 53
Joined: Jan. 2015

Don't worry Kevin I'll stay away from your very sad excuse for a blog from now on.

An encrypted message that looks like random noise is a meaningful message even if you can't see that it is ... it's about the intended recipient. If the intended recipient can recover and understand the message, and it has meaning to them, then it is a meaningful message; you can't ascertain this by analysing the encrypted message.
This is my point ... we are not concerned with transmitted data; we are purely talking about messages that are meaningful to a biological cell. If the DNA instruction set, when processed by the cell, doesn't result in a meaningful outcome (i.e. a functionally advantageous trait), then it is not the information I am interested in seeing you create with your magical natural selection formula.

Will happily come back here another time to discuss radiometric dating.

Peace!

The whole truth

Posts: 1554
Joined: Jan. 2012

cryptoguru said:

"Dawkins would laugh at the stupidity of some of the claims made in the last few posts by evolutionists."

And your point is? Here, let me clue you in on something: I really don't think that anyone here sees Dawkins as some sort of 'God' or as THE SPOKESMAN for all evolutionists or atheists. You obviously believe that what Dawkins may or may not laugh at is of HUGE importance to what all evolutionists and/or atheists think or don't think, and say or don't say. Some or all of us evolutionists and/or atheists may or may not agree with some or all of what Dawkins thinks, says, or may or may not laugh at. He's a man, not a 'God', and we all know that.

"This discussion has taken a surreal and utterly ridiculous turn for the irrational"

Dang, you IDiot-creationists sure are hard on irony meters.

P.S. If you really want to know what or who Dawkins would laugh at, ask him to look at this thread and see what he says.

--------------
Think not that I am come to send peace on earth: I came not to send peace, but a sword. - Jesus in Matthew 10:34

But those mine enemies, which would not that I should reign over them, bring hither, and slay them before me. -Jesus in Luke 19:27

Wesley R. Elsberry

Posts: 4936
Joined: May 2002

Cryptoguru:

 Quote AVIDA attempts to refute this by trying to build a computer code through random mutation and natural selection. But it enforces a known target by measuring if sequences of commands produce a known logical function.

Wrong.

I don't recall exactly how many times Cryptoguru has been corrected on this very point, but it's a lot.

Avida does not examine genome sequences when awarding merit. Avida rewards Avidian *behavior*. As I said, Avidians are rewarded for what they do, not for what their genomes are.

As I related about my own research, the merit reward has nothing to do with the sequence of instructions in the genome. If it did, it would be difficult to deliver multiple examples of different Avidian sequences accomplishing the same task, as I have done.
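The distinction being drawn (reward behavior, not sequence) can be sketched abstractly (a toy of my own, with none of Avida's actual instruction set in it): merit is computed from a program's input/output behavior, so two structurally different "genomes" with identical behavior earn identical merit.

```python
# Merit depends only on what a program *does* on test inputs,
# never on an inspection of its code.
def merit(program, cases=((0, 0), (1, 3), (2, 6), (5, 15))) -> int:
    return sum(program(x) == y for x, y in cases)

def triple_by_multiply(x):      # one "genome"
    return x * 3

def triple_by_addition(x):      # a structurally different genome, same behavior
    return x + x + x

print(merit(triple_by_multiply), merit(triple_by_addition))
```

Both programs score full merit despite sharing no structure, which is why many distinct sequences can accomplish the same rewarded task.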

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

Wesley R. Elsberry

Posts: 4936
Joined: May 2002

And I'll note that in the research I related about my work in Avida, there was no target, no fitness function, no hierarchy of rewards, and yet there was evolution of multiple lineages to a diversity of programs in the optimal class of gradient ascent programs.

--------------
"You can't teach an old dogma new tricks." - Dorothy Parker

N.Wells

Posts: 1834
Joined: Oct. 2005

 Quote we are talking about information (or a message ... call it what you like) that is the content of the transmission.

YOU are talking about the content of the message, not "us".  YOU said that "1) PURE UNDIRECTED randomness, which simply creates random output and not information".  I'm saying that that claim is wrong, according to Shannon's definition of information.  If the message being transmitted is a randomly generated string (i.e. random output), then that random string is information, according to Shannon's definition of information as the number of bits required to transmit a message, without regard to whether the message has "meaning" or not.  A message that consists of a random string is maximally entropic and is not compressible, but it is nonetheless information that can be quantified in bits.  Noise is something else, basically whatever is added or subtracted that degrades the original message (not its meaning, but just the "recapturability" of the original string, again regardless of whether it carried "meaning" or not).

 Quote Maybe we should use a different word if you're finding it hard to understand what we're talking about (or deliberately trying to muddy the waters with equivocal terms). Let's call it a MESSAGE instead of information. So I can encrypt a message and make it highly entropic so that the message is hidden, this is not random noise ... it can appear random, but the message is hidden in the signal. Does a random signal contain a message? NO! Random noise does not contain ordered and meaningful messages. DNA is a message ... a message to the cell about how to build an organism. So let's stop pretending that we can get MESSAGES automatically from random data.

"Message" (in your sense, not Shannon's) and "meaning" are elusive.  White light is a mixture of all wavelengths in the visible range.  While not exactly random, this is not very promising as a meaningful signal or a message.  However, you can process it to carry messages: morse code, light in optical fibres, "one if by land, two if by sea", etc.  Animals that have eyes find all kinds of meanings and significance amidst the noise of all its many reflections from all the various surfaces around them.  Filtering or scattering can separate out any particular narrow range of wavelengths (giving us a blue sky and a sun that looks yellow), or white light can be refracted by raindrops into a rainbow, which fervent christians used to take as a direct message from their god reminding them of his promise never to create another Noachian flood.  So, how much meaning exists in the rainbow, or just in the ray of light, and when did it get there?  Shannon shortcircuited all that by ignoring meaning and shortcutting to "how many bits are needed to transmit a quantity of information?"

Fire away - that's a topic I really love.  Do you want to start with concordia curves, or would you prefer to work up to them?

Texas Teach

Posts: 1783
Joined: April 2007

Quote (N.Wells @ Feb. 20 2015,20:03)

Fire away - that's a topic I really love.  Do you want to start with concordia curves, or would you prefer to work up to them?

I think he'd like to start with "were you there?" and then work through the (ir)relevant parts of this list

--------------
"Creationists think everything Genesis says is true. I don't even think Phil Collins is a good drummer." --J. Carr

"I suspect that the English grammar books where you live are outdated" --G. Gaulin

OgreMkV

Posts: 3668
Joined: Oct. 2009

Here's where he started: http://www.skepticink.com/smilodo....-dating

I think he hit most of the major ones, including refusing to supply sources for his claims.

Anyway, the point I was making about information and encryption is the same point that crypto isn't getting about ID. To him, things only have meaning if he already knows they mean something to him. An encrypted message won't have meaning to him... and therefore, by his definitions, contains no information.

So a gene that he can't identify a protein for or one that's a known regulatory sequence contains no information. But when it is found to have a function or a protein, then it will have information.

Which makes absolutely no sense, because to measure the information content in DNA, he has to know, in advance, what every single part does.

--------------
Ignored by those who can't provide evidence for their claims.

http://skepticink.com/smilodo....retreat

N.Wells

Posts: 1834
Joined: Oct. 2005

Sorry for overkill, but as I said, I like radiometric dating, and over at Smilodon's Retreat, Cryptoguru is wrong out of the gate and rapidly gets worse:
 Quote There are 3 faulty assumptions with radiometric dating ... all 3 have been proven to factor into dating inaccuracies
1) we know the initial conditions
2) there has never been any contamination
3) the decay rate is constant

Three out of three statements that are misleading or wrong right up front!  #2 is not an assumption; #3 is a conclusion that is known to work in crustal conditions but is not claimed for other situations; and #1 is a testable inference that is re-investigated with each study.

First, decay rates, like any other process, can of course be sped up or slowed down.  This is not news.  Some types of acceleration of decay rates are well known: we call them nuclear bombs and nuclear power plants.  The trouble is that accelerating decays appreciably requires the sorts of temperatures and pressures found inside stars, or massive bombardment by decay products from nearby concentrations of other radioactive isotopes, as at the Oklo natural nuclear reactor site.  On the basis of experiments and calculations, the comparatively mild conditions of metamorphism (2,000 degrees K and 100 kbar should melt pretty much anything in the crust) aren't sufficient to influence the process.  Intense gamma ray bombardment can trigger decays in some isotopes (as can neutrons in other isotopes).  Embedding in metal and cooling to a few degrees above 0 K can do it in some alpha emitters.  You get the idea: there are ways to speed these up, as with any physical or chemical process, but there aren't any that will operate in crustal conditions without leaving evidence of an Oklo-type reaction or a nuclear bomb going off.  In short, in crustal environments decay rates are constant: a) decay rates are known to vary, but b) this doesn't happen under crustal conditions absent an Oklo reactor.

Second, there's always contamination.  It's inevitable.  This is not news either, but again it is not a fatal flaw for all of radiometric dating.  The questions are how much, can we evaluate it, and can we adjust for it?  If you handle a dinosaur bone with bare hands, oil and sweat and dandruff can contaminate it and give the bone a very old radiocarbon age (30,000 years or older for trace contamination by modern organic carbon).  If the bone was prepared with shellac or glue, or cleaned with detergent, or carried in a burlap bag, or picked up with leather or cotton gloves, or got mold growing in it in the museum basement, or (most commonly) was invaded by plant root hairs while lying in the soil prior to discovery, that's potentially significant contamination.  But those are known issues and are easily avoided: don't do those things, extract the sample for analysis by drilling out a pristine part of the interior, treat it with peroxide, and inspect it for any root hairs, fungal hyphae, etc.  If the sample is in bad shape, then you can't date it.  However, there are even more subtle and pernicious sources of contamination: the lubricant in your instrument's vacuum pump; the grease that seals your access port; does your instrument have any plastic piping?; modern carbon in the acids you used to dissolve the sample; and so on.  These can all be mitigated to a degree, but ultimately there will always be enough contamination to give some kind of very old date to something like diamond dust or ancient coal.  The nature of exponential decay is that a little contamination will not throw off a date for a sample that is only a few half-lives old, but by 40 to 60 kyr there is so little original C-14 left that contamination will be a problem.
Several decades ago, people hoped to get improved C-14 dates on materials 100 or even 200 kyr old by individually counting C-14 atoms with a cyclotron (accelerator mass spectrometry), which did extend dating ranges a little, but contamination issues dashed those hopes.  40-60 kyr is about the limit, depending on the quality of the sample and other issues.

A reverse example of contamination: a modern lacustrine clam or snail living in a lake on limestone bedrock may well be getting some or most of its carbon not from atmospheric CO2 dissolved in the lake water but from "dead" carbon, i.e. carbonate dissolved by the lake water out of the calcium carbonate bedrock.  That way, living organisms can easily date as 10-20 thousand years old.  Moral: you usually can't reliably date things living on limestone.
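The two effects just described (trace modern contamination setting a ceiling of roughly 40-60 kyr, and "dead" limestone carbon making living samples date old) follow from the same exponential-decay arithmetic. A short sketch (mine; the half-life is the standard 5,730-year value, and the specific fractions are illustrative):

```python
from math import log

HALF_LIFE = 5730.0                  # years, C-14
MEAN_LIFE = HALF_LIFE / log(2)      # about 8,267 years

def apparent_age(fraction_modern: float) -> float:
    """Radiocarbon age implied by a measured C-14 fraction-of-modern."""
    return MEAN_LIFE * log(1.0 / fraction_modern)

# An infinitely old sample with fraction f of modern-carbon contamination
# measures roughly fraction-of-modern f, so contamination alone caps the age:
for f in (0.01, 0.001):
    print(f"{f:.1%} modern contamination -> ~{apparent_age(f):,.0f} yr ceiling")

# A living organism drawing 80% of its carbon from "dead" limestone
# carbonate starts at fraction-of-modern 0.2 and so dates old:
print(f"limestone reservoir -> ~{apparent_age(0.2):,.0f} yr for a living clam")
```

1% modern contamination gives a ceiling around 38 kyr and 0.1% around 57 kyr, matching the 40-60 kyr limit above; the 80% dead-carbon clam dates around 13 kyr, within the 10-20 kyr range described.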

Other decay systems have other contamination issues and other solutions.  K-Ar for igneous rocks is very nice in this regard: not only is argon a noble gas, which won't bond into crystals, but argon in the magma will bubble out before it has a chance to get trapped in a crystal.  However, once in a while argon will flood up a hot-spring pipe or a fracture or an apatite vein and soak into adjacent crystals, but those issues are expected in those situations and can be tested for by analysing samples progressively farther from the fracture (and then throwing out the contaminated samples, or simply not sampling near veins and the like in the first place).  For U-Pb and Th-Pb, we rarely do whole-rock analyses any longer, because there's too high a possibility of old lead from previous decays having gotten into the magma from melted host rock.  Magmas do in part melt their way up to the surface, so a lava that erupted 10 million years ago can have picked up chunks of 2 b.y. old rock that it passed through (the chunks may be visible xenoliths), or a few crystals that melted out of the host rock but floated around in the magma (more on those in a moment), or simply melt products that included some old radiogenic lead from long-ago decay events.  The last is resolved by no longer doing whole-rock dates.  Instead, we pick out crystals of minerals that for various reasons will incorporate the parent isotope but won't naturally incorporate the daughter element, such as zircon and monazite.  Zircon has the added advantages of forming after most of the lead has gone into other minerals, of having a very high melting point, and of being chemically and physically very resistant to weathering and breakage.  This means that it won't leak daughter radon during the decay process, and it is comparatively hard to mess up during mild metamorphism.
As I mentioned, it is possible for old zircons to melt out of host rock, not dissolve in the magma, and get included in the new igneous rock, or to be recycled through an even older rock cycle.  However, when this happens, you get a corrosion rim around the old crystal and then a new growth band around the old core with nice new crystal faces.  These are easily recognized, and we date several sample spots across the crystal, so we can date the old core and the new rim separately:
http://cjes.geoscienceworld.org/content....rge.jpg
http://specialpapers.gsapubs.org/content....rge.jpg
It’s an extremely cool technique.

Still on #3: It is always possible that something else went wrong that we haven't yet anticipated or corrected for.  Most typically for Precambrian rocks, there was a little later metamorphism that caused partial to total recrystallization, kicking out some to all of the daughter isotope formed since the original crystallization.  We have multiple lines of defense here, including one truly awesome technique that gives us two useful bits of information for the price of one.  First, we don't just date one sample, and we don't just date by one technique.  All radiometric decay chains decay at their own unique decay rates.  This means that it is impossible to mess up a crystal in such a way as to force a wrong date* while having two different decay chains agree on the same wrong result.  (*If you completely melt the crystal, you will force out all the accumulated daughters for all decay chains, resetting the clocks to 0, but the new crystal is brand new, so 0 is now the correct age.)  So if different isotopes in the same crystal give you the same dates, you're golden.  We typically date multiple different decay chains in multiple different crystals in a rock, which give different dates because different minerals crystallize at different temperatures, and a granite can easily take tens of millions of years to cool from the crystallization temperature of a mineral with a high melting point to that of one with a low melting point.  That's a cooling history for the granite, and we expect it to show a logical progression, so if that is present, good.  Most important, we can calculate how the U-235/Pb-207 ratio should change over time and how the U-238/Pb-206 ratio should change over time, if everything worked.  This gives you a concordia curve, and if your sample has exactly the right pair of ratios then everything is peachy keen:
http://www.tulane.edu/~sanels....ull.jpg
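For the curious, the concordia curve is easy to compute yourself. Here is a minimal sketch; the decay constants are the standard Jaffey et al. values, assumed here rather than taken from the post above:

```python
import math

# Standard decay constants (per year)
LAMBDA_238 = 1.55125e-10  # 238U -> 206Pb
LAMBDA_235 = 9.8485e-10   # 235U -> 207Pb

def concordia_point(t_years):
    """Radiogenic daughter/parent ratios for a closed system of age t.
    A sample is 'concordant' if its measured pair falls on this curve."""
    pb206_u238 = math.exp(LAMBDA_238 * t_years) - 1.0
    pb207_u235 = math.exp(LAMBDA_235 * t_years) - 1.0
    return pb207_u235, pb206_u238

for t_ga in (1.0, 2.0, 3.0, 4.0):
    x, y = concordia_point(t_ga * 1e9)
    print(f"{t_ga:.1f} Ga: 207Pb/235U = {x:6.2f}, 206Pb/238U = {y:.3f}")
```

Because the two uranium isotopes decay at very different rates, each age corresponds to a unique pair of ratios, which is exactly why an agreeing pair is such a strong check.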

When rocks experience a little metamorphism, the crystals can be partially reset, by losing a fraction of their daughter isotope.  The exact fraction is going to vary: more from little crystals than big ones (it’s harder for lead to diffuse out of the center of a large crystal); more from broken crystals or crystals with cracks than from pristine ones; and so forth.  This results in discordant dates.  However, the result of different percents of losses is typically a line (or long narrow field) of dates on the concave side of the concordia curve.
http://www.uwgb.edu/dutchs.....dia.gif
The math for this is inexorable: where the line through the discordant points hits the older part of the concordia curve, that would be the age for crystals that have lost 0% of the daughter, i.e. the original age of crystallization.  Where the line hits the younger end of the concordia curve, that would be the age of crystals that lost 100% of the daughter atoms, i.e. the age of metamorphism.
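The chord geometry can be verified numerically. Below is a hypothetical sketch (ages and loss fractions are made up for illustration): a crystal that lost fraction L of its radiogenic lead during metamorphism plots a fraction L of the way along the straight line from the crystallization-age point to the metamorphism-age point, so crystals with different loss fractions are collinear:

```python
import math

LAMBDA_238 = 1.55125e-10
LAMBDA_235 = 9.8485e-10

def concordia(t):
    return (math.exp(LAMBDA_235 * t) - 1.0,   # 207Pb/235U
            math.exp(LAMBDA_238 * t) - 1.0)   # 206Pb/238U

def discordant_point(t_cryst, t_meta, loss):
    """A crystal that lost fraction `loss` of its radiogenic lead at
    t_meta plots on the straight chord between the two concordia points:
    loss=0 -> original crystallization age, loss=1 -> metamorphism age."""
    x0, y0 = concordia(t_cryst)
    x1, y1 = concordia(t_meta)
    return (x0 + loss * (x1 - x0), y0 + loss * (y1 - y0))

# Crystals with varying lead loss all fall on one line (the discordia):
pts = [discordant_point(3.0e9, 1.0e9, f) for f in (0.0, 0.25, 0.5, 0.9)]
x0, y0 = pts[0]
x1, y1 = pts[-1]
slope = (y1 - y0) / (x1 - x0)
for x, y in pts:
    # every point satisfies the same line equation
    assert abs((y - y0) - slope * (x - x0)) < 1e-9
print("all discordant points are collinear; intercepts give both ages")
```

The upper intercept of that line with concordia recovers the crystallization age, the lower intercept the metamorphism age, just as described above.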

 Quote We can't possibly assert that we can prove 1 or 2 because neither is observable, except we can maybe enforce 2 in the case of diamonds. We have cases where we have later proved that our initial assumptions for 1) and 2) were wrong ... so how do we know when we've got it right?
For 3 we do know that decay rates can be effected by external factors
http://www.earth.sinica.edu.tw.../....u.t....u.tw...

Well, that’s you making bogus arguments because they sound good to you, rather than because you understand what’s going on.

 Quote So answer me the following
A) how is there C14 in diamonds?
B) how is there C14 in dino bones?
C) why do professionally processed rock datings on the same sample give WILDLY varying results?
D) why do professionally processed rock datings on known age samples (100 years) give billions of years?

I’ve answered most of these already, but there’s a little more to say on all of them, especially diamonds.

C & D) Stratigraphically young lavas that passed through much older rocks on the way up can get contaminated with chunks of old rock (recognizable in outcrop and thin section, so don't take dates of xenoliths as dates of the rock) and with old crystals that melted out of the host rock and ended up in the new rock (identified by SEM / EDAX examination; date the cores and the rims separately).  Different minerals in a single rock can and should give different ages when they have different crystallization temperatures, meaning that they formed at different times, and the one with the lower melting temperature can get reset by low-grade metamorphism that will not affect the more refractory crystal.

There are some excellent and famous examples in Hawaii, where comparatively young lavas have yielded ancient dates.  Hawaiian volcanoes are “hot spot” volcanoes, with magmas coming up from mantle depths, and they have a habit of picking up chunks of mantle and lower crust as they come up.  These are easily identified, and can be avoided or dated as you wish.
http://peridot1.blogspot.com/....pot....pot.com
Creationists implied that these are legitimately confusing dates: they cited a study of the xenoliths as if it were a study of the lava, and they failed to point out that these sorts of studies are done routinely to see if a rock can be dated, and the authors concluded that their data showed that dating could not be done and that their results had no geological meaning.

Most of your “data” here is likely to come from the creationist literature, which involves a truly mind-blowing quantity of lies, misrepresentations, misunderstandings, and mistakes regarding legitimate work done by legitimate scientists.  These are dealt with in a wonderfully readable essay by Brent Dalrymple, available at
http://www.talkorigins.org/faqs.......ng.html
Note in particular how creationists use data such as tests of a potential dating system in which the scientists decided the method was too vulnerable to being disturbed and concluded that it should not be used, and the creationists then presented that work as evidence that geologists cannot justify radiometric dating:

 Quote The two ages from gulf coast localities (Table 2) are from a report by Evernden and others (43). These are K-Ar data obtained on glauconite, a potassium-bearing clay mineral that forms in some marine sediment. Woodmorappe (134) fails to mention, however, that these data were obtained as part of a controlled experiment to test, on samples of known age, the applicability of the K-Ar method to glauconite and to illite, another clay mineral. He also neglects to mention that most of the 89 K-Ar ages reported in their study agree very well with the expected ages. Evernden and others (43) found that these clay minerals are extremely susceptible to argon loss when heated even slightly, such as occurs when sedimentary rocks are deeply buried. As a result, glauconite is used for dating only with extreme caution. Woodmorappe’s gulf coast examples are, in fact, examples from a carefully designed experiment to test the validity of a new technique on an untried material.

Another classic example of abuse of this sort involves the Plateau and Cardenas Basalts in the Grand Canyon: see http://www.talkorigins.org/faqs.......ce.html for details.

B) How is there C-14 in dinosaur bones?  First, see the answers for diamonds below.  Second, contamination: a very little modern carbon can give an apparently finite age (tens of thousands of years) to a sample with no C-14 of its own.  Contamination is hard to avoid in the best of times.  There is also an issue of trust: people who want young ages and fib about other aspects of science perhaps shouldn't be trusted to have handled the samples carefully enough to avoid contamination, when just a little contamination could appear to validate their beliefs.  Third, misuse of data.  I attended a creationist talk in the late sixties where the creationist presented a C14 date on geologically ancient fossilized wood and showed a lab report that he claimed said the fossil was only 36000 years old.  Unfortunately, the photo showed the lab report saying “>36000 yrs” (or something close to that), which I presume was radiocarbon infinity for that lab at that time, and in any case just says “>”.  However, I haven’t seen any creationists do that since, and this guy was a jaw-droppingly sorry example (he also complained about paleontologists not being able to produce any transitional fossils between a fish and a starfish).

A) How about C14 ages on diamonds?  Scientists use industrial diamonds to test their C14 instruments and methods precisely because diamond should have very little C14 in it.  (They have also used coal and oil.)  This is how they learn how much contamination is entering their samples and whether it’s a problem.  They publish their results to show the resolution achievable in their labs (and bear in mind that if you have so little contamination that you only get a date of 50,000 years, you have done a very good job of avoiding contamination, and dates only a little less than that will be trustworthy).  Typically, chemical preparation of samples adds about 1 microgram of modern carbon, so a standard sample of 1 milligram will inevitably have 0.1% modern carbon in it, which will give a date younger than radiocarbon infinity (57136 yrs, to be precise).  It is therefore dishonest of creationists to say that this proves that all C14 dating is problematic, because the work shows that dates are reliable nearly up to those ages, but not beyond.  Because decay is exponential, “backing up to the recent” a little means that any samples of a slightly younger age will have a LOT more of their own original C14, so the trace amount of C14 that your procedures or machine introduced will be insignificant.  There is a great treatment of these issues by Kirk Bertsche:
http://www.talkorigins.org/faqs.......ue.html
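The arithmetic behind that contamination figure is just the decay equation run backwards. A sketch (using the modern 5730-yr half-life, so the exact number comes out slightly different depending on the half-life convention used):

```python
import math

MEAN_LIFE = 5730 / math.log(2)  # ~8267 yr, using the 5730-yr half-life

def apparent_age(modern_fraction):
    """Apparent C14 age of a radiocarbon-dead sample whose only C14
    comes from lab contamination equal to `modern_fraction` of the
    sample's carbon."""
    return -MEAN_LIFE * math.log(modern_fraction)

# 1 microgram of modern carbon in a 1 milligram sample = 0.1%
print(f"0.1% contamination -> {apparent_age(0.001):,.0f} yr")
print(f"1% contamination   -> {apparent_age(0.01):,.0f} yr")
```

So 0.1% contamination puts the ceiling at roughly 57,000 radiocarbon years, and ten times more contamination still yields tens of thousands of years, never a "young" date.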
Bertsche has an interesting discussion of “instrument background” (the values reported by an instrument with no sample in it).  This includes
 Quote • ion source “memory” of previous samples, due to radiocarbon sticking to the walls of the ion source, thermally desorbing, and then sticking to another sample
• mass spectrometer background, non-radiocarbon ions that are misidentified as radiocarbon, sometimes through unexpected mechanisms
• detector background, including cosmic rays and electronics noise

I would also add that coal and diamonds contain nitrogen as a trace component (up to 1% of a diamond is nitrogen).  If that nitrogen gets bombarded by neutrons from nearby radioactive decays, some of it can be converted to C14 (the same neutron-capture reaction on N-14, yielding C-14 plus a proton, that produces C14 in the upper atmosphere).  I don’t know if the amounts are significant, however.

These topics are complicated, way more than I presented here, so there is always endless scope for trying to throw doubt by bringing up more complications.  Geochronologists are always testing their systems, investigating situations where something didn't work, and trying out new techniques and new dating methods, some of which work while others are discarded as unworkable or problematic.  Regardless, creationists have made a dishonorable but standard practice of taking those complex results, where scientists determined that a potential technique was unworkable or determined why an enigmatic result happened, and representing them as evidence that dating overall does not work.

Soapy Sam

Posts: 659
Joined: Jan. 2012

 Quote (midwifetoad @ Feb. 20 2015,14:12) Regarding ORFan genes:
What percentage of genomes have been sequenced and cataloged?
Does having no cousins logically prove that one has no parents?

One of the reasons I follow Creo-debate is that I learn stuff, via unexpected avenues. You also may find this interesting.

http://bioinfo2.ugr.es/PDFsCla....011.pdf

--------------
SoapySam is a pathetic asswiper. Joe G

BTW, when you make little jabs like “I thought basic logic was one thing UDers could handle,” you come off looking especially silly when you turn out to be wrong. - Barry Arrington

NoName

Posts: 2721
Joined: Mar. 2013

 Quote (cryptoguru @ Feb. 20 2015,20:13) ...An encrypted message that looks like random noise is a meaningful message even if you can't see it is ... it's about the intended recipient. ...

You've given the whole game away with that statement.

You can never know that you have a 'meaningful message' unless you are omniscient.  There is always, always, the possibility that there is a recipient for whom the item under consideration is meaningful.
There is no a priori mechanism to determine that any given item is meaningless.
This is why we talk about information.  Meaning is found; everything has meaning.  If we don't know the meaning yet, well, we keep looking.
You, on the other hand, stop dead in your tracks as soon as there is no meaning to you in your current context and under your current focus.  You then commit the astounding hubris of insisting that because there is no meaning there for you, why, then there is no meaning there for anybody.  Meaning magically springs into existence when your focus changes, when your context changes, when you learn something from someone else.  As, for example, stellar spectra.  Or learning Morse Code and suddenly seeing that those 'random' piles of dust were, all along, a secret message by a person held prisoner in the room at some point.  As we have seen you do throughout this thread.
You have no way to determine intent or the existence of a recipient in all cases.  You must inject a massive subjectivist load of presuppositions into everything you encounter, and thus you are incapable of doing science or even understanding the scientific enterprise.  The world and everything in it is meaningful, abundantly, fully, maximally densely packed with information.  With no intent or intended recipient necessary a priori.  It is the nature of existence that it is self-revealing, and what is revealed, exposed, the simple 'standing forth' of being what it is is meaningful.
You prefer to take on board cultural superstitions and insert an intender into this to somehow 'ensure' that information is meaning is message, that the world becomes the meaning of the one who intended it as a message.
That attempt must fail because the presumed intender is itself an existing thing.  It is either meaningful in and of itself, or its meaning comes from its nature as an intended message from some 'intender-intender'.  It becomes meaning-assigners all the way down; it is a vicious regress.
And you're stuck with it because you have denied that things are meaningful, that things exhibit/provide/contain/are information in and of themselves.  If everything gets its meaning from something else, then nothing ever has meaning.  QED
You have to cheat and assert, along with the assertion of some 'prime intender' that it and only it is information in its own right.  Piling unjustified nonsense on top of unjustified nonsense.
Our approach is much more straightforward, much more honest, doesn't violate Ockham's razor, and, btw, works.  We have evidence; you don't.

stevestory

Posts: 11216
Joined: Oct. 2005

Quote (OgreMkV @ Feb. 20 2015,20:06)
 Quote My argument (and it pains me to have had to explain to adults what the normal use of the word information or message means) is that the meaningful information isn't expected to arise randomly.

That must be a record, three conflations of terms in the same sentence. Even Joe couldn't accomplish that.

Well, Joe's just an idiot with a rage disorder. Cryptoamateur's thing is incredulity as argument. It's not any more scientific, but it's a little less boring. There's hope for crypto, not so much for Joe or Gary.

NoName

Posts: 2721
Joined: Mar. 2013

 Quote (cryptoguru @ Feb. 20 2015,20:01) ...A biological cell when it reads DNA understands the format of the information and deterministically processes that information as instructions that it understands to build an organism. ...

Rank anthropomorphism.
Cells no more understand DNA than organic acids understand equilibria equations or conditions.
Nor than hydrogen understands how to bond with oxygen to form water.
Nor than water knows how to dissociate into H+ and OH- ions until they are in balance at pH 7.
It is entirely improper to speak of cells 'understanding' DNA except in metaphorical language.  Easily seen once you accept that there is nothing going on but chemistry and physics, just as in water or organic acids in solution.
What's that you say?  There is *too* something else going on, there just has to be?  Well, what is it?  Point to a real phenomenon that cannot be explained by chemistry and physics, prove that it cannot, or that there is demonstrably another cause other than chemistry or physics operating there.  But remember, incredulity is neither a proof nor an argument.
The reality is that chemical reactions are chemical reactions.  There is no 'understanding' going on.
Understand?

JonF

Posts: 633
Joined: Feb. 2005

 Quote (OgreMkV @ Feb. 20 2015,20:04)
 Quote (cryptoguru @ Feb. 20 2015,19:01) I really did want to talk about radiometric dating, but we already know you lost that battle... Mr. Never-Provide-the-Source-of-My-Claims.

Ooh!  Oooh!  Oooh1oneelevetyone!

{ABE} I see I was right.

Cryppie ol' pal, the one thing that is absolutely sure about anyone who brings up those old tired errors is that the person knows nothing of radiometric dating.

More to come...

JonF

Posts: 633
Joined: Feb. 2005

 Quote (cryptoguru @ Feb. 20 2015,19:01) I really did want to talk about radiometric dating, but we already know you lost that battle... Mr. Never-Provide-the-Source-of-My-Claims.

1) In the most widely used (by far) modern methods the initial conditions are known by basic physics or are produced as a part of the application of the method.  In U-Pb dating we know that zircons strongly reject lead when forming and the only way that significant amounts of lead can get into a zircon is by radioactive decay.  The RATE group, made up apparently of the only creationists who have somewhat of a clue, acknowledged this in Helium Diffusion Rates Support Accelerated Nuclear Decay:

 Quote Samples 1 through 3 had helium retentions of 58, 27, and 17 percent. The fact that these percentages are high confirms that a large amount of nuclear decay did indeed occur in the zircons. Other evidence strongly supports much nuclear decay having occurred in the past (Humphreys, 2000, pp. 335–337). We emphasize that “old” radioisotopic ages are merely an artifact of analysis, not really indicating the occurrence of large amounts of nuclear decay. But according to the measured amount of lead physically present in the zircons, approximately 1.5 billion years worth—at today’s rates—of nuclear decay occurred. Supporting that, sample 1 still retains 58% of all the alpha particles (the helium) that would have been emitted during this decay of uranium and thorium to lead. It is the uniformitarian assumption of invariant decay rates, of course, that leads to the usual conclusion that this much decay required 1.5 billion years....

In the second most widely used method, Ar-Ar, "excess argon" at solidification is shown by anomalously high apparent ages for the lower-temperature heating steps.  Even with excess argon, Ar-Ar often produces a valid date, as in the Berkeley Geochronology Center's tour-de-force of dating the eruption of Vesuvius in 79AD.  See 40Ar/39Ar Dating into the Historical Realm: Calibration Against Pliny the Younger (free registration required for full text):

 Quote Laser incremental heating of sanidine from the pumice deposited by the Plinian eruption of Vesuvius in 79 A.D. yielded a 40Ar/39Ar isochron age of 1925 ± 94 years ago. Close agreement with the Gregorian calendar–based age of 1918 years ago demonstrates that the 40Ar/39Ar method can be reliably extended into the temporal range of recorded history. Excess 40Ar is present in the sanidine in concentrations that would cause significant errors if ignored in dating Holocene samples.

And from Radiogenic Isotope Geology, the textbook on radiometric dating, section 10.2.3:

 Quote Because the potassium signature of a sample is converted in situ to an argon signature by the 40-39 technique, it is possible to liberate argon in stages from different domains of the sample and still recover full age information from each step. Merrihue and Turner (1966) demonstrated the effectiveness of this ‘step heating’ technique in their original Ar-Ar dating study of meteorites, adapting the method from its previous application to I-Xe analysis of meteorites (section 15.3.1). The great advantage of the step heating technique over the conventional ‘total fusion’ technique is that progressive outgassing allows the possibility that anomalous sub-systems within a sample may be identified, and, ideally, excluded from an analysis of the ‘properly behaved’ parts of the sample. This can apply to both separated minerals and whole-rock samples. Most commonly the technique is used to understand samples which have suffered argon loss, but it may also be a help in interpreting samples with inherited argon.

2) The same most widely used dating methods detect whether there has been gain or loss of relevant material over time, and they often produce a valid age even when there has been such gain or loss. See the two references above.  Also from Radiogenic Isotope Geology section 10.2.3:

 Quote To construct a spectrum plot, the size of each gas release at successively higher temperature is measured in terms of the magnitude of the 39Ar ion beam produced. Each gas release can then be plotted as a bar, whose length represents its volume as a fraction of the total 39Ar released from the sample, and whose value on the y axis is the corrected 40Ar/39Ar ratio from equation [10.12]. The latter is proportional to age, which is sometimes plotted on a log scale, and sometimes linear. Determination of a reliable crystallisation age from the spectrum plot depends on the identification of an age ‘plateau’. A rigorous criterion for a plateau age is the identification of a series of adjacent steps which together comprise more than 50% of the total argon release, each of which yields an age within 2 standard deviations of the mean (Dalrymple and Lanphere, 1974; Lee et al. 1991). However, plateaus have been ‘identified’ in many instances on the basis of weaker evidence.
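The quoted plateau criterion is mechanical enough to code up. Here is a hypothetical sketch, simplified to use an unweighted mean and a brute-force search over runs of adjacent steps (the spectrum values are invented for illustration):

```python
def find_plateau(steps):
    """steps: list of (frac_39Ar, age, sigma) for successive heating steps.
    Returns the (start, end) indices of a run of adjacent steps that
    together release >50% of the 39Ar and whose ages each lie within
    2 sigma of the run's (unweighted) mean age, or None."""
    n = len(steps)
    for i in range(n):
        for j in range(n, i, -1):
            run = steps[i:j]
            if sum(f for f, _, _ in run) <= 0.5:
                continue
            mean = sum(a for _, a, _ in run) / len(run)
            if all(abs(a - mean) <= 2 * s for _, a, s in run):
                return i, j
    return None

# Synthetic step-heating spectrum: early low-temperature steps lost argon
# (younger apparent ages), later steps form a plateau near 100 Ma.
spectrum = [
    (0.05, 60.0, 1.0),   # disturbed
    (0.10, 85.0, 1.0),   # disturbed
    (0.20, 99.5, 0.8),
    (0.30, 100.2, 0.6),
    (0.25, 100.0, 0.7),
    (0.10, 99.8, 0.9),
]
print(find_plateau(spectrum))  # the last four steps form the plateau
```

On this made-up spectrum the search correctly skips the argon-depleted low-temperature steps and identifies the concordant high-temperature run as the plateau.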

Here's an example of a step heating plot showing loss of argon from the portions of the sample in which it was less strongly contained, from Fluid inclusion study on mesothermal gold deposits of the Pataz province (La Libertad, Peru):

Note the strong plateau indicating that argon was not lost from those portions of the sample and a solid age could be obtained.

In U-Pb dating of zircons (and some other materials) we know that the initial lead is essentially zero from basic physics.  The technique is to do a simple-accumulation date based on 238U decaying to 206Pb and another simple-accumulation date based on 235U decaying to 207Pb.  If the dates are the same, that's strong evidence that the date is good, and if the pair is plotted on a standard plot it will fall on the "concordia curve".  Geochronologists have gotten very good at sample preparation and obtaining concordant dates.  But sometimes you get discordant dates that don't fall on the curve, usually due to loss of lead, which is comparatively volatile.  In that case you do multiple measurements on the same sample or co-genetic samples and plot them.  Many times, for various reasons that are too technical to go into right now, the points will form a line, and the upper intersection of that line with the concordia curve is the date.  For example, the Jack Hills zircons analysed in Evidence from detrital zircons for the existence of continental crust and oceans on the Earth 4.4 Gyr ago (the oldest known minerals formed on Earth):

3) Finally, the constancy of radioactive decay rates (under conditions that could apply on earth)  is an extremely fundamental property of our universe, and any changes at any time would have left unmistakable traces. Many have looked for those traces and they aren't there.  Also theoretical physicists understand radioactive decay pretty well, and know theoretically why they are constant.  At The Constancy of Constants, Part 2 physicist Steve Carlip lists some effects that variable decay rates would have:

• searches for changes in the radius of Mercury, the Moon, and Mars (these would change because of changes in the strength of interactions within the materials that they are formed from);
• searches for long term ("secular") changes in the orbits of the Moon and the Earth --- measured by looking at such diverse phenomena as ancient solar eclipses and coral growth patterns;
• ranging data for the distance from Earth to Mars, using the Viking spacecraft;
• data on the orbital motion of a binary pulsar PSR 1913+16;
• observations of long-lived isotopes that decay by beta decay (Re 187, K 40, Rb 87) and comparisons to isotopes that decay by different mechanisms;
• the Oklo natural nuclear reactor (mentioned in another posting);
• experimental searches for differences in gravitational attraction between different elements (Eotvos-type experiments);
• absorption lines of quasars (fine structure and hyperfine splittings);
• laboratory searches for changes in the mass difference between the K0 meson and its antiparticle;
• searches for geological evidence of "exotic" decays, such as double beta decay of Uranium 238 or the decay of Osmium to Rhenium by electron emission, which are impossible with the present values of basic physical constants but would become possible if these changed;
• laboratory comparisons of atomic clocks that rely on different atomic processes (e.g., fine structure vs. hyperfine transitions);
• analysis of the effect of varying "constants" on primordial nucleosynthesis in the very early Universe.

The constancy of decay rates is not an assumption; it is a conclusion based on decades of experiments and theoretical development.

JonF

Posts: 633
Joined: Feb. 2005

This is fun!  I want to address Cryppie's questions:

 Quote C) why do professionally processed rock datings on the same sample give WILDLY varying results?D) why do professionally processed rock datings on known age samples (100 years) give billions of years?

Short answer: either fraud on the part of the creationist performing the research, or misunderstanding/misrepresentation on the part of the creationist presenting the data.

Nick's given you an example of the latter.  My personal favorite creationist fraud is an example of the former.  It's my favorite because Snelling made the fraud so obvious in the "technical" version of his paper.

From the "feed the sheeple" version at Radioactive “Dating” Failure: Recent New Zealand Lava Flows Yield “Ages” of Millions of Years:

 Quote The radioactive potassium-argon dating method has been demonstrated to fail on 1949, 1954, and 1975 lava flows at Mt Ngauruhoe, New Zealand, in spite of the quality of the laboratory’s K–Ar analytical work. Argon gas, brought up from deep inside the earth within the molten rock, was already present in the lavas when they cooled. We know the true ages of the rocks because they were observed to form less than 50 years ago. Yet they yield “ages” up to 3.5 million years which are thus false. How can we trust the use of this same “dating” on rocks whose ages we don’t know? If the method fails on rocks when we have an independent eye-witness account, then why should we trust it on other rocks where there are no independent historical cross-checks?

Of course the cognoscenti immediately realize that using K-Ar dating at the end of the 90's was a pretty stupid thing to do; he could have used Ar-Ar.  But that wouldn't yield the results he wanted.

When we look at the "technical" paper we need to know two technical terms:

• "Whole rock": pretty much says it all. Grind up the entire sample into a fine powder and test the powder, which is a mixture of everything that was in the rock.
• "Xenolith": literally "foreign rock". Chunks of foreign material embedded in the rock matrix, unmelted or partially melted, which formed long before the parent rock solidified.

Ready?  From The Cause of Anomalous Potassium-Argon "Ages" for Recent Andesite Flows at Mt. Ngauruhoe, New Zealand, and the Implications for Potassium-Argon "Dating":

 Quote All samples were sent first for sectioning (one thin section from each sample) for petrographic analysis. A set of representative pieces from each sample (approximately 100 g) was then despatched to the AMDEL Laboratory in Adelaide, South Australia, for whole-rock major, minor and trace element analyses. A second representative set (50–100 g from each sample) was sent progressively to Geochron Laboratories in Cambridge (Boston), Massachusetts, for whole-rock potassium-argon (K-Ar) dating: first a split from one sample from each flow, then a split from the second sample from each flow after the first set of results was received, and finally, the split from the third sample from the June 30, 1954 flow. ...
Steiner (1958) stressed that xenoliths are a common constituent of the 1954 Ngauruhoe lava, but also noted that Battey (1949) reported the 1949 Ngauruhoe lava was rich in xenoliths. All samples in this study contained xenoliths, including those from the 1975 avalanche material. However, many of these aggregates are more accurately described as glomerocrysts and mafic (gabbro, websterite) nodules (Graham et al., 1995). They are 3–5 mm across, generally have hypidiomorphic-granular textures, and consist of plagioclase, orthopyroxene, and clinopyroxene in varying proportions, and very occasionally olivine. The true xenoliths are often rounded and invariably consist of fine quartzose material. Steiner also described much larger xenoliths of quartzo-feldspathic composition and relic gneissic structure. ...
Xenoliths are present in the Ngauruhoe andesite flows (Table 3), but they are minor and less significant as the location of the excess 40Ar* residing in these flows than the plagioclase and pyroxene phenocrysts, and the much larger glomerocrysts of plagioclase, pyroxene, or plagioclase and pyroxene that predominate. The latter are probably the early-formed phenocrysts that accumulated together in the magma within its chamber prior to eruption of the lava flows.
Nevertheless, any excess 40Ar* they might contain had to have been supplied to the magma from its source. The xenoliths that are in the andesite flows have been described by Steiner (1958) as gneissic, and are therefore of crustal origin, presumably from the basement rocks through which the magma passed on its way to eruption.

Note the handwaving dismissal of the importance of the xenoliths without presenting any data supporting his claim.

He dated a mixture of old and new material with an inappropriate method and expressed surprise when the result was not the date of the young material.  What a breakthrough!

I also note that the "technical" paper contains an excellent example of the latter type of misrepresentation, possibly fraud, in his discussion of Dalrymple's results as presented in Snelling's table on the page numbered 8.  Dalrymple studied 26 recent lava flows using 40Ar/36Ar analyses, which detect excess Ar.  21 of those flows had no excess Ar.  Four of them had a little excess Ar, but not enough to affect the age of a rock a few million years old or older.  One had enough excess argon to confuse the dating of a rock that is only a few, but not several, million years old.  Dalrymple concluded, quite correctly, that these results imply that excess argon is rare in concentrations high enough to confuse the dating of rocks several million years old and older:

 Quote With the exception of the Hualalai flow, the amounts of excess 40Ar and 36Ar found in the flows with anomalous 40Ar/36Ar ratios were too small to cause serious errors in potassium-argon dating of rocks a few million years old or older. However, these anomalous 40Ar/36Ar ratios could be a problem in dating very young rocks. If the present data are representative, argon of slightly anomalous composition can be expected in approximately one out of three volcanic rocks.
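To put rough numbers on that point, here's a back-of-the-envelope sketch (not taken from either paper; the decay constants are the conventional 40K values, and the amount of "excess" argon is an arbitrary illustration). It shows why a fixed dose of excess 40Ar* swamps the apparent K-Ar age of a very young rock but barely perturbs that of an old one:

```python
import math

LAMBDA = 5.543e-10      # total 40K decay constant, per year
LAMBDA_EC = 0.581e-10   # electron-capture branch (40K -> 40Ar), per year

def ar_from_age(t):
    """Radiogenic 40Ar* accumulated per mole of 40K after t years."""
    return (LAMBDA_EC / LAMBDA) * (math.exp(LAMBDA * t) - 1.0)

def age_from_ar(ar_per_k40):
    """Invert the K-Ar age equation to get apparent age in years."""
    return (1.0 / LAMBDA) * math.log(1.0 + ar_per_k40 * LAMBDA / LAMBDA_EC)

# An arbitrary excess dose: as much 40Ar* as 1 Myr of normal accumulation.
excess = ar_from_age(1.0e6)

for true_age in (0.0, 1.0e6, 100.0e6):
    apparent = age_from_ar(ar_from_age(true_age) + excess)
    print(f"true {true_age/1e6:7.1f} Myr -> apparent {apparent/1e6:9.3f} Myr")
```

With that excess, a zero-age (historic) flow gives an apparent age of 1 Myr, a wildly wrong answer, while a 100 Myr rock is shifted by only about 1%. That is Dalrymple's argument in miniature: the same contamination that ruins a date on a very young rock is lost in the noise for rocks a few million years old or older.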

But Snelling says:

 Quote However, these dogmatic statements by Dalrymple are inconsistent with even his own work on historic lava flows (Dalrymple, 1969), some of which he found had non-zero concentrations of 40Ar* in violation of this key assumption of the K-Ar dating method. He does go on to admit that “Some cases of initial 40Ar remaining in rocks have been documented but they are uncommon” (Dalrymple, 1991), but then refers to his study of 26 historic, subaerial lava flows (Dalrymple, 1969). Five (almost 20%) of those flows contained “excess argon,” but Dalrymple still then says “that ‘excess’ argon is rare in these rocks!”

I think the misrepresentation is clear.

(I have the Dalrymple paper 40Ar/36Ar analyses of historic lava flows in an OCR'd PDF if anyone is interested and doesn't have appropriate access.)

NoName

Posts: 2721
Joined: Mar. 2013

Is anyone else getting flashbacks to afdave?  This is just so reminiscent of his stuff here and at other sites he moved to after giving up here.
Sadly, the marvelous 'Formal Debate:  Dendrochronology  and C-14' has vanished from the web.  Or at least from my limited google-fu.

JonF

Posts: 633
Joined: Feb. 2005

Nyah Nyah Nyah!  Gone from the Web but not gone.  I and a few others have pretty much the whole shootin' match but only as HTML files.  Once you are into the thread the links work.  Drool away:

JonF

Posts: 633
Joined: Feb. 2005

Nick, one comment.  If lead was lost in a single brief heating event, the lower intersection of the discordia with the concordia is the date of that event.  But there are many scenarios in which that intersection does not have age significance, and it's often difficult to determine whether it does, so you don't see a lot of interpretations of the lower intersection.
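A quick numerical sketch of that geometry (the ages and the single-episode lead loss here are invented purely for illustration; the decay constants are the standard 238U and 235U values). Samples that lost lead at one event plot on a straight chord, the discordia, whose upper intercept with the concordia is the crystallization age and whose lower intercept is the event age:

```python
import math

L238 = 1.55125e-10  # 238U decay constant, per year
L235 = 9.8485e-10   # 235U decay constant, per year

def concordia(t):
    """Wetherill concordia coordinates (207Pb*/235U, 206Pb*/238U) at age t years."""
    return (math.exp(L235 * t) - 1.0, math.exp(L238 * t) - 1.0)

t_cryst = 2.7e9   # hypothetical crystallization age
t_event = 1.0e9   # hypothetical brief Pb-loss event

upper = concordia(t_cryst)   # upper intercept of the discordia
lower = concordia(t_event)   # lower intercept of the discordia

# A sample that lost a fraction f of its radiogenic Pb at the event
# (and evolved normally since) lies on the chord between the two intercepts:
for f in (0.0, 0.3, 0.7, 1.0):
    x = f * lower[0] + (1.0 - f) * upper[0]
    y = f * lower[1] + (1.0 - f) * upper[1]
    print(f"Pb loss {f:.0%}: 207Pb/235U = {x:.3f}, 206Pb/238U = {y:.4f}")
```

The point about interpretation stands, though: the lower intercept only dates something real if lead loss actually happened in one brief episode, and continuous or multi-stage loss produces a chord whose lower intercept dates nothing.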

N.Wells

Posts: 1834
Joined: Oct. 2005

JonF, Yes, I need to back off that point, thanks.

The Hualalai Flow should get a Purple Heart or something for suffering so much at the hands of creationists.  From a summary by Brent Dalrymple, who has written a lot of great stuff about creationist abuse of data from his field, at http://www.talkorigins.org/faqs.......ng.html :

 Quote Two extensive K-Ar studies on historical lava flows from around the world (31, 79) showed that excess argon is not a serious problem for dating lava flows. The authors of these reports “dated” numerous lava flows whose age was known from historical records. In nearly every case, the measured K-Ar age was zero, as expected if excess argon is uncommon. An exception is the lava from the 1801 Hualalai flow, which is so badly contaminated by the xenoliths that it is impossible to obtain a completely inclusion-free sample.

Dalrymple provides further details:
Quote
The 1801 Flow from Hualalai Volcano

 Quote [from creationist literature]  Volcanic rocks produced by lava flows which occurred in Hawaii in the years 1800-1801 were dated by the potassium-argon method. Excess argon produced apparent ages ranging from 160 million to 2.96 billion years. (77, p. 200)    Similar modern rocks formed in 1801 near Hualalai, Hawaii, were found to give potassium-argon ages ranging from 160 million years to 3 billion years. (92, p. 147)

Kofahl and Segraves (77) and Morris (92) cite a study by Funkhouser and Naughton (51) on xenolithic inclusions in the 1801 flow from Hualalai Volcano on the Island of Hawaii.  The 1801 flow is unusual because it carries very abundant inclusions of rocks foreign to the lava. These inclusions, called xenoliths (meaning foreign rocks), consist primarily of olivine, a pale-green iron-magnesium silicate mineral. They come from deep within the mantle and were carried upward to the surface by the lava. In the field, they look like large raisins in a pudding and even occur in beds piled one on top of the other, glued together by the lava. The study by Funkhouser and Naughton (51) was on the xenoliths, not on the lava. The xenoliths, which vary in composition and range in size from single mineral grains to rocks as big as basketballs, do, indeed, carry excess argon in large amounts. Funkhouser and Naughton were quite careful to point out that the apparent “ages” they measured were not geologically meaningful. Quite simply, xenoliths are one of the types of rocks that cannot be dated by the K-Ar technique. Funkhouser and Naughton were able to determine that the excess gas resides primarily in fluid bubbles in the minerals of the xenoliths, where it cannot escape upon reaching the surface. Studies such as the one by Funkhouser and Naughton are routinely done to ascertain which materials are suitable for dating and which are not, and to determine the cause of sometimes strange results. They are part of a continuing effort to learn.

Note that the title of the Funkhouser and Naughton paper was "Radiogenic helium and argon in ultramafic inclusions from Hawaii."  That's "in inclusions" and "of a different rock type" - it's kind of hard to miss those details, so we are down to incompetence and/or dishonesty as an explanation for presenting the results as dates of the lava flows.

I liked two other cases as well:

 Quote The Liberian example (Table 2) is from a report by Dalrymple and others (34). These authors studied dikes of basalt that intruded Precambrian crystalline basement rocks and Mesozoic sedimentary rocks in western Liberia. The dikes cutting the Precambrian basement gave K-Ar ages ranging from 186 to 1213 million years (Woodmorappe erroneously lists this higher age as 1230 million years), whereas those cutting the Mesozoic sedimentary rocks gave K-Ar ages of from 173 to 192 million years. 40Ar/39Ar experiments on samples of the dikes showed that the dikes cutting the Precambrian basement contained excess 40Ar and that the calculated ages of the dikes do not represent crystallization ages. The 40Ar/39Ar experiments on the dikes that intrude the Mesozoic sedimentary rocks, however, showed that the ages on these dikes were reliable. Woodmorappe (134) does not mention that the experiments in this study were designed such that the anomalous results were evident, the cause of the anomalous results was discovered, and the crystallization ages of the Liberian dikes were unambiguously determined. The Liberian study is, in fact, an excellent example of how geochronologists design experiments so that the results can be checked and verified.
That seems to be a massive misrepresentation due to "I don't care because none of the faithful are ever going to catch me on this".

Also from Dalrymple, an example of a creationist using an argument that is so stupid that he must have misunderstood the science:
Quote
The Hawaiian Basalts

 Quote [from creationist literature]   Still another study on Hawaiian basalts obtained seven “ages” of these basalts ranging all the way from zero years to 3.34 million years. The authors, by an obviously unorthodox application of statistical reasoning, felt justified in recording the “age” of these basalts as 250,000 years. (92, p. 147)

The data Morris (92) refers to were published by Evernden and others (44), but include samples from different islands that formed at different times! The age of 3.34 million years is from the Napali Formation on the Island of Kauai and is consistent with other ages on this formation (86, 87). The approximate age of 250,000 years was the mean of the results from four samples from the Island of Hawaii, which is much younger than Kauai. Contrary to Morris’ concerns, nothing is amiss with these data, and the statistical reasoning used by Evernden and his colleagues is perfectly rational and orthodox.

 336 replies since Jan. 16 2015,08:04 < Next Oldest | Next Newest >
