In part one of this series we discussed the two dating methods used to calculate the ages of rocks and fossils: relative and absolute. We pointed out that the equations used to calculate ages via absolute dating methods, or radiometric dating, rely on three unreasonable assumptions that are “plugged in” as constants. As in any math equation, the accuracy of the values used for those constants directly affects the accuracy of the result. What are these assumptions underlying radiometric dating?
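To make the role of those constants concrete, here is a minimal sketch (my own illustration, not from this series) of the standard radiometric age equation, t = (1/λ) · ln(1 + D/P), where λ is the assumed decay constant, P is the parent isotope measured today, and D is the daughter isotope measured today. Every quantity on the right-hand side other than the two measurements is an assumption:

```python
import math

def radiometric_age(parent, daughter, decay_constant):
    """Standard age equation: t = (1/lambda) * ln(1 + D/P).
    Built in are the three assumptions the article discusses:
    a constant decay rate, a closed system, and zero initial
    daughter isotope."""
    return math.log(1 + daughter / parent) / decay_constant

# Decay constant for U-238 (per year), derived from its
# conventionally assumed half-life of about 4.468 billion years
LAMBDA_U238 = math.log(2) / 4.468e9

# Hypothetical measurement: equal parts parent and daughter,
# which under these assumptions works out to one half-life
age = radiometric_age(parent=1.0, daughter=1.0, decay_constant=LAMBDA_U238)
print(f"{age:.3e} years")  # ~4.468e9 years
```

The point of the sketch is simply that the function's output is only as trustworthy as the constants and assumptions fed into it.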
Constant Rates of Radioactive Decay
The first assumption is that the rate of radioactive decay is known and has been constant since the rock formed. While it is true that radioisotope decay rates seem to be constant today, assuming they have always been constant throughout a proposed history of billions of years is unreasonable.
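The sensitivity of the method to this assumption can be shown numerically. In this sketch (my own, with made-up figures for illustration only), the same measured isotope ratio yields a very different age depending on which decay rate is plugged in:

```python
import math

def age(parent, daughter, decay_constant):
    # Standard age equation: t = (1/lambda) * ln(1 + D/P)
    return math.log(1 + daughter / parent) / decay_constant

modern_lambda = 1.55e-10          # per year, roughly the modern U-238 value
faster_lambda = 10 * modern_lambda  # a hypothetical accelerated rate

# Identical measurements, two different assumed rates
slow_age = age(1.0, 0.5, modern_lambda)  # ~2.6e9 years
fast_age = age(1.0, 0.5, faster_lambda)  # exactly ten times younger
print(f"{slow_age:.2e} vs {fast_age:.2e}")
```

If decay rates were ever faster than they are today, the same rock would compute as proportionally younger; the measurement itself cannot distinguish the two scenarios.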
We have evidence that at some point (or points) in the past the earth experienced accelerated rates of decay. The RATE group (Radioisotopes and the Age of the Earth) discovered this when examining zircon crystals. Dr. Jeff Miller explains their findings in an article for Apologetics Press:
“The RATE team had several zircon crystals dated by expert evolutionists using the uranium-lead evolutionary dating technique and found them to be 1.5 billion years old, assuming a constant decay rate. A by-product of the breakdown of uranium into lead is helium. Content analysis of the crystals revealed that large amounts of helium were found to be present. However, if the crystals were as old as the dating techniques suggested, there should have been no trace of helium left, since helium atoms are known to be tiny, light, unreactive, and able to easily escape from the spaces within the crystal structure.”
According to Roger Patterson, writing for Answers in Genesis:
“Helium escapes from crystals at a known, measurable rate. If those rocks were over a billion years old, as evolutionists claim, the helium should have leaked out of the rock.”
New Scientist reported in 2009 that physicist David Alburger found that the nuclear decay rate of silicon-32 actually changed with the seasons. Similarly, a Purdue physicist found that nuclear decay rates speed up in winter. The decay rates appear to be altered by the sun, but researchers are unsure exactly how; one possibility is an unknown particle that the sun emits.
A Closed System

The second assumption is that the amounts of parent and daughter isotopes contained in a rock have not been altered (none gained or lost) by anything other than radioactive decay. This is what is called a “closed system,” meaning that the amounts of the elements in the rock sample have never been affected by outside influences. So, in order to arrive at a correct date, this assumption requires that the elements in the rock sample have never, in the course of billions of years, been affected by weathering of the rock due to groundwater, diffusion of gases, lava flows, floods, mudslides, meteorite activity, or anything else.
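The effect of an open system on a computed age can also be sketched numerically. In this illustration (my own hypothetical figures), daughter isotope gained or lost after the rock formed shifts the result in either direction, and nothing in the measurement itself reveals that a shift occurred:

```python
import math

LAMBDA = 1.55e-10  # per year, roughly the modern U-238 decay constant

def age(parent, daughter):
    # Standard closed-system age equation: t = (1/lambda) * ln(1 + D/P)
    return math.log(1 + daughter / parent) / LAMBDA

parent, daughter = 1.0, 0.5
baseline = age(parent, daughter)

# Hypothetical contamination: 20% extra daughter carried in by groundwater
inflated = age(parent, daughter * 1.2)

# Hypothetical leaching: 20% of the daughter weathered away
deflated = age(parent, daughter * 0.8)

print(f"{baseline:.2e}, {inflated:.2e}, {deflated:.2e}")
```

Three different “ages” result from the same rock, depending entirely on an unverifiable history.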
Is this a reasonable assumption? Dr. Miller refers to McDougall and Harrison’s “Geochronology and Thermochronology by the 40Ar/39Ar Method,” published by Oxford University Press, when he writes:
“Evolutionary geologists, again, recognize that this assumption oftentimes doesn’t hold up. According to Ian McDougall, professor of geology in the Research School of Earth Sciences at the Australian National University, and T. Mark Harrison, professor of geology in the Department of Earth and Space Sciences at the University of California, Los Angeles, ‘Departures from this assumption in fact are quite common, particularly in areas of complex geological history’ (1999, p. 11, emp. added). To suggest a closed system for a specimen that is believed to be very old is a reckless, unreasonable assumption, (1) when there is clear evidence that a closed system cannot be guaranteed, and (2) when, in fact, there is compelling evidence that ancient Earth was rocked by a global catastrophe that most certainly would have violated the ‘closed system’ assumption (cf. Whitcomb and Morris, 1961) and created an extremely ‘complex geological history.'”
Knowledge of Initial Conditions
The third assumption is that we know the original amounts of parent and daughter isotopes that were present when the rock formed; more specifically, that the rock initially contained only the parent isotope and none of the daughter isotope.
Quoting Dr. Miller again:
“But how could one possibly substantiate an assumption about the initial conditions of a specimen’s decay process, especially when the commencement of its decay was hundreds or thousands (according to evolutionists, millions or billions) of years ago? Is it not possible, and even likely, that a specimen might have been initially composed of more than one element that blended together during a geologic phenomenon before that rock’s decay processes began?”
This assumption cannot be substantiated since no one was present when these rocks formed. It becomes particularly illogical when extrapolating billions of years into the past.
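Numerically, the initial-conditions assumption enters the age equation as a correction term: t = (1/λ) · ln(1 + (D − D₀)/P), where D₀ is the daughter isotope present at formation. If D₀ is assumed to be zero but was not, the computed age is overstated. A sketch with hypothetical figures of my own:

```python
import math

LAMBDA = 1.55e-10  # per year, roughly the modern U-238 decay constant

def age(parent, daughter_now, daughter_initial=0.0):
    # t = (1/lambda) * ln(1 + (D - D0)/P); only the radiogenic
    # portion of the daughter isotope should count toward the age
    radiogenic = daughter_now - daughter_initial
    return math.log(1 + radiogenic / parent) / LAMBDA

measured_daughter = 0.5

# Assuming no daughter isotope at formation
print(f"{age(1.0, measured_daughter):.2e}")        # ~2.6e9 years

# If half the daughter was present at formation (hypothetically)
print(f"{age(1.0, measured_daughter, 0.25):.2e}")  # ~1.4e9 years
```

Since D₀ cannot be measured after the fact, the choice between these two answers rests on the assumption, not on the data.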
Dr. Miller is careful to clarify that assumptions are not all bad:
“Assumptions are oftentimes necessary in operational science, and they can be effective and productive in helping scientists to solve problems and make advancements and important breakthroughs; but assumptions must be made with caution.”
“The standard practice of geologists today, in light of this, is to ‘do what you can with what you have.’ However, if the dating assumptions are too unrealistic to allow for an accurate date of anything, shouldn’t the dating methods be deemed untrustworthy or even abandoned, if that is where the evidence leads?”
At minimum, it seems to me that where assumptions necessarily exist, dogmatism should not.