Originally posted by Bad wolf
'it's simple really, you take two samples from the same rock. One from a bit which is emitting radiation and a second, non-radioactive bit from the same rock. You use the non-radioactive bit to tell you the isotopic abundance of the two isotopes in the conditions when the rock was formed.'
OK, so if I understand you correctly the parent isotope can be ...[text shortened]... hasn't begun being radioactive, but will at some time in the future?
I like details
🙂
My understanding is that whether a rock is non-radioactive (i.e. none, or very few, of the nuclei are decaying), radioactive (the nuclei are decaying at a moderate rate), or in a state of runaway decay (such as the Oklo natural reactor) is determined by both the isotope and its concentration within the rock. At low concentrations, neutron collisions with other nuclei are too infrequent to sustain any chain of induced decays (these are the rocks in which you can measure the daughter-parent isotope ratio). At medium concentrations there is a reasonable chance that whenever a decay happens, the emitted neutron will strike another nucleus and cause it to decay. At high concentrations most neutrons will hit another nucleus and cause a decay (this is exploited, under control, in nuclear reactors). The trick is to get rocks which are non-homogeneous in their isotope distribution, i.e. concentrated enough to date in one section, and dilute enough to get the parent-daughter ratio nearby.
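To put numbers on the daughter-parent idea, here's a minimal sketch of the standard decay-law arithmetic (not from the post; rubidium-87 as the parent isotope and the sample ratio are my choices of example, and it assumes no daughter isotope was present when the rock formed):

```python
import math

# Decay constant from the half-life: lambda = ln(2) / t_half.
# Rubidium-87 is the example parent isotope (half-life ~48.8 billion years).
HALF_LIFE_RB87_YEARS = 48.8e9
LAMBDA = math.log(2) / HALF_LIFE_RB87_YEARS  # per year

def age_from_ratio(daughter_per_parent: float) -> float:
    """Age in years from a measured daughter/parent ratio, assuming the
    rock started with no daughter isotope at all."""
    # D/P = e^(lambda * t) - 1  =>  t = ln(1 + D/P) / lambda
    return math.log(1.0 + daughter_per_parent) / LAMBDA

# Hypothetical measurement: 66 daughter atoms per 1000 parent atoms.
print(f"{age_from_ratio(0.066):.3e} years")  # ~4.5e9 years
```

In practice you have to correct for whatever daughter isotope was already there at formation, which is exactly what the non-radioactive bit of the rock is used to estimate.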
Another trick is to date with two independent decay systems. You know the decay constants and the current ratios; extrapolate back, and where the two lines cross, that's your start point. This method is called an isochron.
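That cross-check can be sketched with the two uranium-lead decay systems standing in for the "two independent systems" (my choice of example; the measured ratios below are made-up illustrative numbers):

```python
import math

# Accepted half-lives for the two uranium-lead decay chains.
LAMBDA_U238 = math.log(2) / 4.468e9  # U-238 -> Pb-206, per year
LAMBDA_U235 = math.log(2) / 7.04e8   # U-235 -> Pb-207, per year

def age(daughter_per_parent: float, lam: float) -> float:
    # Same decay-law arithmetic as above: t = ln(1 + D/P) / lambda.
    return math.log(1.0 + daughter_per_parent) / lam

# Hypothetical measured ratios from one rock (illustrative numbers only).
t_238 = age(0.30, LAMBDA_U238)  # from the Pb-206 / U-238 ratio
t_235 = age(4.00, LAMBDA_U235)  # from the Pb-207 / U-235 ratio

print(f"U-238 system: {t_238:.2e} yr")  # ~1.7e9 yr
print(f"U-235 system: {t_235:.2e} yr")  # ~1.6e9 yr
# If both extrapolations land on (roughly) the same t, that shared start
# point is the age; a big mismatch means the sample has been disturbed.
```

Two systems agreeing is much stronger evidence than one alone, since it is unlikely that contamination would shift both ratios by exactly the amounts needed to fake the same date.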