# Determining the start time of spot analyses for downhole fractionation correction in the UPb DRS

As with the last post, this is a follow-up to a question posted on the Iolite forum, this time by Jiri Slama (the forum thread can be found here). To summarise, he has previously used a raster pattern when measuring U-Pb ages in zircons, and wanted to know whether there is a best practice when using spot analyses. In particular, he asked how the start time of each analysis (i.e., when the laser shutter opens and ablation commences) is determined, and whether it is necessary to strictly maintain the same timing for baselines and analyses within a session in order for this “time=0” to be consistent between each spot analysis.

First of all, I think this is a great question, as the correct determination of time=0 is critical to properly correcting the downhole fractionation in each analysis, regardless of the method used. If a spot analysis is corrected based on an inaccurate start time, this will introduce a bias in the calculated ages that may well be significant.

The way we do this in Iolite is quite different from many other data reduction strategies, so I think it’s worth clarifying those differences first. The most common approaches are fitting a linear trend or regressing the data to the y-intercept (both assume that downhole fractionation is linear). In both cases a line is fit through the data (either through each spot analysis individually, or by assuming that all analyses are identical and thus share the same slope). This line is then either used to subtract the effects of downhole fractionation, producing a “flattened” analysis, or to infer what the ratio would have been at the moment the laser shutter opened (because there was no hole at that point, downhole fractionation is assumed to have been zero). The former produces corrected ratios for each timeslice of data, whereas the latter yields a single value with an associated uncertainty from the regression. Obviously, for these methods to work it is essential that the time=0 used is consistent between analyses, to avoid over- or under-correcting the ratios. In many cases the easiest way to achieve this consistency is to structure analyses within a session so that the durations of the different components of each analysis (i.e., baselines, spot analysis, and washout) are always the same. It is then straightforward to select and compare equivalent timeslices from each analysis.
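To make the linear approach concrete, here is a minimal sketch of both treatments. This is purely illustrative (the function name and the synthetic numbers are invented for the example, and this is not Iolite’s implementation):

```python
def linear_downhole_correction(beam_seconds, ratios):
    # Ordinary least-squares line through ratio vs. time since ablation began.
    n = len(beam_seconds)
    mean_t = sum(beam_seconds) / n
    mean_r = sum(ratios) / n
    slope = (sum((t - mean_t) * (r - mean_r) for t, r in zip(beam_seconds, ratios))
             / sum((t - mean_t) ** 2 for t in beam_seconds))
    intercept = mean_r - slope * mean_t  # inferred ratio at time=0 (no pit yet)
    flattened = [r - slope * t for t, r in zip(beam_seconds, ratios)]  # trend removed
    return flattened, intercept

# Synthetic spot analysis: true ratio 0.5, drifting downhole by 0.002 per second.
t = [0.5 * i for i in range(60)]
measured = [0.5 + 0.002 * ti for ti in t]
flattened, r0 = linear_downhole_correction(t, measured)
# r0 recovers 0.5, and the flattened ratios are constant at 0.5
```

Note how a wrongly chosen time=0 would shift every `beam_seconds` value, and hence bias the intercept by `slope` times the offset, which is exactly the over-/under-correction described above.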

In Iolite there are a couple of differences from the above. The first is that there are methods of correcting downhole fractionation (e.g., an exponential curve) that do not correct the data back to time=0 in the same way as a linear fit. This can influence the apparent ages at intermediate steps of data reduction (there’s a blog post about that here), but has no impact on final ages. Most relevant to this post, consistent selection of time=0 is still every bit as important as when using linear fits. Having said that, it doesn’t matter if time=0 perfectly coincides with the moment ablation commenced, provided that it is the same for every single analysis (this is also true for linear fitting where all analyses are assumed to share the same slope).

The second difference is the big one – in Iolite there are different options available for the way in which time=0 is determined. These methods have their pros and cons, and it’s important to confirm that the one used is producing the correct outcome. The big advantage of this flexibility is that it allows freedom in how data are both acquired and reduced. For example, if analytical conditions are stable it may be preferable to acquire longer baselines every five analyses instead of a short baseline before every spot analysis. Likewise, if during data reduction it becomes obvious that the early portion of an analysis is rubbish, there is no problem with selecting only the latter part of the data.

Regardless of what method is used to determine time=0, there is a specific channel in Iolite that stores how long the laser shutter has been open. It is called “Beam_Seconds” and can be found in the intermediate channels list once it has been calculated (if you make changes it will be recalculated when you crunch the data). Below is an image showing Beam_Seconds (red) overlaid on 238U signal intensity (grey), plotted against time on the x axis.

I realise that at first glance this may look a bit strange, but you can see that it shows the time that has elapsed since the laser began ablating steadily increasing until the beginning of the next analysis, at which point it is reset back to zero. It probably makes a lot more sense once it is converted into an x-y plot of Beam_Seconds (x-axis) versus 238U intensity (y-axis):

Now you can more clearly see each analysis and its subsequent washout down to the baseline. It is hopefully also obvious that by using Beam_Seconds it is easy to compare different analyses in relation to the time since the laser started firing (which we assume directly relates to how deep the laser pit is).
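As a purely illustrative sketch of what a Beam_Seconds-style wave looks like (the function and the laser-on times below are invented for the example; this is not how Iolite computes the channel):

```python
def beam_seconds_wave(sample_times, laser_on_times):
    # For each timeslice, the time elapsed since the most recent laser-on
    # event; 0.0 before the laser has fired at all.
    wave = []
    for t in sample_times:
        starts = [s for s in laser_on_times if s <= t]
        wave.append(t - starts[-1] if starts else 0.0)
    return wave

# Two analyses, with the laser shutter opening at t = 2 s and again at t = 6 s:
wave = beam_seconds_wave([0, 1, 2, 3, 4, 5, 6, 7, 8], laser_on_times=[2, 6])
# wave -> [0.0, 0.0, 0, 1, 2, 3, 0, 1, 2]  (resets to zero at each laser-on)
```

Plotting a signal channel against such a wave, rather than against absolute time, is what lines up every analysis at a common time=0.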

So that’s how we keep track of Beam_Seconds in Iolite; the next thing is how it is determined. There are three different methods, with a fourth in the pipeline:

“Cutoff Threshold” – This is the easiest one to explain: every time the intensity of the index channel rises above the threshold set using “BeamSeconds Sensitivity”, the Beam_Seconds channel is reset to zero. This works really well in cases where there is a sharp wash-in of the signal. Note, however, that the threshold should be set as low as possible (different values can be tested until the Beam_Seconds wave produces the correct result); otherwise there may be a significant difference in the trip point between grains with high and low abundances of the index isotope.
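A toy version of the cutoff-threshold idea (hypothetical code, not Iolite’s implementation) finds the rising-edge crossings at which Beam_Seconds would be reset:

```python
def threshold_resets(intensity, threshold):
    # Indices of rising-edge threshold crossings, i.e. the trip points at
    # which Beam_Seconds would be reset to zero.
    resets = []
    below = True
    for i, value in enumerate(intensity):
        if below and value >= threshold:
            resets.append(i)
            below = False
        elif value < threshold:
            below = True
    return resets

# Two simulated spots with very different signal sizes; a low threshold
# trips at the first above-baseline timeslice of each.
signal = [0, 0, 500, 900, 850, 10, 0, 0, 50, 80, 75, 5, 0]
resets = threshold_resets(signal, 40)  # -> [2, 8]
```

With a much higher threshold (say 100), only the bright spot would trip at all, which is why keeping the value as low as the baseline allows matters.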

“Gaps in data” – In the vast majority of cases this will work off the time since the beginning of each data file. Thus, in cases where each analysis is acquired in a separate file this will allow you to set a specific elapsed time (in seconds, set using the “BeamSeconds Sensitivity” variable) since the start of each file as the trip point for resetting Beam_Seconds.
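One way to picture this (a hypothetical sketch only; the names and the gap-detection logic below are invented for illustration, not taken from Iolite): detect each new file from a jump in the timestamp series, then place the trip point a fixed delay after the file start:

```python
def gap_resets(timestamps, delay, gap_threshold=5.0):
    # A jump in the timestamp series larger than gap_threshold marks the
    # start of a new data file; Beam_Seconds is reset `delay` seconds later.
    file_starts = [timestamps[0]]
    for prev, cur in zip(timestamps, timestamps[1:]):
        if cur - prev > gap_threshold:
            file_starts.append(cur)
    return [start + delay for start in file_starts]

# Two files starting at t = 0 s and t = 20 s, each with a 3 s pre-ablation baseline:
trip_points = gap_resets([0.0, 0.2, 0.4, 20.0, 20.2, 20.4], delay=3.0)  # -> [3.0, 23.0]
```

This only gives a consistent time=0 if the delay between file start and shutter opening really is the same for every file, which is the structural constraint this method relies on.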

“Rate of change” – This is the clever one: it uses a snappy little algorithm based on the rate of change of the signal to determine the time at which the laser starts firing (the point where the logarithm of the signal increases most rapidly). It does the best job of consistently finding the same point in each analysis despite differences in signal intensity, but unfortunately it is also susceptible to noisy or spiky analyses, and is thus the most prone to failure. So, as usual, careful checking of results is important.
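The rate-of-change idea can be illustrated with a toy version (a sketch only; Iolite’s actual algorithm is more sophisticated, and `steepest_log_rise` and `floor` are invented names):

```python
import math

def steepest_log_rise(intensity, floor=1.0):
    # Find the timeslice where log(signal) jumps most between consecutive
    # points; `floor` clamps baseline values so the log is defined.
    logs = [math.log(max(v, floor)) for v in intensity]
    diffs = [logs[i + 1] - logs[i] for i in range(len(logs) - 1)]
    return diffs.index(max(diffs)) + 1  # index where the steep rise lands

# The jump from baseline (~1) to signal (~400) dominates the log derivative,
# even though the absolute increase from 400 to 900 is larger.
onset = steepest_log_rise([1, 1, 2, 400, 900, 950, 940])  # -> 3
```

Working in log space is what makes the pick insensitive to absolute intensity: a bright grain and a dim grain both show their largest relative jump at wash-in. It is also why a single noise spike on the baseline can fool this kind of approach.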

“Laser log file” – This one is still in the pipeline, but as the name suggests, it will use the laser shutter open events stored in the laser log file to determine when to reset Beam_Seconds.

One thing that is important to clarify is that (regardless of which of the above methods is used) the determination of Beam_Seconds is entirely independent of the masking of low signals. So even if the signal is masked up to the point at which the laser began to fire, this does not necessarily mean that the Beam_Seconds wave will reset at the same point. Similarly, the integration periods selected for each analysis are also entirely independent of Beam_Seconds. As such, editing an integration period to exclude the beginning of an analysis will have no impact on the calculation of time=0 or on how the downhole fractionation correction is performed.

Hopefully this provides some more detail to those not entirely sure of how these calculations are performed in Iolite, and as always if you have any questions feel free to post on the forum. Also, if you want to know more about making sure that Beam_Seconds is calculated correctly there is a blog post about that here.