Monday, March 21, 2011

Just call me a quant...

In the car on Friday I was thinking about analyzing the quantization error a little more precisely, so that we can more accurately subtract it out from other influences on the Allan deviation to get a clearer idea of how much of the deviation is due to actual frequency instability.

The nice thing about quantization error is that it can be well modeled by a simple uniform distribution over the interval [-50 ns, +50 ns], since 100 ns is the clock period of our OCXO, and we are doing a simple single-edge triggered input capture on the PPS signal from the GPS.
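That is, the timestamp error x at each captured edge can be modeled as

p(x) = 1/(100 ns) for -50 ns <= x <= +50 ns, and p(x) = 0 otherwise.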

Actually, we could improve this to [-25 ns, +25 ns] by using a dual edge-triggered flip-flop: I found a VHDL gate-level structural design for this online the other day. I tested it just now and it works. Let's put that off until later, however...

For now, we can express the distribution over the value of a single y(t_1, t_2)=x(t_2)-x(t_1) between two adjacent times t_1 & t_2 (1 second apart) by integrating a delta function as follows (with x_1=x(t_1), x_2=x(t_2)):
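p(y) = \int_{x_1=-50 ns}^{50 ns} \int_{x_2=-50 ns}^{50 ns} \delta(y - (x_2 - x_1)) dx_2 dx_1

(treating x_1 and x_2 as independent and uniform over that interval).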

We should be able to solve this in Mathematica. Actually, I know what the answer is already from visualizing it geometrically; it's a triangular distribution. Mathematica confirms it (here using the scale 1=50ns):
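Something like this Mathematica input does the job (x1 and x2 are just placeholder names; with this scale the limits are ±1, and the result comes out as a tent function on [-2, 2], up to an overall normalization constant):

Simplify[Integrate[DiracDelta[y - (x2 - x1)], {x1, -1, 1}, {x2, -1, 1}]]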


But really, what we are interested in is the distribution over values of the frequency drift between adjacent windows, d(t_1) = y(t_2, t_3) - y(t_1, t_2) = x(t_1) - 2x(t_2) + x(t_3). That distribution is the triple integral:

p(d) = \int_{x_1=-50 ns}^{50 ns} \int_{x_2=-50 ns}^{50 ns} \int_{x_3=-50 ns}^{50 ns} \delta(d - (x_1 - 2x_2 + x_3)) dx_3 dx_2 dx_1, or the same integral with limits of ±1/2 in each variable, after renaming a = x_1, b = x_2, c = x_3 and letting 1/2 = 50 ns.
I found this too hard to solve exactly in my head, but I got (a trial version of) Mathematica to evaluate it for me using this expression:

Simplify[Integrate[DiracDelta[d - (a - 2*b + c)], {a, -1/2, 1/2}, {b, -1/2, 1/2}, {c, -1/2, 1/2}]]

The answer should be a piecewise quadratic function in four parts, restricted to the range [-2,+2]. Unfortunately, Mathematica does not solve it properly; its answer has a missing piece:

Of course, the correct answer should be symmetrical around d=0. We could fill in the missing piece in [-2,-1] by mirroring the piece in [1,2]. Doing that, we get the answer:
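p(d) = 1 + d + d^2/4 for -2 <= d <= -1
p(d) = 1/2 - d^2/4 for -1 <= d <= 0
p(d) = 1/2 - d^2/4 for 0 <= d <= 1
p(d) = 1 - d + d^2/4 for 1 <= d <= 2
p(d) = 0 otherwise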


The outer two cases can also be expressed as (d+2)^2/4 and (d-2)^2/4. We can check the normalization by integrating this over [-2,2], and we get the expected result (1), so it is a genuine probability distribution.
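Using the symmetry about d = 0:

\int_{-2}^{2} p(d) dd = 2 [ \int_{0}^{1} (1/2 - d^2/4) dd + \int_{1}^{2} (d-2)^2/4 dd ] = 2 [5/12 + 1/12] = 1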

Even with just three points, this is already starting to take on the character of a normal distribution, as would be expected from adding/subtracting independent random variables.

Anyway, we can model each experimental "d" sample as the sum of a quantization-error sample drawn from the above distribution and a "real" value drawn from a normal-distribution model of the frequency deviations in each second.
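In symbols (q_i, r_i, and sigma_real are just labels introduced here):

d_i = q_i + r_i, with q_i drawn from the piecewise-quadratic distribution above and r_i drawn from Normal(0, sigma_real^2).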

However, we note that for larger window sizes the absolute magnitude of the quantization error stays the same: no matter how large the windows get, the only thing that matters is still just the quantization error at the 3 points on the window boundaries. Expressed as a frequency error, it is therefore effectively scaled down by a factor of the window size. Meanwhile, the magnitude of the inherent deviation of the frequency means (as an effect of sample size) scales down only with the square root of the window size. So, for larger window sizes the quantization error becomes insignificant.
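In terms of the two contributions to the deviation at window size tau (in seconds), taking the 1-second values as the reference:

sigma_quant(tau) = sigma_quant(1 s) / tau
sigma_real(tau) ≈ sigma_real(1 s) / sqrt(tau)

so the ratio of the quantization contribution to the real one falls off like 1/sqrt(tau).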

Still, let's see if we can estimate its contribution. If we treat the above piecewise-quadratic distribution as a statistical population, we can find its variance by integrating d^2 times this distribution for d in [-2,2], and that result is just 1/2. So the standard deviation of this distribution is 1/sqrt(2), or 0.7071, meaning (with 1 = 100 ns) 70.71 ns.
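(Sanity check: since x_1, x_2, x_3 are independent uniforms of width 1, each with variance 1/12, Var(d) = Var(x_1) + 4 Var(x_2) + Var(x_3) = 6 × 1/12 = 1/2, which agrees.)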

Using the quadrature rule for combining the independent "real" frequency deviation d(real) and the quantization error d(quant), the measured deviation d satisfies:

d^2 = d(real)^2 + d(quant)^2

And since d(quant) = 0.7071e-7 sec, d(quant)^2 is 0.000,000,000,000,005 = 0.5e-14 sec^2 (exactly, it turns out).

So, we can write:

d(real)^2 = d^2 - 0.5e-14 sec^2
d(real) = sqrt(d^2 - 0.5e-14)

I put this calculation in my spreadsheet, and found that even in the worst case (1-second windows), the quantization error accounts for a mere 4.2% of the measured deviation! And the proportion goes down from there inversely with the window size. How can that be?

Let's look at the worst case (window size 1 second). The "corrected" (inferred) frequency deviation sigma is about 167 ns. The standard deviation of the frequency deviation caused by quantization error is (as mentioned above) 70.7 ns; dividing by sqrt(2), because of the factor of 1/2 in the Allan deviation expression, gives 50 ns. Putting these together in quadrature, the expected overall deviation is sqrt(167^2 + 50^2) ≈ 174 ns, which is indeed the measured deviation. In other words, because of the way errors combine in quadrature, even though the expected magnitude of the quantization error considered by itself is 50/174 = 29% of the measured deviation, it cannot by itself account for more than about 4% of that deviation.
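Spelling that out: removing the 50 ns in quadrature only lowers the total from 174 ns to sqrt(174^2 - 50^2) = sqrt(30276 - 2500) ≈ 166.7 ns, a reduction of (174 - 166.7)/174 ≈ 4.2%.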
