Wednesday, March 30, 2011

Deep Thought

Today (Wed., 3/30/11) I am taking notes and trying to figure out exactly how to *correctly* derive the expected Allan deviation given the quantization error and an oscillator model. I've decided I want to use a Gaussian white noise model for the inherent frequency fluctuations in the oscillator. (Other behavior should show up in the Allan deviations for larger window sizes.)

Key parameters are:

1. Standard deviation of oscillator frequency (when averaged over 1-s window sizes) in cycles per nominal second. (We can plot the result as a function of this, and use that to do a Bayesian back-calculation of the probabilities of different standard deviations given the raw deviation data. Actually, this distribution would be sharply peaked, given the large amount of data available - so we can just do a direct reverse lookup of the standard deviation given the Allan deviation.)

2. Fractional part only of mean oscillator frequency, in [0,1] say, because the integer part is irrelevant to the quantization error.

And computational variables include:

3. The possible values of the quantized frequency deviation samples d1=x1-2x2+x3 for adjacent 1-second windows. These are small integers; a range [-10,+10] would be more than adequate. For this we scan through the range of possible values.

4. Given d1, always let x1=0, and then scan possible values of x2. These will always be integers in a range close to the integer part of mean oscillator frequency. For example, 10,000,012 +/- 10. Given d1 and x2, compute x3=d1+2*x2.

5. Phase error phi1 of first (x1) measurement in oscillator cycles, in the range [-1/2, 1/2]. This is uniformly distributed so we can just integrate over the range. The actual initial phase is Phi1=x1-phi1.

6. Actual frequencies F1=Phi2-Phi1 and F2=Phi3-Phi2 can then be integrated over the range of values that will yield the target values of x2 and x3. And, at each value, the probability density values can be looked up from the normal distribution. The overall answer of the integrations over phi1, F1, and F2 is a probability for the particular (d1,x2) configuration in question. Add these up over the possible x2s, and you get a probability for the given d1 value. Get the probabilities for all d1 values and you can compute the RMS value of that distribution; dividing that by sqrt(2) gives the sigma_y(1) Allan deviation. (See the sketch below.)
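Here is a minimal numerical sketch of steps 3-6 in Python, just to pin down the logic before writing the real program. It assumes a round-to-nearest quantizer and midpoint-rule integration; the function names and the example parameter values are hypothetical:

import numpy as np
from math import erf, sqrt, pi

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def prob_d1(d1, mu, sigma, x2_halfspan=10, n=40):
    """P(d1): scan x2 (with x1 = 0, x3 = d1 + 2*x2), integrating over the
    initial phase error phi1 and the true phase Phi2 by midpoint sums.
    mu = mean frequency in cycles/s; sigma = std. dev. of 1-s frequency."""
    total = 0.0
    for x2 in range(round(mu) - x2_halfspan, round(mu) + x2_halfspan + 1):
        x3 = d1 + 2 * x2
        for phi1 in -0.5 + (np.arange(n) + 0.5) / n:          # phi1 in [-1/2, 1/2]
            Phi1 = -phi1                                      # since x1 = 0
            for Phi2 in x2 - 0.5 + (np.arange(n) + 0.5) / n:  # Phi2 rounds to x2
                # Density of F1 = Phi2 - Phi1, times P(Phi3 rounds to x3):
                f1_pdf = (np.exp(-0.5 * ((Phi2 - Phi1 - mu) / sigma)**2)
                          / (sigma * sqrt(2 * pi)))
                p_x3 = (norm_cdf((x3 + 0.5 - Phi2 - mu) / sigma)
                        - norm_cdf((x3 - 0.5 - Phi2 - mu) / sigma))
                total += f1_pdf * p_x3 / n**2                 # midpoint weights
    return total

# RMS of the d1 distribution, divided by sqrt(2) and normalized by the
# nominal frequency, estimates sigma_y(1) (parameter values hypothetical):
mu, sigma = 1e7 + 0.3, 2.0
ps = {d1: prob_d1(d1, mu, sigma) for d1 in range(-10, 11)}
Z = sum(ps.values())
sigma_y1 = sqrt(sum(d1*d1 * p for d1, p in ps.items()) / (2 * Z)) / 1e7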

Assuming this is a good model, the sigma_y(1) Allan deviation value should tell us, quite accurately, what the most likely standard deviation of the 1s-window oscillator frequency F is. And similarly, looking at the y values should tell us the mean frequency (although we know this drifts over time).

I think my next step is to write a short C or Matlab or Mathematica program to actually do this calculation. Only advantage of Mathematica: Maybe we can use it to evaluate the integral analytically. If not, I can do it numerically.

Monday, March 28, 2011

Journal Quest

Main task today: Working on some write-up of my recent analysis for the journal paper.

Gordon came by and I showed him the recent stuff. We looked at how to determine the time uncertainty from the Allan deviation. That would be a good thing to include in the paper.

Met with Ray, we went over the status of everything in the COSMICi system (HW at least).

Emailed Brian Stearns (contact at DeLorme) to ask him if the 0.2 ppm frequency instability we are seeing could be attributable to the TCXO used in the kit, requested part # for datasheet lookup. Also told him about the (1/2/20/40) millisecond glitches.

I'm wondering now if it is really a good assumption for us to say that quantization errors of neighboring phase readings are independent. Suppose the real OCXO frequency is close to an integer; then the quantization errors will be almost totally correlated, and will vanish from the y values. Or, if the OCXO frequency is close to a half-integer, then the quantization errors will be offset from each other by 0.5, and will tend to add up constructively in the y value, but then will tend to cancel again in the d (deviation) determination.

I'm thinking I need to make a more detailed model of the quantization error. Some parameters:

a - quantization error for first (x1) phase measurement, in cycles, in range [-0.5, +0.5). (That is, actual phase at the time of the PPS edge was the measured value minus a.)

f1 - fractional part of actual average frequency (phase accum. btw. PPS edges) during 1st second, in range [0, 1).

b - quantization error for the second (x2) phase measurement; can be derived from f1 and a.

f2 - fractional part of actual average frequency (phase accum. btw. PPS edges) during 2nd second, in range [0, 1). This can be derived from f1 and a model of the "real" instability, as a normal distribution.

c - quantization error for the third (x3) phase measurement; can be derived from f2 and b.

Then, y1 and y2 can be derived from this, their errors (relative to f1 and f2) can be derived, then d=y2-y1 can be derived, and the error between this and the real delta f2-f1 can be derived.

We can sweep through values of f1 and a, and plot the distribution of deviation measurements, including the effect of quantization error, as a function of the "real" underlying deviation. (See the sketch below.)
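As a sanity check on the bookkeeping, here is a hypothetical Python sketch of that sweep (round-to-nearest phase capture assumed; the function and parameter names are mine):

import numpy as np

def d_error(a, f1, df, N=10_000_000):
    """Quantization error of one deviation sample d = y2 - y1, given the
    x1 quantization error a, fractional frequency f1 in second 1, and the
    real frequency change df (in cycles) between the two seconds. The
    integer part N of the frequency cancels out of the error."""
    F1, F2 = N + f1, N + f1 + df     # true average frequencies
    Phi1 = 0.0 - a                   # measured x1 = 0; actual phase = x1 - a
    Phi2 = Phi1 + F1                 # quantization error b = x2 - Phi2
    Phi3 = Phi2 + F2                 # quantization error c = x3 - Phi3
    x2, x3 = round(Phi2), round(Phi3)
    d_meas = x3 - 2*x2               # = x1 - 2*x2 + x3 with x1 = 0
    return d_meas - df               # = a - 2b + c, the error vs. f2 - f1

# Sweep a and f1 on a grid for a fixed real drift df (in practice df would
# be drawn from the normal model of the real instability):
errs = [d_error(a, f1, df=0.0)
        for a in np.linspace(-0.5, 0.5, 101, endpoint=False)
        for f1 in np.linspace(0.0, 1.0, 100, endpoint=False)]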

This is all well and good, but I think the argument for the phase quantization errors being pretty well independent at larger window sizes is pretty strong; otherwise, we'd see some variation in the adjusted Allan deviation measurements depending on the window size, which we don't see except for very small window sizes. Still, doing the above more detailed calculation would be a good sanity check on our earlier calculations.

Tying Loose Ends

The rest of last week was just spent wrapping up bits of the analysis from Monday. Getting ready to start writing. Oh, and the XMath PO finally got generated.

Wednesday, March 23, 2011

Mathematica FAIL

* Ray & I discussed strategy w.r.t. how to proceed with FEDM board testing. We'll wait a few more days to hopefully get documentation from Sachin/Vaibhav, since they have been ill.

* Took forms to Lee Hall to get XMath past EIT approval. The secretary said the guy who does the approvals is supposed to be in tomorrow.

* Since the result from Mathematica was wrong, I manually constructed a symmetrized version of the distribution of frequency drift values we'd expect to see as a result of quantization error; & updated Monday's notes accordingly. Today planning to numerically calculate the distribution in C to make sure the curve is correct.

* Checked our NetBeans installation to see if it had C/C++ support installed. No; apparently I only installed it with Java support before. Uninstalling & reinstalling to add support for other languages. In the meantime, maybe I should just use Python...

* Had trouble registering NetBeans with Oracle, so Python won the day. Here is my Python code:

def dist(d):
    # Analytic piecewise-quadratic density, with d in units of 1/100 cycle
    # (so d/100 runs over [-2, 2]).
    if d > 100:
        return 0.25*(2 - (d/100))**2
    elif d < -100:
        return 0.25*(2 + (d/100))**2
    else:
        return 0.25*(2 - (d/100)**2)

count = {}

for d in range(-200, 200):
    count[d] = 0

# Enumerate all combinations of the three quantization errors (each
# discretized to 100 steps across [-1/2, 1/2)) and histogram the
# resulting deviation d = y2 - y1 = x1 - 2*x2 + x3.
for x1 in range(-50, 50):
    for x2 in range(-50, 50):
        for x3 in range(-50, 50):
            y1 = x2 - x1
            y2 = x3 - x2
            d = y2 - y1
            count[d] = count[d] + 1

# Print the histogram next to the analytic density, column by column.
tot = 0
totd = 0
for d in range(-200, 200):
    print(d, count[d], dist(d), sep='\t')
    tot = tot + count[d]
    totd = totd + dist(d)

print("TOT", tot, totd)

The two columns, when normalized, are identical, except for tiny discrepancies which can be chalked up to the finite discretization of the brute-force enumeration. So, this confirms my previous belief that the solution from Mathematica was correct except for the gap.

Monday, March 21, 2011

Just call me a quant...

In the car on Friday I was thinking about analyzing the quantization error a little more precisely, so that we can more accurately subtract it out from other influences on the Allan deviation to get a clearer idea of how much of the deviation is due to actual frequency instability.

The nice thing about quantization error is that it can be well modeled by a simple uniform distribution over the interval [-50 ns, +50 ns], since 100 ns is the clock period of our OCXO, and we are doing a simple single-edge triggered input capture on the PPS signal from the GPS.
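In other words (the 1/sqrt(12) standard deviation below is the standard result for a uniform distribution of unit width):

p(x) = 1/(100 ns) for x in [-50 ns, +50 ns), and 0 otherwise; sigma_x = (100 ns)/sqrt(12) ≈ 28.9 ns.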

Actually, we could improve this to [-25 ns, +25 ns] by using a dual edge-triggered flip-flop: I found a VHDL gate-level structural design for this online the other day. I tested it just now and it works. Let's put that off until later, however...

For now, we can express the distribution over the value of a single y(t_1, t_2)=x(t_2)-x(t_1) between two adjacent times t_1 & t_2 (1 second apart) by integrating a delta function as follows (with x_1=x(t_1), x_2=x(t_2)):

p(y) = \int_{x_1=-50 ns}^{50 ns} \int_{x_2=-50 ns}^{50 ns} \delta(y - (x_2 - x_1)) dx_2 dx_1

We should be able to solve this in Mathematica. Actually, I know what the answer is already from visualizing it geometrically; it's a triangular distribution. Mathematica confirms it (here using the scale 1/2 = 50 ns):

p(y) = 1 - |y| for y in [-1, 1], and 0 otherwise.
But really, what we are interested in is the distribution over values of the frequency drift between adjacent windows, d(t_1) = y(t_2)-y(t_1) = x(t_1) - 2x(t_2) + x(t_3). So then this is the triple integral:

p(d) = \int_{x_1=-50 ns}^{50 ns} \int_{x_2=-50 ns}^{50 ns} \int_{x_3=-50 ns}^{50 ns} \delta(d - (x_1-2x_2+x_3)) dx_3 dx_2 dx_1, or (with a=x_1, b=x_2, c=x_3, and letting 1/2 = 50 ns):

p(d) = \int_{a=-1/2}^{1/2} \int_{b=-1/2}^{1/2} \int_{c=-1/2}^{1/2} \delta(d - (a - 2b + c)) dc db da

I found this too hard to solve exactly in my head, but I got (a trial version of) Mathematica to evaluate it for me using this expression:

Simplify[Integrate[DiracDelta[d - (a - 2*b + c)], {a, -1/2, 1/2}, {b, -1/2, 1/2}, {c, -1/2, 1/2}]]

The answer should be a piecewise quadratic function in four parts, restricted to the range [-2,+2]. Unfortunately, Mathematica does not solve it properly - the piecewise result it returns is missing the piece on [-2,-1].

Of course, the correct answer should be symmetrical around d=0. We could fill in the missing piece in [-2,-1] by mirroring the piece in [1,2]. Doing that, we get the answer:

p(d) = (4 + 4d + d^2)/4 for -2 <= d <= -1,
p(d) = (2 - d^2)/4 for -1 <= d <= 1,
p(d) = (4 - 4d + d^2)/4 for 1 <= d <= 2,
p(d) = 0 otherwise.

The outer two cases can also be expressed as (d+2)^2/4 and (d-2)^2/4. We can check the normalization by integrating it over [-2,2], and we get the expected result (1), so this is a real probability distribution.
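Spelling out that normalization check:

\int_{-2}^{2} p(d) dd = 2 \int_{1}^{2} (d-2)^2/4 dd + \int_{-1}^{1} (2-d^2)/4 dd = 2*(1/12) + 5/6 = 1.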

Even with just three points, this is already starting to take on the character of a normal distribution, as would be expected from adding/subtracting independent random variables.

Anyway, we can model each experimental "d" sample as a sum of a sample d from the above distribution, plus a d value obtained from a normal-distribution model of the frequency deviations in each second.

However, we note that for larger window sizes, the absolute magnitude of the quantization error stays the same (no matter how large the windows get, the only thing that matters for the quantization error is still just the quantization error at the 3 phase samples at the window boundaries), so its contribution to the measured frequencies is effectively scaled down by a factor of the window size. Meanwhile, the magnitude of the inherent deviation of the frequency means (as an effect of sample size) scales down only with the square root of the window size. So, for larger window sizes the quantization error becomes insignificant.
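In symbols, for window size τ: sigma_quant(τ) ∝ 1/τ while sigma_freq(τ) ∝ 1/sqrt(τ), so the ratio sigma_quant/sigma_freq ∝ 1/sqrt(τ) → 0 as τ grows.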

Still, let's see if we can estimate its contribution. If we treat the above quadratic distribution as if it were a statistical population, we can find its variance by integrating d^2 times this distribution for d in [-2,2], and that result is just 1/2. So the standard deviation of this distribution is just 1/sqrt(2), or 0.7071, meaning (with 1=100ns) 70.71 ns.
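Explicitly (and agreeing with the shortcut Var(a - 2b + c) = (1 + 4 + 1)/12 = 1/2 for independent uniforms on [-1/2, 1/2]):

\int_{-2}^{2} d^2 p(d) dd = \int_{-1}^{1} d^2 (2-d^2)/4 dd + 2 \int_{1}^{2} d^2 (d-2)^2/4 dd = 7/30 + 8/30 = 1/2.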

Using the quadrature rule for the sum of the "real" frequency deviation d(real) and the quantization error d(quant), we have that the measured d is:

d^2 = d(real)^2 + d(quant)^2

And since d(quant) = 0.7071e-7, d(quant)^2 is 0.000,000,000,000,005 = 0.5e-14 sec^2 (exactly, it turns out).

So, we can write:

d(real)^2 = d^2 - 0.5e-14 sec^2
d(real) = sqrt(d^2 - 0.5e-14)

I put this calculation in my spreadsheet, and found that even in the worst case (1-second windows), the quantization error only accounts for a mere 4.2% of the deviation measured! And the proportion goes downwards from there inversely with the window size. What is happening?

Let's look at the worst case (window size 1). The "corrected" (inferred) frequency deviation sigma is about 167 ns. The standard deviation of the frequency deviation caused by quantization error is (as mentioned above) 70.7 ns; dividing by sqrt(2), because of the factor of 1/2 in the Allan deviation expression, gives 50 ns. Putting these together in quadrature, the expected overall deviation is sqrt(167^2 + 50^2) ≈ 174 ns, which is the measured deviation. In other words, because of the way errors combine in quadrature, even though the expected magnitude of the quantization error, considered by itself, is 50/174 = 29% of the measured deviation, it cannot by itself account for more than about 4% of the measured deviation.
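For the record, here is the worst-case arithmetic as a few lines of Python (just reproducing the spreadsheet calculation with the numbers quoted above):

from math import sqrt

d_meas  = 174e-9                          # measured 1-s Allan deviation, in seconds
d_quant = 70.71e-9 / sqrt(2)              # quantization sigma / sqrt(2) = 50 ns
d_real  = sqrt(d_meas**2 - d_quant**2)    # quadrature subtraction
print(d_real * 1e9, "ns")                 # ~167 ns
print((1 - d_real / d_meas) * 100, "%")   # share attributable to quantization, ~4.2%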

Friday, March 18, 2011

Why the -1/2 power law?

Juan is coming by in a bit to finish setting up the bootloader on the Acer after installing the XP partition.

I realized on my way out on Wednesday that my interpretation of the power-law scaling of the OCXO's Allan deviation was completely wrong - it would have predicted a -2 scaling, rather than the -1/2 that we see!

I discussed this with Ray this morning, and he thinks it is due to a sample-size effect. The standard deviation sigma_mean of a measurement of a mean from a sample is 1/sqrt(N) times the standard deviation sigma of the underlying distribution, where N is the number of elements of the sample. So, think of the Allan deviation as a measurement of the mean frequency displacement between adjacent averaging windows, where the displacement is a difference of frequencies and each mean frequency is measured over a window containing N one-second samples. If the instantaneous frequency fluctuates with a standard deviation of sigma, then one would expect the mean frequency measurements to have fluctuations of magnitude 1/sqrt(N) lower than the magnitude of the real underlying frequency fluctuations.
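Compactly, with N = τ one-second samples per window:

sigma_mean(τ) = sigma/sqrt(N) = sigma/sqrt(τ), so sigma_y(τ) ∝ τ^(-1/2),

which is exactly the -1/2 power law in the log-log plot.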

Wednesday, March 16, 2011

Still hangin' with Allan...

* Mike had problems this morning & couldn't make it to meet the Symmetricom rep. He said he'd leave a catalog outside the door but I didn't find it when I got here.

* Re: The Allan deviation computations. I realized after leaving last night that I forgot to divide the sum of squares by the number of samples to get the mean. I also didn't normalize to dimensionless units (dividing by the nominal oscillator frequency) as is normally done. That is fixed now. Also, on Ray's advice, I vectorized the Allan deviation calculation, so now it is much faster. I dumped the data to a CSV file and graphed it in OpenOffice Calc. Chekkit out! The red line, for comparison purposes, indicates the 1/2 power-law slope that would be expected if all of the deviation were due entirely to the (fixed) quantization error of oscillator phase measurement (which is at most +/- 100 ns), where the magnitude of these errors gets divided by the window size τ twice: the 1st time because there are τ independent phase samples that contribute to the overall phase offset for each data point, so the errors get averaged away by a factor of τ; and the 2nd time because the magnitude of the frequency delta gets divided by the window size to get a rate of frequency drift. (The starting point is sqrt(1/2)*1e-7, which is the expected deviation based on 1/12 - the variance of the uniform distribution over [0,1] - and the quadrature rule for combining x1-2x2+x3; though I'm still unsure if this is the right calculation.)


* Interestingly, the Allan deviation attains a minimum at a window size of almost exactly 1 day (86,400 seconds). Perhaps this means that, despite the OCXO's corrections, environmental temperature variations over the course of the day are responsible for some of the frequency variation that we're seeing on smaller (multi-hour) timescales. Then, when you look at adjacent pairs of windows longer than 1 day, you lose alignment with the day boundaries, so some of the error recurs. This suggests that we should also see minima at 2-day, 3-day, etc. window sizes. This is not the case, so perhaps this theory is wrong. It might just be a coincidence.

Monday, March 14, 2011

Allan Deviation

* Gordon called this morning & said he would come by after his class which ends at 3:30. Mike is planning to talk with him about what's going on. He came by and we talked for a couple of hours & he started looking at journal papers.

* Our regional sales rep from Symmetricom called; he will drop by Wednesday 11 am. I want to talk with him about their time reference products. Ray says we should see if we can borrow a unit from them. Ray also says to look at TEMEX; they're now called Spectratime. Requested a quote from them on this GPS time reference. A guy called back and gave a ballpark of $5K; he will send a quote later & put us in touch with a Florida sales rep, Bonnie.

* XMath: Mike called around and finally realized that Charles hadn't yet approved the requisition. Emailed him a reminder and CC'd Ray.

* Mike spent most of the afternoon working on the Allan deviation computations in Scilab. The code seems to be working now. It is slow, though. I'll leave it running overnight & it should be finished by the time I get back.

Friday, March 11, 2011

Fry day

Every day is (French) "fry" day when I eat lunch at McDonald's, LOL. Things for today:

1. Call the folks responsible for the ERP (Enterprise Resource Prevention) system to figure out why budget check on XMath isn't working. Ronnie Mackey left me a voicemail with their phone number 561-4357. Except that is actually the number of the Tallahassee office of the DEA. Calling the main EIT number instead & asking them to transfer me. I got someone's voicemail and left a message asking them to call me back. The requisition # in question is 0000093625.

2. Read journal articles on GPS timing, in preparation for writing paper. I printed out a bunch of articles on Wednesday; now I just need to find a more comfortable spot to sit & read them. I wonder if the library down the hall is open. Yes it is. Going to settle down there to read papers. Papers collected so far are:
  • Arceo-Miquel, Shmaliy, et al. (2009), "Optimal synchronization of local clocks by GPS 1PPS signals using predictive FIR filters;"
  • Banerjee, Suman, et al. (2007), "A study on the potentiality of the GPS timing receiver for real time applications;"
  • Berns, Burnett, et al. (2004), "GPS time synchronization in school-network cosmic ray detectors;"
  • Bullock, King, et al. (1997), "Test results and analysis of a low cost core GPS receiver for time transfer applications;"
  • Defraigne & Bruyninx (2007), "On the link between GPS pseudorange noise and day-boundary discontinuities in geodetic time transfer solutions;"
  • Ding & Wang (2008), "Time synchronisation error and calibration in integrated GPS/INS systems;"
  • Khan (2007), "Behavior of the GPS timing receivers in the presence of interference;"
  • Lewandowski & Thomas (1991), "GPS time transfer;"
  • Li, Wang, et al. (2009), "Time synchronization design based on FPGA in integrated GPS/INS system;"
  • Lombardi, Novick, et al. (2005), "Characterizing the performance of GPS disciplined oscillators with respect to UTC(NIST);"
  • Mumford (2003), "Relative timing characteristics of the one pulse per second (1PPS) output pulse of three GPS receivers;"
  • Shmaliy & Ibarra-Manzano (2010), "Recent advances in GPS-based clock estimation and steering;"
  • Skog & Händel (2010), "Time synchronization errors in GPS-aided inertial navigation systems."
3. Notes on the 1st paper (Arceo-Miquel, Shmaliy, et al., 2009). This is quite a bit beyond my present level of understanding of the field of time measurement. However, from it I learned the importance of the Allan deviation/variance for characterizing frequency stability of an oscillator. In this paper they had a cesium frequency reference (CsIII from Symmetricom) which they could use to measure the "true" time errors. In our lab, we don't have an atomic clock, so we have to make do with relative characterization (assuming the GPS has the specified accuracy). However, I requested a quote on the CsIII, to see if we might be able to afford it (maybe if the grant gets renewed). We actually got a quote on a couple of their Rubidium clocks a couple of years ago, and they were $16-19K. Symmetricom also has a chip-scale atomic clock which looks pretty interesting.

4. Some notes on the Allan deviation. Wikipedia & the 2nd paper (Banerjee, Suman, et al.) explain it pretty well. Basically, it is just the RMS difference between the average frequencies y of adjacent windows. It is plotted as a function of τ (tau), which is both the window size and the interval between the centers of adjacent windows. Let t_k be the time point exactly between the adjacent windows, so that t_k−τ and t_k+τ are the start time of the earlier window and the end time of the later window, respectively. Then, if x(t) denotes the cumulative phase of the measured oscillator at time t, then y(t) = τ^{-1}[x(t+τ) − x(t)] is the average oscillator frequency for the window starting at time t, and a sample for the Allan deviation is then y(t_k) − y(t_k−τ) = τ^{-1}[x(t_k−τ) − 2x(t_k) + x(t_k+τ)]. Just average the square of these samples over a long run (as an estimator of the mean squared deviation), divide by 2 (not quite sure why), and you have σ_y^2(τ), which is the Allan variance. Take the square root of this, and you have the Allan deviation. (A sketch of this estimator in code appears after this list.) In our case, we may use any integer values of τ from 1 (sec.) through half the size of the run. So we could plot this. However, because the GPS is sometimes losing time lock, if we did this, we would be conflating the oscillator frequency instabilities with the instability due to the GPS losses. To untangle this, it would be nice to do a long run when we have plenty of satellites in range throughout - but this will probably require setting up the experiment on the roof. :(

5. Some notes on the integer-millisecond glitches seen during our 10-day run (glitch duration - number of occurrences):
  • 1 s - 175
  • 2 s - 280
  • 5 s - 3
  • 6 s - 13
  • 7 s - 5
  • 12 s - 2
  • 13 s - 1
  • 22 s - 1
  • 24 s - 1
It's too hard to read the raw array... Modifying the code to display stats in a readable form. I need to write some code to systematically correlate these events with things that may be going on in the GPS.

6. Started adding code to my "gps-plot.sce" SciLab script to try to correct for extra PPS pulses. Last time I had repaired these by hand, but that was a pain. I think I can correct most of them automatically. That is done now - they were all corrected automatically.
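Following up item 4 above: here is a minimal sketch of that Allan deviation estimator in Python, using overlapping samples for simplicity (the function name and NumPy formulation are mine; our working implementation is the SciLab code):

import numpy as np

def allan_deviation(x, tau, f_nominal=10e6):
    """Fractional Allan deviation from a phase record x (in oscillator
    cycles, one sample per second) for an integer window size tau (s)."""
    x = np.asarray(x, dtype=float)
    # Second differences x(t-tau) - 2 x(t) + x(t+tau), overlapping samples:
    d = x[:-2*tau] - 2*x[tau:-tau] + x[2*tau:]
    avar = np.mean(d**2) / (2.0 * tau**2)   # Allan variance, (cycles/s)^2
    return np.sqrt(avar) / f_nominal        # normalize to dimensionless units

Plotting [allan_deviation(x, t) for t in range(1, len(x)//2)] on log-log axes would then give the kind of curve discussed above.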

Wednesday, March 9, 2011

Wins day

This morning's planned phone meeting with Sachin was postponed to 4 pm because Mike had an appointment with his eye doctor.

The XMath requisition is still outstanding. I am still unable to do the budget check for some unknown reason. Ray said it was not a budget issue or an EIT hold, and that I should talk to Ronnie Mackey in Sponsored Research. I tried to call him at 1 pm but he was on another call. So instead I left him a voice mail. Will try him again later today.

Next agenda item for today: Identify a journal that might be suitable for a paper on the GPS timing system. For example, the Berns et al. paper "GPS Time Synchronization in School-Network Cosmic Ray Detectors" was published in IEEE Trans. Nucl. Sci. 51(3), June 2004. This paper was mostly about the design of the timing system. Mike needs to read it more carefully and find some related literature.

Here's what looks like another useful paper: "GPS Time Transfer," Lewandowski & Thomas, 1991. http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=84976&abstractAccess=no&userType=inst . However I am sadly out of printer ink again. Time to go to Staples.

It's just really too bad that this department doesn't have a staff person in charge of keeping a stockroom stocked with office supplies, and that this university's purchasing office never fulfills my requisitions for these kinds of things, and so I, as an individual researcher, am forced to waste my valuable time, and my personal money buying ink cartridges and paper for the lab printer, so that I don't suffer excessive eyestrain from trying to read long research articles on the computer screen. Especially when I am still recovering from my eye surgery!

OK, anyway, I got my cartridge & printed out a number of papers. Ray & I had our phone meeting with Sachin at 4 pm, and he promised to provide us some documentation.

Monday, March 7, 2011

Spring Break!! (NOT.)

The students are on Spring break this week, but Ray & I are here working... :-/

We were going to have a phone meeting with Sachin this morning at 10 am, but we pushed it back to tomorrow.

My latest data run has been going for the last 10 days, so I decided to finally stop it so we could process it. The file size (with a few small initial aborted runs trimmed off) is 383 MB.

The reformatted (line-per-record) data file is 108 MB. Apparently, there were 0 serial data errors in the entire run! (Hooray!)

Now reading the new big data set into SciLab. OK, took a while but that's done.

The initial wander graph has a number of big discontinuities - probably extra PPS pulses.

The first one is definitely an extra PPS pulse, after only about 1 million (rather than 10 million) cycles. Fixed the current data file by hand, and modified the firmware so that in the future, PPS events that do not arrive within +/-1% of the expected time will be ignored & not reported to the server.

The other big discontinuities were also caused by extra PPS pulses, as expected. Fixed them by hand.

Next, there were a few hundred small transient glitches, always of particular sizes: usually 10,000 counts (1 ms), but occasionally 20,000 (2 ms) or 200,000 (20 ms). One was 400,000 (40 ms). Most of them only lasted a second, but occasional ones were longer, up to 10 or 20 seconds; there was one that was almost 100 seconds long. All discrepancies larger than 1,000 counts were accounted for by these transient glitches.

I wrote code to automatically repair glitches of the duration and apparent sizes I found (fixes the in-memory vectors in SciLab only; the glitches are still there in the data files).
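The actual repair code lives in my SciLab script; the idea, rendered as a hypothetical Python sketch (the tolerance and maximum-duration parameters are guesses), is roughly:

import numpy as np

NOMINAL = 10_000_000                          # cycles per PPS interval at 10 MHz
SIZES = (10_000, 20_000, 200_000, 400_000)    # 1 ms, 2 ms, 20 ms, 40 ms glitches

def repair_glitches(x, tol=1_000, max_len=120):
    """Flatten transient integer-millisecond offsets in a cumulative phase
    record x (units of 100-ns cycles, one sample per second)."""
    x = np.asarray(x, dtype=np.int64).copy()
    i = 1
    while i < len(x):
        jump = (x[i] - x[i - 1]) - NOMINAL     # deviation from nominal interval
        size = next((s for s in SIZES if abs(abs(jump) - s) <= tol), None)
        if size is not None:
            step = size if jump > 0 else -size
            # Look for the compensating jump that ends the excursion,
            # then subtract the offset from the samples in between.
            for j in range(i + 1, min(i + max_len, len(x))):
                if abs((x[j] - x[j - 1]) - NOMINAL + step) <= tol:
                    x[i:j] -= step
                    break
        i += 1
    return x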

However, it is somewhat of a mystery exactly what is causing these glitches. The fact that they are all exactly small integer multiples of 1 or 10 milliseconds is telling. Since they are not powers of 2, they are unlikely to be problems with the input capture circuit.

More likely, they reflect some kind of temporary adjustments being made to the GPS unit's internal clock by its firmware. This is corroborated by the fact that the glitches I looked at seemed to all coincide exactly with situations where the TRAIM was temporarily in an alarm condition (with all satellites marked invalid). I probably should do a more thorough (or automated) check to make sure that some event like this accounts for each of the glitches, but I haven't done that yet....

Meanwhile, with the glitches repaired algorithmically, we get output like we expected, based on our previous tests. Below is the cumulative phase wander (in 100-ns cycles) over the course of the entire big 10-day run, with TRAIM accuracy in nanoseconds (x10) superimposed.


The overall graph is U-shaped due to a gradual increase in OCXO frequency over the course of the run. We can see that the remaining excursions away from the baseline curve are generally fairly brief (at most a few hours) and coincide with times at which the TRAIM accuracy is nulled out.

And below is the relative frequency drift over the run, plotted in Hz above the nominal frequency of 10MHz. This plot uses a 1-hour window size for smoothing (time-averaging) the frequency measurements (with no smoothing, we'd only see integer values of Hz, almost all in the range 12-14). The sharp spikes correspond entirely (or almost entirely) to the start or end of excursions, times at which either the GPS clock is drifting at an anomalous frequency because it has just lost time lock & hasn't realized it yet, or at which the GPS clock has snapped back into phase after regaining time lock.


This graph confirms that the OCXO frequency drift over multi-day scales is within its specification (according to the OCXO datasheet) of 1 ppb per day. @10MHz, 1 ppb/day = 10 mHz/day = 0.1 Hz/10 days, whereas we actually observed less than 0.05 Hz cumulative upwards frequency drift over the entire 10-day period. (Not counting the spikes of course, which are the fault of the GPS, not the OCXO.)

Anyway, this data is sufficient for a preliminary paper on the GPS time alignment; the plots just need to be prettied up a bit. Maybe add a few zooms on interesting sections.

Ray says maybe for 1st version of system we shouldn't worry about interpolating over the GPS outages - instead just throw away data during those times.

Ray says we should maybe put emphasis on a journal article rather than a conference paper. He says look at the other GPS papers we have to see what journals they were in. Mike needs to check to make sure SPIE doesn't forbid republishing results in a journal.

Ray asked me to call somebody (was it Ronnie Mackey?) about the budget check problem on the XMath, but I forgot to do it before 5 pm. Will try again tomorrow.

Wednesday, March 2, 2011

Is it Hump Day already?

I got the XP product key by email from Juan and entered it, completing the installation. I temporarily named the machine "ACER-XP". Spoke to Antony about naming. He said "NEUTRALINO-XP" might be OK. But we should probably talk to Ray about what he wants.

My data-collection run that I started last Friday is still going strong. I'm just going to leave it going for now, and hopefully get it more than a week long. In the meantime, I can start on the Scilab code using the converted data files from previous runs.

The code to read the data into vectors is finished... Testing it now. It is working. I am now ready to massage & plot the data.