This tutorial bridges the Framework and the Lab. Part I develops derivations by hand; Part II translates them into working code. Both parts are prerequisites for the entrance session.
Pen-and-paper work — from oscillator model to comparison geometry
Start with a damped harmonic oscillator driven by a restoring potential. The general solution is

x(t) = A(t) cos(ω0t + φ(t)),

where A(t) captures amplitude noise and φ(t) captures phase noise.
Show that for a high-Q oscillator (Q >> 1), amplitude fluctuations decay on a timescale τA = 2Q/ω0, while phase fluctuations accumulate without bound. Argue why clock metrology focuses on φ(t) rather than A(t).
Define the fractional frequency deviation y(t) = (1/2πν0) dφ/dt. Show that the time-averaged fractional frequency over an interval [t, t+τ] is

ȳ(t, τ) = (1/τ) ∫[t, t+τ] y(t′) dt′ = [φ(t+τ) − φ(t)] / (2πν0τ).
Starting from the definition of ȳk (the average over the k-th consecutive interval of length τ), derive the two-sample (Allan) variance:

σy²(τ) = (1/2) ⟨(ȳk+1 − ȳk)²⟩.
Show that this is equivalent to the spectral form

σy²(τ) = 2 ∫[0, ∞) Sy(f) sin⁴(πfτ)/(πfτ)² df,

where Sy(f) is the one-sided power spectral density of y(t).
Evaluate the integral above for Sy(f) = h0 (white frequency noise). Show that σy(τ) ∝ τ^(−1/2) and determine the proportionality constant in terms of h0.
Repeat for Sy(f) = h−1/f (flicker frequency noise). Show that σy(τ) becomes independent of τ (flat on a log-log plot). Explain physically why averaging does not improve stability in this regime.
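As a check on these two derivations, the standard conversions between power-law PSD coefficients and the Allan variance (for Sy = h0 and Sy = h−1/f respectively) that you should recover are:

```latex
\sigma_y^2(\tau) = \frac{h_0}{2\tau} \quad \text{(white FM)},
\qquad
\sigma_y^2(\tau) = 2\ln 2 \; h_{-1} \quad \text{(flicker FM)}.
```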
Consider N independent measurements of fractional frequency, yi, drawn from a Gaussian with unknown mean μ and known variance σ². Write down the likelihood, choose a conjugate Gaussian prior for μ, and derive the posterior analytically. Show that the posterior mean is a precision-weighted average of the prior mean and the sample mean.
Now suppose both μ and σ² are unknown. Write down the joint likelihood. Argue why a Jeffreys prior p(σ) ∝ 1/σ is appropriate for the scale parameter. Show that the marginalised posterior for μ is a Student-t distribution. Discuss: what does the heavier tail of the Student-t imply for outlier sensitivity compared to the Gaussian case?
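For checking both Bayesian derivations, the standard conjugate-analysis results they should reproduce are, with prior μ ~ N(μ0, σ0²) in the known-variance case and p(μ, σ) ∝ 1/σ in the unknown-variance case:

```latex
\mu \mid y_{1:N} \sim \mathcal{N}\!\left(
  \frac{\mu_0/\sigma_0^2 + N\bar{y}/\sigma^2}{1/\sigma_0^2 + N/\sigma^2},\;
  \Big(\frac{1}{\sigma_0^2} + \frac{N}{\sigma^2}\Big)^{-1}\right),
\qquad
\frac{\mu - \bar{y}}{s/\sqrt{N}} \,\Big|\, y_{1:N} \sim t_{N-1},
\quad s^2 = \frac{1}{N-1}\sum_{i=1}^{N}(y_i - \bar{y})^2 .
```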
Two clocks are separated by distance L. A phase comparison requires a signal round trip (or one-way with synchronisation). Show that the minimum comparison interval is Tmin = L/c (one-way) or Tmin = 2L/c (round trip). Define η(τ) = L/(cτ) and argue that η ≤ 1 is a necessary condition for a meaningful comparison.
For a clock with white frequency noise (Sy = h0), the Allan deviation improves as τ^(−1/2) with averaging. But as τ increases, so does the accumulated propagation noise on the comparison link. Model the link noise as white phase noise with amplitude h2,link. Derive the total comparison uncertainty σtotal(τ) = σclock(τ) ⊕ σlink(τ), where ⊕ denotes addition in quadrature, and find the τopt that minimises it. Express ηopt = L/(cτopt) in terms of h0 and h2,link.
Interpret ηopt physically. Sketch the two competing contributions and their sum on a log-log plot. Explain: why is ηopt the falsifiable prediction of the causal-geometry framework?
Python implementations — validate your derivations numerically
Generate N = 10⁶ samples of white frequency noise y[n] with amplitude h0. Compute the Allan deviation using the overlap estimator and verify the τ^(−1/2) slope.
import numpy as np
def white_freq_noise(N, h0, dt=1.0):
    """Generate white frequency noise samples."""
    sigma_y = np.sqrt(h0 / (2 * dt))
    return np.random.normal(0, sigma_y, N)
# Your code: compute phase by cumulative sum, then Allan deviation
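One way to complete the placeholder above — a minimal sketch with an illustrative h0 and a fixed seed; the `adev_at` helper is a throwaway non-overlapping estimator, and a two-point fit stands in for a full log-log regression:

```python
import numpy as np

rng = np.random.default_rng(0)
N, h0, dt = 1_000_000, 1e-24, 1.0

# White frequency noise: sample standard deviation sqrt(h0 / (2*dt))
y = rng.normal(0.0, np.sqrt(h0 / (2 * dt)), N)
phase = np.cumsum(y) * dt  # integrate fractional frequency to phase (time error)

def adev_at(phase, n, dt):
    """Non-overlapping Allan deviation at tau = n*dt from phase data."""
    d = phase[2*n::n] - 2*phase[n:-n:n] + phase[:-2*n:n]
    return np.sqrt(np.mean(d**2) / (2 * (n * dt)**2))

tau1, tau2 = 10, 1000
slope = (np.log10(adev_at(phase, tau2, dt)) - np.log10(adev_at(phase, tau1, dt))) \
        / (np.log10(tau2) - np.log10(tau1))
print(f"fitted slope: {slope:.2f}")  # expect a value close to -0.5
```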
Generate flicker frequency noise (α = −1) by filtering white noise in the frequency domain. Compare the resulting Allan deviation slope to your analytic prediction from D.2.3.
def power_law_noise(N, alpha, h_alpha, dt=1.0):
    """Generate power-law noise S_y(f) = h_alpha * f^alpha
    via spectral shaping of white noise."""
    freqs = np.fft.rfftfreq(N, d=dt)
    freqs[0] = freqs[1]  # regularise the DC bin (avoids 0**alpha for alpha < 0)
    white = np.fft.rfft(np.random.normal(0, 1, N))
    # Shape the spectrum: unit-variance input samples have one-sided PSD 2*dt,
    # so the filter |H(f)|^2 = h_alpha * f^alpha / (2*dt) yields the target PSD
    shaped = white * np.sqrt(h_alpha * np.abs(freqs)**alpha / (2 * dt))
    shaped[0] = 0.0  # remove the DC component
    return np.fft.irfft(shaped, n=N)
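A quick sanity check for the flicker case (α = −1): the Allan deviation should come out flat in τ. The generator is restated inline so the snippet runs standalone; h−1, N, and the seed are illustrative, and `adev_at` is a throwaway non-overlapping estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
N, h_m1, dt = 2**20, 1e-26, 1.0

# Flicker FM via spectral shaping (inline version, alpha = -1)
freqs = np.fft.rfftfreq(N, d=dt)
freqs[0] = freqs[1]                       # regularise the DC bin
white = np.fft.rfft(rng.normal(0, 1, N))
shaped = white * np.sqrt(h_m1 * freqs**-1 / (2 * dt))
shaped[0] = 0.0                           # remove the DC component
y = np.fft.irfft(shaped, n=N)
phase = np.cumsum(y) * dt

def adev_at(phase, n, dt):
    """Non-overlapping Allan deviation at tau = n*dt from phase data."""
    d = phase[2*n::n] - 2*phase[n:-n:n] + phase[:-2*n:n]
    return np.sqrt(np.mean(d**2) / (2 * (n * dt)**2))

a10, a100 = adev_at(phase, 10, dt), adev_at(phase, 100, dt)
print(a10, a100)  # should be roughly equal: flicker FM is flat in tau
```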
Generate a time series with both white frequency and flicker frequency noise. Plot the Allan deviation and identify the crossover τ where the dominant noise type changes. Compare to the analytic prediction.
Implement the non-overlapping Allan variance estimator directly from the phase time series φ[n]:
def allan_variance(phase, taus, dt=1.0):
    """Compute the non-overlapping Allan deviation from phase data.

    Parameters
    ----------
    phase : array - cumulative phase samples
    taus : array - averaging times to evaluate
    dt : float - sampling interval

    Returns
    -------
    adev : array - Allan deviation for each tau
    """
    adev = np.zeros(len(taus))
    for i, tau in enumerate(taus):
        n = int(round(tau / dt))
        if n < 1 or 2 * n >= len(phase):
            adev[i] = np.nan  # tau not resolvable with this record length
            continue
        # Second differences of the tau-decimated phase
        diffs = phase[2*n::n] - 2*phase[n:-n:n] + phase[:-2*n:n]
        # Normalise with the realised tau (n*dt), not the requested one
        adev[i] = np.sqrt(np.mean(diffs**2) / (2 * (n * dt)**2))
    return adev
Test against allantools.adev() for each noise type. Quantify agreement.
Generate 100 samples of fractional frequency from a clock with known offset μ = 1×10⁻¹³ and white frequency noise. Use MCMC to recover μ with a Gaussian prior. Compare the posterior width to the frequentist standard error.
import emcee
def log_posterior(theta, data, prior_mu, prior_sigma, sigma_known):
    mu = theta[0]
    # Log-likelihood (Gaussian, additive constants dropped)
    ll = -0.5 * np.sum((data - mu)**2 / sigma_known**2)
    # Log-prior (Gaussian, additive constants dropped)
    lp = -0.5 * ((mu - prior_mu) / prior_sigma)**2
    return ll + lp
# Your code: set up sampler, run chains, check convergence
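Before running the sampler, it is worth computing the analytic answer it should reproduce. A numpy-only cross-check of the conjugate posterior from the pen-and-paper part (all numerical values here are illustrative, not prescribed by the exercise):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data: known offset and known white-noise level (illustrative values)
mu_true, sigma_known, N = 1e-13, 5e-13, 100
data = rng.normal(mu_true, sigma_known, N)

# Broad Gaussian prior on mu
prior_mu, prior_sigma = 0.0, 1e-12

# Analytic conjugate posterior: precision-weighted combination of prior and data
prec = 1 / prior_sigma**2 + N / sigma_known**2
post_mu = (prior_mu / prior_sigma**2 + N * data.mean() / sigma_known**2) / prec
post_sigma = np.sqrt(1 / prec)

# Frequentist standard error for comparison
stderr = sigma_known / np.sqrt(N)
print(post_mu, post_sigma, stderr)  # post_sigma ~ stderr for a broad prior
```

The MCMC posterior mean and standard deviation should agree with `post_mu` and `post_sigma` to within sampling error.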
Generate a dataset from white + flicker frequency noise. Fit two models: (1) pure white frequency, (2) white + flicker. Estimate the Bayes factor using the Savage-Dickey density ratio or thermodynamic integration. Which model is preferred, and by how much?
From the posterior of C.3.2, generate 200 synthetic datasets. For each, compute the Allan deviation at τ = 1, 10, 100, 1000 s. Plot the distribution of synthetic Allan deviations against the observed values. Does the posterior predictive distribution contain the observations?
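The loop structure can be sketched as follows, using the sample mean as a stand-in statistic (swap in your Allan-deviation computation at the marked line); the dataset and Gaussian posterior here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
mu_true, sigma, N = 1e-13, 5e-13, 100
data = rng.normal(mu_true, sigma, N)

# Gaussian posterior on mu under a broad prior (effectively the likelihood)
post_mu, post_sigma = data.mean(), sigma / np.sqrt(N)

# Posterior predictive: draw mu, then a synthetic dataset, then the statistic
n_rep = 200
stats = np.empty(n_rep)
for i in range(n_rep):
    mu_i = rng.normal(post_mu, post_sigma)
    synth = rng.normal(mu_i, sigma, N)
    stats[i] = synth.mean()  # replace with an Allan-deviation statistic

# Posterior predictive p-value for the observed statistic
p = np.mean(stats >= data.mean())
print(f"predictive p-value: {p:.2f}")  # should be far from 0 and 1
```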
Simulate two clocks with different noise profiles. Add white phase noise on the comparison link. Compute the comparison Allan deviation σAB(τ) and identify τopt. Calculate ηopt and compare to your derivation in D.4.2.
Extend to three clocks. Compute all three pairwise comparisons and verify closure: yAB + yBC + yCA → 0. Inject a systematic offset on one link and show that the closure residual reveals it.
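The closure identity is cheap to verify numerically. A minimal sketch with illustrative noise levels (link noise omitted here so the cancellation is exact):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100_000

# Independent fractional-frequency series for clocks A, B, C
yA = rng.normal(0, 1e-13, N)
yB = rng.normal(0, 1e-13, N)
yC = rng.normal(0, 1e-13, N)

# Pairwise comparisons
yAB, yBC, yCA = yA - yB, yB - yC, yC - yA
closure = yAB + yBC + yCA
print(np.max(np.abs(closure)))  # zero to floating-point precision

# Inject a systematic offset on the A-B link: the closure residual exposes it
offset = 3e-14
closure_bad = (yAB + offset) + yBC + yCA
print(np.mean(closure_bad))     # recovers the injected offset
```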
For a three-clock network with varying separations, compute ηopt for each pair. Plot a map of ηopt versus the comparison baseline L. Overlay the analytic prediction from D.4.2. Discuss where the prediction breaks down and why.
Bring your completed derivations (handwritten or typeset) and working Jupyter notebooks to the entrance session. Push notebooks to your fork of the repository before the session.
Continue to the Lab →