These concepts recur throughout the programme. Each tier engages them at a level appropriate to its tools and precision. This page connects the threads.
## Periodicity

*What repeats, and how reliably?*
Every clock rests on something periodic — a process that repeats in a way stable enough to count against. The question is never “does it repeat perfectly?” (nothing does) but “how well does it repeat, and how do we know?”
| Tier | Periodic process | What limits its regularity |
|---|---|---|
| 0 · Observe | Sun’s apparent motion, pendulum swing, clock tick | Equation of time, temperature, battery |
| 1 · Build | Electronic oscillation (DDS, VCO, GPSDO) | Crystal ageing, temperature, voltage noise |
| 2 · Simulate | Numerical oscillator with injected noise model | White, flicker, random-walk frequency noise |
| 3 · Explore | Atomic transition, Earth rotation, pulsar spin | Quantum projection noise, tidal friction, timing noise |
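The Tier 2 row above can be made concrete. Below is a minimal sketch, using only the standard library, of a numerical oscillator with injected white and random-walk frequency noise; the function name and noise levels are illustrative assumptions, not a prescribed implementation (flicker noise is omitted because it requires a filtered noise generator):

```python
import random

def simulate_phase(n, f0=1.0, white_y=1e-3, rw_y=1e-5, seed=0):
    """Return n accumulated-phase samples (in cycles) of an oscillator
    whose fractional frequency y carries white noise plus a random walk.
    Parameter values are illustrative, not calibrated to any real device."""
    rng = random.Random(seed)
    phase, y_rw, out = 0.0, 0.0, []
    for _ in range(n):
        y = y_rw + white_y * rng.gauss(0.0, 1.0)  # instantaneous fractional offset
        y_rw += rw_y * rng.gauss(0.0, 1.0)        # slow random-walk component
        phase += f0 * (1.0 + y)                   # advance one nominal second
        out.append(phase)
    return out
```

Each sample advances the phase by one nominal cycle plus a small noisy correction, so the record drifts away from the ideal straight line in exactly the ways the table describes.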
## Comparison

*A clock is an operational comparison between periodic processes.*
This is the programme’s invariant principle. No tier defines a clock by its mechanism alone. Every tier defines it by comparing one periodic process to another and extracting information from the difference.
| Tier | What is compared | How the difference is observed |
|---|---|---|
| 0 | Sun vs pendulum vs household clock | Notebook entries, visual inspection |
| 1 | VCO vs DDS or GPSDO | Beat note (ear), then FFT (Beat Lab) |
| 2 | Simulated oscillators in a network | Allan deviation, closure residuals |
| 3 | Atomic clocks, UT1, pulsars | UTC − UTC(k), UT1 − UTC, timing residuals |
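The comparison step can be sketched numerically: the fractional frequency difference between two clocks is the slope of their time difference against elapsed time. A minimal least-squares sketch (function and variable names are illustrative):

```python
def fractional_offset(x_a, x_b, tau=1.0):
    """Least-squares estimate of the fractional frequency difference
    y = (f_a - f_b) / f between two clocks whose readings x_a, x_b
    (in seconds) are taken every tau seconds."""
    n = len(x_a)
    t = [i * tau for i in range(n)]
    d = [a - b for a, b in zip(x_a, x_b)]   # time difference A - B
    tbar, dbar = sum(t) / n, sum(d) / n
    num = sum((ti - tbar) * (di - dbar) for ti, di in zip(t, d))
    den = sum((ti - tbar) ** 2 for ti in t)
    return num / den                         # seconds gained per second
```

The same slope is what a beat-note measurement delivers in hardware: the information lives entirely in the difference, never in either clock alone.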
## Reference vs Free-Running

*Which clock do you trust, and why?*
A “reference” is not an absolute — it is the oscillator you choose to trust for the duration of the comparison. That choice is always justified by prior knowledge about stability, not by any inherent property of the oscillator itself.
| Tier | Reference | Free-running |
|---|---|---|
| 0 | Sun (slow, predictable drift) | Pendulum (temperature-sensitive) |
| 1 | DDS crystal (ppm) or GPSDO (~10⁻¹²) | XR2206 VCO (drifts visibly) |
| 2 | Injected “true” frequency (known ground truth) | Simulated noisy oscillator |
| 3 | TAI ensemble, GPS system time | Individual lab clock UTC(k), UT1 |
## Synchronisation

*How are clocks aligned initially, and what does “aligned” mean?*
Synchronisation crosses tiers with changing character. At Tier 0, it is notebook-mediated: the student carries information between the sundial and the pendulum by walking and writing. At Tier 1, it is signal-mediated: a PPS pulse from GPS. At Tier 2, it is algorithmic: ensemble time-scale computation. The progression moves from human coordination to electronic to mathematical — from logical to metrological synchronisation.
## Stability vs Accuracy

*A clock can be stable without being accurate, and accurate without being stable.*
Stability is how well a clock repeats — how consistent successive measurements are. Accuracy is how close those measurements are to a defined standard. A pendulum that swings at exactly the same rate every day is stable, even if that rate is slightly wrong. A GPS PPS pulse is accurate (traceable to caesium) but may jitter at the nanosecond level.
This distinction is central to metrology and appears at every tier. The Allan deviation σ_y(τ) characterises stability; calibration against a primary standard characterises accuracy.
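A minimal sketch of the non-overlapping Allan deviation computed from fractional-frequency data, using only the standard library (production analysis would use the overlapping estimator described in NIST SP 1065; the function name is illustrative):

```python
import math

def adev(y, m=1):
    """Non-overlapping Allan deviation of fractional-frequency data y
    at averaging factor m (averaging time tau = m * tau0)."""
    # Average the data in contiguous, non-overlapping blocks of m samples
    blocks = [sum(y[i:i + m]) / m for i in range(0, len(y) - m + 1, m)]
    # Allan variance: half the mean squared difference of adjacent averages
    sq = [(b2 - b1) ** 2 for b1, b2 in zip(blocks, blocks[1:])]
    return math.sqrt(sum(sq) / (2 * len(sq)))
```

For white frequency noise the result falls as τ^(−1/2), which is exactly the behaviour students should see when they feed in the Tier 2 simulated data.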
## Drift and Noise

*What changes slowly, and what fluctuates?*
| Tier | Drift (systematic, slow) | Noise (random, fast) |
|---|---|---|
| 0 | Equation of time (~minutes over months) | Observer reaction time (~0.5 s) |
| 1 | VCO temperature drift (~Hz/°C) | Oscillator phase noise |
| 2 | Simulated random-walk frequency | Simulated white/flicker frequency |
| 3 | Pulsar spin-down, tidal deceleration | Interstellar medium dispersion, receiver noise |
## Systematic vs Random Deviation

*Can you tell them apart with your data?*
A systematic deviation shifts all measurements in the same direction. A random deviation scatters them. With enough observations over enough time, the two become distinguishable — but in a short dataset, a systematic drift can masquerade as noise or vice versa. This is why Experiment 0.1 requires 7–14 days, not one afternoon.
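One way to make the distinction operational: fit a straight line to the record and compare the fitted rate with the scatter of the residuals. A minimal sketch (function and variable names are illustrative assumptions):

```python
import math

def drift_and_scatter(x, tau=1.0):
    """Least-squares linear fit to readings x taken every tau seconds.
    Returns (rate, residual_rms): the systematic drift rate and the
    RMS of what remains after the drift is removed."""
    n = len(x)
    t = [i * tau for i in range(n)]
    tbar, xbar = sum(t) / n, sum(x) / n
    den = sum((ti - tbar) ** 2 for ti in t)
    rate = sum((ti - tbar) * (xi - xbar) for ti, xi in zip(t, x)) / den
    resid = [xi - (xbar + rate * (ti - tbar)) for ti, xi in zip(t, x)]
    rms = math.sqrt(sum(r * r for r in resid) / n)
    return rate, rms
```

With a short record the uncertainty of `rate` can exceed the rate itself, which is the quantitative version of drift masquerading as noise, and the reason the experiment needs days rather than hours.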
## Comparison Networks

*What does a third clock add that two clocks cannot provide?*
Two clocks can measure their frequency difference. Three clocks can check for consistency: if A–B, B–C, and C–A don’t satisfy triangular closure (their sum should be zero), something is wrong — and the residual localises the fault. Every additional clock adds constraints, rapidly over-determining the system.
| Tier | Network | Closure test |
|---|---|---|
| 0 | Sun + pendulum + household clock | Do all three agree on noon? |
| 1 | DDS + VCO (+ GPSDO if available) | Two or three pairwise beats |
| 2 | Simulated N-clock networks | Triangular closure residuals |
| 3 | TAI ensemble, VLBI baselines, pulsar timing arrays | Global closure of time-transfer links |
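The closure test in the table can be sketched directly: with the three pairwise differences taken in the sense A−B, B−C, C−A, their sum is the closure residual. A minimal sketch with illustrative numbers:

```python
def closure_residual(ab, bc, ca):
    """Triangular closure: (A-B) + (B-C) + (C-A) should vanish.
    A nonzero residual flags an error in at least one measurement."""
    return ab + bc + ca

# Consistent triangle: the differences cancel (illustrative values, seconds)
r_good = closure_residual(1.2, -0.7, -0.5)

# Inject a 0.3 s fault into the B-C link: the residual exposes it
r_bad = closure_residual(1.2, -0.7 + 0.3, -0.5)
```

Because the residual equals the injected fault, checking several triangles of a larger network localises which link is wrong, not merely that something is.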
## Local vs Distant Comparison

*How does distance constrain what you can learn?*
When two clocks are in the same room, comparison is essentially instantaneous. When they are separated by kilometres or light-years, the signal carrying comparison information takes time to propagate. This geometric constraint — formalised in Tier 2 as the parameter η(τ) = L / (cτ), where L is the comparison baseline — sets a boundary on what comparisons are physically possible at what precision.
## Causal Baselines and Geometry

*The boundary condition L ≤ cτ.*
Any phase comparison between two clocks separated by distance L must satisfy L ≤ cτ, where τ is the comparison interval and c is the speed of signal propagation. This is not a law of clock physics — it is a boundary condition that all clocks must respect. In Tier 0, it is trivially satisfied (the garden is small). In Tier 1, it is negligible (electronic signals cross a bench in nanoseconds). In Tier 2, it is formalised as the η(τ) parameter. In Tier 3, it becomes the central design constraint for continental and interstellar clock networks.
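The η(τ) parameter is simple enough to evaluate directly; a minimal sketch (the function name and example baselines are illustrative):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def eta(baseline_m, tau_s, c=C):
    """Causality parameter eta(tau) = L / (c * tau).
    eta << 1: propagation delay is negligible at this precision;
    eta -> 1: the comparison interval approaches the light-travel
    time across the baseline, and the constraint L <= c*tau binds."""
    return baseline_m / (c * tau_s)

eta_bench = eta(1.0, 1.0)        # Tier 1 bench: ~3e-9, utterly negligible
eta_link = eta(5.0e6, 1.0)       # 5000 km link at 1 s: small but nonzero
```

Plugging in astronomical baselines shows immediately why interstellar networks must average over correspondingly long intervals.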
This framework is developed in: U. Warring, Causal Clock Unification Framework, Zenodo v1.0.0, DOI: 10.5281/zenodo.17948436. It is under local stewardship and has not received broad community endorsement. Students are encouraged to test where it holds and where it breaks.
## Further Reading
W. J. Riley, Handbook of Frequency Stability Analysis, NIST SP 1065, 2008.
D. W. Allan, N. Ashby, C. C. Hodge, The Science of Timekeeping, HP Application Note 1289, 1997.
D. S. Sivia and J. Skilling, Data Analysis: A Bayesian Tutorial, 2nd ed., Oxford, 2006.
F. Riehle, Frequency Standards: Basics and Applications, Wiley-VCH, 2004.
J. Levine, “Introduction to time and frequency metrology,” Rev. Sci. Instrum. 70, 2567 (1999).