Endorsement Marker: Local candidate framework under local stewardship. Concept pages describe the programme’s organising logic, not an externally endorsed standard.

Concepts

Ideas that appear in every tier. The same physics, different tools.

These concepts recur throughout the programme. Each tier engages them at a level appropriate to its tools and precision. This page connects the threads.

Periodicity

What repeats, and how reliably?

Every clock rests on something periodic — a process that repeats in a way stable enough to count against. The question is never “does it repeat perfectly?” (nothing does) but “how well does it repeat, and how do we know?”

| Tier | Periodic process | What limits its regularity |
|---|---|---|
| 0 · Observe | Sun’s apparent motion, pendulum swing, clock tick | Equation of time, temperature, battery |
| 1 · Build | Electronic oscillation (DDS, VCO, GPSDO) | Crystal ageing, temperature, voltage noise |
| 2 · Simulate | Numerical oscillator with injected noise model | White, flicker, random-walk frequency noise |
| 3 · Explore | Atomic transition, Earth rotation, pulsar spin | Quantum projection noise, tidal friction, timing noise |
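The Tier 2 row can be made concrete. A minimal sketch of a numerical oscillator with injected frequency noise — white FM plus random-walk FM, at purely illustrative noise levels (flicker FM needs spectral shaping and is omitted here):

```python
import numpy as np

rng = np.random.default_rng(42)

def fractional_frequency(n, white=1e-11, random_walk=1e-13):
    """Fractional-frequency samples y_k with two injected noise types.
    Noise levels are illustrative, not calibrated to any real oscillator."""
    y_white = white * rng.standard_normal(n)                 # white FM
    y_rw = random_walk * np.cumsum(rng.standard_normal(n))   # random-walk FM
    return y_white + y_rw

tau0 = 1.0                        # sampling interval, s
y = fractional_frequency(10_000)  # fractional frequency y(t)
x = np.cumsum(y) * tau0           # accumulated time error x(t), s
```

Integrating the frequency samples gives the time error a comparison would actually observe; the random-walk component is what makes `x` wander ever further from zero.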

Comparison

A clock is an operational comparison between periodic processes.

This is the programme’s invariant principle. No tier defines a clock by its mechanism alone. Every tier defines it by comparing one periodic process to another and extracting information from the difference.

| Tier | What is compared | How the difference is observed |
|---|---|---|
| 0 | Sun vs pendulum vs household clock | Notebook entries, visual inspection |
| 1 | VCO vs DDS or GPSDO | Beat note (ear), then FFT (Beat Lab) |
| 2 | Simulated oscillators in a network | Allan deviation, closure residuals |
| 3 | Atomic clocks, UT1, pulsars | UTC − UTC(k), UT1 − UTC, timing residuals |
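The Tier 1 row can be illustrated numerically: mixing two tones that differ by a few hertz produces sum and difference terms, and the difference term is the audible beat. A sketch with arbitrary tone frequencies (not the Beat Lab’s actual settings):

```python
import numpy as np

fs = 48_000                       # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)     # one second of samples
f_ref, f_dut = 1000.0, 1003.0     # hypothetical reference and device tones

# Multiplying the tones gives a sum term (2003 Hz) and a difference
# term (3 Hz); the difference term is the beat.
mixed = np.sin(2 * np.pi * f_ref * t) * np.sin(2 * np.pi * f_dut * t)

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
low = freqs < 100                             # inspect only below 100 Hz
beat = freqs[low][np.argmax(spectrum[low])]   # -> 3.0 Hz
```

Extracting the 3 Hz beat from two kilohertz tones is the whole trick: the comparison amplifies a 0.3 % frequency difference into something a human ear, or a slow FFT, can resolve.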

Reference vs Free-Running

Which clock do you trust, and why?

A “reference” is not an absolute — it is the oscillator you choose to trust for the duration of the comparison. That choice is always justified by prior knowledge about stability, not by any inherent property of the oscillator itself.

| Tier | Reference | Free-running |
|---|---|---|
| 0 | Sun (slow, predictable drift) | Pendulum (temperature-sensitive) |
| 1 | DDS crystal (ppm) or GPSDO (10⁻¹²) | XR2206 VCO (drifts visibly) |
| 2 | Injected “true” frequency (known ground truth) | Simulated noisy oscillator |
| 3 | TAI ensemble, GPS system time | Individual lab clock UTC(k), UT1 |

Synchronisation

How are clocks aligned initially, and what does “aligned” mean?

Synchronisation crosses tiers with changing character. At Tier 0, it is notebook-mediated: the student carries information between the sundial and the pendulum by walking and writing. At Tier 1, it is signal-mediated: a PPS pulse from GPS. At Tier 2, it is algorithmic: ensemble time-scale computation. The progression moves from human coordination to electronic to mathematical — from logical to metrological synchronisation.

Stability vs Accuracy

A clock can be stable without being accurate, and accurate without being stable.

Stability is how well a clock repeats — how consistent successive measurements are. Accuracy is how close those measurements are to a defined standard. A pendulum that swings at exactly the same rate every day is stable, even if that rate is slightly wrong. A GPS PPS pulse is accurate (traceable to caesium) but may jitter at the nanosecond level.

This distinction is central to metrology and appears at every tier. The Allan deviation (σy(τ)) characterises stability; calibration against a primary standard characterises accuracy.
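A minimal sketch of the overlapping Allan deviation estimator (following the form in NIST SP 1065), applied to simulated white frequency noise, for which σy(τ) should fall as τ^(−1/2):

```python
import numpy as np

def allan_deviation(y, m):
    """Overlapping Allan deviation of fractional-frequency data y
    at an averaging factor m (i.e. averaging time m * tau0)."""
    y = np.asarray(y, dtype=float)
    # Averages over every length-m window (N - m + 1 of them)
    yb = np.convolve(y, np.ones(m) / m, mode="valid")
    # Differences of adjacent, non-overlapping-in-time averages
    d = yb[m:] - yb[:-m]
    return np.sqrt(0.5 * np.mean(d ** 2))

# White FM at an illustrative level of 1e-11
rng = np.random.default_rng(0)
y = 1e-11 * rng.standard_normal(100_000)

s1 = allan_deviation(y, 1)      # ~1e-11
s100 = allan_deviation(y, 100)  # ~1e-12: averaging 100x buys a factor ~10
```

The τ^(−1/2) slope is the signature of white frequency noise; flicker and random-walk noise flatten or reverse that slope, which is why a full σy(τ) curve is more informative than any single stability number.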

Drift and Noise

What changes slowly, and what fluctuates?

| Tier | Drift (systematic, slow) | Noise (random, fast) |
|---|---|---|
| 0 | Equation of time (~minutes over months) | Observer reaction time (~0.5 s) |
| 1 | VCO temperature drift (~Hz/°C) | Oscillator phase noise |
| 2 | Simulated random-walk frequency | Simulated white/flicker frequency |
| 3 | Pulsar spin-down, tidal deceleration | Interstellar medium dispersion, receiver noise |

Systematic vs Random Deviation

Can you tell them apart with your data?

A systematic deviation shifts all measurements in the same direction. A random deviation scatters them. With enough observations over enough time, the two become distinguishable — but in a short dataset, a systematic drift can masquerade as noise or vice versa. This is why Experiment 0.1 requires 7–14 days, not one afternoon.
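The 7–14 day requirement can be quantified. For evenly spaced daily readings with independent scatter σ, the standard error of a least-squares linear drift estimate is σ / √Σ(tᵢ − t̄)², which shrinks rapidly with the observing span. A sketch, assuming the ~0.5 s reaction-time scatter from the table above:

```python
import numpy as np

def drift_std_err(days, scatter=0.5):
    """Standard error (s/day) of a linear drift fitted to `days`
    evenly spaced daily readings, each with independent scatter (s)."""
    t = np.arange(days, dtype=float)
    return scatter / np.sqrt(np.sum((t - t.mean()) ** 2))

se3 = drift_std_err(3)    # ~0.35 s/day: a 0.1 s/day drift is invisible
se14 = drift_std_err(14)  # ~0.033 s/day: the same drift is a ~3-sigma signal
```

Three readings leave a 0.1 s/day drift buried well under the scatter; two weeks of readings pull it out to roughly three standard errors, which is the practical boundary between “looks like noise” and “clearly systematic”.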

Comparison Networks

What does a third clock add that two clocks cannot provide?

Two clocks can measure their frequency difference. Three clocks can check for consistency: if A–B, B–C, and C–A don’t satisfy triangular closure (their sum should be zero), something is wrong — and the residual localises the fault. Every additional clock adds constraints, rapidly over-determining the system.

| Tier | Network | Closure test |
|---|---|---|
| 0 | Sun + pendulum + household clock | Do all three agree on noon? |
| 1 | DDS + VCO (+ GPSDO if available) | Two or three pairwise beats |
| 2 | Simulated N-clock networks | Triangular closure residuals |
| 3 | TAI ensemble, VLBI baselines, pulsar timing arrays | Global closure of time-transfer links |
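The triangular closure test is a few lines of arithmetic. A sketch with hypothetical clock offsets and an illustrative per-link measurement noise:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical true fractional-frequency offsets of three clocks
true_offset = {"A": 3e-12, "B": -1e-12, "C": 5e-12}
link_noise = 1e-13  # per-link measurement noise (illustrative)

def measure(a, b):
    """Measured frequency difference a - b over one comparison interval."""
    return true_offset[a] - true_offset[b] + link_noise * rng.standard_normal()

ab, bc, ca = measure("A", "B"), measure("B", "C"), measure("C", "A")
closure = ab + bc + ca          # vanishes within ~sqrt(3) * link_noise

# A biased link breaks closure, and the residual has the size of the bias:
fault = 2e-12
closure_faulty = (ab + fault) + bc + ca
```

The true offsets cancel identically in the sum; only link noise and link faults survive. That is why the closure residual localises a bad link without any clock being trusted as ground truth.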

Local vs Distant Comparison

How does distance constrain what you can learn?

When two clocks are in the same room, comparison is essentially instantaneous. When they are separated by kilometres or light-years, the signal carrying comparison information takes time to propagate. This geometric constraint — formalised in Tier 2 as the parameter η(τ) = L_comparison / (cτ) — sets a boundary on what comparisons are physically possible at what precision.

Causal Baselines and Geometry

The boundary condition L ≤ cτ.

Any phase comparison between two clocks separated by distance L must satisfy L ≤ cτ, where τ is the comparison interval and c is the speed of signal propagation. This is not a law of clock physics — it is a boundary condition that all clocks must respect. In Tier 0, it is trivially satisfied (the garden is small). In Tier 1, it is negligible (electronic signals cross a bench in nanoseconds). In Tier 2, it is formalised as the η(τ) parameter. In Tier 3, it becomes the central design constraint for continental and interstellar clock networks.
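A back-of-envelope check of the regimes named above, with illustrative baselines:

```python
C = 299_792_458.0  # speed of light, m/s

def eta(baseline_m, tau_s):
    """Causal comparison parameter eta(tau) = L / (c * tau)."""
    return baseline_m / (C * tau_s)

bench = eta(1.0, 1.0)            # Tier 1: 1 m bench, 1 s comparison -> ~3e-9
continental = eta(5_000e3, 1.0)  # Tier 3: 5000 km link, 1 s comparison -> ~0.017
```

On the bench η is nine orders of magnitude below unity and can be ignored; over a continental baseline at one-second comparison intervals it reaches the percent level, which is when propagation geometry starts to matter in the error budget.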

This framework is developed in: U. Warring, Causal Clock Unification Framework, Zenodo v1.0.0, DOI: 10.5281/zenodo.17948436. It is under local stewardship and has not received broad community endorsement. Students are encouraged to test where it holds and where it breaks.

Further Reading

W. J. Riley, Handbook of Frequency Stability Analysis, NIST SP 1065, 2008.
D. W. Allan, N. Ashby, C. C. Hodge, The Science of Timekeeping, HP Application Note 1289, 1997.
D. S. Sivia and J. Skilling, Data Analysis: A Bayesian Tutorial, 2nd ed., Oxford, 2006.
F. Riehle, Frequency Standards: Basics and Applications, Wiley-VCH, 2004.
J. Levine, “Introduction to time and frequency metrology,” Rev. Sci. Instrum. 70, 2567 (1999).