work-blog/articles/drafts/the-oracle-problem.md
Gregory Gauthier 544b773e8f feat(drafts): add initial drafts for philosophy-inspired testing articles
Introduces nine new draft articles exploring intersections of software testing with philosophy, epistemology, and related concepts:
- On Flakiness (Heraclitus and non-deterministic tests)
- Popper and the Risky Test (demarcation criterion)
- Regression as Institutional Memory (Wittgenstein's On Certainty)
- Tacit Knowledge and the Testing Checklist (Polanyi's tacit dimension)
- Test Environments as Platonic Shadows (Plato's cave allegory)
- The Tester as Witness (legal metaphor and testimony)
- Testing Probabilistic Systems (ML and statistical testing)
- The Oracle Problem (oracles in testing frameworks)
- When Quality Becomes Quantity (Goodhart's Law and metrics)
2026-04-20 09:28:28 +01:00


The Oracle Problem. This is the most glaring missing piece. Your entire framework asks *how do we know?*, but you haven't yet tackled the uniquely testing-flavoured version: how do we know what "correct" means? An oracle is whatever tells a test whether an output is right. In your world, oracles are sometimes requirements, sometimes expectations, sometimes customer satisfaction, sometimes regulator sign-off, and they conflict with each other. Elaine Weyuker's 1982 paper on the oracle assumption, "On Testing Non-testable Programs"[1], and Doug Hoffman's "Heuristic Test Oracles"[2] are the obvious anchors. This piece also unifies your Categories-of-Testing triad: each of the three fact-kinds has its own species of oracle.
1. https://dl.acm.org/doi/10.1093/comjnl/25.4.465
2. https://www.stickyminds.com/article/heuristic-test-oracles
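The distinction between an exact oracle and a heuristic one can be made concrete with a small sketch. This is an illustrative example in Python, not code from any particular framework: when no known-correct output exists, a heuristic oracle (in Hoffman's sense) checks invariants that any correct output must satisfy, here for a sorting routine.

```python
from collections import Counter

def heuristic_sort_oracle(inputs, output):
    """Heuristic oracle for a sort: instead of comparing against a
    precomputed expected value, check two invariants that any correct
    result must satisfy."""
    # Invariant 1: the output is a permutation of the input.
    is_permutation = Counter(inputs) == Counter(output)
    # Invariant 2: the output is in non-decreasing order.
    is_ordered = all(a <= b for a, b in zip(output, output[1:]))
    return is_permutation and is_ordered

def buggy_sort(xs):
    # Deliberately wrong: drops duplicates, so the permutation
    # invariant catches it even though the result is ordered.
    return sorted(set(xs))

# A correct result passes; the buggy one fails the oracle.
assert heuristic_sort_oracle([3, 1, 2], sorted([3, 1, 2]))
assert not heuristic_sort_oracle([2, 2, 1], buggy_sort([2, 2, 1]))
```

The point of the sketch is that the oracle is a separate judgment from the system under test: swap in a different oracle (requirements, expectations, regulator sign-off) and the same output can pass or fail.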