The Making of Amplitude: A Nine-Month Arc
The development of VaryOn Amplitude: from a single measurement framework to ten peer-reviewable scoring methodologies spanning the full spectrum of AI impact.
- May 2025 · Foundation · Genesis
The First Frequency
VaryOn Works begins with a single question: can AI impact be measured with the same rigor as credit risk or vulnerability severity? The first framework establishes the measurement-first philosophy: AI adoption worthiness scored through quantifiable, reproducible methodology.
- VaryOn Works founded as a measurement research lab
- Initial framework developed for AI adoption readiness scoring
- Tagline established: "Measurement for AI Frontiers"
- Aug 2025 · Expansion · Pivot
From Readiness to Reality
The critical insight: the market does not just need to know whether it should use AI; it needs to measure the AI already deployed. The focus shifts from internal readiness assessment to external impact measurement. VaryOn Meridian is conceived to score the quality and value of data consumed by AI agents in real time.
- External measurement thesis validated
- Meridian's four dimensions crystallized: Scarcity, Quality, Decision Impact, Defensibility
- Key innovation: O(2^n) to O(1) counterfactual Decision Impact protocol
- Oct 2025 · Specification · Formalization
Implementation-Ready Mathematics
Meridian reaches full mathematical specification, with every formula defined to edge-case precision: sigmoid scarcity normalization, exponential-decay freshness with per-data-type rates, an MCP server runtime-gating architecture that delivers scores inline during agent tool calls, and weighted geometric mean aggregation justified via the UN Human Development Index methodology.
- Sigmoid scarcity normalization (continuous scoring of alternative source availability)
- MCP server architecture for real-time score delivery (<100ms)
- Pricing derivation model: scores to market signals
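The three scoring primitives named above can be sketched in a few lines. This is a minimal illustration, not Meridian's actual formulas: the parameter names (`midpoint`, `steepness`, `half_life_hours`) and their defaults are assumptions introduced here for clarity.

```python
import math

def scarcity_score(alt_sources: int, midpoint: float = 3.0, steepness: float = 1.0) -> float:
    """Sigmoid scarcity normalization: fewer alternative sources -> higher score.
    midpoint and steepness are illustrative parameters, not spec values."""
    return 1.0 / (1.0 + math.exp(steepness * (alt_sources - midpoint)))

def freshness_score(age_hours: float, half_life_hours: float) -> float:
    """Exponential-decay freshness with a per-data-type half-life."""
    return 0.5 ** (age_hours / half_life_hours)

def weighted_geometric_mean(scores: dict[str, float], weights: dict[str, float]) -> float:
    """HDI-style aggregation: a zero in any dimension zeroes the composite,
    so no dimension can fully compensate for another."""
    total_w = sum(weights.values())
    return math.prod(scores[k] ** (weights[k] / total_w) for k in scores)
```

The geometric mean is what makes the aggregation non-compensatory: a perfect score on three dimensions cannot mask a zero on the fourth, which is the same rationale the UN gives for the HDI.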
- Dec 2025 · Architecture · Amplitude
One Framework Becomes Ten
The scope expands from measuring data quality to measuring the full spectrum of AI impact. The Amplitude thesis emerges: ten independently validated scoring frameworks organized across three layers - Data, Agent, and Ecosystem. Each framework answers a distinct question about how AI interacts with the world, using the mathematically optimal aggregation method for the specific phenomenon it measures.
- Three-layer architecture defined: Data, Agent, Ecosystem
- Ten frameworks named, scoped, and dimensionally structured
- Six distinct aggregation methods selected and justified
- Cross-index intelligence patterns identified
- Jan 2026 · Mathematics · Specification
The Aggregation Science
Each framework receives its full mathematical specification. The defining work: determining why each framework requires a different aggregation method. Geometric mean for non-compensatory trust. Harmonic mean for weakest-link resilience. Multiplicative chain for all-components-required oversight. Ceiling-constrained mean for outcome-bounded fairness. Each choice maps to how regulators and courts actually evaluate these phenomena.
- Geometric mean: Meridian, Fidelity, Cascade
- Harmonic mean: Threshold (weakest-link property)
- Multiplicative chain: Mandate, Torque (all-components-required)
- Arithmetic mean: Provenance (compensatory within bounds)
- Minimum-of-components: Convergence (non-compensatory)
- Ceiling-constrained: Parity (outcome disparity caps process)
- Gated geometric: Drift (shadow principal critical gate)
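The aggregation methods listed above can be sketched as plain functions to make their differing failure behavior concrete. The signatures, the boolean gate semantics, and the cap handling are illustrative assumptions made here, not Amplitude's actual formulas.

```python
import math

def geometric(scores):
    # Meridian, Fidelity, Cascade: non-compensatory; a zero zeroes the whole score.
    return math.prod(scores) ** (1 / len(scores))

def harmonic(scores):
    # Threshold: dominated by the weakest component (weakest-link property).
    return len(scores) / sum(1 / s for s in scores)

def multiplicative(scores):
    # Mandate, Torque: all components required; every deficit compounds.
    return math.prod(scores)

def arithmetic(scores):
    # Provenance: compensatory within bounds; strengths can offset weaknesses.
    return sum(scores) / len(scores)

def minimum(scores):
    # Convergence: strictly non-compensatory; only the worst component counts.
    return min(scores)

def ceiling_constrained(process_scores, outcome_cap):
    # Parity: strong process cannot score above the outcome-disparity cap.
    return min(arithmetic(process_scores), outcome_cap)

def gated_geometric(scores, gate_passed):
    # Drift: a tripped critical gate (e.g. shadow-principal check) zeroes the score.
    return geometric(scores) if gate_passed else 0.0
```

On the same inputs these functions order differently: for a profile with one weak dimension, the harmonic mean sits below the arithmetic mean, which is exactly why the weakest-link framework uses it.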
- Feb 2026 · Production · v3.1
Product-Ready Specification
The complete Amplitude methodology undergoes rigorous stress testing. Seventeen mathematical corrections are identified and implemented. Every framework is elevated to Level 2 (Specified): implementable formulas, edge-case handling, gaming-resistance analysis, production tier classification, and corrected mathematical foundations. The result: a 55-page specification an engineering team can build from.
- 17 mathematical corrections across all frameworks
- Three-tier production architecture: Transaction / Monitoring / Assessment
- Gaming resistance analysis for every framework
- Regulatory compliance matrix mapping to 9 regulations
- Five canonical reference scenarios for cross-framework validation
- 1,029 paragraphs of implementable specification
What remains: empirical validation, peer review, production deployment.