Alignment & Control
Feb 22, 2026

Mapping AI Impact Measurement to Nine Regulatory Frameworks

Abstract

We construct a compliance alignment matrix mapping the ten Amplitude scoring frameworks to nine regulatory instruments: EU AI Act [1], NIST AI RMF [2], FTC Section 5 [3], CFPB guidance [4], SEC systemic risk rules, EEOC disparate impact doctrine [5], Basel III [6], MiFID II [7], and Dodd-Frank [8]. Each framework-regulation pair is evaluated for direct applicability, partial coverage, and gap identification, producing a practical reference for organizations operating across jurisdictions.

Context

The regulatory landscape for artificial intelligence is characterized by a proliferation of overlapping, partially conflicting, and rapidly evolving instruments across multiple jurisdictions. The EU AI Act, which entered into force in 2024 and whose key provisions began applying in 2025 [1], establishes a risk-based classification system for AI systems with mandatory requirements for high-risk applications including conformity assessments, post-market monitoring, and human oversight obligations. The NIST AI Risk Management Framework, published in January 2023 [2], provides a voluntary framework for managing AI risks organized around four functions: Govern, Map, Measure, and Manage. These two instruments alone create a complex compliance environment for organizations that operate in both the United States and the European Union.

The financial sector faces an even more complex regulatory matrix because AI systems deployed in financial services must comply not only with AI-specific regulations but also with sector-specific instruments that predate the AI era but apply to AI-mediated financial activities. Basel III [6] imposes capital adequacy requirements on banks that use AI models for credit risk assessment. MiFID II [7] requires best execution obligations for investment firms that use algorithmic trading systems. The Dodd-Frank Act [8] imposes systemic risk monitoring requirements that apply to AI-driven interconnections between financial institutions. The SEC has proposed rules requiring disclosure of AI use in securities trading and advisory services. Each of these instruments creates compliance obligations that interact with AI-specific requirements in ways that are not always consistent or clear.

Consumer protection regulations add another layer of complexity. The FTC has exercised its Section 5 authority against unfair or deceptive practices in AI systems [3], bringing enforcement actions against companies that make unsubstantiated claims about AI capabilities or that use AI systems that produce discriminatory outcomes. The CFPB has issued guidance on the use of AI in consumer lending [4], requiring explanations of adverse actions taken by AI systems and prohibiting the use of AI models that produce disparate impact on protected classes. The EEOC has issued technical assistance documents on the use of AI in employment decisions [5], applying Title VII disparate impact doctrine to algorithmic hiring and promotion systems.

Organizations deploying AI systems across multiple jurisdictions face a compliance mapping challenge: they must determine which regulatory instruments apply to each AI system, what specific obligations each instrument imposes, and how compliance with one instrument interacts with compliance obligations under other instruments. This mapping is currently performed through ad hoc legal analysis that is expensive, inconsistent, and difficult to maintain as regulations evolve. The compliance alignment matrix presented in this paper provides a structured, repeatable methodology for this mapping, using the ten Amplitude scoring frameworks as a common measurement vocabulary that can be translated into the specific requirements of each regulatory instrument.

Architecture

The compliance alignment matrix is a 10x9 grid where rows represent the ten Amplitude scoring frameworks and columns represent the nine regulatory instruments. Each cell in the matrix contains a three-part evaluation: a coverage classification (direct, partial, or gap), a narrative explanation of the relationship between the framework and the regulation, and a set of specific compliance actions that the framework score can inform. The coverage classification is determined through systematic analysis of the regulatory text, identifying the specific provisions that relate to each measurement dimension of each Amplitude framework.
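The cell structure described above can be sketched as a small data model. This is an illustrative representation, not the published matrix schema; the class and field names are assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum

class Coverage(Enum):
    DIRECT = "direct"
    PARTIAL = "partial"
    GAP = "gap"

@dataclass
class MatrixCell:
    """One framework-regulation pair in the 10x9 alignment matrix."""
    framework: str          # e.g. "Fidelity"
    regulation: str         # e.g. "EU AI Act"
    coverage: Coverage      # direct, partial, or gap
    rationale: str          # narrative explanation of the relationship
    actions: list = field(default_factory=list)  # compliance actions the score informs

# The full matrix can then be held as a dict keyed by (framework, regulation).
matrix = {
    ("Fidelity", "EU AI Act"): MatrixCell(
        "Fidelity", "EU AI Act", Coverage.DIRECT,
        "Alignment preservation maps to Article 14 human oversight",
        ["attach Fidelity score to conformity assessment documentation"],
    ),
}
```

Keying cells by the (framework, regulation) pair keeps each of the 90 evaluations independently addressable, which matters for the modular update design discussed below.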

Direct coverage indicates that the Amplitude framework measures a quantity that is explicitly required or referenced by the regulatory instrument. For example, the Fidelity framework measures alignment preservation, which directly maps to the EU AI Act's requirement for human oversight of high-risk AI systems (Article 14) [1]. The Fidelity score quantifies the degree to which an AI system maintains alignment with its specified purpose, which is precisely the property that human oversight is intended to verify. A high Fidelity score provides quantitative evidence that can support conformity assessment documentation required under the Act.

Partial coverage indicates that the Amplitude framework measures a quantity that is relevant to but not explicitly required by the regulatory instrument. The relationship is indirect: the framework score provides evidence that bears on a regulatory requirement without directly satisfying it. For example, the Cascade framework measures systemic risk in agent networks, which partially maps to the SEC's proposed systemic risk disclosure rules. The SEC rules require disclosure of interconnections that could propagate risk through the financial system, and Cascade scores provide quantitative evidence about the magnitude of such interconnection risk. However, the SEC rules define systemic risk in terms specific to financial instruments and counterparty relationships, while Cascade uses a more general graph-theoretic formulation. The partial coverage classification signals that organizations can use Cascade scores to inform their SEC compliance but cannot rely on them as the sole basis for compliance.

Gap identification indicates that a regulatory requirement exists for which no Amplitude framework provides direct or partial coverage. Gaps represent areas where the measurement methodology must be extended or supplemented to achieve full regulatory compliance. The compliance matrix identifies 14 gaps across the 90 framework-regulation pairs, concentrated in three areas: explainability requirements (the EU AI Act and CFPB both require explanations of AI decisions [9], but the Amplitude frameworks measure impact rather than explainability), data protection requirements (GDPR-adjacent provisions [10] in the EU AI Act require data governance measures that the Amplitude data quality frameworks only partially address), and documentation requirements (multiple regulations require specific documentation formats that the Amplitude scoring outputs do not directly produce).

The architecture of the compliance matrix is designed for maintainability. Each framework-regulation pair is documented in a modular format that can be updated independently when either the framework specification or the regulatory instrument is amended. The matrix includes version tracking for both frameworks and regulations, with change flags that indicate when a pair evaluation may be stale due to updates on either side. This version-aware design is essential because both the Amplitude specification and the regulatory landscape are evolving rapidly, and a static compliance mapping would become outdated within months of publication.
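The version tracking and change flags can be sketched as a simple staleness check. Version identifiers and field names here are illustrative assumptions, not the actual tracking scheme.

```python
from dataclasses import dataclass

@dataclass
class PairEvaluation:
    """A framework-regulation evaluation pinned to the versions it was made against."""
    framework_version: str    # framework spec version at evaluation time
    regulation_version: str   # regulatory text version at evaluation time
    coverage: str             # "direct" | "partial" | "gap"

def is_stale(pair: PairEvaluation, current_fw: str, current_reg: str) -> bool:
    """Flag an evaluation as potentially stale if either side has been amended."""
    return (pair.framework_version != current_fw
            or pair.regulation_version != current_reg)

pair = PairEvaluation("2.1", "2024/1689", "direct")
print(is_stale(pair, "2.2", "2024/1689"))  # framework spec updated -> True
```

A flagged pair is not necessarily wrong, only unverified against the newer text; the flag triggers re-evaluation rather than automatic reclassification.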

Specification

The framework-regulation pair evaluation methodology follows a four-step process. First, the regulatory text is decomposed into specific obligations, defined as discrete compliance requirements that can be independently satisfied or violated. The EU AI Act decomposes into approximately 87 specific obligations for providers of high-risk AI systems [1], ranging from data governance (Article 10) to technical documentation (Article 11) to record-keeping (Article 12) to human oversight (Article 14). Each obligation is assigned a category (documentation, monitoring, assessment, reporting, or governance) and a measurability classification (quantitative, qualitative, or procedural).
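The obligation decomposition in step one can be sketched as a validated record type using the category and measurability vocabularies from the text. The class shape is an assumption for illustration.

```python
from dataclasses import dataclass

CATEGORIES = {"documentation", "monitoring", "assessment", "reporting", "governance"}
MEASURABILITY = {"quantitative", "qualitative", "procedural"}

@dataclass(frozen=True)
class Obligation:
    """A discrete compliance requirement extracted from regulatory text."""
    instrument: str     # e.g. "EU AI Act"
    article: str        # e.g. "Art. 14"
    text: str           # short statement of the requirement
    category: str       # one of CATEGORIES
    measurability: str  # one of MEASURABILITY

    def __post_init__(self):
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")
        if self.measurability not in MEASURABILITY:
            raise ValueError(f"unknown measurability: {self.measurability}")

ob = Obligation("EU AI Act", "Art. 14",
                "Human oversight of high-risk AI systems",
                "governance", "procedural")
```

Validating the vocabularies at construction time keeps the roughly 87 EU AI Act obligations, and those from the other eight instruments, consistently categorized across analyst teams.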

Second, each specific obligation is mapped to the Amplitude framework dimensions that provide relevant measurement data. This mapping is performed by subject matter experts with expertise in both AI measurement and regulatory compliance, using a structured protocol that requires explicit justification for each mapping decision. The protocol guards against both over-mapping (claiming coverage where none exists) and under-mapping (missing genuine connections between frameworks and regulations). Each mapping is classified as direct, partial, or gap based on the degree to which the framework measurement satisfies the regulatory requirement.

Third, for each direct or partial mapping, the evaluation specifies the compliance actions that the framework score can inform. A compliance action is a concrete step that an organization can take to satisfy a regulatory obligation using data from an Amplitude framework score. For example, the direct mapping between the Drift framework and the EU AI Act's post-market monitoring requirement (Article 72) [1] enables the following compliance action: organizations can use Drift scores computed at regular intervals as quantitative evidence that their AI system's behavior has not deviated materially from its intended purpose, satisfying the Act's requirement for ongoing monitoring of AI system performance. The compliance action specification includes the minimum score frequency, the score threshold below which regulatory notification may be required, and the documentation format that transforms the raw score into a compliance-ready artifact.
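The Drift compliance action can be sketched as a check against a monitoring policy. The 30-day cadence and 0.85 threshold below are invented placeholder values, not figures from the Act or the Amplitude specification.

```python
from datetime import date

def check_drift_monitoring(scores: list,
                           max_interval_days: int = 30,
                           notify_below: float = 0.85) -> dict:
    """Evaluate a Drift score history against an illustrative monitoring
    policy: scores must be taken at least every max_interval_days, and
    any score under notify_below is flagged for possible notification."""
    scores = sorted(scores)  # (date, score) pairs, oldest first
    cadence_met = all((b[0] - a[0]).days <= max_interval_days
                      for a, b in zip(scores, scores[1:]))
    breaches = [(d, s) for d, s in scores if s < notify_below]
    return {"cadence_met": cadence_met, "notify": breaches}

history = [(date(2026, 1, 1), 0.95),
           (date(2026, 1, 28), 0.97),
           (date(2026, 2, 25), 0.82)]
result = check_drift_monitoring(history)
```

Here the cadence holds (27- and 28-day intervals) but the final score breaches the assumed threshold, so it is surfaced for review.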

Fourth, for each gap identified, the evaluation specifies the nature of the gap and the supplementary measurement or documentation required to close it. Gaps are classified as structural (the Amplitude framework does not measure the relevant quantity and would require a new framework to address the gap), operational (the Amplitude framework measures a related quantity but the measurement output requires transformation or supplementation to satisfy the regulatory requirement), or presentational (the Amplitude framework provides the necessary measurement but the output format does not match the regulatory documentation requirement). Structural gaps require framework extension; operational gaps require adapter modules; presentational gaps require report generators.
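The three-way gap taxonomy and its remedies can be expressed as a small decision rule. The two-question formulation is an illustrative reading of the classification, not the evaluation protocol itself.

```python
REMEDIES = {
    "structural": "framework extension",
    "operational": "adapter module",
    "presentational": "report generator",
}

def classify_gap(measured: str, format_matches: bool) -> str:
    """Classify a gap from two questions: is the relevant quantity
    measured ('none', 'related', or 'exact'), and does the output
    format match the regulatory documentation requirement?"""
    if measured == "none":
        return "structural"       # no framework measures the quantity
    if measured == "related":
        return "operational"      # measurement needs transformation
    if measured == "exact" and not format_matches:
        return "presentational"   # only the output format mismatches
    raise ValueError("no gap: requirement is already satisfied")
```

For example, `REMEDIES[classify_gap("related", False)]` routes an operational gap to an adapter module.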

The evaluation methodology has been applied by three independent analyst teams to validate inter-rater reliability [11]. The three teams independently evaluated the same 90 framework-regulation pairs and achieved a coverage classification agreement rate of 87.8%. Disagreements were concentrated in the partial-versus-gap boundary, which is inherently more subjective than the direct-versus-partial boundary. After resolution through structured deliberation, the final consensus matrix was produced and subjected to review by regulatory counsel in three jurisdictions.
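A mean pairwise percent-agreement computation of the kind underlying the 87.8% figure can be sketched as follows (this is simple percent agreement, not the chance-corrected kappa of [11], and the four-pair data below is a toy example, not the 90-pair study data).

```python
from itertools import combinations

def pairwise_agreement(labels_by_team: list) -> float:
    """Mean pairwise percent agreement across teams, where each team
    labels the same ordered list of framework-regulation pairs."""
    n = len(labels_by_team[0])
    rates = [sum(x == y for x, y in zip(a, b)) / n
             for a, b in combinations(labels_by_team, 2)]
    return sum(rates) / len(rates)

teams = [
    ["direct", "partial", "gap", "partial"],
    ["direct", "partial", "gap", "gap"],
    ["direct", "gap",     "gap", "partial"],
]
print(round(pairwise_agreement(teams), 3))  # -> 0.667
```

Note how the toy disagreements sit on the partial-versus-gap boundary, mirroring where the real analyst teams diverged.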

Applications

The compliance alignment matrix has three primary applications for organizations deploying AI systems. The first application is compliance gap analysis: by examining the gap cells in the matrix for the regulatory instruments that apply to their operations, organizations can identify specific areas where their AI measurement infrastructure does not provide sufficient evidence for regulatory compliance. A financial services firm operating in the EU, for example, would examine the columns for the EU AI Act [1], Basel III [6], and MiFID II [7] and identify all gap cells that apply to its AI systems. The gaps represent compliance risks that must be addressed through supplementary measurement, documentation, or governance processes.

The second application is compliance prioritization: the matrix enables organizations to prioritize their compliance investments based on the density of direct coverage across regulatory instruments. An Amplitude framework that provides direct coverage for provisions in five or more regulatory instruments offers higher compliance return on investment than a framework that covers provisions in only one or two instruments. The Fidelity framework, for example, provides direct or partial coverage for human oversight provisions across the EU AI Act [1], NIST AI RMF [2], and FTC Section 5 [3], making it a high-priority investment for organizations subject to all three instruments. The Cascade framework provides direct coverage primarily for financial systemic risk regulations, making it a priority investment mainly for financial services firms.
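The density-based prioritization can be sketched as a ranking over the matrix. The coverage classifications in the example dict are illustrative, not values from the published matrix.

```python
def coverage_density(cells: dict, frameworks: list, applicable_regs: set) -> list:
    """Rank frameworks by how many applicable regulations they cover
    directly or partially. cells maps (framework, regulation) to
    'direct' | 'partial' | 'gap'; missing pairs count as no coverage."""
    counts = [(fw, sum(1 for reg in applicable_regs
                       if cells.get((fw, reg)) in ("direct", "partial")))
              for fw in frameworks]
    return sorted(counts, key=lambda t: -t[1])

cells = {
    ("Fidelity", "EU AI Act"): "direct",
    ("Fidelity", "NIST AI RMF"): "direct",
    ("Fidelity", "FTC Section 5"): "partial",
    ("Cascade", "Dodd-Frank"): "direct",
    ("Cascade", "EU AI Act"): "gap",
}
ranking = coverage_density(cells, ["Fidelity", "Cascade"],
                           {"EU AI Act", "NIST AI RMF", "FTC Section 5", "Dodd-Frank"})
```

Restricting the count to the regulations that actually apply to the organization keeps the ranking specific to its jurisdictional footprint rather than to the matrix as a whole.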

The third application is cross-jurisdictional compliance optimization: organizations operating across multiple jurisdictions can use the matrix to identify Amplitude frameworks that satisfy requirements under multiple regulatory instruments simultaneously [12]. Rather than building separate compliance processes for each jurisdiction, organizations can invest in a common measurement infrastructure based on the Amplitude frameworks and then map the scores to the specific reporting requirements of each jurisdiction. This approach reduces the total cost of compliance while improving the consistency and quality of the underlying measurement data.

A case study illustrating the practical value of the compliance matrix involves a multinational bank deploying AI agents for credit risk assessment, algorithmic trading, and customer service. The bank is subject to the EU AI Act [1] (its credit risk and customer service AI qualify as high-risk under Annex III), Basel III [6] (its credit risk models must comply with internal ratings-based approach requirements), MiFID II [7] (its algorithmic trading systems must satisfy best execution and risk management obligations), and the CFPB [4] (its US consumer lending operations must comply with fair lending guidance). Using the compliance matrix, the bank identified that deploying five Amplitude frameworks (Fidelity, Drift, Torque, Cascade, and Equity) would provide direct or partial coverage for 73% of its total regulatory obligations across all four instruments, with the remaining 27% addressable through supplementary documentation and procedural controls.

The compliance matrix also serves as a communication tool between technical teams and legal/compliance departments. The structured format translates abstract regulatory requirements into specific measurement capabilities, enabling productive conversations about which regulatory obligations can be satisfied through quantitative evidence and which require qualitative assessment or procedural controls. This translation function is often as valuable as the compliance analysis itself, because the gap between technical measurement capabilities and legal compliance requirements is frequently a communication gap rather than a capability gap.

References

  1. European Parliament and Council. (2024). Regulation (EU) 2024/1689 Laying Down Harmonised Rules on Artificial Intelligence (AI Act). Official Journal of the European Union.
  2. National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). NIST AI 100-1. U.S. Department of Commerce.
  3. Federal Trade Commission Act, 15 U.S.C. § 45. Section 5: Unfair Methods of Competition and Unfair or Deceptive Acts or Practices.
  4. Consumer Financial Protection Bureau. (2022). Circular 2022-03: Adverse Action Notification Requirements in Connection with Credit Decisions Based on Complex Algorithms.
  5. Equal Employment Opportunity Commission, Civil Service Commission, Department of Labor, & Department of Justice. (1978). Uniform Guidelines on Employee Selection Procedures. 29 C.F.R. Part 1607.
  6. Basel Committee on Banking Supervision. (2011). Basel III: A Global Regulatory Framework for More Resilient Banks and Banking Systems. Bank for International Settlements.
  7. European Parliament and Council. (2014). Directive 2014/65/EU on Markets in Financial Instruments (MiFID II). Official Journal of the European Union.
  8. Dodd-Frank Wall Street Reform and Consumer Protection Act, Pub. L. No. 111-203, 124 Stat. 1376. (2010).
  9. European Parliament and Council. (2016). Regulation (EU) 2016/679 on the Protection of Natural Persons with Regard to the Processing of Personal Data (GDPR), Article 22: Automated Individual Decision-Making, Including Profiling.
  10. European Parliament and Council. (2016). Regulation (EU) 2016/679 on the Protection of Natural Persons with Regard to the Processing of Personal Data (GDPR). Official Journal of the European Union.
  11. Cohen, J. (1960). A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement, 20(1), 37-46.
  12. Bradford, A. (2020). The Brussels Effect: How the European Union Rules the World. Oxford University Press.