At VaryOn Works, we build measurement infrastructure for the frontiers of artificial intelligence. We believe that the ability to measure AI's impact on society is itself a responsibility - one that demands rigor, honesty, and care. This policy outlines the principles that guide how we design our frameworks, conduct our research, and operate as an organization.
Our measurement frameworks are designed to produce results that are fair, unbiased, and representative. We actively work to identify and mitigate sources of bias in our scoring methodologies, datasets, and evaluation criteria. We recognize that measurement itself can reinforce or challenge existing inequities, and we take that influence seriously.
We are committed to making our methodologies understandable and our processes visible. When we publish scores, assessments, or research findings, we explain clearly how results were derived, what data was used, and what limitations apply. We do not use opaque methods where interpretable alternatives are available.
We take ownership of the impact our frameworks and research have on the organizations and communities that use them. We maintain clear lines of responsibility for the design, deployment, and outcomes of our work. When our tools produce unexpected or harmful results, we investigate, disclose, and correct.
Our frameworks - including VaryOn Amplitude and VaryOn Meridian - are grounded in research, not speculation. We subject our methodologies to peer scrutiny, document our assumptions, and distinguish clearly between established findings and emerging hypotheses. We do not overstate the capabilities or accuracy of our tools.
We collect and process only the data necessary for our research and services. We design our frameworks to minimize the need for sensitive or personally identifiable information. When data is required, we handle it in accordance with our Privacy Policy and applicable data protection laws.
Every measurement framework we develop adheres to the following standards:
Our research practices are guided by the following commitments:
We believe that AI measurement tools should augment human judgment, not replace it. Our frameworks are designed to inform decision-making, not to automate it. We encourage all users of our tools to apply their own expertise and context when interpreting results, and we design our outputs to support - not shortcut - critical thinking.
We are mindful of the environmental costs of AI research and computation. We strive to design efficient methodologies that avoid unnecessary computational overhead, and we weigh environmental impact in our infrastructure choices.
Responsible AI is not a static achievement - it is an ongoing practice. We commit to:
We welcome feedback, questions, and concerns about our AI practices. If you believe any of our frameworks, research, or processes raise ethical concerns, please contact us. We take every report seriously and will investigate and respond promptly.
VaryOn Capital LLC
Email: ethics@varyon.ai