Who We Are

Artificial Analysis is an independent research team dedicated to making the rapidly changing AI landscape understandable, comparable, and trustworthy. We benchmark hundreds of large language models and AI systems with first-party testing, then translate the data into clear guidance for decision-makers.

This site is a redesign concept produced by ISDA 131 Group 9: Peter Woolery, Kevin Villa, Luke Son, and Julio Vazquez.

Our Mission

To give every kind of user — from ML engineers to business stakeholders — a plain-language, evidence-based answer to one question: Which AI model is right for my use case, right now? We do that by publishing neutral benchmarks, clear explanations, and a guided Model Finder so no one has to guess.

Our Services

  • Benchmark Data — Speed, quality, cost, and context-length comparisons across proprietary and open-weight models. View the data →
  • Learn — Beginner-friendly explainers, the State of AI report, and plain-language introductions to benchmarking. Start learning →
  • Model Finder — A short guided quiz that recommends a model based on your use case and constraints. Find your model →
  • Methodology — Full transparency on how we score intelligence, measure performance, and define endpoints. Read the methodology →

Contact Us

Questions, corrections, or partnership inquiries? Reach the Group 9 team at group9@research.bike. We review every message and post corrections publicly when our benchmarks or methodology need updating.

Frequently Asked Questions

  • How often is the data updated? Benchmark pages display a Last Updated timestamp; major model releases are typically scored within one week.
  • Do you accept sponsored placements? No. Rankings are never paid.
  • Where can I see your methodology? On the Methodology subpage, which carries the same content the previous About page displayed.