From 6be42bd1535e862c1ec528367e381c3c64cdb4c9 Mon Sep 17 00:00:00 2001
From: Kevin
Date: Mon, 2 Mar 2026 12:32:40 -0500
Subject: [PATCH] Fix link to EEE blog post

---
 _events/shared-task-every-eval-ever.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_events/shared-task-every-eval-ever.md b/_events/shared-task-every-eval-ever.md
index 757b465..2b0ba53 100644
--- a/_events/shared-task-every-eval-ever.md
+++ b/_events/shared-task-every-eval-ever.md
@@ -61,7 +61,7 @@ Participants will contribute to building a comprehensive database of LLM evaluat
 ## 🔍 Schema at a Glance
 
-For the full story, see our blog post: [Every Eval Ever: Toward a Common Language for AI Eval Reporting](/infrastructure/2026/02/15/everyevalever-launch/).
+For the full story, see our blog post: [Every Eval Ever: Toward a Common Language for AI Eval Reporting](/infrastructure/2026/02/17/everyevalever-launch/).
 
 The repository is organized by benchmark, model, and evaluation run. Each result file captures not just scores but the context you need to interpret and reuse them:
 - who ran the evaluation,