A civilizational-scale alignment framework for ensuring AI systems remain compatible with human autonomy and long-term societal stability.
Updated Mar 15, 2026
An operating system for humans and AI: a living framework of tools for clear thinking, reading reality, and cooperation under pressure. It includes the Meridian AI Standard, the Case Record, and the Range Audit, which together form an open architecture for AI alignment and institutional evaluation. Licensed CC BY 4.0.