
dilaracankaya/AI_Risk_Index


AI Risk Index

Developed by Dilara Çankaya and Miloš Borenović


The AI Risk Index (AIRI) is a tool designed to quantify and monitor the potential existential risks associated with the rapid advancement of artificial intelligence. By combining investment data with sentiment analysis, AIRI provides a comprehensive view of both current AI developments and future outlooks.

We hope AIRI can serve a range of functions: raising public awareness, encouraging and guiding investment, and providing insight into the risks associated with artificial intelligence (AI). By simplifying complex AI risks, AIRI aims to foster societal understanding of, and consensus around, safe AI practices. By highlighting potential negative impacts, it motivates investment in ethically aligned AI, tracks AI trends to support informed decision-making, and helps policymakers craft effective regulation. It can also serve as an educational resource, supporting corporate risk management, stimulating public discourse on AI ethics, and promoting collaboration among diverse stakeholders toward safer AI systems.

AIRI is available through its web app and an X bot that publishes weekly updates. We plan to develop it iteratively as we collect feedback and better understand the underlying mechanics, and we expect significant input from the community along the way.

See our Medium article for details on our methodology, and our web app for more on the four indicators, including historical data.

About

Project created as part of BlueDot Impact's 12-week AI Safety Fundamentals: Governance course. https://x.com/airiskindex
