| Dimension | Supporting Evidence | Meets Criteria |
|---|---|---|
| CERN for AI | "... an international consortium that is focused on pure AI science and research to serve the public good"¹ "... foster the basic foundational architecture for AI"¹ "...a collaborative, scientific effort to accelerate and consolidate the development and uptake of AI for the benefit of all humans and our environment."² "An international A.I. mission focused on teaching machines to read could genuinely change the world for the better — the more so if it made A.I. a public good, rather than the property of a privileged few."³ |
Yes |
| CERN for AI Safety | The closest calls for safety found are: • "... an international consortium that is focused on pure AI science and research to serve the public good"¹ • "...a collaborative, scientific effort to accelerate and consolidate the development and uptake of AI for the benefit of all humans and our environment."² |
No |
| Compute Monopoly | Found no mention of a compute monopoly. The call for the "integrated platform" to be as open as possible would undermine any compute monopoly. | No |
| Model Monopoly | Found no mention of a model monopoly. The call for the "integrated platform" to be as open as possible would undermine any model monopoly. | No |
| Data Monopoly | Found no mention of a data monopoly. The call for the "integrated platform" to be as open as possible would undermine any data monopoly. | No |
| Research Monopoly | "A key element would be advancing the many strands of research in this area by discussing common approaches, methods, algorithms, data and their applications. It would foster the basic foundational architecture for AI that other, smaller efforts in university and private labs could plug into."¹ | Yes |
| Monopoly Legislated | Found no mention of a monopoly that could be legislated. Some suggestion that participation should be voluntary: • "The world can engage in a cooperative, international effort to develop these powerful AI technologies for humanity, or we can have a trade war – or, even worse, an arms race."¹ | No |
| Intergovernmental | "Following that model, some scientists and experts like Marcus are calling for an international consortium that is focused on pure AI science and research to serve the public good. This would be a highly interdisciplinary effort that would bring together scientists from many areas, in close collaboration and interaction with industry, politics and the public."¹ | No |
| Transparent | "The integrated platform should be as open and flexible as possible to promote research and fast experimentation," Slusallek says. Yet it also needs to facilitate the transfer of results and datasets to and from industry to encourage commercial applications and spinoffs. "These capabilities should not be locked behind closed doors but developed with full transparency, including open data where possible," he explains.¹ | Yes |
| Intrinsic Safety Objectives | No intrinsic safety objectives found beyond general statements: • "Progress in AI must benefit everyone, creating 'AI for Humans.'"¹ • "Following that model, some scientists and experts like Marcus are calling for an international consortium that is focused on pure AI science and research to serve the public good."¹ | No |
References:
1. https://www.reframetech.de/en/2018/05/22/opinion-piece-why-we-need-a-cern-for-ai/
2. https://www.oecd-forum.org/posts/28452-artificial-intelligence-and-digital-reality-do-we-need-a-cern-for-ai
3. https://www.nytimes.com/2017/07/29/opinion/sunday/artificial-intelligence-is-stuck-heres-how-to-move-it-forward.html
| Dimension | Supporting Evidence | Meets Criteria |
|---|---|---|
| CERN for AI | The focus is an international Safety Project with no goals of broader AI capabilities. | No |
| CERN for AI Safety | "The motivation behind an international Safety Project would be to accelerate AI safety research by increasing its scale, resourcing and coordination, thereby expanding the ways in which AI can be safely deployed, and mitigating risks stemming from powerful general purpose capabilities."¹ | Yes |
| Compute Monopoly | Some suggestion that the project's compute would be competitive with, but would not necessarily monopolise, compute elsewhere: • "...enabled by the project's exceptional compute, engineers and model access."¹ • "this institution would have access to significant compute and engineering capacity, as well as AI models developed by AI companies."² | No |
| Model Monopoly | Some suggestion that the project would have access to a broader array of models than any other institution, but this access would not grant the project control over those models: • "...enabled by the project's exceptional compute, engineers and model access."¹ • "this institution would have access to significant compute and engineering capacity, as well as AI models developed by AI companies."² | No |
| Data Monopoly | Found no mention of a data monopoly. Since the aim is safety and the project will access "AI models developed by AI companies" a data monopoly is not required. | No |
| Research Monopoly | "The motivation behind an international Safety Project would be to accelerate AI safety research by increasing its scale, resourcing and coordination, thereby expanding the ways in which AI can be safely deployed, and mitigating risks stemming from powerful general purpose capabilities."¹ | Yes |
| Monopoly Legislated | Found no mention of legislation to enforce monopolies. Without legislation compelling AI companies to grant access to their models, the project's model access would be much weaker, relying on voluntary cooperation. Since the research is safety-focused, there is no need to legislate for the research monopoly. | No |
| Intergovernmental | "Because perceptions of AI risk vary around the world, such an effort would likely be spearheaded by frontier risk-conscious actors like the US and UK governments, AGI labs and civil society groups."¹ "Contrary to other international joint scientific programs like CERN or ITER, which are strictly inter-governmental, Ho and others propose that the AI Safety Project comprise other actors as well (e.g. civil society and the industry)."² |
No |
| Transparent | "In particular, it may be possible to silo information, structure model access, and design internal review processes in such a way that meaningfully reduces this risk [the diffusion of dangerous technologies] while ensuring research results are subject to adequate scientific scrutiny."¹ "The authors also suggest that, to prevent replication of models or diffusion of dangerous technologies, the AI Safety Project should incorporate information and security measures such as siloing information, structuring model access and designing internal review processes."² |
No |
| Intrinsic Safety Objectives | "The motivation behind an international Safety Project…"¹ | Yes |
References:
1. Ho, L., Barnhart, J., Trager, R., Bengio, Y., Brundage, M., Carnegie, A., ... & Snidal, D. (2023). International institutions for advanced AI. arXiv preprint arXiv:2307.04699.
2. Maas, M. M., & Villalobos, J. J. (2023). International AI Institutions: A Literature Review of Models, Examples, and Proposals. AI Foundations Report, 1.
| Dimension | Supporting Evidence | Meets Criteria |
|---|---|---|
| CERN for AI | "Petition for keeping up the progress tempo on AI research while securing its transparency and safety."¹ "...rather than impeding the momentum of public AI development, a more judicious and efficacious approach would be to foster a better-organized, transparent, safety-aware, and collaborative research environment."¹ "The establishment of transparent open-source AI safety labs tied to the international large-scale AI research facility as described above, which employ eligible AI safety experts, have corresponding publicly funded compute resources, and act according to regulations issued by democratic institutions, will cover the safety aspect without dampening progress."¹ "[Legislating to slow down] is a lost battle because models will get more powerful anyway," says Christoph Schuhmann, founder of German AI non-profit LAION. "We should empower citizens, researchers and companies to keep pace with and mitigate risks."² |
Yes |
| CERN for AI Safety | While the petition has overt safety objectives, these are not the sole focus; capabilities objectives are also included. "...rather than impeding the momentum of public AI development, a more judicious and efficacious approach would be to foster a better-organized, transparent, safety-aware, and collaborative research environment."¹ "The establishment of transparent open-source AI safety labs tied to the international large-scale AI research facility as described above, which employ eligible AI safety experts, have corresponding publicly funded compute resources, and act according to regulations issued by democratic institutions, will cover the safety aspect without dampening progress."¹ | No |
| Compute Monopoly | "This facility, analogous to the CERN project in scale and impact, should house a diverse array of machines equipped with at least 100,000 high-performance state-of-the-art accelerators (GPUs or ASICs)…"¹ "To build societal resilience to these kinds of risks, Schuhmann estimates that world governments would need to invest $1bn-5bn in an AI-optimised supercomputer…"² This seems to correspond to roughly 1 to 2 orders of magnitude more accelerators than used for recent frontier models.³,⁴ Avoid the "dominance of few large corporations in AI development"¹ |
Yes |
| Model Monopoly | A model monopoly does not seem to be an intrinsic objective, but the compute monopoly could mean a model monopoly is acquired. Control over models is aimed at benefiting "small and medium-sized companies" and undermining the "dominance of few large corporations in AI development": "Economically, this initiative will bring substantial benefits to small and medium-sized companies worldwide. By providing access to large foundation models, businesses can fine-tune these models for their specific use cases while retaining full control over the weights and data. This approach will also appeal to government institutions seeking transparency and control over AI applications in their operations."¹ | No |
| Data Monopoly | A data monopoly does not seem to be an intrinsic objective, but the "establishment of an international, publicly funded, open-source supercomputing research facility" could mean a data monopoly is acquired. There are some intrinsic objectives relating to the control and acquisition of data: • "By providing access to large foundation models, businesses can fine-tune these models for their specific use cases while retaining full control over the weights and data. This approach will also appeal to government institutions seeking transparency and control over AI applications in their operations."¹ • "By making these models open source and incorporating multimodal data (audio, video, text, and program code), we can significantly enrich academic research, enhance transparency, and ensure data security."¹ | No |
| Research Monopoly | "...approach would be to foster a better-organized, transparent, safety-aware, and collaborative research environment."¹ The compute monopoly combined with the centralisation of talent, models and data would give the institution a research monopoly. |
Yes |
| Monopoly Legislated | Found no mention of legislation to enforce monopolies. The mechanism relies on developing a compute monopoly, with the resulting incentive for collaboration leading to model, data and research monopolies. It is unclear how the diffusion of monopolised models and data would be controlled; inferring from context, perhaps through security measures, with violations prosecuted under existing legislation. | No |
| Intergovernmental | "We call upon the global community, particularly the European Union, the United States, the United Kingdom, Canada, Australia and other willing countries, to collaborate on a monumental initiative…" Direct access of "small and medium-sized companies worldwide" to the institution's resources suggest it is not strictly intergovernmental. |
No |
| Transparent | "By making these models open source and incorporating multimodal data (audio, video, text, and program code), we can significantly enrich academic research, enhance transparency, and ensure data security."¹ "...rather than impeding the momentum of public AI development, a more judicious and efficacious approach would be to foster a better-organized, transparent, safety-aware, and collaborative research environment."¹ |
Yes |
| Intrinsic Safety Objectives | "The establishment of transparent open-source AI safety labs tied to the international large-scale AI research facility as described above, which employ eligible AI safety experts, have corresponding publicly funded compute resources, and act according to regulations issued by democratic institutions, will cover the safety aspect without dampening progress."¹ "...foster a better-organized, transparent, safety-aware, and collaborative research environment"¹ "To harness their full potential as tools for societal betterment, it is vital to democratize research on and access to them, lest we face severe repercussions for our collective future."¹ "an international supercomputing research facility…that can be overseen by democratically elected institutions from participating countries."¹ |
Yes |
References:
1. Schuhmann, Christoph, 'Petition for keeping up the progress tempo on AI research while securing its transparency and safety' (LAION, 29 March 2023), https://laion.ai/blog/petition/
2. https://sifted.eu/articles/ai-supercomputer-petition-stable-diffusion
3. https://blog.heim.xyz/palm-training-cost/
4. https://www.fierceelectronics.com/sensors/chatgpt-runs-10k-nvidia-training-gpus-potential-thousands-more
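The "1 to 2 orders of magnitude" estimate in the Compute Monopoly row above can be sanity-checked with a quick calculation. This is a rough sketch only: the frontier-run accelerator counts are public estimates drawn from references 3 and 4, not exact figures, and differences in per-accelerator performance are ignored.

```python
import math

# Accelerator count called for in the petition (ref. 1).
proposed = 100_000

# Rough public estimates of accelerator counts for recent frontier
# training runs; these figures are assumptions drawn from refs. 3
# and 4, not exact values.
frontier_estimates = {
    "PaLM (reported TPUs)": 6_144,
    "ChatGPT training (reported GPUs)": 10_000,
}

for run, count in frontier_estimates.items():
    ratio = proposed / count
    print(f"{run}: {ratio:.1f}x (~{math.log10(ratio):.1f} orders of magnitude)")
```

On these raw counts the ratio is roughly one order of magnitude; the higher end of the estimate presumably reflects the petition's "at least 100,000" wording and the greater per-chip performance of state-of-the-art accelerators.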
| Dimension | Supporting Evidence | Meets Criteria |
|---|---|---|
| CERN for AI | The objectives include capabilities, but safety is the priority: "MAGIC (the Multilateral AGI Consortium) would be the world's only advanced and secure AI facility focused on safety-first research and development of advanced AI."¹ "MAGIC will only be concerned with preventing the high-risk development of frontier AI systems - godlike AIs. Research breakthroughs done at MAGIC will only be shared with the outside world once proven demonstrably safe."¹ | No |
| CERN for AI Safety | "MAGIC (the Multilateral AGI Consortium) would be the world's only advanced and secure AI facility focused on safety-first research and development of advanced AI. Like CERN, MAGIC will allow humanity to take AGI development out of the hands of private firms and lay it into the hands of an international organization mandated towards safe AI development."¹ "MAGIC will only be concerned with preventing the high-risk development of frontier AI systems - godlike AIs. Research breakthroughs done at MAGIC will only be shared with the outside world once proven demonstrably safe."¹ |
Yes |
| Compute Monopoly | "To make sure high risk AI research remains secure and under strict oversight at MAGIC, a global moratorium on creation of AIs using more than a set amount of computing power be put in place..."¹ | Yes |
| Model Monopoly | Found no mention of a model monopoly, but since "MAGIC would have exclusivity when it comes to the high-risk research and development of advanced AI. It would be illegal for other entities to independently pursue AGI development"¹, a model monopoly would be acquired. | Yes |
| Data Monopoly | Found no mention of a data monopoly. The scale of the consortium could mean a data monopoly is acquired. | No |
| Research Monopoly | "MAGIC would have exclusivity when it comes to the high-risk research and development of advanced AI. It would be illegal for other entities to independently pursue AGI development. This would not affect the vast majority of AI research and development, and only focus on frontier, AGI-relevant research."¹ A research monopoly at the frontier is an intrinsic objective. Legal research outside the consortium would likely focus on applications of less capable models, or those shared by the consortium. |
Yes |
| Monopoly Legislated | "MAGIC would have exclusivity when it comes to the high-risk research and development of advanced AI. It would be illegal for other entities to independently pursue AGI development."¹ "To make sure high risk AI research remains secure and under strict oversight at MAGIC, a global moratorium on creation of AIs using more than a set amount of computing power be put in place…"¹ |
Yes |
| Intergovernmental | Not explicitly stated, but parallels drawn with CERN and IAEA¹ combined with exclusivity, control over what is shared and calls for "The U.S. and the U.K…to facilitate this multilateral effort"¹ suggest an intergovernmental organization. | Yes |
| Transparent | "MAGIC (the Multilateral AGI Consortium) would be the world's only advanced and secure AI facility focused on safety-first research and development of advanced AI."¹ "Research breakthroughs done at MAGIC will only be shared with the outside world once proven demonstrably safe."¹ |
No |
| Intrinsic Safety Objectives | Meets criteria for CERN for AI Safety. | Yes |
References:
| Dimension | Supporting Evidence | Meets Criteria |
|---|---|---|
| CERN for AI | "CLAIRE seeks to strengthen European excellence in AI research and innovation."¹ "...address challenges in various sectors and across a wide range of applications, including health, manufacturing, transportation, scientific research, sustainable agriculture, financial services, public administration and entertainment…"¹ "It is sometimes said that the best way to meet the future is to create it."¹ |
Yes |
| CERN for AI Safety | The primary objective is to protect European interests and ensure that Europe remains competitive. Safety seems instrumental and not the focus. There is some mention of European values and trustworthy AI: • "Europe can ensure that many of tomorrow's advanced technologies, products, systems and services are European and are based on and reflect European realities, needs and values."¹ • "If Europe were to fall behind in AI technology, we would be likely to face challenging economic consequences, academic brain drain, reduced transparency, and increasing dependency on foreign technologies, products and values."¹ • "CLAIRE will focus on trustworthy AI that augments human intelligence rather than replacing it, and that thus benefits the people of Europe."¹ • "CLAIRE will work for a major increase in funding towards existing scientific strengths in AI, novel research opportunities and key European interests."¹ | No |
| Compute Monopoly | Found no mention of a compute monopoly. The call is for "Google-scale" infrastructure, and is therefore competitive but not a monopoly: • "The CLAIRE initiative aims to establish a pan-European network of Centres of Excellence in AI, strategically located throughout Europe, and a new, central facility with state-of-the-art, "Google-scale", CERN-like infrastructure – the CLAIRE Hub."¹ • "This facility should comprise a large, state-of-the-art data and computer centre, cutting edge robotics laboratories, test facilities for key application areas, such as autonomous transportation, advanced agriculture and automated scientific experimentation, usability labs, and others."¹ | No |
| Model Monopoly | Found no mention of a model monopoly. The objective is to remain competitive with other AI leaders, not to develop a monopoly. The focus is on applications of AI rather than developing AGI, so the initiative will not develop a model monopoly. | No |
| Data Monopoly | Found no mention of a data monopoly. The objective is to remain competitive with other AI leaders, not to develop a monopoly. The focus is on applications of AI rather than developing AGI, although the initiative may develop a data monopoly in some application domains. | No |
| Research Monopoly | The objective is to develop "an environment where Europe's brightest minds in AI meet and work for limited periods of time. This will increase the flow of knowledge among European researchers and back to their home institutions."¹ The lack of any monopoly from the AI Triad means the initiative is unlikely to acquire a research monopoly on AGI development. | No |
| Monopoly Legislated | The initiative has no monopoly as an intrinsic objective and is unlikely to acquire an instrumental monopoly. | No |
| Intergovernmental | "Its extensive network forms a pan-European Confederation of Laboratories for Artificial Intelligence Research in Europe. CLAIRE was launched in 2018 as a bottom-up initiative by the European AI community."¹ "The CLAIRE initiative aims to establish a pan-European network of Centres of Excellence in AI…"¹ "CLAIRE will work with key stakeholders to find mechanisms for citizen engagement, industry and public sector collaboration and innovation-driven startup and scale-up."¹ |
No |
| Transparent | Found no mention of a policy on transparency. Inferring from context, open science would likely be valued, and some degree of transparency will be required to achieve the objective of trustworthy AI. However, the main objective is to ensure Europe remains competitive, and some protection of intellectual property will likely be required. There is no mention of, for example, making training runs public. | No |
| Intrinsic Safety Objectives | There are no intrinsic safety objectives. | No |
References:
| Dimension | Supporting Evidence | Meets Criteria |
|---|---|---|
| CERN for AI | Mitigating AI existential risks is the intrinsic objective. Any development of AI capabilities must be safe: "The Treaty on Artificial Intelligence Safety and Cooperation (TAISC) represents an effort to mitigate existential risks posed by advanced AI systems."¹ | No |
| CERN for AI Safety | "The Treaty on Artificial Intelligence Safety and Cooperation (TAISC) represents an effort to mitigate existential risks posed by advanced AI systems."¹ "The provisions of the TAISC are designed to establish a minimum necessary policy framework that addresses AI risk and promotes safe AI development, while maintaining political feasibility and harnessing the economic and social benefits of AI."¹ "The CERN-inspired Joint AI Safety Laboratory (JAISL) embodies a collective initiative to seriously advance progress on AI safety, the alignment problem, and the development of safe AI models."¹ "Establishment of the International AI Safety and Cooperation Commission (IASC) ... serves as a centralized authority to monitor compliance with the treaty, promote AI safety research, and facilitate cooperation between signatories."¹ |
Yes |
| Compute Monopoly | "The TAISC proposes a two-cap system, where models within a collaborative AI safety lab (the JAISL) can be trained using an amount of compute up to 2.5×10²⁵ FLOP, the amount of compute used to train GPT-4, and for models developed outside of this lab the regular global cap of 10²³ FLOP is applied."¹ | Yes |
| Model Monopoly | The compute monopoly means the JAISL would acquire a model monopoly. | Yes |
| Data Monopoly | Found no mention of a data monopoly. The resources of the JAISL could mean a data monopoly is acquired. | No |
| Research Monopoly | The compute and model monopolies give the JAISL a monopoly on research. | Yes |
| Monopoly Legislated | The two-cap global compute system would be legislated to ensure the JAISL compute monopoly is maintained: • See Compute Monopoly • "The creation of the IASC, inspired by the IAEA, serves as a centralized authority to monitor compliance with the treaty, promote AI safety research, and facilitate cooperation between signatories. This provision is essential for providing oversight and ensuring that AI research aligns with safety and ethical standards."¹ | Yes |
| Intergovernmental | The TAISC is intended as an intergovernmental treaty, whose undersigned are to be "duly authorized thereto by their respective Governments…"² | Yes |
| Transparent | Transparency is desirable only to the extent that it mitigates "race dynamics among major powers"¹: "...JAISL promotes collaborative research, helps ensure safe AI development, and fosters transparency and cooperation between signatories, mitigating race dynamics among major powers."¹ | No |
| Intrinsic Safety Objectives | Meets criteria for CERN for AI Safety. Intrinsic safety objectives are: • "Keep AI systems safe"¹ • "Reduce race dynamics"¹ • "Solve the alignment problem"¹ • "Promote beneficial use of AI"¹ | Yes |
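The gap between the two TAISC compute caps quoted in the Compute Monopoly row above is straightforward to compute; a minimal sketch:

```python
# Two-cap system from the TAISC proposal (ref. 1).
jaisl_cap = 2.5e25   # FLOP cap for models trained inside the JAISL
global_cap = 1e23    # FLOP cap for models developed outside the lab

ratio = jaisl_cap / global_cap
print(f"The JAISL cap is {ratio:.0f}x the global cap")  # 250x
```

The lab could thus legally train models with 250 times the compute permitted elsewhere, which is what gives the JAISL its compute monopoly and, by extension, its model and research monopolies.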
References: