The Hiroshima AI Process (HAIP) Framework marks a significant advancement in international AI governance. The G7-backed mechanism allows organizations to disclose their risk‑mitigation practices for advanced AI systems.

The HAIP Reporting Framework: Its value in global AI governance and recommendations for the future

Overview: A New Transparency Mechanism

The Hiroshima AI Process (HAIP) Reporting Framework supports global AI governance by encouraging disclosure of risk mitigation practices in advanced AI development. It was launched in February 2025 to help organizations share and compare governance practices. Because reporting is voluntary, the framework focuses on transparency and mutual learning among diverse contributors. 

Origins and Context

Introduced at the G7 Hiroshima Summit in 2023, HAIP responds to the rapid adoption of generative AI. The process reflects G7 commitments to privacy, human rights, accountability, and the rule of law. 

It evolved through ministerial meetings and was endorsed by G7 leaders in late 2023. Subsequently, the OECD developed the public Reporting Framework, launching it at the Paris AI Action Summit. This reporting tool translates HAIP's voluntary principles into a structured disclosure mechanism.

Structure of the Reporting Framework

The framework organizes reporting into seven thematic sections, each containing several questions. Themes include risk identification, governance, transparency, and content provenance. 

Reporting organizations describe their practices, methodologies, and governance structures in narrative form. Eligibility extends to organizations based in OECD member and partner countries, giving the framework broad geographic reach. 

By late 2025, 24 organizations had submitted reports, with more encouraged to update annually. 

Complementarity With Other Efforts

Unlike artifact-level documentation tools, this framework offers an institution-wide, high-level governance perspective. It does not duplicate but rather aligns with other reporting and standards initiatives. 

It also enables comparisons across organizations, sectors, and business models. Importantly, internal documentation and public reporting serve different purposes yet reinforce each other. 

First Cycle: Findings on Participation

Flexibility is among the framework’s greatest assets, allowing varied reporters to tailor responses. However, this same flexibility sometimes limits comparability across submissions. 

Most reports described governance and risk management in high-level terms with few quantitative metrics. Differences in scope emerged because some organizations reported on specific models, while others focused on company-wide practices. 

Some contributors also marked questions as “not applicable” when areas fell outside their role or function.

Value and Audience Clarity

Participants reported internal benefits, such as clearer governance responsibilities and strengthened risk culture. Externally, public reporting can build trust and inform choices by customers and partners.

Yet awareness remains limited: roughly half of AI stakeholders are familiar with the framework, but few understand it in depth. Better articulation of the framework's purpose and intended audience could enhance engagement and relevance. 

Recommendations to Improve Future Rounds

First, clarify where the framework fits in the global governance ecosystem to reduce duplication. 

Second, balance flexibility with more structured guidance so reports become more comparable. 

Third, provide examples and tighter guidance for open-ended questions to reduce ambiguity in responses. 

Fourth, enable tools that help users filter and compare submissions by sector or size. 

Fifth, strengthen transparency around auditing and reporting practices to show implementation versus intent. 

Sixth, increase incentives for participation, especially among organizations with governance expertise. 

Seventh, expand eligibility to more HAIP Friends Group countries to diversify participation. 

Finally, make the framework iterative, combining stable core components with updateable modules. 

Broadly, this reporting initiative strengthens international AI governance by promoting transparency and shared learning. 

With refinements, it could better support comparability, accountability, and cross-sector insight. It also lays the groundwork for more standardized approaches to risk disclosure and governance alignment worldwide. 

Source: 

Bogen, M., Kerry, C. F., Kornbluh, K., Meltzer, J. P., Mir Teijeiro, M., Munkhbayar, E., Nadgir, N., Tabassi, E., Tanner, B., & Wirtschafter, V. (2026, January 21). The HAIP reporting framework: Its value in global AI governance and recommendations for the future. The Brookings Institution. https://www.brookings.edu/articles/haip-reporting-framework-ai-governance/