Global leaders agree first international network for AI safety

Global leaders have agreed to launch the first international network to cooperate on the advancement of AI safety

The agreement between 10 countries and the European Union was reached at the AI Seoul Summit in South Korea.

The countries, which include the US, Japan, Singapore and the UK, will develop a common understanding of AI safety and align work on research, standards and testing.

A range of construction-related tools, tasks and research projects are already harnessing the power of AI.

The 2024 State of Design & Make report by Autodesk found that 79% of those in the design and make industries believe AI will make their industry more creative.

Meanwhile, 72% said their organisation had increased spending on AI over the last three years, and 77% expected investment to continue over the next three.

Human-centric, trustworthy and responsible AI

The new agreement will see national AI safety institutes build “complementarity and interoperability” between their technical work on AI safety.

This includes sharing information about models, their limitations, capabilities and risks, as well as monitoring specific “AI harms and safety incidents”.

Leaders also signed up to the wider Seoul Declaration, which emphasises international cooperation to develop “human-centric, trustworthy and responsible” AI.

US sets strategic vision for AI safety

As the AI Seoul Summit began, US Secretary of Commerce Gina Raimondo released a strategic vision for the US Artificial Intelligence Safety Institute (AISI).

The Department of Commerce and the National Institute of Standards and Technology established the AISI at President Biden’s direction.

Secretary Raimondo backed global cooperation on AI safety and said she plans to convene a meeting of safety institutes in San Francisco later this year.

“Recent advances in AI carry exciting, life-changing potential for our society, but only if we do the hard work to mitigate the very real dangers of AI that exist if it is not developed and deployed responsibly,” Secretary Raimondo said.

“That is the focus of our work every single day at the US AI Safety Institute, where our scientists are fully engaged with civil society, academia, industry and the public sector so we can understand and reduce the risks of AI, with the fundamental goal of harnessing the benefits.

“Safety fosters innovation, so it is paramount that we get this right and that we do so in concert with our partners around the world to ensure the rules of the road on AI are written by societies that uphold human rights, safety and trust.”
