A summit in South Korea seeks to establish a “blueprint” for the responsible use of artificial intelligence in the military.

SEOUL, Sept 9 (Reuters) – South Korea hosted an international summit on Monday aimed at establishing a blueprint for the responsible use of artificial intelligence (AI) in the military. However, any agreements reached are not expected to be legally binding.

Government representatives from more than 90 countries, including the United States and China, attended the two-day summit in Seoul, marking the second gathering of its kind.

At the inaugural summit in Amsterdam last year, the United States, China and other key nations endorsed a “call to action,” a non-binding agreement encouraging the responsible development and use of AI in military settings. While symbolic, the pledge reflected growing awareness of AI’s implications for modern warfare but carried no legal enforcement or mandatory commitments.

At the Seoul summit, South Korean Defense Minister Kim Yong-hyun said in his opening remarks that AI is transforming military tactics and strategies in active conflict zones, pointing to Ukraine’s use of AI-enabled drones in its war with Russia. “In the Russia-Ukraine war, an AI-equipped Ukrainian drone acted as David’s slingshot,” Kim said, a biblical parallel underscoring the drones’ effectiveness against a larger adversary.

Facing overwhelming Russian forces, Ukraine has turned to AI in hopes of leveling the playing field. By building AI into its drones, it aims to overcome signal jamming and to operate UAVs in large, coordinated swarms, a technological edge that could reshape combat dynamics in modern warfare.

The summit seeks to establish guidelines and best practices for the responsible and ethical use of AI in military applications. The challenge, however, lies in finding common ground among nations with differing views on AI’s role in warfare. As AI development accelerates, the international community faces the urgent task of addressing concerns such as keeping autonomous systems under human control and preventing AI-driven escalation of conflicts.

Though the agreements from this summit, like those from last year, are unlikely to be legally binding, they represent a crucial step in building a framework for the future use of AI in global security, fostering collaboration and understanding in an increasingly AI-dependent world.

This summit is not the only international discussion on the military use of artificial intelligence. Nations that are signatories to the 1983 Convention on Certain Conventional Weapons (CCW) under the United Nations are currently engaged in talks about potential restrictions on lethal autonomous weapon systems, ensuring compliance with international humanitarian law.

Last year, the U.S. government also issued a declaration on the responsible use of AI in the military, covering broader applications of AI beyond just weapons systems. As of August, 55 countries had signed this declaration.

The Seoul summit, co-hosted by the Netherlands, Singapore, Kenya, and the United Kingdom, aims to facilitate ongoing discussions among multiple stakeholders in a field where technological advancements are primarily driven by the private sector, but where governments remain the key decision-makers.

Around 2,000 people from across the globe have registered for the event, including representatives from international organizations, academia, and the private sector. Discussions will cover topics such as the protection of civilians and the use of AI in the control of nuclear weapons.