The world’s first international summit on the responsible use of artificial intelligence in the military commenced on February 15 in The Netherlands. REAIM 2023 brings together governments, corporations, academia, startups, and civil society organisations to raise awareness, discuss issues and, possibly, agree on common principles for deploying and using AI in armed conflicts.
“AI might soon exceed our combined knowledge. It is changing the way we live; it is changing the way we work, and it is clearly changing the military. It has been estimated that AI is as ground-breaking as nuclear technology, and it has huge potential,” The Netherlands’ Foreign Minister Wopke Hoekstra said in his inaugural address at the World Forum in The Hague.
Highlighting the fast pace of technological development, Mr. Hoekstra appealed to the multi-stakeholder community to build common standards to mitigate the risks arising from the use of AI. The maiden conference, co-hosted by South Korea, drew 80 government delegations and hundreds of researchers and defence contractors.
The two-day summit is designed to explore advances in AI and machine learning models, understand their limitations, and examine the possibilities of building human-machine teams. At the heart of these topics is the element of decision-making: who will be tasked with deciding, humans or the algorithm?
Explainability in AI
Digital technology has transformed everyday human life, and it is no wonder the same tools could be used to make the military smarter. An AI-powered weapon system could help an army decide faster and operate more efficiently in a war zone. But such systems pose serious risks to civilians, as the same technology used to protect a particular group could also be used to target it. These so-called intelligent systems can also be biased.
In the plenary session, Dr. Agnes Callamard, Secretary-General of Amnesty International, raised a pertinent point about bias in war. “There is not one clean war. Wars are full of biases. Wars are driven by biases,” she argued. “Now, you are creating a technology that can encode biases to target certain segments of people. And you are creating weapons that will make it more lethal and precise.”
To remove bias from AI systems, researchers have turned to ‘explainability’. Explainable AI seeks to address the lack of information about how decisions are made, which in turn helps remove biases and makes the algorithm fairer. But, in the end, the call to make a final decision will rest with a human in the loop. So, the next question is: will that human take an unbiased call?
“Most importantly, when thinking about new technologies, it is about at least two opposing wills,” said Sir Lawrence Freedman, Emeritus Professor of War Studies at King’s College London. Explaining that most wars are fought to gain territory and are ultimately political, he noted that they come down to choice and judgement. And so the question is: can machines help us make those choices better?
“There are areas where it will make a difference, and there are those where it won’t,” he added.
The writer attended the summit at the invitation of the Dutch government