6 June 2023

Gunning for an edge: Defence cadets put AI vision system to the test

By Andrew McLaughlin

A screenshot of one of the experiment’s scenarios where the AI identified vehicles, people and objects they were carrying as they approached a checkpoint. Photo: UNSW ADFA.

Cadets at the Australian Defence Force Academy (ADFA) have been participating in an experiment conducted by the University of NSW (UNSW) examining how ethical principles for artificial intelligence (AI) systems might work in real-world military scenarios.

Held in collaboration with ADFA on the campus grounds, the exercise was part of a UNSW project on vision-based AI systems and how ethical decision-making can be integrated into them.

The system used, Athena AI, is a commercially available product developed in Australia. Participants worked through a series of scenarios designed to examine how much value the AI added in completing them.

“We used Vision AI, which puts a box around whatever object you see on a screen,” Christine Boshuijzen-Van Burken, senior researcher in ethics of autonomous military systems at UNSW Canberra, told Region.


“For example, in the civil world, it could be a cup or a phone, and it tells you what it sees.

“We also have these technologies for military-relevant objects such as weapons, or vehicles, or tanks. But the interesting thing is, they have also included detectable items that are of ethical significance, such as protected symbols like the red cross, or red crescent.

“So, anything that has a symbol like that on it is protected by the laws of armed conflict. What we want is something that can tell us if it is a threat, but also something that can tell us something must be protected.”

The system can also tell whether people are standing or lying down, or if they have their hands raised as if surrendering.
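In rough terms, a detector of this kind labels each boxed object and the operator-facing layer then sorts those labels into ethically relevant categories. Athena AI's actual interface and taxonomy are not public, so the sketch below is purely illustrative: the Detection class, the triage function and the label sets are all hypothetical, invented here to show the logic the article describes, where “must be protected” is surfaced alongside “is a threat”.

```python
from dataclasses import dataclass

# Hypothetical label sets for illustration only; Athena AI's real
# taxonomy and API are not public.
THREAT_LABELS = {"rifle", "tank", "armed_person"}
PROTECTED_LABELS = {"red_cross", "red_crescent",
                    "person_hands_raised", "person_lying_down"}

@dataclass
class Detection:
    label: str          # what the detector thinks the object is
    confidence: float   # model confidence, 0.0 to 1.0
    box: tuple          # (x1, y1, x2, y2) pixel coordinates

def triage(detections, min_confidence=0.5):
    """Sort detections into operator-facing categories.

    Protected symbols and postures are checked before threat cues,
    so an object marked with a red cross is never presented as a
    threat, consistent with protection under the laws of armed
    conflict. The final judgment stays with the human operator.
    """
    flags = []
    for d in detections:
        if d.confidence < min_confidence:
            continue  # below threshold: leave for the human to judge
        if d.label in PROTECTED_LABELS:
            flags.append((d, "PROTECTED"))
        elif d.label in THREAT_LABELS:
            flags.append((d, "POSSIBLE THREAT"))
        else:
            flags.append((d, "UNKNOWN"))
    return flags

# Example frame: a camera with a long lens misread as a rifle,
# next to a medic wearing a red cross armband.
frame_detections = [
    Detection("rifle", 0.62, (120, 80, 180, 140)),      # actually a camera
    Detection("red_cross", 0.91, (300, 60, 360, 200)),  # protected symbol
]
for det, category in triage(frame_detections):
    print(f"{category}: {det.label} ({det.confidence:.0%}) at {det.box}")
```

Checking for protection first is a deliberate ordering choice in this sketch: it bakes in the priority the researchers describe, while the confidence threshold and the “rifle that is really a camera” example reflect the imperfection the experiment was designed to probe.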

“Everyone wants ethical AI,” Ms Boshuijzen-Van Burken said.

NATO and countries including the UK, the US and Australia have all developed, or are developing, principles for the ethical development and use of AI, and most agree these systems need to be traceable, responsible and trustworthy. But it’s still early days.

“These principles are very theoretically grounded,” Ms Boshuijzen-Van Burken said. “There’s some good conceptual thinking, but what we really don’t understand very well yet is how people behave around these technologies.”

The three-day exercise involved setting up and developing the scenarios with the ADFA role-players on day one. The scenarios were recorded and presented to participants, who ran the experiment without AI assistance on day two and with AI assistance on day three.

“We focused our experiment on ethical principles of distinction on the battlefield with the technologies that we have,” Ms Boshuijzen-Van Burken explained. “And while no technology is perfect, it’s very interesting because that is the reality. We need to find out how our soldiers are going to deal with an AI system that tells them, ‘That is a gun’, when it might actually be a camera with a big lens.”

The research project received funding from the UNSW Canberra AI Hub, and the experiment was approved by the Defence Human Research Ethics Committee. The participants were mainly drawn from ADFA second- and third-year trainee officers, as these cadets had already been exposed to some of the theories of ethics and military law and had an understanding of military operations.

The experiment at ADFA coincided with an attempt by the Federal Government to ensure the development of AI technologies continues in a safe and responsible way. To this end, Minister for Industry and Science Ed Husic has released two papers designed to prompt discussion on appropriate safeguards for AI technologies.

The Safe and Responsible AI in Australia paper canvasses existing regulatory and governance responses in Australia and overseas, identifies potential gaps, and proposes several options to strengthen the framework governing the safe and responsible use of AI.


The accompanying National Science and Technology Council paper Rapid Response Report: Generative AI assesses the technology’s potential risks and opportunities, providing a scientific basis for discussions about the way forward.

“Using AI safely and responsibly is a balancing act the whole world is grappling with at the moment,” Mr Husic said. “The upside is massive, whether it’s fighting superbugs with new AI-developed antibiotics, or preventing online fraud.

“But as I have been saying for many years, there needs to be appropriate safeguards to ensure the safe and responsible use of AI.”

