You’ve probably heard of ChatGPT, the chatbot powered by artificial intelligence (AI), but what if that same technology could be harnessed to detect symptoms of depression?
To answer this question for her PhD, Dr Dua’a (Dua) Ahmad has trained about 1500 AI models to learn the symptoms of depression from people’s facial expressions and body movements alone.
The subject hits close to home for Dr Ahmad, whose parents emigrated from Iraq, where she says cultural attitudes can be a barrier to receiving a conventional depression diagnosis.
“When I came to Australia, it feels like more people acknowledge depression as a thing, and then when I read about it, it’s a mental health condition that really, really affects people,” she says.
“Even in the context of Australia, some people have symptoms of depression but are unaware of it. I guess in our culture … people look at it as something not as significant as it should be.
“But generally, when you look at studies looking at depression, the main concerns are that it goes undetected, people just don’t realise they have it, or doctors misdiagnose them.”
If Dr Ahmad’s research is starting to sound like an episode of the dystopian sci-fi series Black Mirror, she assures us its purpose is not to replace your doctor or psychologist with a robot.
“You can’t just say, ‘Oh, I’ve developed a model that was able to detect symptoms of depression’, and then put it in an app and just distribute it to people and then they will start using their phones in order to see whether [they’re depressed],” she explains.
“The purpose of my research was to be able to say whether it’s enough of a tool to be used as a diagnosis aid by a clinician.
“[Similar to a blood pressure monitor] that measures your pressure and then the doctor, based on the numbers, decides whether you have high or low blood pressure, it’s just a tool.”
After six years of studying the topic through the University of Canberra and training her models on data from some 160 different people, she found the technology shares the same limitations as ChatGPT.
“When students come and do assignments in ChatGPT, they’re starting to realise it’s not as accurate to use as a source because of how it’s been trained and how it’s generating the information,” Dr Ahmad says. “So, it’s okay to have a better understanding of things using artificial intelligence, but it doesn’t mean that it’s a reliable source to completely rely on, especially in the medical field.”
Dr Ahmad concluded in her thesis that although the technology has potential, extensive further study is required before it will be reliable enough for use in a clinical setting.
“Having that model trained on 160 people is nowhere close to being reliable for a larger audience,” she says.
“You also have technologies and AI that learn by example, such as ChatGPT. So you can have it in a clinical set-up, and it would keep on learning on its own from examples of cases, and then it’s validated by clinicians. That is probably a future direction of how it could be applied.”
Dr Ahmad says future studies will also need to account for people of different nationalities and for those with similar mental health conditions, as well as address potential privacy concerns.
“I love the idea of it. It’s very nice, but in reality, if you want to apply it, you really have to consider all of those elements,” she says.