Artificial intelligence (AI) and machine learning hold great potential to improve healthcare by enhancing diagnostics and treatment. However, their adoption raises ethical challenges that must be addressed to ensure these tools are used responsibly and fairly.
Dr. Melissa McCradden, AI Director at the
Women’s and Children’s Hospital Network (WCHN) in South Australia, emphasises AI’s value in pattern recognition, which is essential in medicine.
"It helps doctors figure out what kind of problem a patient might have or what treatment is likely to work," she explains to Cosmos Magazine.
AI can assist in detecting medical issues faster, such as identifying fractures in X-rays or predicting the risk of cardiac events.
However, McCradden notes the challenge of rigorously testing AI tools to ensure they meet the same standards of safety and effectiveness as traditional medical devices. One critical issue is AI bias, which can affect different groups of people in different ways. For example, a false negative in a medical test might have more severe consequences for rural patients with limited access to follow-up care.
McCradden’s research focuses on "translational trials," which test AI tools in live settings without affecting patient care. She’s also working on policies and procedures to ensure AI tools are developed, tested, and integrated safely into healthcare systems.
Another important aspect is inclusive governance. McCradden highlights the need for diverse perspectives, especially from consumers and Indigenous communities, to ensure AI is developed with their values and needs in mind.
"We need to authentically partner with Aboriginal colleagues, knowledge holders, and consumers," she says.