From facial recognition software to loan approvals, complex AI algorithms are increasingly woven into the fabric of our lives.
So, it has never been more important to develop a better understanding of how these models arrive at their decisions.
“By demystifying the inner workings of AI, we can ensure that these powerful tools are fairer for all. Understanding how a model arrives at a particular decision allows us to identify and address biases in the training data,” says Dr Diem Pham.
Dr Pham is the lead author of new research, published in Complex & Intelligent Systems, that addresses a critical gap in how fairness is handled by machine learning algorithms when the data they learn from changes over time.
“Imagine a model trained to approve a loan application based on historical data. Over time, economic conditions or social factors might change, leading to different loan applications being more or less risky,” explains co-author, Dr Binh Tran.
“Many existing fairness-aware methods focus on situations where the data stays the same over time. This works well for things like analysing historical sales figures, but it falls short in the real world where data is constantly being generated.”
If the model isn't designed to handle evolving data, says Dr Tran, it can become biased and result in unfair outcomes.
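To make the problem concrete, the following is a minimal sketch, not taken from the paper, of how a model trained once on historical loan data can degrade when the underlying relationship shifts. The synthetic data, feature names and coefficient values are assumptions made purely for illustration.

```python
# Hypothetical illustration of concept drift: a loan-approval model trained
# on historical data loses accuracy when the relationship between applicant
# features and repayment changes. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_loans(n, w_income, w_debt):
    """Generate synthetic applicants whose repayment depends on income and debt."""
    income = rng.normal(0.0, 1.0, n)
    debt = rng.normal(0.0, 1.0, n)
    logits = w_income * income + w_debt * debt
    repaid = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)
    return np.column_stack([income, debt]), repaid

# Train once on "historical" data, as a static model would be.
X_hist, y_hist = make_loans(5000, w_income=2.0, w_debt=-0.5)
model = LogisticRegression().fit(X_hist, y_hist)

# Later, conditions change: debt load now dominates repayment risk,
# so the decision rule the model learned no longer fits reality.
X_new, y_new = make_loans(5000, w_income=0.3, w_debt=-2.5)

print(f"accuracy before drift: {model.score(X_hist, y_hist):.2f}")
print(f"accuracy after drift:  {model.score(X_new, y_new):.2f}")
```

A model left in this state keeps applying yesterday's decision rule to today's applicants, which is exactly the kind of silent degradation the researchers warn can produce biased outcomes.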
To address this, the research team has developed new algorithms that can continuously learn, update and adapt as new data arrives.
“This ensures that the model stays relevant in constantly changing environments. The algorithms can also automatically adjust these models to minimise discrimination, promoting fair outcomes for all.”
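The sketch below illustrates the general idea in a generic way; it is not the algorithm from the paper. It shows an online logistic model that updates with every new labelled record and monitors a simple fairness measure, the demographic parity difference, over a sliding window of recent decisions. The window size, tolerance and threshold-adjustment rule are all assumptions made for this example.

```python
# Generic sketch: continual learning plus fairness monitoring on a stream.
# Not the authors' method; a simple illustration of the concepts described above.
from collections import deque
import numpy as np

class FairOnlineLogistic:
    def __init__(self, n_features, lr=0.05, window=500, tolerance=0.05):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr
        self.tolerance = tolerance
        # Recent (group, approved) decisions used to estimate disparity.
        self.recent = deque(maxlen=window)
        # Per-group decision thresholds, nudged when disparity grows.
        self.thresholds = {0: 0.5, 1: 0.5}

    def _score(self, x):
        return 1.0 / (1.0 + np.exp(-(self.w @ x + self.b)))

    def predict(self, x, group):
        """Decide on one applicant and record the decision for fairness tracking."""
        approved = int(self._score(x) >= self.thresholds[group])
        self.recent.append((group, approved))
        self._rebalance()
        return approved

    def learn(self, x, y):
        """One stochastic gradient step on the log-loss, so the model keeps adapting."""
        err = self._score(x) - y
        self.w -= self.lr * err * x
        self.b -= self.lr * err

    def _rebalance(self):
        # Demographic parity difference: gap between the groups' approval rates.
        rates = {}
        for g in (0, 1):
            decisions = [a for grp, a in self.recent if grp == g]
            if decisions:
                rates[g] = float(np.mean(decisions))
        if len(rates) == 2 and abs(rates[1] - rates[0]) > self.tolerance:
            favoured = 1 if rates[1] > rates[0] else 0
            # Make approval slightly harder for the currently favoured group
            # (a simple post-processing-style correction, assumed for this sketch).
            self.thresholds[favoured] = min(0.9, self.thresholds[favoured] + 0.01)
```

In a data stream, each new application would call predict for a decision and, once the true outcome is known, learn to keep the model current, so both accuracy and the fairness measure are maintained as conditions change.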
The next step in the research project will involve industry partnerships where the algorithms will be tested in practical settings.
“By combining real-world testing with domain-specific adaptations, we can refine the algorithms for robust and responsible deployment, and hopefully ensure fairer outcomes for all.”