AI and credit: How can we keep machines from reproducing social biases?


Integrating AI into the banking sector must not end up exacerbating social inequalities. (Shutterstock)

Artificial intelligence (AI) has revolutionized many fields in recent years, including the banking sector. Its implementation has had both positive and negative aspects, notably the issue of algorithmic discrimination in lending.

In Canada and around the world, the implementation of AI within major banks has increased productivity while offering greater personalization of services.

According to the IEEE Global Survey, the adoption of AI-based solutions is expected to double globally by 2025, reaching 80 per cent of financial institutions. Some banks are further along, such as BMO Financial Group, which has created specific positions to oversee the integration of AI into its digital services in order to remain competitive. Thanks to AI, the global banking industry's profits could exceed US$2 trillion by 2028, representing growth of nearly nine per cent between 2024 and 2028.

As a professor of knowledge and innovation management at Laval University and a science communicator, I was assisted in writing this analysis by Kandet Oumar Bah, author of a research project on algorithmic discrimination, and Aziza Halilem, an expert in governance and cyber risk at the French Prudential Supervision and Resolution Authority.

How does AI improve bank performance?

The integration of AI in the banking sector has already significantly optimized financial processes, with gains in operational efficiency of 25 to 40 per cent. Combined with the growing capabilities of big data (the massive collection and analysis of data), AI offers powerful analytics that can already reduce the error margins of financial systems by 18 to 30 per cent.

It also makes it possible to monitor millions of transactions in real time, detect suspicious behaviour and even preventively block certain fraudulent transactions, one of the uses implemented by J.P. Morgan. In addition, platforms such as FICO, which specialize in AI-based decision analysis, help financial institutions leverage a variety of customer data, refining their credit decisions through advanced predictive models.

Several banks around the world now rely on automated rating algorithms that can analyze numerous parameters, including income, credit history and debt ratios, in a matter of seconds. In the credit market, these tools significantly improve the processing of applications, particularly for “standard” cases, such as those with explicit loan guarantees.

But what about the other cases?

Formalizing injustice?

As American researchers Tambari Nuka and Amos Ogunola point out, the illusion that algorithms produce fair and objective predictions poses a major risk for the banking sector. Reviewing the scientific literature, they warn against the temptation to blindly delegate the assessment of complex human behaviour to automated systems. Several central banks, including Canada’s, have also expressed strong reservations, warning of the operational risks associated with over-reliance on AI, particularly in assessing creditworthiness and solvency.

Although algorithms are technically neutral, they can amplify existing inequalities when training data is tainted by historical biases, particularly those inherited from systemic discrimination against certain groups.
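To make this mechanism concrete, here is a minimal sketch in Python, using entirely synthetic data and illustrative variable names; it is not any bank’s actual model. A scoring model is trained on historical decisions that penalized applicants from a historically disadvantaged area. Gender and ethnic origin are never given to the model, yet the place-of-residence variable carries the historical bias forward into new decisions.

```python
# Minimal sketch, not a real credit model: synthetic data and illustrative
# names throughout. It shows how a classifier trained on biased historical
# decisions reproduces the gap through a place-of-residence proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: income (in thousands), debt ratio, and a flag for a
# historically disadvantaged area (an assumed proxy for group membership).
income_k = rng.normal(50, 15, n)
debt_ratio = rng.uniform(0.1, 0.6, n)
disadvantaged_area = rng.integers(0, 2, n)

# "Historical" decisions: driven by income and debt, but with an extra
# penalty for the disadvantaged area -- the tainted training labels.
latent = 0.05 * income_k - 4.0 * debt_ratio - 1.0 * disadvantaged_area
approved = (latent + rng.normal(0, 0.5, n)) > 0.5

X = np.column_stack([income_k, debt_ratio, disadvantaged_area])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# Two financially identical applicants who differ only by neighbourhood:
# the learned model assigns a lower approval probability to the second.
applicants = np.array([[50, 0.3, 0], [50, 0.3, 1]])
print(model.predict_proba(applicants)[:, 1])
```

The point is not the particular numbers: nothing in the model is explicitly discriminatory, yet the disparity is inherited entirely from the historical labels it was trained on.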
These biases result not only from explicit variables such as gender or ethnic origin, but also from indirect correlations with factors such as place of residence or type of employment.

Studies show that AI could contribute to reproducing inequalities. (Shutterstock)

For example, rating systems may assign lower credit limits to women, even in situations where they are financially equivalent to men. Analyzing variables such as postal codes and employment history can also lead to the exclusion of members of marginalized groups, such as racialized individuals, workers with irregular incomes and recent immigrants.

Virginia Eubanks, a professor in the United States and an expert in social justice, illustrates this phenomenon well, showing how people living in historically disadvantaged neighbourhoods or with atypical career paths are penalized by automated financial decisions based on biased data.

This raises a crucial question: how can we ensure that the automation of financial decisions helps reduce disparities in access to banking services?

Mitigating errors through inclusive finance

Several avenues are being explored in the scientific literature in response to these risks of discrimination. Nuka and Ogunola, for example, suggest a financial inclusion approach. This involves continuously improving statistical models by identifying and correcting biases in training data in order to reduce disparities in treatment between social groups (a simple example of such a check appears at the end of this article).

Beyond technical solutions, regulatory frameworks have recently been put in place to ensure the transparency and fairness of algorithms in sensitive sectors such as finance. Canada’s proposed Artificial Intelligence and Data Act and the European Union’s Artificial Intelligence Act are examples. The latter, adopted in 2024 and being implemented gradually, imposes strict requirements on high-risk AI systems, such as those used for granting credit.

Its Article 13 sets out transparency requirements to ensure that systems are auditable and that their decisions can be understood by all stakeholders. The aim is to prevent algorithmic discrimination and ensure ethical and fair use. Financial regulators also have a crucial role to play in ensuring compliance with fair competition rules and guaranteeing prudent and transparent practices, in the interests of financial stability and customer protection.

However, pressure from certain technology and financial lobbies to slow the adoption of strict standards poses a significant risk: the lack of regulation in some countries and difficulties in enforcement in others could encourage opacity, to the detriment of the most vulnerable citizens.

Professor Norrin Halilem has received funding from the Social Sciences and Humanities Research Council of Canada and the Fonds de recherche du Québec. Aziza Halilem participated in discussions on the European Union’s Digital Operational Resilience Act (DORA) and contributed to the drafting of Level 2 texts for the act.
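To illustrate what the bias check mentioned under “Mitigating errors through inclusive finance” can look like in practice, here is a minimal Python sketch. The toy data, group labels and the 0.8 cut-off are assumptions for demonstration, not a regulatory prescription. It compares approval rates across two groups and computes a disparate-impact ratio, one of the simplest screening measures used when auditing automated credit decisions.

```python
# Minimal sketch of a fairness screen: compare approval rates across groups
# (demographic parity) and compute a disparate-impact ratio. The data and
# the 0.8 threshold are illustrative assumptions only.
import numpy as np

def disparate_impact(decisions: np.ndarray, groups: np.ndarray) -> float:
    """Ratio of the lowest group approval rate to the highest."""
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return min(rates) / max(rates)

# Toy audit data: 1 = loan approved, applicants belong to group A or B.
decisions = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

ratio = disparate_impact(decisions, groups)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common screening heuristic; thresholds vary by jurisdiction
    print("Approval rates differ enough to warrant review of the model and data.")
```

A screen like this does not by itself prove discrimination, but it flags where the training data and model deserve the kind of closer scrutiny the inclusive-finance approach calls for.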