
How Organizations Can Reduce Gender Bias And Concurrently Increase Profitability With Data Science And AI

By Dr. Adrienne Heinrich, AI and Innovation Center of Excellence Head, Aboitiz Data Innovation

In the past decade, Southeast Asia has seen tremendous technological transformation, enabled by the region's growth in digital consumers. The trend spans all sectors and markets. In financial services in particular, consumers have been moving from traditional banking in physical branches to banking on mobile devices. The share of consumers in Asia actively using digital banking jumped to 88% in 2021, up from 68% four years earlier.

Despite the region's immense growth in recent years, a majority of consumers remain unbanked or underbanked. Many face denied loan applications or lengthy approval processes, and many struggle to access financial support.


Biases in loan approval processes

Consumer loan applications submitted by women are 15% less likely to be approved than those submitted by men with the exact same credit profile.

Research has shown that the consumer lending process is stacked against certain groups of applicants, notably women and minorities. A recent study found that consumer loan applications submitted by women are 15% less likely to be approved than those submitted by men with the exact same credit profile. Many question whether this reflects bias or a statistical anomaly.

While artificial intelligence (AI) presents an opportunity to transform how we allocate credit and risk, creating fairer and more inclusive systems, many worry that AI will do just the opposite. Where that happens, however, it is usually because the data is not being used in the service of financial inclusion and fairness.

The rise of fintech and algorithmic decision-making has pushed these concerns to an all-time high. The fintech industry has begun training machine learning (ML) models on data from past borrowers so the models can predict the odds of an applicant repaying or defaulting on a loan. But if historical data has led to biased outcomes in the past, the question remains: won't AI-driven lending decisions only amplify those biases?
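One way to see how historical bias can carry into a model is to measure the approval-rate gap in the very data the model learns from. The following is a minimal sketch with made-up records; the numbers and the demographic-parity metric are illustrative, not from the study:

```python
# Illustrative only: measure the gender gap in historical approval
# decisions that an ML model trained on this data would tend to learn.

def approval_rate(records, group):
    """Fraction of applicants in `group` who were approved."""
    decisions = [approved for gender, approved in records if gender == group]
    return sum(decisions) / len(decisions)

# Hypothetical past lending decisions: (gender, approved)
history = [
    ("M", True), ("M", True), ("M", True), ("M", False),
    ("F", True), ("F", True), ("F", False), ("F", False),
]

# Demographic-parity gap: difference in approval rates between groups.
gap = approval_rate(history, "M") - approval_rate(history, "F")
print(f"approval-rate gap (M - F): {gap:.2f}")  # 0.75 - 0.50 = 0.25
```

A model trained to imitate these decisions inherits the 25-point gap unless it is explicitly audited and corrected, which is the crux of the amplification concern.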


Examining the impact of gender data on creditworthiness

When gender data is used, we observed that ML models are less discriminatory, of better predictive quality, and more profitable compared to traditional statistical models like logistic regression.

Many argue that simply removing gender information from creditworthiness assessments also removes the biases linked to it. While some agree that this approach can relieve concerns, there is still much debate over whether gender-related data should be allowed in credit lending models. Some anti-discrimination regimes, such as that of the United States, prohibit the collection and use of gender data altogether. In the European Union, collecting gender data is allowed, but using gender as a feature in training and screening models for individual lending decisions is prohibited.

Along with researchers from the Smith School of Business at Queen's University in Canada, we are engaging in explainable and responsible AI efforts. Queen's University, Union Bank of the Philippines, and Aboitiz Data Innovation carried out a study to understand the impact of regulatory constraints on AI performance, discrimination, and firm profitability. We investigated whether laws banning the use of gender information in assessing creditworthiness hurt rather than help the groups they are supposed to protect. Our approach tested various anti-discrimination scenarios, with and without gender-related data, and measured their impact on AI model performance, gender discrimination, and firm profitability. Ultimately, we found that regimes that prohibit the use of gender (like those in the United States) substantially increase discrimination and decrease firm profitability. On the other hand, when gender data is used (in accordance with regimes such as Singapore's), ML models are less discriminatory, of better predictive quality, and more profitable than traditional statistical models like logistic regression.
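The contrast between the two regimes can be sketched in miniature. Everything below is hypothetical, a stand-in for the study's models rather than a reproduction of them: the scores, cutoffs, and mitigation rule are invented. Under a "blind" regime the lender cannot see gender and applies one global cutoff; under an "aware" regime it can audit the gap and calibrate per-group cutoffs to equalize approval rates.

```python
# Hypothetical applicants: (gender, model score out of 100).
# Women score slightly lower here to mimic proxy bias in the features.
applicants = [
    ("M", 81), ("M", 74), ("M", 62), ("M", 55),
    ("F", 78), ("F", 69), ("F", 60), ("F", 52),
]

def approval_gap(decisions):
    """Approval-rate difference (M - F) over (gender, approved) pairs."""
    rates = {}
    for group in ("M", "F"):
        picks = [ok for g, ok in decisions if g == group]
        rates[group] = sum(picks) / len(picks)
    return rates["M"] - rates["F"]

# "Blind" regime: gender unavailable, one global cutoff for everyone.
blind = [(g, s >= 70) for g, s in applicants]

# "Aware" regime: gender is observed, so the lender can audit the gap
# and lower the disadvantaged group's cutoff until approval rates match.
cutoff = {"M": 70, "F": 70}
while approval_gap([(g, s >= cutoff[g]) for g, s in applicants]) > 0:
    cutoff["F"] -= 1
aware = [(g, s >= cutoff[g]) for g, s in applicants]

print("blind gap:", approval_gap(blind))   # 0.25
print("aware gap:", approval_gap(aware))   # 0.0
```

The point of the sketch is the asymmetry: the blind lender cannot even compute the gap, let alone close it, whereas the aware lender can measure and mitigate it.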


Implications and ways forward

The growing adoption of algorithmic decision-making requires us to rethink current anti-discrimination data policies, specifically with respect to the collection and use of protected attributes for machine learning models. Our analysis points to the importance of allowing for the responsible collection and use of gender data. Empowering organizations to collect protected attributes like gender would, at minimum, give them the ability to assess the potential bias in their model and, ideally, take steps to reduce it. Increased data access should therefore come with greater accountability for organizations.

The collection and use of gender should be supported by a strong customer communication strategy.

This work paves the way toward fairer economic outcomes for both financial institutions and individual customers: loans can be approved for individuals who deserve financial support but are currently discriminated against when traditional modeling approaches or binding regulatory guidelines are applied. Customers' chances of economic well-being improve, and the lender's profitability likewise increases as lower-risk applicants are approved. The collection and use of gender should be supported by a strong customer communication strategy; the benefits of using personal attributes should be clearly explained, and a suitable level of AI education provided to increase customer confidence.


Banking on responsible AI to help address inequity

We take the risk of discrimination and unfair decisions seriously and apply methods to reduce them, especially where AI is involved. It is imperative for organizations to understand their lending models and modeling techniques, as this gives them the ability to assess biases in their models and adjust them accordingly. Leveraging AI/ML and other next-generation technologies presents organizations with a huge opportunity to address key issues such as inequities in the current financial system. We need to continue broadening these conversations and carrying out collective action around the responsible use of AI.
