
The Importance Of Diversity In Finance AI: Why Inclusive Data And Representation Matter

Forbes EQ

By Tasha Austin, principal, Deloitte & Touche LLP, in collaboration with the National Association of Black Accountants, Inc. (NABA, Inc.)

Many businesses are integrating artificial intelligence (AI) into their everyday operations. In finance, AI supports lending decisions, fraud detection and customer personalization. It also drives business process automation that helps financial institutions deliver a better customer experience.

But for all the promise AI offers, it also poses a risk of deepening societal inequities if organizations don’t use it responsibly. AI could perpetuate existing biases if companies aren’t proactive about making the data that trains AI systems, and the people who build them, more diverse and inclusive.

The risk of AI bias is real, but there is an opportunity to address this bias with purposeful action and policies now while the technology is still maturing.

How AI Bias Materializes in the Financial Industry

AI is becoming more pervasive in the financial system. The technology helps banks, credit unions and other financial institutions automate lending determinations, processing vast amounts of data in real time to decide whether to approve or deny a customer a credit card, mortgage, or business or personal loan. It also can factor into interest rate determinations that raise or lower a consumer’s overall borrowing costs or limit their access to a particular financial product.

Though AI is beneficial from an automation standpoint, it can sustain bias and unfair lending practices and prevent historically disenfranchised groups from participating equally in our financial ecosystem.

The technology itself isn’t inherently biased, but several factors increase the potential for AI to deliver negative outcomes. First, data scientists who create the models and algorithms that power AI systems may unintentionally use incomplete or biased data sets. The data may not be representative of the current population a financial institution is trying to serve, leading to faulty or ill-informed decision-making.
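
To make the representativeness point concrete, below is a minimal sketch, in Python, of the kind of check a modeling team might run: compare each group’s share of the training data with its share of the population the institution serves. All group labels, counts and thresholds here are invented for illustration.

```python
from collections import Counter

# Hypothetical representativeness check: compare each group's share of a
# model's training data with its share of the population being served.
# Group labels, counts and the 5-point tolerance are invented examples.
training_group_labels = ["A"] * 70 + ["B"] * 25 + ["C"] * 5
served_population_share = {"A": 0.55, "B": 0.30, "C": 0.15}

counts = Counter(training_group_labels)
total = sum(counts.values())

for group, target in served_population_share.items():
    actual = counts.get(group, 0) / total
    gap = actual - target
    # Flag any group underrepresented by more than 5 percentage points.
    status = "UNDERREPRESENTED" if gap < -0.05 else "ok"
    print(f"group {group}: training {actual:.0%} vs. served {target:.0%} -> {status}")
```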

Second, the technologists who create AI models and algorithms may themselves hold implicit biases that emerge in their work. They may choose certain data or interpret data in specific ways. For example, a data scientist may train an AI model that flags loan applications from a certain zip code for further review based on historical data. However, they may be unaware that this historical data was shaped by operational practices regulators now prohibit or by federal policies that prolonged lending discrimination.

As another example, a data scientist may train an AI model to automatically move a mortgage applicant to the final stage of the approval process if their credit score is above a certain threshold. However, this may exclude applicants who are underbanked, rarely use credit cards or have a shorter credit history but would otherwise be qualified.
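
A hedged sketch of that cutoff pattern follows; the field names, the 720 threshold and the routing rules are illustrative assumptions, not any institution’s actual underwriting logic. The contrast is between a hard cutoff that auto-denies thin-file applicants and a rule that routes them to human review instead.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical applicant record; None models a thin or absent credit file.
@dataclass
class Applicant:
    credit_score: Optional[int]
    months_of_history: int

def naive_route(app: Applicant) -> str:
    # Hard cutoff: a thin-file applicant can never reach final approval.
    if app.credit_score is not None and app.credit_score >= 720:
        return "final_approval_stage"
    return "deny"

def fairer_route(app: Applicant) -> str:
    # Same fast path, but short or missing histories go to human review,
    # where alternative data (rent, utilities) can be considered.
    if app.credit_score is not None and app.credit_score >= 720:
        return "final_approval_stage"
    if app.credit_score is None or app.months_of_history < 24:
        return "manual_review"
    return "deny"

thin_file = Applicant(credit_score=None, months_of_history=6)
print(naive_route(thin_file))   # -> deny
print(fairer_route(thin_file))  # -> manual_review
```

The second rule is not necessarily the right policy; the point is that routing thin-file applicants is a design decision a team can make deliberately rather than inherit from a threshold.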

These situations have real-world consequences, particularly when it comes to wealth-building in marginalized communities. Denying someone a mortgage may prevent them from buying a home in a good school district that gives their kids a strong educational foundation. It also could prevent them from building and passing on generational wealth, since homeownership is the pathway through which most Americans build wealth. Denying a business or personal loan may keep someone from starting their own business, another avenue for wealth generation. This also has larger implications, as society misses out on potentially groundbreaking innovations.

AI, whether its designers intend it or not, could further marginalize underrepresented communities, exacerbate the racial wealth gap and prevent us from building a more inclusive financial system. However, this potential bias can be confronted and minimized if meaningful steps are taken now.

It begins and ends with stronger governance.

An Action Plan for Combating AI Bias

Organizations should establish a sustainable governance model that facilitates ethical and responsible AI. At Deloitte, we’ve put forth a Trustworthy AI™ framework that aligns with the federal government’s trustworthy and responsible AI framework. This framework incorporates several key principles and practices:

Fair and Impartial

Organizations should integrate automated and manual checks into AI applications to ensure fairness and equity across participants who seek access to their products and services. In this way, AI moves from being a de facto decision-maker to an enabler of better business processes.
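
One common screening heuristic for such an automated check is the “four-fifths” disparate impact ratio, which compares approval rates across groups against the highest-approving group. A minimal sketch follows; the approval and application counts are invented, and a real check would run on production decision logs.

```python
# Illustrative automated fairness check using the "four-fifths" disparate
# impact ratio as a screening heuristic. All counts are invented.
approvals = {"group_a": 480, "group_b": 310}
applications = {"group_a": 600, "group_b": 500}

rates = {g: approvals[g] / applications[g] for g in approvals}
reference = max(rates.values())  # highest approval rate as the baseline

for group, rate in sorted(rates.items()):
    ratio = rate / reference
    status = "flag for review" if ratio < 0.8 else "within threshold"
    print(f"{group}: approval rate {rate:.0%}, impact ratio {ratio:.2f} -> {status}")
```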

Transparent and Explainable

Financial institutions should also ensure there’s transparency around how AI systems operate, how they use data and how they reach decisions. Organizations need to better understand and explain how AI contributed to a particular outcome so they can more effectively interrogate AI systems and identify and root out any underlying biases.
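
As one concrete technique, permutation importance is a simple, model-agnostic way to see which inputs drive a model’s decisions: shuffle one feature at a time and measure how much predictive accuracy falls. The sketch below uses a toy stand-in model, not any real scoring system; its dependence on a zip-code-derived feature is exactly the kind of proxy such a check should surface for human review.

```python
import random

random.seed(0)

# Toy stand-in for a trained model; the zip_risk dependence is the kind
# of proxy variable an explainability check should surface.
def model(income, zip_risk):
    return 1 if income > 50 and zip_risk < 0.5 else 0

rows = [(random.uniform(20, 100), random.random()) for _ in range(500)]
labels = [model(inc, zr) for inc, zr in rows]

def accuracy(feature_rows):
    preds = [model(inc, zr) for inc, zr in feature_rows]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

baseline = accuracy(rows)  # 1.0 here, since labels come from the model itself

for i, name in enumerate(["income", "zip_risk"]):
    shuffled = [row[i] for row in rows]
    random.shuffle(shuffled)
    # Rebuild the dataset with one column permuted; the accuracy drop
    # measures how heavily the model leans on that feature.
    permuted = [
        (v, row[1]) if i == 0 else (row[0], v)
        for row, v in zip(rows, shuffled)
    ]
    print(f"permuting {name}: accuracy drop {baseline - accuracy(permuted):.2f}")
```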

Responsible and Accountable

Organizations need to establish policies to determine who is responsible for the output of AI systems and their decisions.

AI itself can’t be responsible for a given outcome, because ultimately the technology is still human-led. With robust governance policies, organizations can better understand what inputs shaped AI models and algorithms and hold the right stakeholders accountable for upholding stringent AI and data standards and fostering equitable and responsible AI.

Robust and Reliable

AI-driven solutions also need to be robust and reliable. AI systems must be able to learn from humans and other systems to deliver consistent outcomes. AI also must preserve — rather than undermine — trust in the products and services it supports.

Organizations must operationalize trustworthy characteristics within AI systems that make consistent accuracy, ethical implementation and transparency integral to how these systems function.

Preserve Privacy

Consumers want to know that their financial institution, and any other entity with access to their data, puts privacy first. They also need a mechanism to opt in or out if an entity wants to use their data for a purpose other than the one originally communicated.

To that end, organizations should have effective AI governance policies in place that preserve privacy, as well as build AI systems with privacy in mind.

Safe and Secure

Institutions operate in an unrelenting threat environment, so they need to implement strong security practices to ensure AI systems are protected from risks that could cause physical or digital harm.

Maximizing AI’s Potential for Good

Financial institutions and other entities can use AI in powerful ways for the public good — whether it’s to increase access or advance financial inclusion.

But different communities have different needs and experiences and show up in different ways within our financial system. Therefore, institutions must continue to offer products tailored to the needs of the communities they serve and provide equitable access to the financial products that address those needs.

AI offers enormous opportunity in the financial sector, which is truly exciting. But that enthusiasm cannot obscure the risks this technology poses if it isn’t governed properly. It doesn’t matter how sophisticated or automated an AI-driven application is, or how much it saves, if it harms historically disenfranchised communities or degrades the customer experience.

As financial institutions embrace AI, they also need to embrace robust AI governance to prevent bias and maximize the full potential of this transformative technology.


Tasha Austin is a Principal in Deloitte’s Risk and Financial Advisory business and has more than 23 years of professional services experience involving commercial and federal financial statement audits, fraud, dispute analysis and investigations, artificial intelligence (AI) and advanced data analytics. Tasha serves as the Director of Deloitte’s Artificial Intelligence Institute for Government, where she focuses on amplifying Deloitte’s capabilities and services in key areas such as trustworthy and ethical AI, and is responsible for elevating Deloitte’s thought leadership and digital presence in AI to the federal market.

Tasha helps lead Deloitte’s Artificial Intelligence and Data Analytics market offering, where she provides innovative, insight-driven solutions to her clients to deliver financial management transformation. Tasha provides strategic direction to C-suite executives and management across the federal community to help them solve their agencies’ most complex and unique data challenges. Tasha also helps organizations assess their readiness for, and adoption of, artificial intelligence solutions.

Tasha has co-authored several publications on AI and open data, including Fostering Diversity in STEM Learning and Careers with AI; Developing and Deploying Trustworthy AI in Government; Fluid Data Dynamics: Generating Greater Public Value from Data; Future of Open Data (Maximizing the Impact of the Open Government Data Act); and Data Act 2022, which provides an innovative approach to achieving an insights-driven organization while understanding and navigating cultural and technology challenges. Tasha has also served on industry, academic, congressional and global leadership panels discussing critical topics in AI, including AI & Equity, AI for Good, AI & Diverse Talent, Mitigating Bias, Scaling AI and Trustworthy AI.

Tasha also serves as Deloitte’s National Leader for strategic engagement initiatives with Historically Black Colleges and Universities (HBCUs) and the National Association of Black Accountants (NABA) and works closely with Deloitte’s Executive Leadership to shape investments in the HBCU and NABA communities. Tasha has a passion for preparing and recruiting talent from untapped communities, including HBCUs and community colleges, and for developing and advancing racially and ethnically diverse professionals in their careers. She also has a passion for bridging the data analytics and digital divide in under-resourced communities and working with non-profit organizations to deliver and scale solutions that help advance equity and promote social justice. Tasha is an advocate for equitable education and supports the delivery of interactive and impactful STEM educational experiences for grade school students across diverse communities.

Tasha also serves as a trusted advisor to the HBCU community (including faculty, students and executive leadership). She has established mentoring and STEM programs, sponsored student and faculty research and development efforts in signature areas (including Health Equity and Education Equity), and built platforms to promote faculty/industry exchange and bring real-world problem-solving into the classroom.

Tasha received her bachelor’s and master’s degrees in Mathematics from North Carolina Central University and obtained her MBA from Howard University. She also holds a Certificate in Artificial Intelligence Business Strategy and Applications from the University of California, Berkeley. Tasha is also a Professor of Mathematics and Statistics at Northern Virginia Community College and serves on the boards of directors of NABA, as Chair of Strategy, Innovation and Technology, and of the Posse Foundation for the District of Columbia (Posse DC). Tasha resides with her family in Bowie, MD.