Banks & Lenders Shouldn’t Fear AI

The controversy surrounding AI has been nothing short of polarizing. On one hand, AI holds the promise of transformative benefits, including improved efficiency and advanced decision-making. On the other hand, it has raised a slew of contentious issues, such as algorithmic bias and discrimination, potential job displacement due to automation, concerns about personal privacy, and questions of accountability when AI systems make critical decisions.


Regulators are also warning financial institutions about their use of AI. In July, Federal Reserve Vice Chair for Supervision Michael Barr cautioned banks against fair lending violations stemming from their use of AI. Several consent orders involving AI were also issued this year; in two of them, banks leveraged technology through a third-party fintech and were ultimately deemed responsible for the fintech’s use of AI.


The CFPB also recently warned about the compliance, privacy, security, and operational risks of AI-powered chatbots, advising that financial institutions are “legally obligated to competently interact with customers about financial products or services, even if those interactions occur through chat bots powered by artificial intelligence.”


The CFPB has also taken measures to protect consumers from black-box credit modeling. According to CFPB Director Rohit Chopra in a press release, “The law gives every applicant the right to a specific explanation if their application for credit was denied, and that right is not diminished simply because a company uses a complex algorithm that it doesn’t understand.” In other words, financial institutions cannot simply state that AI made the decision.


Adding to these warnings, the FTC, the Civil Rights Division of the DOJ, the CFPB, and the EEOC issued a joint statement in April on their commitment to fight AI-related discrimination and bias. The statement reads, “Although many of these tools offer the promise of advancement, their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.”


Amid these warnings, some financial institutions may be hesitant to adopt AI, but that would be a mistake. One critical factor to consider is that regulators aren’t telling banks and lenders not to use it. Instead, they’re giving them a blueprint for how to use it by providing insight into how it will be regulated.


How Financial Institutions Can Safely Leverage AI


One of the biggest areas regulators point to is transparency. When safely implemented, AI can transform lending by discerning patterns within documents, streamlining workflows, and facilitating data extraction. It is imperative, however, that these processes be well-defined and transparent: regulators have explicitly said that when AI contributes to decision-making, the mechanisms behind those decisions must be open and unbiased.
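What a transparent, explainable decision looks like in practice can be sketched in a few lines. The example below is purely illustrative, assuming a simple linear scorecard with hypothetical feature names, weights, and cutoff — not any lender’s actual model — but it shows the key property regulators are asking for: every denial traces back to specific, nameable factors rather than an opaque output.

```python
# Illustrative sketch only: a transparent, linear scorecard whose denial
# reasons trace back to specific factors. The feature names, weights, and
# cutoff below are hypothetical, not any lender's actual model.

WEIGHTS = {"payment_history": 4.0, "debt_to_income": -3.0, "credit_age_years": 0.5}
BASELINE = 50.0
CUTOFF = 60.0

def decide(applicant: dict) -> tuple:
    """Return ("approve", []) or ("deny", [specific reasons])."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BASELINE + sum(contributions.values())
    if score >= CUTOFF:
        return "approve", []
    # The factors that pulled the score down, worst first, become the
    # specific explanation the applicant is entitled to.
    reasons = sorted((f for f in contributions if contributions[f] < 0),
                     key=lambda f: contributions[f])
    return "deny", reasons
```

Because each factor’s contribution is visible, an underwriter or examiner can inspect exactly why any application was denied — the opposite of a black box.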


This also underscores the importance of AI as a tool for underwriters rather than a replacement. With complete transparency and accessible insights, underwriters retain their essential role, exercising authority and confidence in the decision-making process, while AI complements their efforts by expediting tasks, enhancing efficiency, and ensuring a higher degree of precision.


AI can also serve as a tremendous tool in mitigating risks. Illicit actors are well-versed in identifying the data fields relevant to mortgage underwriting and exploit the limited cross-checks of such data. By capturing every data point from every document, even those not required, lenders can bolster their defense against fraudulent activities. A comprehensive dataset empowers lenders to conduct exhaustive cross-verification and virtually eradicate instances of fraud.
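The cross-verification idea above can be sketched simply. The snippet below is an illustrative example, with hypothetical document names and fields rather than a real underwriting schema: once every data point is captured from every document, flagging a value that disagrees across sources becomes a straightforward check.

```python
# Illustrative sketch only: cross-verify a single field across every
# document in a loan file and flag the outliers. The document names and
# fields below are hypothetical, not a real underwriting schema.
from collections import Counter

def cross_verify(documents: dict, field: str) -> list:
    """Return the documents whose value for `field` disagrees with the majority."""
    values = {doc: data[field] for doc, data in documents.items() if field in data}
    if len(set(values.values())) <= 1:
        return []  # every source agrees (or the field appears at most once)
    majority, _ = Counter(values.values()).most_common(1)[0]
    return [doc for doc, value in values.items() if value != majority]

loan_file = {
    "pay_stub":       {"employer": "Acme Corp", "monthly_income": "8500"},
    "bank_statement": {"monthly_income": "8500"},
    "application":    {"employer": "Acme Corp", "monthly_income": "11000"},
}
```

Here, calling `cross_verify(loan_file, "monthly_income")` flags the application, whose stated income disagrees with the supporting documents — exactly the kind of inconsistency that limited, field-by-field checks tend to miss.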


Data Security and AI


Financial institutions must also ensure that any AI system used does not inadvertently compromise customer data, including AI used by their fintech partners. This is especially the case for the loan origination process. Traditionally, this process has been slow and cumbersome, often taking weeks or even months to complete. One key reason for this delay is the need to securely verify and process large amounts of sensitive data.


By automating the data verification process, banks and lenders can ensure that customer data is processed quickly, but they must leverage advanced encryption protocols to secure that data while it’s being transmitted and stored. This means that any sensitive information handled through the lender’s platforms or through its fintech partner’s platform should be rendered unreadable to any unauthorized individuals, keeping it safe from potential breaches.
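As a rough sketch of what “unreadable to unauthorized individuals” means at rest, the example below uses the Fernet recipe from the widely used Python `cryptography` package (symmetric, authenticated encryption). It is illustrative only: in production the key would come from a key-management service or HSM, never appear in code, and transport security (TLS) would be handled separately.

```python
# Illustrative sketch only: authenticated symmetric encryption of a
# sensitive field before storage, using the `cryptography` package's
# Fernet recipe. In production the key would come from a KMS or HSM,
# never be generated or held in application code like this.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice: fetched from a key manager
cipher = Fernet(key)

ssn = b"123-45-6789"          # hypothetical sensitive field
token = cipher.encrypt(ssn)   # ciphertext is unreadable without the key

# Only an authorized service holding the key can recover the plaintext.
assert cipher.decrypt(token) == ssn
```

The same token is also tamper-evident: Fernet authenticates the ciphertext, so a modified record fails decryption instead of yielding corrupted data silently.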


Financial institutions should also ensure that any AI technology they leverage is subjected to regular security audits and updates to stay ahead of emerging threats. Security experts can continually monitor AI-based systems to ensure data remains protected. Banks and lenders must also make sure any AI-based technology can scale: as their business grows, their platforms must adapt to handle increased data volume and complexity.



With AI, financial institutions can transform their businesses to become smarter, better, and safer. But doing so requires careful consideration and an acute focus on safety and data security; otherwise, they could face bigger problems. Ultimately, AI doesn’t need to become an acronym banks and lenders fear.


About the Author:

Michael Tuch, Co-Founder of Rapidio
