Don’t underestimate AI’s risks
Artificial intelligence technologies have already begun to transform financial services. At the end of 2017, 52% of banks reported making substantial investments in AI and 66% said they planned to do so by the end of 2020. The stakes are enormous — one study found that banks that invest in AI could see their revenue increase by 34% by 2022, while another suggests that AI could cut costs and increase productivity across the industry to the tune of $1 trillion by 2030.
The message is clear — banks need to be investing in and experimenting with a range of AI technologies across many use cases to stay competitive. These technologies — including machine learning and deep learning, natural language processing and generation, and computer vision — will upend markets and operations across bank business lines. The shift will also be comparatively swift, because banks are already sitting on massive troves of data with which to train and fine-tune AI models. Many other industries are still in the early stages of digitization and lack those critical data stockpiles.
However, given the stakes and the speed with which AI is changing the industry, it’s critical to be aware of the risks on the horizon. As AI becomes more embedded in banks’ most critical operations, particularly in ways that affect the financial stability of both institutions and their customers, it could expose banks to new hazards. Two of the most dangerous and far-reaching areas of risk for AI in banking are the opacity of some of these technologies and the sweeping changes AI will impose on bank workforces.
One of the chief challenges with AI, and deep learning models in particular, is opacity. While these models often prove over time to be more accurate than human decision-makers, they typically don’t reveal how they reached their conclusions.
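To make the point concrete, here is a minimal sketch, in Python, of what that opacity looks like in practice. Everything in it is a hypothetical illustration rather than any bank’s actual model: a small neural network learns a synthetic approval rule and returns a score, but nothing in its output explains that score, and a post-hoc probe such as permutation importance only hints at what the model relies on.

```python
# Minimal sketch (hypothetical data and feature names, not a real bank model):
# a small neural network scores a synthetic loan application, but its
# thousands of learned weights offer no human-readable rationale.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 5_000
# Synthetic applicant features: income, debt-to-income ratio, credit history length
X = rng.normal(size=(n, 3))
# Synthetic default label driven by a hidden nonlinear rule the model must learn
y = ((X[:, 0] - X[:, 1] ** 2 + 0.5 * X[:, 2]) > 0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, y)

# The model emits a probability, but nothing in the output explains *why*.
applicant = rng.normal(size=(1, 3))
print("approval probability:", model.predict_proba(applicant)[0, 1])

# One common after-the-fact probe: permutation importance, which measures
# how much accuracy drops when each input is shuffled. It hints at what
# the model depends on without truly opening the black box.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "dti_ratio", "history_len"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Even with a probe like this, the bank learns only which inputs matter on average, not the reasoning behind any individual decision.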
This opacity could expose banks to risks without their knowledge. As AI increasingly makes decisions that affect customers or banks’ balance sheets, both regulators and the public could grow uncomfortable with those decisions being made by these so-called black boxes, which may carry hidden biases in their decision-making. The Federal Reserve has published guidance stating that banks should be able to validate and assess the decision-making of their analytics tools. Additionally, a recent study found that fintechs using AI models in loan underwriting charged minority borrowers higher interest rates. This issue could draw increased regulatory scrutiny, as well as public backlash if customers learn they were negatively impacted by a model’s biased conclusions.
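As a hedged illustration of the kind of check a bank might run on model outputs, the sketch below uses synthetic data, hypothetical group labels, and a deliberately injected bias; it is not the Fed’s or the study’s actual methodology. It compares the average model-assigned interest rate across two groups of otherwise-similar applicants, the basic disparate-impact test behind findings like the underwriting study above.

```python
# Minimal disparate-impact sketch on synthetic data (all values hypothetical).
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, size=n)           # two hypothetical demographic groups
credit_score = rng.normal(680, 50, size=n)   # identical score distribution for both

# Hypothetical model-assigned interest rate; the 0.15 * group term is an
# injected bias standing in for a hidden flaw in a real model.
rate = 8.0 - 0.005 * (credit_score - 680) + 0.15 * group + rng.normal(0, 0.2, size=n)

gap = rate[group == 1].mean() - rate[group == 0].mean()
print(f"average rate gap between groups: {gap:.2f} percentage points")
```

A persistent gap among applicants with comparable credit profiles is exactly the red flag that regulators, and increasingly the public, will look for.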
Keep reading on American Banker.