The Consumer Financial Protection Bureau is warning financial institutions that they could violate federal consumer financial protection laws if their chatbots fail to provide accurate information to customers or fail to protect customers’ privacy and data.
“Financial institutions should avoid using chatbots as their primary customer service delivery channel when it is reasonably clear that the chatbot is unable to meet customer needs,” stated a June 5 CFPB issue spotlight.
Chatbots simulate human conversation, often taking human names and using popup prompts to spark engagement. Some use artificial intelligence to generate responses, and they are marketed as letting customers retrieve account balances, look up recent transactions and pay bills.
“When chatbots provide inaccurate information regarding a consumer financial product or service, there is potential to cause considerable harm,” the CFPB stated. “It could lead the consumer to select the wrong product or service that they need. There could also be an assessment of fees or other penalties should consumers receive inaccurate information on making payments.”
An estimated 37 percent of Americans have interacted with a bank’s chatbot, a figure expected to rise in the coming years. Morgan Stanley is rolling out an advanced chatbot to support the bank’s team of financial advisors, and use of Bank of America’s AI-powered virtual assistant, Erica, has tripled since the first quarter of 2020.
“To reduce costs, many financial institutions are integrating AI technologies to steer people toward chatbots,” said CFPB Director Rohit Chopra. “A poorly deployed chatbot can lead to customer frustration, reduced trust, and even violations of the law.”