Aaron Chou


The availability of credit is a foundation of the American economy, but not everyone has an avenue to credit. Financial Technology (“FinTech”) lending plays a sizable role in providing these avenues for Americans who would not otherwise have access to loans and would instead be forced to turn to high-cost instruments like payday lending. Most scholars who have contributed to the topic of FinTech lending have focused on the risk of discrimination by Artificial Intelligence within FinTech lending platforms. This Note argues that, given the recent history of data breaches in the credit industry, privacy issues should be part of the larger discussion. Furthermore, balancing privacy with FinTech lending’s goal of financial inclusion will be a task required by regulations such as the Fair Credit Reporting Act.

This Note argues that the issues that might arise—the inherent invasiveness of FinTech and the unfairness of its contracts; the biased nature of its algorithms; the lack of transparency; and the danger of data breaches—should ultimately play second fiddle to the goal of financial inclusion. The reason is that although the two priorities of privacy and access to credit seem to offset one another, they actually balance in counterintuitive ways. Even though there are legitimate privacy concerns with the FinTech model, they can be mitigated by greater transparency. Toward this end, this Note discusses the solutions that have been offered to reduce the opacity of FinTech lending’s Artificial Intelligence and ultimately proposes the use of counterfactual explanations to develop accountability in FinTech lending while expanding financial inclusion.
