Joshua New is a policy analyst at the Center for Data Innovation. Reposted from CDI's blog.
Bias in big data. Automated discrimination. Algorithms that erode civil liberties.
These are some of the fears that the White House, the Federal Trade Commission, and other critics have expressed about an increasingly data-driven world. But these critics tend to forget that the world is already full of bias, and discrimination permeates human decision-making.
The truth is that the shift to a more data-driven world represents an unparalleled opportunity to crack down on unfair consumer discrimination by using data analysis to expose biases and reduce human prejudice. This opportunity is aptly demonstrated by the Consumer Financial Protection Bureau’s (CFPB) December 2013 auto loan discrimination suit against Ally Financial, the largest such suit in history, in which data and algorithms played a critical role in identifying and combating racial bias.
CFPB found that, from April 2011 to December 2013, Ally Financial had unfairly set higher interest rates on auto loans for 235,000 minority borrowers and ordered the company to pay out $80 million in damages. But the investigation also posed an interesting challenge: Since creditors are generally prohibited from collecting data on an applicant’s race, there was no hard evidence showing Ally had engaged in discriminatory practices. To piece together what really happened, CFPB used an algorithm to infer a borrower’s race based on other information in his or her loan application. Its analysis identified widespread overcharging of minority borrowers as a result of discriminatory interest rate markups at car dealerships.
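CFPB’s published proxy methodology for cases like this is a technique known as Bayesian Improved Surname Geocoding (BISG), which combines race probabilities derived from an applicant’s surname with the racial composition of the applicant’s neighborhood. The sketch below illustrates the basic Bayesian update only; the probability tables, geography IDs, and simplified combination rule are invented for this example and are not CFPB’s actual implementation.

```python
# Minimal, illustrative sketch of a BISG-style race proxy (hypothetical data).
# Real models draw on Census surname tables and block-group demographics and
# use a more careful Bayesian formulation than this simplified product rule.

# P(race | surname), in the spirit of the Census surname list (made-up numbers)
P_RACE_GIVEN_SURNAME = {
    "GARCIA": {"white": 0.05, "black": 0.01, "hispanic": 0.92, "asian_pi": 0.02},
    "SMITH":  {"white": 0.73, "black": 0.23, "hispanic": 0.02, "asian_pi": 0.02},
}

# Racial composition of the applicant's neighborhood (hypothetical geography IDs)
P_RACE_GIVEN_GEO = {
    "block_group_A": {"white": 0.60, "black": 0.30, "hispanic": 0.05, "asian_pi": 0.05},
    "block_group_B": {"white": 0.10, "black": 0.05, "hispanic": 0.80, "asian_pi": 0.05},
}

def bisg_posterior(surname: str, geo_id: str) -> dict:
    """Combine surname and geography evidence with a Bayes-style update.

    Treats the surname distribution as the prior, multiplies in the
    neighborhood composition as the evidence, and renormalizes.
    """
    prior = P_RACE_GIVEN_SURNAME[surname.upper()]
    geo = P_RACE_GIVEN_GEO[geo_id]
    unnormalized = {race: prior[race] * geo[race] for race in prior}
    total = sum(unnormalized.values())
    return {race: p / total for race, p in unnormalized.items()}

print(bisg_posterior("Garcia", "block_group_B"))
# hispanic dominates the posterior, since both evidence sources point that way
```

Applied across a whole loan portfolio, posteriors like these let an investigator compare the rates paid by probable minority borrowers against similarly situated probable white borrowers without ever having collected race directly.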
Ally Financial buys retail installment contracts from more than 12,000 automobile dealers in the United States, essentially allowing dealers to act as middlemen for auto loans. If a consumer decides to finance his or her new car through a dealership rather than a bank, the dealership submits the consumer’s application to a company like Ally. If the application is approved, the consumer repays the loan, with interest, to the company that purchased the contract. The interest rate, of course, matters a great deal. To determine what it will be, Ally calculates a “buy rate,” the minimum interest rate at which it is willing to purchase a retail installment contract, as determined by actuarial models. Ally notifies dealerships of this buy rate, but then also gives them substantial leeway to increase the interest rate to make the contract more profitable. Though consumers are free to negotiate these rates and shop around for the best deal, CFPB’s analysis determined that discretionary dealership pricing had a disparate impact on borrowers who were African American, Hispanic, Asian, or Pacific Islander. On average, these borrowers paid between $200 and $300 more over the life of their loans than similarly situated white borrowers.
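To make the stakes concrete, here is a back-of-the-envelope calculation with hypothetical figures; only the standard fixed-rate amortization formula is real. It shows how even a modest discretionary markup over the buy rate adds up to the few hundred dollars of overcharge CFPB found.

```python
# Hypothetical illustration of what a dealer markup costs a borrower.
# The loan terms and rates below are invented for the example.

def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Standard fixed-rate amortization formula."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

principal = 20_000.00   # amount financed
months = 60             # five-year loan
buy_rate = 0.045        # the creditor's hypothetical minimum ("buy") rate
markup = 0.010          # dealer adds 100 basis points at its discretion

base = monthly_payment(principal, buy_rate, months)
marked_up = monthly_payment(principal, buy_rate + markup, months)

extra_total = (marked_up - base) * months
print(f"Extra cost of the markup over the loan: ${extra_total:,.2f}")
# With these numbers, roughly $550 over five years -- so a markup of even
# half a percentage point lands squarely in the $200-$300 range CFPB reported.
```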
Since creditors cannot inquire about race or ethnicity, Ally’s algorithmically generated buy rates are objective assessments. But when dealerships increase these rates, their judgments are entirely subjective, relying on humans to make decisions that could very well be influenced by racial bias. If dealerships instead followed creditors’ example and automated this decision-making process, there would be no opportunity for human bias to enter the equation. While dealerships could still increase interest rates to capture more profits, they could do so based on algorithmic analysis of predefined criteria about a consumer’s willingness to pay, thereby preventing themselves from offering similar consumers different rates based on race.
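One way such an automated markup could work is a simple rule-based pricing function that sees only predefined, race-blind attributes of the application. The sketch below is purely illustrative; the criteria, thresholds, and cap are invented, not drawn from any actual dealer agreement.

```python
# Illustrative sketch of an automated, race-blind dealer markup (hypothetical rules).
# The point: the markup depends only on predefined contract attributes, so two
# applicants with identical attributes always receive an identical rate.

from dataclasses import dataclass

@dataclass
class LoanApplication:
    amount_financed: float
    term_months: int
    credit_score: int
    down_payment_pct: float

def dealer_markup(app: LoanApplication, max_markup: float = 0.0125) -> float:
    """Return a markup (as an annual rate) computed from predefined criteria only."""
    markup = 0.0
    if app.term_months > 60:          # longer terms carry more risk
        markup += 0.0025
    if app.credit_score < 660:        # weaker or thinner credit history
        markup += 0.0050
    if app.down_payment_pct < 0.10:   # low equity in the vehicle
        markup += 0.0025
    return min(markup, max_markup)    # cap the markup, as a dealer agreement might

app = LoanApplication(amount_financed=20_000, term_months=72,
                      credit_score=640, down_payment_pct=0.05)
print(f"Markup: {dealer_markup(app):.4%}")  # identical inputs -> identical markup
```

Because every input here is a documented financial attribute, an auditor can also replay the function over historical applications to verify that no disparate treatment crept in.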
Policymakers should guard against the possibility that automated decision-making could perpetuate bias, but with ever-increasing opportunities to collect and analyze data, the public and private sectors should also follow CFPB’s lead and identify new opportunities where data analytics can help expose and reduce human bias. For example, employers could rely on algorithms to select job applicants for interviews based on their objective qualifications, rather than on human screening that can be swayed by factors such as whether a job applicant has an African American–sounding name. And taxi services could rely on algorithms to match drivers with riders rather than leaving it up to drivers who might be inclined to discriminate against passengers based on their race. If policymakers let fear of computerized decision-making impede wider deployment of fair algorithms, then society will lose a valuable opportunity to build a more just world.