
Hold yourself accountable – or be ready for the FTC to do it for you

The US Federal Trade Commission just issued guidance on “Aiming for truth, fairness, and equity in your company’s use of AI”. The title of this post comes from the kicker at the end of their announcement:

Hold yourself accountable – or be ready for the FTC to do it for you. As we’ve noted, it’s important to hold yourself accountable for your algorithm’s performance. Our recommendations for transparency and independence can help you do just that. But keep in mind that if you don’t hold yourself accountable, the FTC may do it for you. For example, if your algorithm results in credit discrimination against a protected class, you could find yourself facing a complaint alleging violations of the FTC Act and ECOA. Whether caused by a biased algorithm or by human misconduct of the more prosaic variety, the FTC takes allegations of credit discrimination very seriously, as its recent action against Bronx Honda demonstrates.

As your company launches into the new world of artificial intelligence, keep your practices grounded in established FTC consumer protection principles.

Heavy stuff!  The Bronx Honda case is an interesting choice for the FTC to close with, because it bookends one of their opening points:

With its mysterious jargon (think: “machine learning,” “neural networks,” and “deep learning”) and enormous data-crunching power, AI can seem almost magical.

But the whiz-bang magic of AI ultimately runs into old-fashioned laws and norms, and I suspect the FTC chose Bronx Honda to underscore just this point: they’ll treat algorithmic discrimination just as they treat the low-tech discrimination we’ve seen in car dealerships for a long time:

According to the FTC’s complaint, the defendants told sales people to charge higher financing markups and fees to African-American and Hispanic customers. The defendants told employees that these groups should be targeted due to their limited education, and not to attempt the same practices with non-Hispanic white consumers. According to the complaint, African-American and Hispanic customers paid more for financing than similarly situated non-Hispanic white consumers.

The part worth highlighting – the defendants telling salespeople whom to target – is what makes this a suitable case for a moral accounting engagement, in which we focus on whether the Bronx Honda bosses are holding their employees accountable in a moral way, and how they could do better. It’s a lot easier to improve an accountability system (stop telling your employees to discriminate) than to change the urge to discriminate. And remember that moral accounting isn’t about judging the bosses, just about advising them on their system, so we can apply moral accountability principles without considering the bosses’ intent; all we care about is impact. In this case, society recognizes that the salespeople have an obligation not to discriminate, so encouraging them to do so is a violation of Effectiveness (Bronx Honda’s system was not effective in reducing discrimination) and of the Social Recognition Principle, which puts the power to recognize obligations in the hands of society, not a car dealer.

Now let’s consider the accountability practices of Facebook. In a 2019 charge of discrimination, the US Department of Housing and Urban Development (HUD) summarized how (in its view) Facebook was violating the Fair Housing Act. To keep our eyes on the MAP, ask yourself: how should Mark Zuckerberg hold his employees accountable for their moral performance, specifically their obligation not to discriminate – or encourage discrimination – through their algorithmic practices?

First, the charge lays out the vast amounts of data that go into Facebook’s (“Respondent’s”) algorithms:

Respondent collects millions of data points about its users, draws inferences about each user based on this data, and then charges advertisers for the ability to microtarget ads to users based on Respondent’s inferences about them. These ads are then shown to users across the web and in mobile applications. Respondent promotes and distinguishes its advertising platform by proclaiming that “most online advertising tools have limited targeting options . . . like location, age, gender, interests and potentially a few others. . . . But Facebook is different. People on Facebook share their true identities, interests, life events and more.” As Respondent explains, its advertising platform enables advertisers to “[r]each people based on . . . zipcode . . . age and gender . . . specific languages . . . the interests they’ve shared, their activities, the Pages they’ve like[d] . . . [their] purchase behaviors or intents, device usage and more.” Thus, Respondent “use[s] location-related information – such as your current location, where you live, the places you like to go, and the businesses and people you’re near – to provide, personalize and improve our Products, including ads, for you and others.”

Then, it details how Facebook lets advertisers target their ads with tremendous specificity:

During the ad targeting phase, Respondent provides an advertiser with tools to define which users, or which types of users, the advertiser would like to see an ad. Respondent has provided a toggle button that enables advertisers to exclude men or women from seeing an ad, a search-box to exclude people who do not speak a specific language from seeing an ad, and a map tool to exclude people who live in a specified area from seeing an ad by drawing a red line around that area. Respondent also provides drop-down menus and search boxes to exclude or include (i.e., limit the audience of an ad exclusively to) people who share specified attributes. Respondent has offered advertisers hundreds of thousands of attributes from which to choose, for example to exclude “women in the workforce,” “moms of grade school kids,” “foreigners,” “Puerto Rico Islanders,” or people interested in “parenting,” “accessibility,” “service animal,” “Hijab Fashion,” or “Hispanic Culture.” Respondent also has offered advertisers the ability to limit the audience of an ad by selecting to include only those classified as, for example, “Christian” or “Childfree.”
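To make the mechanics concrete, here is a minimal sketch of attribute-based include/exclude targeting. This is not Facebook’s actual API – the names and data structures are hypothetical – but it shows why letting advertisers exclude attributes that closely track protected classes removes those users from the audience altogether.

# Hypothetical sketch (not Facebook's actual API): attribute-based
# include/exclude targeting determines who is even eligible to see an ad.
from dataclasses import dataclass, field

@dataclass
class User:
    user_id: str
    attributes: set = field(default_factory=set)  # inferred interests, demographics, etc.

@dataclass
class AudienceSpec:
    include: set = field(default_factory=set)  # user must match at least one (if any listed)
    exclude: set = field(default_factory=set)  # user must match none of these

def eligible_audience(users, spec):
    """Return the users who match the include list and avoid the exclude list."""
    audience = []
    for u in users:
        if spec.exclude & u.attributes:
            continue  # has an excluded attribute -> never sees the ad
        if spec.include and not (spec.include & u.attributes):
            continue  # matches no included attribute
        audience.append(u)
    return audience

# Excluding an attribute that closely tracks a protected class ("Hispanic Culture")
# removes those users from the ad's potential audience entirely.
users = [User("a", {"Hispanic Culture", "parenting"}), User("b", {"parenting"})]
print([u.user_id for u in eligible_audience(users, AudienceSpec(exclude={"Hispanic Culture"}))])
# -> ['b']

Note that in this sketch the exclusion never has to name a protected class directly; excluding a close proxy has the same effect.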

After some details about exactly how Facebook creates Custom Audiences and Lookalike Audiences, and how it delivers and prices ads, HUD lowers the boom by referring to “protected classes” – characteristics explicitly protected against discrimination, such as race, color, religion, national origin, sex, familial status, disability, age, veteran status, genetic information, and citizenship:

Respondent considers sex and close proxies for the other protected classes. Such proxies can include which pages a user visits, which apps a user has, where a user goes during the day, and the purchases a user makes on and offline. Respondent alone, not the advertiser, determines which users will constitute the “actual audience” for each ad. … To decide how an ad will be priced for each user, Respondent considers sex and close proxies for the other protected classes. Furthermore, Respondent uses the pricing differentials it sets to determine which users will see which ads rather than allowing advertisers to make that decision. As Respondent explains, “If there are more and cheaper opportunities among men than women, then we’d automatically spend more of [an advertiser’s] overall budget on the men.”

Respondent’s ad delivery system prevents advertisers who want to reach a broad audience of users from doing so. Even if an advertiser tries to target an audience that broadly spans protected class groups, Respondent’s ad delivery system will not show the ad to a diverse audience if the system considers users with particular characteristics most likely to engage with the ad. If the advertiser tries to avoid this problem by specifically targeting an unrepresented group, the ad delivery system will still not deliver the ad to those users, and it may not deliver the ad at all. This is so because Respondent structured its ad delivery system such that it generally will not deliver an ad to users whom the system determines are unlikely to engage with the ad, even if the advertiser explicitly wants to reach those users regardless.
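Here is a stylized sketch of the delivery logic described above – hypothetical names and toy numbers, not Facebook’s actual system – showing how a delivery step that optimizes predicted engagement and cost can skew who actually sees an ad, even when the advertiser targets everyone.

# Stylized sketch (hypothetical names, toy numbers) of the delivery logic described
# above: a system that optimizes predicted engagement per dollar can skew who
# actually sees an ad, even when the advertiser targets everyone.
def deliver(users, predicted_engagement, cost_per_impression, budget, min_engagement=0.10):
    """Greedily spend the budget on the cheapest, most-engaged users."""
    shown = []
    # Users below the engagement floor are never shown, regardless of targeting.
    ranked = sorted(
        (u for u in users if predicted_engagement[u] >= min_engagement),
        key=lambda u: predicted_engagement[u] / cost_per_impression[u],
        reverse=True,
    )
    for u in ranked:
        if cost_per_impression[u] <= budget:
            shown.append(u)
            budget -= cost_per_impression[u]
    return shown

# The advertiser targets all four users, but if the model scores group-B users as
# less engaged (or more expensive), they rarely or never see the ad.
users = ["A1", "A2", "B1", "B2"]
engagement = {"A1": 0.30, "A2": 0.25, "B1": 0.08, "B2": 0.12}
cost = {"A1": 1.0, "A2": 1.0, "B1": 1.0, "B2": 2.0}
print(deliver(users, engagement, cost, budget=2.0))  # -> ['A1', 'A2']

The skew in this toy example comes from the optimization objective itself, which is why broad targeting by the advertiser cannot undo it.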

Let’s assume that the allegations are true. How can Facebook’s leadership live up to their obligation not to discriminate against protected classes? The FTC’s new guidance suggests focusing on these tasks:

  • Start with the right data set and account for gaps;
  • Watch for discriminatory outcomes (see the monitoring sketch after this list);
  • Embrace transparency and independence;
  • Don’t exaggerate what the algorithm can do;
  • Do more good than harm.
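On the second item, one simple way to watch for discriminatory outcomes – a hypothetical starting point, not an FTC-prescribed method – is to compare ad-delivery rates across groups and flag large gaps, here using the common four-fifths heuristic as the threshold.

# One simple way (hypothetical, not an FTC-prescribed method) to watch for
# discriminatory outcomes: compare delivery rates across groups and flag large
# gaps, using the common four-fifths heuristic as the threshold.
from collections import Counter

def delivery_rates(eligible_groups, shown_groups):
    """Each argument is a list of group labels, one per user; returns a rate per group."""
    n_eligible, n_shown = Counter(eligible_groups), Counter(shown_groups)
    return {g: n_shown[g] / n_eligible[g] for g in n_eligible}

def flag_disparities(rates, threshold=0.8):
    """Flag groups whose delivery rate falls below threshold * the best group's rate."""
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

eligible = ["men"] * 100 + ["women"] * 100   # users the advertiser targeted
shown = ["men"] * 60 + ["women"] * 25        # users the delivery system actually reached
rates = delivery_rates(eligible, shown)      # {'men': 0.6, 'women': 0.25}
print(flag_disparities(rates))               # -> {'women': 0.25}

Of course, a check like this only works if the company collects (or responsibly infers) the group labels in the first place, which is part of what “start with the right data set and account for gaps” demands.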

This is a nice start, but it still seems like a tough setting in which to build a good accountability system. It would, though, make a great case study for anyone with the data-driven marketing chops to understand Facebook’s processes and challenges.

 

 
