Fixing digital redlining


The challenge of online discrimination


Update on fake news: according to BuzzFeed, during the critical final three months of the recent election campaign, fake news stories on Facebook were read and forwarded by more people than top stories from major news outlets. More specifically, against a baseline of fewer than 10 million engagements, fictitious and factually incorrect news sources generated over 1.5 million more shares and comments than stories from The New York Times, The Washington Post, NBC News, and other mainstream news providers. Eighty-five percent of the bogus posts were pro-Trump or anti-Clinton, including fake stories claiming that Hillary Clinton had sold arms to ISIS and that Pope Francis had endorsed Trump.

These and similar revelations are driving Facebook, Twitter, Google, and other social media platforms to begin stanching the flow of internet disinformation.

Digital redlining has slipped under the radar

As I’ve discussed in previous Insights posts, social media platforms track our every ‘like,’ download, and forward. In our age of Big Data, these platforms and search engines instantaneously assess our social class, gender, and ethnic and racial identities. And because each person’s online experience is personalized according to these social and psychological variables, we see ads calculated to fit our personal needs and preferences. The objective, of course, is to generate higher conversion rates and increase revenue. What could be wrong with that?

The problem is what a person with our demographic profile doesn’t see online. In other words, the algorithms that define who we are can produce hidden discrimination. It’s a thorny issue because even if we could see what was happening inside these targeted-advertising algorithms, that alone wouldn’t reveal an intent to discriminate, or even that there is a discriminatory impact.
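To make the mechanism concrete, here is a minimal sketch, in Python, of how a single exclusion filter in an ad-targeting pipeline can silently hide a housing listing from an entire group. The profile fields, the HousingAd structure, and the exclude_affinities setting are hypothetical illustrations of the technique, not any platform’s actual API.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    user_id: str
    inferred_affinity: str   # demographic label the platform has inferred
    age_band: str

@dataclass
class HousingAd:
    ad_id: str
    exclude_affinities: set = field(default_factory=set)  # hypothetical targeting knob

def eligible_audience(ad, users):
    """Return the users who will ever see this ad.

    Everyone filtered out here simply never sees the listing;
    nothing in their feed signals that an ad was withheld.
    """
    return [u for u in users if u.inferred_affinity not in ad.exclude_affinities]

users = [
    UserProfile("u1", "group_a", "25-34"),
    UserProfile("u2", "group_b", "25-34"),
]
ad = HousingAd("apartment-123", exclude_affinities={"group_b"})

for user in eligible_audience(ad, users):
    print(user.user_id, "sees", ad.ad_id)  # only u1 ever sees the listing
```

The excluded user gets no error message and no notice; the listing simply never appears, which is exactly why this form of redlining is so hard to detect from the outside.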

Online discrimination can also be upfront and blatant. This is easier to remedy, but not without challenges. For example, research by ProPublica journalists published in October 2016 showed that Facebook had run housing ads that let advertisers exclude users by race. That violates the Fair Housing Act, which stipulates that ads can’t indicate a preference based on “race, color, religion, sex, disability, or familial status.” Moreover, there is no need to prove that the advertiser (or the social media platform) intended to discriminate. In November, Facebook took overdue action to remove the objectionable filter from its ads.

Though hidden discriminatory components of algorithms are hard to identify, research on easy-to-track variables like gender and age confirms that online opportunities are clearly skewed toward men and younger people.
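One common yardstick for quantifying that kind of skew is the ‘four-fifths’ (80 percent) rule used in disparate-impact analysis: if one group sees an opportunity at less than 80 percent of the rate of the most-favored group, the outcome is flagged as potentially discriminatory. A minimal sketch with made-up impression counts:

```python
def impression_rate(shown, audience):
    """Fraction of a group that was shown the opportunity."""
    return shown / audience

# Hypothetical counts: how many users in each group saw a job ad.
rate_men   = impression_rate(shown=1800, audience=10_000)
rate_women = impression_rate(shown=1100, audience=10_000)

ratio = rate_women / rate_men
print(f"women/men exposure ratio: {ratio:.2f}")  # 0.61
if ratio < 0.8:  # the four-fifths guideline from disparate-impact analysis
    print("flag: exposure disparity exceeds the four-fifths threshold")
```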

Online housing discrimination and the law

Building on that Facebook housing-ads case, there are two general standards that help us deal with discriminatory housing ads on the internet.

  • The first, part of the Communications Decency Act, states that websites aren’t responsible for user-submitted content. This is why someone posting a roommate ad on Craigslist can stipulate “whites only.”
  • The second standard somewhat limits the impact of the first by disallowing websites from providing users with tools that discriminate.

Case in point: in 2008, Roommates.com was found to have provided users a discriminatory tool with its feature asking whether they had a racial or age preference for a roommate. The court ruled to prohibit this feature. However, Roommates.com quickly introduced an easy workaround by adding a free-text box that allowed users to enter additional comments, including racial preferences, for which the site bore no liability.

Outcome means testing

By far the most reliable method for determining whether a specific algorithm has a discriminatory outcome is means testing. This requires creating multiple fake profiles to test for discrimination against different demographic groups. While means testing is easy to formulate, it violates websites’ terms of service and, arguably, the Computer Fraud and Abuse Act. Researchers are challenging this in the courts. In the meantime, the best approach is for companies to implement internal testing of their algorithms to ensure they don’t inadvertently discriminate; a sketch of such a test follows.
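Here is a minimal sketch, assuming a black-box scoring algorithm, of what such paired-profile testing might look like: create profiles that are identical except for a single protected attribute and compare the outcomes. The score_applicant function stands in for the platform’s opaque algorithm and is deliberately biased so the audit has something to detect; every name and field here is illustrative.

```python
from copy import deepcopy

def score_applicant(profile):
    """Stand-in for the platform's opaque scoring algorithm.

    Deliberately biased here so the audit has something to detect.
    """
    score = 0.5 + profile["income"] / 200_000
    if profile["inferred_race"] == "group_b":
        score -= 0.15  # hidden penalty the audit should surface
    return round(score, 3)

base_profile = {"income": 52_000, "credit_band": "B", "zip": "60601"}

results = {}
for group in ("group_a", "group_b"):  # paired profiles differ only in this field
    profile = deepcopy(base_profile)
    profile["inferred_race"] = group
    results[group] = score_applicant(profile)

print(results)
gap = results["group_a"] - results["group_b"]
print(f"outcome gap for otherwise-identical profiles: {gap:.3f}")
```

Because the two profiles are identical in every respect except the protected attribute, any systematic gap in outcomes points at the attribute itself, which is the logic behind audit testing whether it is run internally or by outside researchers.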

Many other factors come into play as well, e.g., towns giving affordable-housing preference to people already living there. Finally, at the micro level, it’s difficult to prove, when trying to rent a home or interviewing for a job, that race, ethnicity, or age was the determining factor in a rejection.