The sharing economy is not as open as you might think


Uber, Lyft and Airbnb are trying to combat discrimination within their communities. Are they doing enough to tackle the challenge?

Online forums have long provided an avenue for anonymous venting. One particular cluster cropping up recently is intended for Uber drivers to gather and share their experiences of working for the ridesharing company, which uses smartphones to connect passengers to drivers. Drivers discuss everything from getting started in the business to declining wages, and some even brag of sexual conquests on the job.

Then there are the posts about Uber’s reliance on user-generated ratings systems, a mainstay of the peer-to-peer economy, to keep tabs on its drivers and passengers. Reading through them, a troubling thread emerges: Uber’s rating system leaves it open to abuse.

“Of course there’s discrimination,” an anonymous Nashville-based member of one of these forums told the Guardian. “An opaque ratings system leads to low ratings and deactivations for all kinds of things such as age, sex, and safety.

“College students down-rate older drivers, male riders down-rate female drivers who don’t flirt along, drivers with disabilities get deactivated”.

The allegations are, perhaps, particularly troubling given the lofty ambitions of the pioneers of the growing economic sector.

Architects of the so-called sharing economy aim at nothing less than the total transformation of the top-down, regimented, profits-first capitalist economy. “We’re rejecting the idea that stuff is what makes us happier and that ownership is better than access,” Natalie Foster, executive director and co-founder of the advocacy group Peers, said at a conference earlier this year. “We’re building more than businesses and nonprofits, we’re building a movement.”

Though largely celebrated as a liberating force during an extended season of economic hardship, the sector is increasingly running afoul of those who allege it may be hazardous to workers’ rights and has been too slow to act on allegations of race-based discrimination.

“African Americans have a lower acceptance rate than white folks in Airbnb,” Nikki Silvestri, executive director at Green For All, told her audience at SXSW Eco in October. “On Uber too, people were canceling rides if they got drivers whose appearance they didn’t like.”

While Silvestri sees this largely as an extension of the culture at large, a number of economists and legal scholars who have been tracking the emerging sector say the inequalities it perpetuates are by design. And, just as importantly, there are strategies these companies could employ to correct them if they wanted to, according to Janelle Orsi, co-founder and executive director of the Sustainable Economies Law Center.

Prominently displaying anti-discrimination messages and using algorithms to identify false reports could greatly reduce discrimination, according to experts. But this requires sufficient motivation, a quality that may be slipping as the sector’s financial success grows.

“Combatting discrimination in creating a more equitable society is fundamentally at odds with the business structure of these big companies,” Orsi said. “That’s because the companies get wealthier, and their shareholders get wealthier, if they have higher-income users who are able to pay higher rates, and therefore [pass] higher fees to the company”.

Who’s responsible?

As two of the most successful for-profit players in the sector, Airbnb and Uber have been watched – and criticized – the most heavily.

While Uber officials argue they have brought ridesharing to communities long neglected or poorly served by taxi service, the peer-to-peer network was recently sued by blind riders in San Francisco and, according to the San Francisco Chronicle, even charged the denied passengers cancellation fees afterward. Others have claimed the reputation system the company relies upon, in which drivers and riders rate each other, is being manipulated to “deactivate” drivers of color.

There have been protests over wages in Seattle and New York, and drivers in Los Angeles – prevented from unionizing since they aren’t technically employees – are working with the Teamsters to get their grievances addressed by the company.

Even the benefits of Uber as outlined in Jenna Wortham’s recent essay, “Ubering While Black,” come with a price tag – higher per-mile charges than the taxi industry and “surge” pricing that automatically increases the rate during periods of high demand, among them.

However, taking legal action against companies like Uber for perceived discrimination by users or unfair pay grades set by management poses challenges. Such peer-to-peer companies often argue against regulation, for instance, by insisting they aren’t actually employing anyone, but merely serve as a communications platform connecting users.

In that, they have an ally in the conservative think tank American Enterprise Institute, whose president argued against regulation of such efforts in a recent op-ed in The New York Times.

Others aren’t so sure.

“The more difficult that it is to identify an employer, who should be accountable for the well-being of employees and equal hiring practices, those things become increasingly difficult,” said Valerie Wilson, director of the program on race, ethnicity, and the economy at the Economic Policy Institute.

Admitting a problem

Perhaps the most-often cited research on discrimination in the sharing economy comes from Harvard Business School, where a pair of researchers published a working paper in January stating that non-black Airbnb members in New York City are able to charge 12% more than their black counterparts, while holding factors like location, rental characteristics, and quality constant.

“Moreover,” the pair write, “black hosts receive a larger price penalty for having a poor location score relative to non-black hosts.”

Part of the problem, they assert, is the platform’s reliance on online profiles that allow users to act on the potential host’s apparent ethnicity. “While these features serve the laudable goals of trust-building and accountability,” the authors write, “they can also bring unintended consequences: personal profiles may facilitate discrimination.”

While Airbnb’s terms of service forbid posting any material that is “false, misleading or deceptive” or that “promotes discrimination, bigotry, racism, hatred, harassment or harm,” there is, as with Uber and others, much more that could be done to address user bias.

Talking about it, for one.

Companies must call attention to prejudiced behaviors of their users, Orsi said. “Even show some statistics and say something like, ‘A black person is three times more likely to have a ride canceled,’ or something like that.”

They could also post anti-discrimination policies much more prominently on their websites and offer anti-racism training. “People, even if they don’t realize they’re acting on their biases, they might become more aware of it,” Orsi said.

Nick Papas, Airbnb’s spokesperson, responded to a request for a phone interview by email, saying the company is aware of the challenges revealed in the Harvard paper but disputes some of the “subjective determinations” contained within it. He also suggested the data it relied upon is now “outdated”.

“We receive very few inquiries from our hosts and guests with concerns around discrimination,” Papas wrote. “When we do, they are handled very seriously and investigated by our team, with corresponding action such as suspended accounts or removal from the platform.”

While Lyft doesn’t track its users’ race or gender out of a desire to be non-intrusive, according to spokeswoman Katie Dally, they do closely monitor their drivers to make sure they aren’t denying large numbers of requests. It’s a behavior that may lead to investigation of the driver, she said. To avoid potential discrimination, they also don’t communicate destinations of potential passengers until that passenger is in the vehicle.

“There’s no way to make a decision on who to pick up based on a neighborhood they’re going to, which might have a context around it,” Dally said. “One thing we very proactively message to our drivers are [Americans with Disabilities Act] policies and the fact that drivers are required to adhere to requirements that allow for service dogs. I think it’s been quick to be adopted and understood because Lyft was founded on this mentality of being your friend with a car.”

Peers, however, which bills itself as “a member-driven organization that supports the sharing economy movement” and boasts founders and board members from across the sector, told the Guardian that while discrimination was an “interesting” topic, they “don’t feel we can add to this conversation at this time”.

And a public relations representative for the peer-to-peer lending company Lending Club responded to an interview request, saying the company was “going through [a] ‘quiet period’ which doesn’t allow for interviews at this time”.

Uber and TaskRabbit failed to respond to questions about discrimination policies by our deadline.

Algorithms to the rescue

The understanding that one is being watched, which comes with participation in the social media embedded in so many of these companies, can go a long way toward reducing overt expressions of racism, according to research by Lior Strahilevitz, a professor at the University of Chicago Law School.

He finds support for this in the “How’s My Driving?” programs that have emblazoned phone numbers on the backs of trucks for travelers to call with complaints about reckless driving. Some carriers have credited the programs for reducing fleet accidents by as much as half by alerting them to problem drivers.

The biggest contributor to the leading cause of death among young adults – vehicular accidents – is not drunk driving, aggressive driving, or spotty policing, he says, but the anonymity of the motorists themselves.

“People are more likely to drive aggressively when they can avoid sanctions, but drive courteously when they believe they will be held accountable for misconduct,” Strahilevitz writes in a paper published in the New York University Law Review back in 2006. “Being watched acts as a deterrent to bad acts.”

But while user visibility provided by social media may dampen overt expressions of racism, it doesn’t address that shielded moment when one platform user rates another, potentially passing along “false information” bred by prejudice.

Tackling that, he says, will require the development of algorithms similar to those already used by eBay that can identify suspiciously high levels of negative feedback toward members of a particular race or gender. That data could then be used to the advantage of members of the targeted group.
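In broad strokes, such a check need not be elaborate. The sketch below is purely illustrative – the function name, thresholds, and data shape are assumptions for this example, not a description of eBay’s or any platform’s actual system. It compares each rater’s average score for one demographic group against their average for everyone else and flags large gaps:

```python
from statistics import mean

def flag_biased_raters(ratings, min_per_group=5, gap_threshold=1.5):
    """Flag raters whose average score for one group trails their
    average for all other groups by more than gap_threshold stars.

    ratings: list of (rater_id, ratee_group, score) tuples.
    Returns a dict mapping flagged rater_id -> (group, gap).
    """
    # Bucket each rater's scores by the group of the person rated.
    by_rater = {}
    for rater, group, score in ratings:
        by_rater.setdefault(rater, {}).setdefault(group, []).append(score)

    flagged = {}
    for rater, groups in by_rater.items():
        for group, scores in groups.items():
            # Scores this rater gave to everyone outside this group.
            others = [s for g, ss in groups.items() if g != group for s in ss]
            # Only compare when both samples are large enough to mean anything.
            if len(scores) >= min_per_group and len(others) >= min_per_group:
                gap = mean(others) - mean(scores)
                if gap > gap_threshold:
                    flagged[rater] = (group, round(gap, 2))
    return flagged
```

A production system would go further – testing whether a gap is statistically significant and controlling for confounders such as location or trip characteristics – before taking any action on a flagged account.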

“In theory, if Kickstarter found disquieting levels of gender discrimination by male investors, they could decide to make funding proposals from female project leads more prominent in the search results that male investors saw,” Strahilevitz wrote the Guardian by email.

“I don’t think they do that, though. Rather, what seems to be happening online is that web sites are catering to the preferences of their users, rather than trying to push against them.

“Giving your customers what they want is often better for a company’s bottom line than trying to change what your customers want, especially in a competitive marketplace. This dynamic suggests there could be a role for legal intervention, perhaps by giving companies subsidies to reduce discrimination through the use of discrimination-reducing algorithms.”

Juliet Schor, a professor of sociology at Boston College who has been studying these issues since 2011, said the companies could make use of such algorithms in even more creative ways.

“They could do things that are covert as a way of avoiding the discrimination to begin with,” she said. “Are there consumers who they can see systematically turning down drivers of color or hosts of color? They can then adjust what those people get in terms of offerings.”

Like offering them only drivers of color?

“Yeah,” Schor said. “Or the white drivers they offer them are farther away.

“That would be a controversial thing. The other side of it is just passively allowing people to discriminate in ways that are illegal in other kinds of contexts.”

Security and regulation

When asked about tracking race and gender or the use of algorithms, Airbnb’s Papas responded by promoting his company’s “Trust and Safety” team. Including former government investigators, criminal prosecutors, and law-enforcement members, the team, he said, works “around-the-clock in every time zone to protect our community and monitor suspicious behavior, and they use a variety of technological tools to help make our community more secure and prevent fraud”.

Still, among the legal scholars interviewed there is a growing interest in integrating stronger regulation for these new-model platforms – in spite of company promises to do good.

“If the primary intention of Airbnb, Uber, Lyft and other new companies is really to help people,” said Tim Iglesias, law professor at the University of San Francisco School of Law, “let them incorporate as nonprofits and state their public purposes clearly so that they can be held accountable to them.”

And, indeed, legality may be a concern as these peer-to-peer platforms grow to resemble more traditional business models. Uber, for instance, sets the rates, dictates how people drive, and even what cars they drive, Orsi said.

“It’s starting to look more and more that they are employing the drivers. If that is the case, there could be more grounds to bring a lawsuit.”


Originally published at Guardian Sustainable Business.