Corporate Accountability

The Risk Makers

Viral hate, election interference, and hacked accounts: inside the tech industry’s decades-long failure to reckon with risk.
Erik Carter

One spring day in 2014, Susan Benesch arrived at Facebook’s headquarters in Menlo Park and was ushered into a glass-walled conference room. She’d traveled from Washington, D.C., to meet with Facebook’s Compassion Research Team, a group that included employees, academics, and researchers whose job was to build tools to help users resolve conflicts directly, reducing Facebook’s need to intervene.


“I hope you understand, this is not how I meant for things to go, and I apologize for any harm done as a result of my neglect to consider how quickly the site would spread,” Mark Zuckerberg told a number of his fellow Harvard students in 2003 after he harvested their photos, without consent, to populate facemash.com, the Facebook precursor that invited students to rank classmates à la “hot-or-not.”
Identifying risk isn’t just a technical problem. Risk assessment is, in simple terms, the process by which an individual or an institution decides what level of risk is acceptable. It matters who’s in the room making that determination and taking action in response. It is political. It is high-stakes. And, according to sources with expertise in the field, it is deeply misunderstood.

It was a gray December afternoon in Arlington, Virginia, day two of the Society for Risk Analysis’ 2019 Annual Meeting, when we first met Paul Slovic. A balding man of 82 in cargo pants, running shoes, and a well-worn wool color-blocked sweater, he was jotting down notes about a presentation on risk communications: “Incomplete. Complex. Multidisciplinary.”

In the 1970s, Slovic was invited to present his work at conferences of nuclear engineers and energy industry executives. “Nuclear power, as with tech today, was a technology driven by engineering science, by great technical knowledge,” Slovic says. The engineers viewed themselves as the smartest guys in the room — not unlike today’s tech engineers and CEOs. They had little understanding or interest in the psychological perspectives on risk that Slovic was describing.
And both rushed their products to market, prioritizing technical and business goals over social and humanitarian concerns. Today’s tech leaders still tend to prioritize technical fixes — better algorithms, faster processors, improved features — over efforts to improve the structure of decision-making and paths to public engagement that improve outcomes.

The violence in Myanmar, and Facebook’s apparent role in fueling it, was a disaster of unprecedented scale for the company, which had reportedly been warned for years about escalating ethnic violence. The problems were routinely exacerbated, critics say, by insufficient planning, translation services, and content moderation in the country.

As of 2019, Facebook officially supported 111 languages, with content moderation teams working to identify needs in hundreds more. It’s “a heavy lift to translate into all those different languages,” Monika Bickert, Facebook’s vice president of global policy management, told Reuters in 2019.

Six months later, in November 2018, Zuckerberg also announced a significant shift in how Facebook planned to handle risk. “Moving from reactive to proactive handling of content at scale has only started to become possible recently because of advances in artificial intelligence — and because of the multi-billion dollar annual investments we can now fund,” he said. “For most of our history, the content review process has been very reactive.”

Nonetheless, in March 2019, Brenton Tarrant, a 28-year-old Australian, was able to activate Facebook Live and use it to broadcast his killing of more than 50 people over the course of 17 minutes. The first user to flag the video as a problem did so 12 minutes after the broadcast ended, and it took nearly an hour, after law enforcement contacted Facebook, for the company to remove it. By that time, the content had reached, and inspired, countless others.

What might a more integrated approach to risk look like? Gathering input from various departments and a diverse set of stakeholders is important, but not sufficient on its own. Individuals who are tasked with assessing risk also need the agency and authority to be part of the final decision-making, experts say.

Disparities like these are also racialized. A study released this summer, based on 2016 data, found that 10 major tech companies in Silicon Valley had no Black women on staff at all. Three large tech companies had no Black employees in any position, the study found. During the past four years, industry analysts have noted the slow pace of change. In 2019, 64.9% of Facebook’s technical teams were composed of white and Asian employees, 77% of whom were male.

More information, better algorithms, and enhanced technology do not, however, fundamentally reflect “a total shift of intentions.” Arguably, that approach, grounded in tech fixes, detrimentally doubles down on existing intentions. Data and information gathered after the fact, however necessary as predictive inputs, are not sufficient. “Just imagine if these companies had said, ‘We’re going to hold off on launching this new feature or capability. We need another year and a half,’” says Batya Friedman, a designer, technologist, and professor, and co-author of the book Value Sensitive Design: Shaping Technology with Moral Imagination. “These systems are being deployed very, very fast and at scale. These are really, really hard problems.”

There are reasons to doubt that tech leaders will slow down to adopt the kind of paradigm shift Noble describes on their own. Some 16 years after Facebook’s launch, calls are growing for government regulation of the tech industry and a renunciation of a business model that profits from the idea that content is “neutral” and platforms are objective, a model that, critics point out, cashes in on engagement and extremism.

Will Silicon Valley be more risk-aware in the future? Only those in power can say. While calls for a more activist public are evergreen, reliance on the demonstrably diminishing power of the people is naive. “The government should be passing laws to discipline profit-maximization behavior,” said Marianne Bertrand, an economics professor at the University of Chicago’s Booth School of Business. “But too many lawmakers have themselves become the employees of the shareholders — their electoral success tied to campaign contributions and other forms of deep-pocketed support.”

There is a growing sense of urgency to address massive concerns in the lead-up to the U.S. presidential election. Twitter and Facebook have both implemented various measures to address political disinformation on their platforms — flagging disinformation or blocking political ads immediately before the election, for example — but these solutions may not go far enough, and the stakes could not be higher.

About the reporters

Catherine Buni

Catherine Buni is a freelance writer and editor focusing on technology, health, and justice.

Soraya Chemaly

Soraya Chemaly is a writer and activist whose work focuses on the role of gender in culture, politics, religion, and media.
