Originally published at The Atlantic

Facebook announced today that it has removed pages, events, and accounts involved in “coordinated inauthentic behavior” on its social-media platforms, including Facebook and Instagram. The posts and accounts in question appeared to have been created to sow discord in advance of a second “Unite the Right” rally in Washington, D.C., meant to memorialize last year’s deadly white-supremacist rally in Charlottesville. The material Facebook found and removed included false counter-protest events, job ads for protest coordinators, and content referencing diversity and the #AbolishICE movement, among other topics.

In Facebook’s analysis, the behavior is inauthentic and the actors bad not because the content is objectionable on its face, but because it does not represent earnest speech. Instead, the posts appear to have been created to provoke a sense of injustice or distress in those who might encounter and oppose them, in order to precipitate discord. This type of propaganda has been common on Facebook and Instagram in the past, including during the run-up to the 2016 U.S. election.

The company didn’t explicitly connect these posts to efforts to interfere with the U.S. midterm elections this year, nor could it confirm the identities of the parties responsible. But it did draw parallels between the new material and those earlier disinformation campaigns. It also found possible connections between the banned accounts and the Russia-based Internet Research Agency (IRA), which was responsible for creating material that reached millions of Americans thanks to likes, shares, page follows, and ads.

Facebook is still reeling from the blowback of its data-extraction and election-interference scandals dating back to well before 2016. On top of that, the company’s stock has shed over 20 percent of its value since late last week, after it revealed that its astronomical growth would slow. Facebook’s announcement is clearly meant, in part, to give its users and investors confidence that the company has learned from its mistakes and is acting more proactively to protect citizens against misinformation on its platform.

The obvious question: Will it be enough?

continue reading at The Atlantic

published July 31, 2018