Facebook plans to treat the 2020 U.S. Census “like an election” — with “people, policies, and technology in place” to protect against interference, the company said Sunday.
“We’re building a team dedicated to these census efforts and introducing a new policy in the fall that protects against misinformation related to the census,” wrote Chief Operating Officer Sheryl Sandberg in a blog post. “We’ll enforce it using artificial intelligence. We’ll also partner with nonpartisan groups to help promote proactive participation in the census.”
The social-media giant said it will now look to remove false census information and election data — such as inaccurate details about polling places and fake descriptions of how the census process works — which would have been allowed to remain up in years past.
“Since the first audit update in December, I created a civil-rights task force made up of senior leaders across key areas of the company,” Sandberg said. “The task force will onboard civil-rights expertise to ensure it is effective in addressing areas like content policy, fairness in artificial intelligence, privacy, and elections. For example, we will work with voting-rights experts to make sure key members of our election team are trained on trends in voter intimidation and suppression so they can remove this content from Facebook more effectively.”
Just last week, Facebook CEO and founder Mark Zuckerberg told attendees at an event in Colorado that he was fine with letting fake information circulate on the website.
“We don’t think it should be against the rules to say something that happens to be false to your friends,” he said. “People get things wrong.”
The company has been conducting an outside “audit” for the past year — starting in May 2018 — to change how it handles election information, census data and civil rights issues as a whole.
The 2020 census will determine how many congressional seats and Electoral College votes each state gets for the 2024 presidential election — and for the decade that follows. President Trump has suggested delaying the count following his failed attempt to include a question about citizenship.
“Just as civil rights groups helped us better prepare for the 2018 elections, their guidance has been key as we prepare for the 2020 Census and upcoming elections around the world,” Sandberg said. “We’re also introducing civil rights training for all senior leaders on the task force and key employees who work in the early stages of developing relevant products and policies. The training is designed to increase awareness of civil rights issues and build civil rights considerations into decisions, products and policies at the company. We know these are the first steps to developing long-term accountability. We plan on making further changes to build a culture that explicitly protects and promotes civil rights on Facebook.”
Sandberg’s blog post was accompanied Sunday by a 26-page update on the civil-rights audit, which is being carried out by outside consultant Laura Murphy, who has worked for years as a civil-rights advocate.
“During the first six months of the audit, Laura Murphy conducted interviews with over 90 civil rights organizations to understand their concerns and identify key issues to analyze,” the update said, noting how the first progress report came in December. “In an effort to ensure transparency in the audit’s progress, this second audit report describes developments since December 2018. A third and final report will be issued in the first half of 2020.”
The update points to significant changes Facebook has made in recent months to suppress white-nationalist posts and discriminatory ads, while also recommending further steps.
“Specifically, the investigation found that context matters. For example, sometimes users post photos that would otherwise violate Facebook’s hate speech policies, but accompany those photos with captions to indicate they are not embracing the hateful content but instead are calling attention to racism or hate,” the update says. “To accurately apply Facebook’s hate speech rules, content reviewers must look at the entire post, inclusive of comments or captions that accompany the content.”
Some other suggestions listed in the progress report include updating the Facebook “review tools” to “better highlight important context critical to determining whether a post is condemning hate speech and therefore should not be removed” and changing “the order and manner in which hate speech content reviewers analyze content for action.”
“Facebook’s current review process leads with a decision,” the report says. “First, content reviewers decide whether the content should be removed as a violation of Community Standards or be left up. Then, the reviewer follows prompts to answer questions about the basis for the decision. To improve overall accuracy and help reduce the incorrect removal of posts condemning hate speech, Facebook is testing a new review procedure that would reverse this order by starting with a series of questions about the content that then lead up to a decision on whether or not to remove it.”
Neil Potts, Facebook’s public policy director, told Politico that the company “expects that people will understand our rules … and they will try to alter their activity to subvert our rules.”
“It is incumbent on us to be aware of that and have our rules be nimble enough that we can iterate on [them] very quickly to get down that type of speech that is meant to disenfranchise people,” Potts said.