Facebook Inc. typically counts on its 1.5 billion users to report offensive content, but last week, the social network went looking for it.
On Thursday, Facebook removed a profile page used by one of two people suspected of killing 14 people the previous day in San Bernardino, Calif. A spokesman said the page violated Facebook’s community standards that, among other things, bar posts, photos or videos that support terrorism or glorify violence. The suspect, Tashfeen Malik, had published a post around the time of the shooting, but Facebook declined to disclose its contents.
Facebook declined to say how it found the profile and determined its authenticity.
The move underscores the growing pressure on sites such as Facebook, Alphabet Inc.’s YouTube and Twitter Inc. to monitor, and sometimes remove, violent content and propaganda from terror groups. It is unclear how closely each company works with governments, how frequently they remove content and how that content is identified.
“When it comes to terrorist content, it’s certainly a tricky position for companies, and one that I don’t envy,” said Jillian York, the Electronic Frontier Foundation’s director of international freedom of expression, in an email. “Still, I worry that giving more power to companies—which are undemocratic by nature—to regulate speech is dangerous.”
All three companies employ technology to scan for images related to child sexual exploitation. Hany Farid, chair of the computer-science division at Dartmouth College, who helped develop the system, said he expected it to be expanded to other types of questionable content.
But that is a challenge for several reasons. The child-exploitation scans employ a database of known images, created by the National Center for Missing and Exploited Children. There is no similar database for terror-related images.
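The mechanics of such scans can be sketched briefly: each known image in the database is reduced to a digital fingerprint, and every upload is fingerprinted and checked against that list. The Python sketch below is a deliberately simplified, hypothetical illustration of that matching step. It uses an exact cryptographic hash, whereas production systems of this kind rely on perceptual hashes that survive resizing and re-encoding, and the stored hash value is a placeholder.

```python
import hashlib

# Hypothetical placeholder: fingerprints derived from a curated database of
# known images. A real deployment would use perceptual hashes, not SHA-256.
KNOWN_IMAGE_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(image_bytes: bytes) -> str:
    """Return a fingerprint for an uploaded image (exact-match only)."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_image(image_bytes: bytes) -> bool:
    """True if the upload's fingerprint appears in the known-image database."""
    return fingerprint(image_bytes) in KNOWN_IMAGE_HASHES

if __name__ == "__main__":
    upload = b"...image bytes from an upload..."
    if matches_known_image(upload):
        print("Match found: route to review/removal workflow")
    else:
        print("No match: no automated action taken")
```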
In addition, disturbing images often appear in news content, and social-media companies don’t want to become news censors. At a September town hall meeting, Facebook Chief Executive Mark Zuckerberg cited a widely shared photograph of Aylan Kurdi, a 3-year-old refugee who died fleeing Syria and washed ashore in Turkey, as an example of an image that might have been deemed inappropriate by a computer algorithm but shouldn’t have been censored.
That leaves social-media companies making difficult judgment calls. In 2014, YouTube quickly removed videos of the beheadings of two American journalists by Islamic State. Twitter took a more passive approach to the same images, which remained on the service until users reported them.
In August, Twitter quickly took down video of two Virginia TV reporters who were gunned down during a live news broadcast.
A Twitter spokesman declined to say whether it has suspended any accounts related to the San Bernardino shooting incident. The spokesman declined to comment when asked if Twitter is re-evaluating its policy in light of Facebook’s approach to those shootings.
The volume of material on social-media sites is a challenge. Some 400 hours of video are uploaded to YouTube every minute. The online-video site doesn’t proactively remove videos; it waits for users to flag content as objectionable, and it has had a “promotes terrorism” flag for several years. It hasn’t changed this approach recently, according to a person familiar with the situation.
YouTube has given roughly 200 people and organizations the ability to “flag” up to 20 YouTube videos at once. That includes the U.K. Metropolitan Police’s Counter Terrorism Internet Referral Unit, which has been using its “super flagger” authority to seek reviews—and removal—of videos it considers extremist.
Facebook has quietly become more aggressive in removing such content, privacy experts say. In 2012, Facebook said fan pages glorifying a shooter who opened fire in a Colorado movie theater didn’t violate its terms of service because they weren’t a credible threat to others. But last year, it removed pages honoring a gunman who killed six people at the University of California, Santa Barbara.
Ms. York discovered last year that informational Facebook pages for ISIS, Hamas and other terrorist groups were taken down. The pages included information from Wikipedia and weren’t promoting terrorism, Ms. York said, adding that it was her “first clue” that the company was scanning posts pre-emptively and censoring the terror-related ones.
Facebook said it has “hundreds” of people on its community operations team, which vets user-reported content from four offices world-wide. User reports are graded so that more serious ones, including those involving terrorism, are handled first.
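Facebook hasn’t described how that grading is implemented. The short Python sketch below is a hypothetical illustration of severity-based triage, in which reports tagged with more serious categories come off a priority queue ahead of routine ones; the category names and severity ordering are assumptions for illustration only.

```python
import heapq
import itertools

# Hypothetical severity grades; the company has not published its actual scheme.
SEVERITY = {"terrorism": 0, "credible_threat": 1, "graphic_violence": 2, "other": 3}

class ReportQueue:
    """Min-heap triage queue: lower severity number means reviewed sooner."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker preserves arrival order

    def submit(self, report_id: str, category: str) -> None:
        severity = SEVERITY.get(category, SEVERITY["other"])
        heapq.heappush(self._heap, (severity, next(self._counter), report_id))

    def next_report(self) -> str:
        _, _, report_id = heapq.heappop(self._heap)
        return report_id

# Usage: a terrorism-related report is reviewed before earlier, routine ones.
queue = ReportQueue()
queue.submit("r1", "other")
queue.submit("r2", "terrorism")
queue.submit("r3", "graphic_violence")
print(queue.next_report())  # -> "r2"
```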