In a blog post Wednesday, May 3, 2017, Zuckerberg said that Facebook
will hire another 3,000 people to review videos of crime and suicides
following murders shown live. (AP Photo/Eric Risberg, File)
New York (AP) -
Facebook is stepping up its efforts to keep inappropriate and often
violent material - including recent high-profile videos of murders and
suicides, hate speech and extremist propaganda - off its site.
On Wednesday, the
world’s biggest social network said it plans to hire 3,000 more people
to review videos and other posts after getting criticized for not
responding quickly enough to murders shown on its service.
The hires over the
next year will be on top of the 4,500 people Facebook already tasks with
identifying criminal and other questionable material for removal. CEO
Mark Zuckerberg wrote Wednesday that the company is “working to make
these videos easier to report so we can take the right action sooner -
whether that’s responding quickly when someone needs help or taking a
post down.”
Facebook, which had
18,770 employees at the end of March, would not say if the new hires
would be contractors or full-time workers. David Fischer, the head of
Facebook’s advertising business, said in an interview that the detection
and removal of hate speech and content that promotes violence or
terrorism is an “ongoing priority” for the company, and the community
operations teams are a “continued investment.”
Videos and posts
that glorify violence are against Facebook’s rules, but Facebook has
drawn criticism for responding slowly to such items, including video of
a slaying in Cleveland and the live-streamed killing of a baby in
Thailand. The Thailand video was up for 24 hours before it was removed.
In most cases, such
material gets reviewed for possible removal only if users complain. News
reports and posts that condemn violence are allowed. This makes for a
tricky balancing act for the company. Facebook does not want to act as a
censor, as videos of violence, such as those documenting police
brutality or the horrors of war, can serve an important purpose.
Policing live video
streams is especially difficult, as viewers don’t know what will happen.
This rawness is part of their appeal.
While the negative
videos make headlines, they are just a tiny fraction of what users post
every day. The good? Families documenting a toddler’s first steps for
faraway relatives, journalists documenting news events, musicians
performing for their fans and people raising money for charities.
“We don’t want to
get rid of the positive aspects and benefits of live streaming,” said
Benjamin Burroughs, a professor of emerging media at the University of
Nevada in Las Vegas.
Burroughs said that
Facebook clearly knew live streams would help the company make money,
as they keep users on Facebook longer, making advertisers happy. If
Facebook hadn’t also considered the possibility that live streams of
crime or violence would inevitably appear alongside the positive stuff,
“they weren’t doing a good enough job researching implications for
societal harm,” Burroughs said.
With a quarter of
the world’s population on it, Facebook can serve as a mirror for
humanity, amplifying both the good and the bad - the local fundraiser
for a needy family and the murder-suicide in a faraway corner of the
planet. But lately, it has gotten outsized attention for its role in the
latter, whether that means allowing the spread of false news and
government propaganda or videos of horrific crimes.
Videos livestreaming murder or depicting kidnapping and torture have made
international headlines even when the crimes themselves wouldn’t have,
simply because they were on Facebook, visible to people who wouldn’t
have seen them otherwise.
As the company
introduces even more new features, it will continue to grapple with the
reality that they will not always be used for positive or even mundane
purposes. From his interviews and Facebook posts, it appears that
Zuckerberg is at least aware of this, even if his company doesn’t always
respond as quickly as outsiders would like.
“It’s heartbreaking, and I’ve been reflecting on how we can do better for our
community,” Zuckerberg wrote on Wednesday about the recent videos.
It’s a learning
curve for Facebook. In November, for example, Zuckerberg called the idea
that false news on Facebook influenced the U.S. election “crazy.” A
month later, the company introduced a slew of initiatives aimed at
combating false news and supporting journalism. And just last week, it
acknowledged that governments or others are using its social network to
influence political sentiment in ways that could affect national security.
What to do
Zuckerberg said Facebook workers review “millions of reports” every week. In addition to
removing videos of crime or getting help for someone who might hurt
themselves, he said, the company’s bulked-up reviewing force will “also
help us get better at removing things we don’t allow on Facebook like
hate speech and child exploitation.”
Wednesday’s announcement is a clear sign that Facebook continues to need
human reviewers to monitor content, even as it tries to offload some of
the work to software, due in part to its sheer size and the volume of
material posted.
It’s not all up to
Facebook, though. Burroughs said users themselves need to decide whether
they want to look at violent videos posted on Facebook or to circulate
them, for example. And he urged news organizations to consider whether
each Facebook live-streamed murder is a story.
“We have to be
careful that it doesn’t become a kind of voyeurism,” he said.
New York (AP) - Twitter has
found more creative ways to ease its 140-character limit without
officially raising it.
Now, the company says that when you
reply to someone - or to a group - usernames will no longer count toward
those 140 characters. This will be especially helpful with group
conversations, where replying to two, three or more users at a time
could quickly use up the available characters.
When users reply, the names of the
people they are replying to will be on top of the text of the actual
tweet, rather than a part of it.
Last year, Twitter said it would
stop counting photos, videos, quote tweets, polls and GIF animations
toward the character limit. Twitter also said it would stop counting
usernames, but the change did not go into effect until now.
Twitter, which has been struggling
to attract new users, has been trying to appeal to both proponents and
opponents by sticking to the current limit while allowing more freedom
to express thoughts, or rants, through images and other media.
Twitter’s character limit was
created so that tweets could fit into a single text message, back in the
heyday of SMS messaging. But now, most people use Twitter through its
mobile app. There isn’t the same technical constraint, just a desire on
Twitter’s part to stay true to its roots.
Of course, there are ways to get
around the limit, such as sending out multi-part tweets, or taking
screenshots of text typed elsewhere.
New York (AP) - Google will
expand the use of “fact check” tags in its search results - the tech
industry’s latest effort to combat false and misleading news stories.
People who search for a topic in
Google’s main search engine or the Google News section will see a
conclusion such as “mostly true” or “false” next to stories that have
been fact checked.
Google has been working with more
than 100 news organizations and fact-checking groups, including The
Associated Press, the BBC and NPR. Their conclusions will appear in
search results as long as they meet certain formatting criteria.
Google said only a few of those
organizations, including PolitiFact and Snopes.com, have already met
those requirements; The Washington Post also says it complies. Google
said it expects the ranks of compliant organizations to grow.
Not all news stories will be fact
checked. Multiple organizations may reach different conclusions; Google
will show those separately.
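The "formatting criteria" mentioned above are, per Google's developer documentation, based on the schema.org ClaimReview structured-data markup that publishers embed alongside a fact check. Here is a minimal, hypothetical example of what such an entry looks like; the field values are invented, and the required-field check is our own sketch, not Google's actual validator.

```python
import json

# A hypothetical schema.org ClaimReview entry, as a publisher might
# embed it in a page as JSON-LD. All values are illustrative.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example.com/fact-checks/crime-rate-claim",
    "claimReviewed": "Crime has doubled in the last year.",
    "datePublished": "2017-04-07",
    "author": {"@type": "Organization", "name": "Example Fact Check"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",  # the label shown next to the result
    },
}

# A sketch of a compliance check: fields an entry must carry before
# a search engine could sensibly surface it.
REQUIRED = {"url", "claimReviewed", "author", "reviewRating"}

def missing_fields(markup: dict) -> set:
    return REQUIRED - markup.keys()

print(json.dumps(claim_review, indent=2))
```

The `alternateName` inside the rating is what would surface as the "mostly true" or "false" conclusion next to a search result.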
Still unanswered is whether these
fact-check analyses will sway people who are already prone to believe
false reports because they confirm preconceived notions.
Glenn Kessler, who writes “The Fact
Checker” column at The Washington Post,
said in an email that Google’s efforts should at least “make it easier
for people around the world to obtain information that counters the spin
by politicians and political advocacy groups, as well as purveyors of
fake news.”
He added that “over time, I expect
that people increasingly will want to read a fact-check on a
controversial issue or statement, even if the report conflicts with
their political leanings.”
Google started offering fact check
tags in the U.S. and the U.K. in October and expanded the program to a
handful of other countries in the subsequent months. Now the program is
open to the rest of the world and to all languages.
False news and misinformation,
often masquerading as trustworthy news spreading on social media, have
gained attention since the 2016 U.S. presidential election.
Google’s announcement comes a day
after Facebook launched a resource to help users spot false news and
misleading information that spreads on its service. The resource is
basically a notification that pops up for a few days. Clicking on it
takes people to tips and other information on how to spot false news and
what to do about it.
New York (AP) -
An organization affiliated with Google is
offering tools that news organizations and election-related sites can use to
protect themselves from hacking.
Jigsaw, a research arm
of Google parent company Alphabet Inc., says that free and fair elections
depend on access to information. To ensure such access, Jigsaw says, sites
for news, human rights and election monitoring need to be protected from
hacking.
Jigsaw’s suite of
tools, called Protect Your Election, is mostly a repackaging of existing
offerings:
- Project Shield will
help websites guard against denial-of-service attacks, in which hackers
flood sites with so much traffic that legitimate visitors can’t get through.
Users of Project Shield will be tapping technology and servers that Google
already uses to protect its own sites from such attacks.
- Password Alert is
software that people can add to Chrome browsers to warn them when they try
to enter their Google password on another site, often a sign of a phishing
attempt.
- 2-Step Verification
helps beef up security beyond passwords by requiring a second access code,
such as a text sent to a verified cellphone. Though Jigsaw directs users to
turn this on for Google accounts, most major rivals offer similar
protections.
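The second access code that 2-Step Verification requires is, in most implementations, a time-based one-time password (TOTP, standardized in RFC 6238): an HMAC of a shared secret and the current 30-second interval, truncated to six digits. A minimal sketch using only the Python standard library follows; the secret shown is the RFC's published test key, not a real credential, and real services provision their own secret during enrollment.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password.

    secret_b32: the shared secret, base32-encoded (as in QR-code setup).
    t: Unix time to compute the code for (defaults to now).
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test key ("12345678901234567890" in base32); at t=59 the
# spec's expected SHA-1 value truncates to these six digits.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59))  # → 287082
```

Because the code depends only on the secret and the clock, the phone generating it never needs network access, which is why a text message or authenticator app can serve as the second factor even when the account password has been phished.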
“This is as much an
occasion to have a conversation about digital security as it is putting all
the tools in one place,” Jigsaw spokesman Dan Keyserling said.
While the tools can be
useful to a variety of groups and individuals, Jigsaw says it is focusing on
elections because cyberattacks often increase against news organizations and
election information sites around election time. In particular, Jigsaw wants
to help sites deploy the tools ahead of the French presidential elections,
which begin April 23.
The tools are free,
though Project Shield is limited to news organizations, individual
journalists, human-rights groups and election-monitoring organizations.
It’s not known whether
the tools might have prevented some of the high-profile attacks in the past,
including the theft of emails from Democratic Party computers during the
2016 U.S. presidential campaign. The tools do not directly address such
break-ins, but they could help guard against password stealing, a common
precursor to break-ins.