Facebook will BAN users from livestreaming for 30 days if they post extremism in new ‘one-strike’ rule to stamp out violent content online following the broadcast of the Christchurch massacre
- The limit will be imposed on social media users who break Facebook’s rules
- The ban applies the first time a user breaks one of Facebook’s most serious rules
- The offence could include sharing a link to a terrorist organisation’s statement online
- It would stop users from accessing Facebook Live for a set amount of time
- Exactly what the offences are or how long the ban would be are unknown
Facebook claims it will now ban users from using its ‘Live’ function for 30 days if they breach rules laid out by the firm as it cracks down on violent content.
It comes as part of a widespread attempt to eradicate hate crimes and violence from the web across all outlets following the devastating Christchurch massacre.
The social network says it is introducing a ‘one strike’ policy for those who violate its most serious rules.
Facebook’s announcement comes as tech giants and world leaders meet in Paris to discuss plans to eliminate online violence.
Representatives of Google, Facebook and Twitter were present at the meeting, hosted by French president Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern.
A non-legally binding text was issued which failed to outline any concrete steps that would be taken by individual firms.
Representatives of Google, Facebook and Twitter were present at the meeting, hosted by French president Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern (left). World leaders, including Theresa May (right), attended
A lone gunman killed 51 people at two mosques in Christchurch on March 15 while live streaming the attacks on Facebook. The image shows Masjid Al Noor mosque in Christchurch, New Zealand, where one of two mass shootings occurred
A spokeswoman said it would not have been possible for the Christchurch shooter to use Live on his account under the new rules.
The firm says that the ban will be applied from a user’s first violation.
Vice president of integrity at Facebook, Guy Rosen, said that the violations would include a user posting a link to a statement from a terrorist group with no context.
The restrictions will also be extended into other features on the platform over the coming weeks, beginning with stopping those same people from creating ads on Facebook.
Mr Rosen said in a statement: ‘Following the horrific recent terrorist attacks in New Zealand, we’ve been reviewing what more we can do to limit our services from being used to cause harm or spread hate.’
The announcement comes as New Zealand Prime Minister Jacinda Ardern co-chairs a meeting with French President Emmanuel Macron in Paris today urging world leaders and chiefs of tech companies to sign the ‘Christchurch Call,’ a pledge to eliminate violent extremist content online.
Facebook CEO Mark Zuckerberg, who had been expected to attend the summit, will be absent.
He will instead be represented by vice president for global affairs and communications Nick Clegg, the former British politician.
A lone gunman killed 51 people at two mosques in Christchurch on March 15 while livestreaming the attacks on Facebook.
Footage spread across the web after the gunman live-streamed his spree on Facebook.
After coming under fire for its delayed and insufficient reaction to the incident, the social media platform said it removed 1.5 million videos from its site in the 24 hours after the attack.
According to the company, the video was viewed fewer than 200 times during the live broadcast but about 4,000 times in total.
None of the people who watched live video of the shooting flagged it to moderators, and the first user report of the footage didn’t come in until 12 minutes after it ended.
Facebook, Twitter and YouTube all raced to remove footage of the New Zealand mosque shooting that spread across social media after its live broadcast. The companies are all taking part in the summit in Paris this week to curb such online activity
According to Chris Sonderby, Facebook’s deputy general counsel, Facebook removed the video ‘within minutes’ of being notified by police.
The delay, however, underlines the challenge tech companies face in policing violent or disturbing content in real time.
Facebook’s moderation process still relies largely on an appeals process in which users flag concerns to the platform, which then reviews them through human moderators.
In some cases – such as content relating to ISIS and terrorism – Facebook also removes offensive and dangerous material automatically.
HOW TO REPORT A LIVE VIDEO
Facebook uses artificial intelligence and machine learning to detect objectionable material, while at the same time relying on the public to flag up content that violates its standards.
To report live video, a user must know to click on a small set of three gray dots on the right side of the post.
When you click on ‘report live video,’ you’re given a choice of objectionable content types to select from, including violence, bullying and harassment.
You’re also told to contact law enforcement in your area if someone is in immediate danger.
The automatic removal includes AI-driven machine learning to assess posts that indicate support for ISIS or al-Qaeda, Monika Bickert, Global Head of Policy Management, and Brian Fishman, Head of Counterterrorism Policy, said in a blog post from November 2018.
‘In some cases, we will automatically remove posts when the tool indicates with very high confidence that the post contains support for terrorism. We still rely on specialised reviewers to evaluate most posts, and only immediately remove posts when the tool’s confidence level is high enough that its ‘decision’ indicates it will be more accurate than our human reviewers.’
Ms Bickert added: ‘At Facebook’s scale neither human reviewers nor powerful technology will prevent all mistakes. That’s why we waited to launch these automated removals until we had expanded our appeals process to include takedowns of terrorist content.’
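The routing Ms Bickert describes can be sketched in a few lines: posts the classifier scores above a very high confidence threshold are removed automatically, while everything else goes to specialised human reviewers. This is an illustrative sketch only; the model, threshold value and queue names are assumptions, not Facebook’s real internals.

```python
# Hypothetical sketch of confidence-threshold moderation routing.
# Threshold, classifier and labels are illustrative assumptions.

AUTO_REMOVE_THRESHOLD = 0.99  # assumed "very high confidence" cut-off


def route_post(post_text, classifier):
    """Return the action taken for a post flagged as possible terror content."""
    confidence = classifier(post_text)  # probability the post supports terrorism
    if confidence >= AUTO_REMOVE_THRESHOLD:
        # The tool's 'decision' is judged more accurate than a human reviewer here.
        return "auto_removed"
    # Most posts still go to specialised human reviewers.
    return "human_review_queue"


def toy_classifier(text):
    """Stand-in for the real ML model, for demonstration only."""
    return 0.995 if "propaganda-marker" in text else 0.3
```

As the blog post notes, the threshold is deliberately set high, so the bulk of flagged content still reaches human reviewers.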
Mr Rosen added that technical innovation is needed to get ahead of the large volumes of modified videos designed to evade detection systems, like those uploaded after the massacre.
‘One of the challenges we faced in the days after the attack was a proliferation of many different variants of the video of the attack.
‘People – not always intentionally – shared edited versions of the video which made it hard for our systems to detect.’
Facebook said in a blog post in late March that it had identified more than 900 different versions of the footage.
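One common way platforms match re-uploaded variants of a known clip is perceptual hashing, where a compact fingerprint of a frame survives small edits such as brightness shifts, while unrelated content hashes very differently. The sketch below assumes this approach for illustration; Facebook has not published the exact matching systems it uses, and the frames here are toy grayscale rows.

```python
# Minimal perceptual-hashing sketch (difference hash over a grayscale row).
# Illustrative only: not Facebook's actual variant-detection system.

def dhash(pixels):
    """One bit per adjacent-pixel comparison: robust to uniform brightness shifts."""
    return [1 if a < b else 0 for a, b in zip(pixels, pixels[1:])]


def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))


def is_variant(known_frame, candidate_frame, max_distance=2):
    """A small hash distance suggests the candidate is an edited copy."""
    return hamming(dhash(known_frame), dhash(candidate_frame)) <= max_distance


original = [10, 50, 40, 90, 20, 70, 60, 80, 30]
brightened = [p + 15 for p in original]          # brightness edit: hash unchanged
unrelated = [80, 10, 90, 5, 70, 20, 60, 40, 50]  # different content: hash differs
```

Heavier edits – cropping, overlays, re-filming a screen – push the hash distance up, which is consistent with Mr Rosen’s point that edited versions were hard for Facebook’s systems to detect.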
The company has pledged 7.5 million dollars (£5.8 million) towards new research partnerships in a bid to improve its ability to automatically detect offending content after some manipulated edits of the Christchurch attack managed to bypass existing detection systems.
It will work with the University of Maryland, Cornell University and The University of California, Berkeley, to develop new techniques that detect manipulated media, whether imagery, video or audio, as well as ways to distinguish between people who unwittingly share manipulated content and those who intentionally create it.
‘This work will be critical for our broader efforts against manipulated media, including DeepFakes,’ Mr Rosen added.
HOW DOES FACEBOOK MODERATE ITS CONTENT?
Currently Facebook relies on human reviewers and moderators and in some cases – like those relating to ISIS and terrorism – automatic removal for offensive and dangerous activity.
The manual moderation relies largely on an appeals process where users can flag up concerns with the platform which then reviews it through human moderators.
However, none of the 200 viewers of the live broadcast of the Christchurch, New Zealand terror shooting flagged it to Facebook’s moderators.
In some cases, Facebook automatically removes posts when an AI-driven algorithm indicates with very high confidence that the post contains support for terrorism, including for ISIS and al-Qaeda.
But the system overall still relies on specialised reviewers to evaluate most posts, and only immediately removes posts when the tool’s confidence level is high enough that its ‘decision’ indicates it will be more accurate than that of humans.
According to Facebook executive Monika Bickert, its machine learning tools have been critical to reducing the amount of time terrorist content reported by users stays on the platform, from 43 hours in the first quarter of 2018 to 18 hours in the third quarter of 2018.
Ms Bickert added: ‘At Facebook’s scale neither human reviewers nor powerful technology will prevent all mistakes.’
‘We hope it will also help us to more effectively fight organised bad actors who try to outwit our systems as we saw happen after the Christchurch attack.’
Google also struggled to remove new uploads of the attack on its video sharing website YouTube.
The massacre was New Zealand’s worst peacetime shooting and spurred calls for tech companies to do more to combat extremism on their services.
During the opening of a Google Safety Engineering Centre (GSEC) in Munich on Tuesday, Google’s senior vice president for global affairs, Kent Walker, admitted that the tech giant still needed to improve its systems for finding and removing dangerous content.
‘In the situation of Christchurch, we were able to avoid having live-streaming on our platforms, but then subsequently we were subjected to a really somewhat unprecedented attack on our services by different groups on the internet which had been seeded by the shooter,’ Mr Walker said.
Google, Facebook, Microsoft and Twitter are all taking part in the summit.
Mr Macron has repeatedly stated that the status quo is unacceptable.
‘Macron was one of the first leaders to call the prime minister after the attack, and he has long made removing hateful online content a priority,’ New Zealand’s ambassador to France, Jane Coombs, told journalists on Monday.
‘It’s a global problem that requires a global response,’ she said.
In an opinion piece in The New York Times on Saturday, Ms Ardern said the ‘Christchurch Call’ will be a voluntary framework that commits signatories to put in place specific measures to prevent the uploading of terrorist content.
Ms Ardern has not made specific demands of social media companies in connection with the pledge, but has called for them ‘to prevent the use of live streaming as a tool for broadcasting terrorist attacks’.
Firms themselves will be urged to come up with concrete measures, for example by reserving live broadcasting to social media accounts whose owners have been identified.
An Islamic group is even suing Facebook and YouTube for hosting footage of the New Zealand terror attack.
The French Council of the Muslim Faith is using laws that prohibit ‘broadcasting a message with violent content abetting terrorism’ in its attempt, made after the shootings in March.