
Facebook's new Transparency Report now includes data on eliminating "bad" content, including hate speech – TechCrunch



Facebook released its latest Transparency Report this morning, in which the social network shares information on government requests for user data. These requests rose worldwide by approximately 4 percent compared to the first half of 2017, while U.S. requests remained about the same. In addition, the company added a new report alongside the usual Transparency Report, focusing on how and why Facebook takes action to enforce its community standards, particularly in the areas of graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam and fake accounts.

In terms of government requests for user data, the global increase in the second half of 2017 brought the total to 82,341 requests, up from 78,890 in the first half. U.S. requests stayed about the same at 32,742; however, 62 percent included a non-disclosure order that prohibited Facebook from alerting the user – up from 57 percent earlier in the year and 50 percent in the report before that. This suggests that gag orders are becoming more common among law enforcement agencies.

The amount of content Facebook restricted based on local laws fell from 28,036 to 14,294 in the second half of the year. But that's not surprising – the previous report saw an unusual spike in this type of request due to a shooting in Mexico, which led the government to ask for content to be removed.

There were also 46 disruptions of Facebook services in 12 countries in the second half of 2017, compared with 52 disruptions in nine countries in the first half.

And Facebook and Instagram took down 2,776,665 pieces of content based on 373,934 copyright reports, 222,226 pieces of content based on 61,172 trademark reports and 459,176 pieces of content based on 28,680 counterfeit reports.

The more interesting data, however, comes from a new report Facebook is appending to its Transparency Report: the Community Standards Enforcement Report, which focuses on the actions of Facebook's review team. This is the first time Facebook has published figures related to its enforcement efforts, and it follows the company's release of its internal guidelines three weeks ago.

Those 25 pages of guidelines, released in April, cover what is and isn't allowed on the platform, focusing on areas such as graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam and fake accounts. These are areas where Facebook is often criticized when it gets things wrong – as when it took down the historic "Napalm Girl" photo because it contained nudity, before acknowledging the mistake and restoring it. It has also recently been criticized for contributing to the violence in Myanmar, where hate speech from extremists has incited violence. That is something Facebook is also addressing today through an update to Messenger that lets users report conversations that violate community standards.

Today's Community Standards Enforcement Report details the number of takedowns in each category.

Facebook says spam and fake account takedowns are the largest categories. It removed 837 million pieces of spam in the first quarter – nearly all of which were removed proactively, before users reported them. Facebook also disabled 583 million fake accounts, the majority within minutes of registration. During this period, roughly 3 to 4 percent of the accounts active on the site were fake.

The company likely hopes the scale of these numbers makes it look as though it's doing a great job, when in reality it didn't take that many Russian accounts to throw Facebook's entire operation into disarray, landing CEO Mark Zuckerberg in front of a Congress that is now considering regulation.

Facebook also says it took down the following in Q1 2018:

  • Adult nudity and sexual activity: 21 million pieces of content; 96 percent were found and flagged by technology rather than by humans.
  • Graphic violence: 3.5 million pieces of content were taken down or given warning labels; 86 percent were found and flagged by technology.
  • Hate speech: 2.5 million pieces of content; 38 percent were found and flagged by technology.

You may notice that one of these areas lags behind the others in terms of automated enforcement.

Facebook, in fact, admits that its system for identifying hate speech "still doesn't work that well," so it needs to be checked by its review teams.

"… we still have much to do to prevent abuse," writes Guy Rosen, VP of Product Management, on the Facebook blog. "In part, technology, like artificial intelligence, is promising to be effective for most of the bad content, even though it's promising years away, because the context is so important."

In other words, A.I. can be useful for automatically flagging things like nudity and violence, but policing hate speech requires more nuance than machines can currently handle. The problem is that people discuss sensitive topics for many reasons – to share news, to debate them respectfully, or even to describe something that happened to them. A post isn't always a threat or hate speech, but a system that only parses words without understanding the full discussion can't tell the difference.

Getting an A.I. system up to speed in this area requires a ton of training data, and Facebook says it doesn't have that for some of the less widely spoken languages.

(This is also its likely answer to the situation in Myanmar, where the company belatedly – after six civil society organizations criticized Zuckerberg in a letter – said it had hired "dozens" of human moderators. Critics say that's not enough – in Germany, for example, which has strict laws against hate speech, Facebook hired about 1,200 moderators, according to the NYT.)

The obvious solution, it seems, is to scale up moderation teams everywhere until the A.I. technology can do a good job on hate speech, as well as on other aspects of content policy enforcement. That costs money, but the stakes are high: people have died because Facebook has been unable to enforce its own policies.

Facebook claims it's doing this, but doesn't share details of how many moderators it's hiring, where, or when.

"… we're investing heavily in more people and better technology to make Facebook safer for all," Rosen wrote ,

Facebook's main focus, however, seems to be on improving technology.

"Facebook is investing heavily in more people to review tagged content, but as Guy Rosen explained two weeks ago, new technologies like machine learning, computer vision, and artificial intelligence help us find more bad content faster – much faster and to a much greater extent than humans ever can, "said Alex Schultz, vice president of analytics, in a related post on Facebook methodology

He touts A.I. in particular as a tool that could remove content from Facebook before it's even reported.

But A.I. isn't yet ready to police hate speech, so Facebook needs a stopgap solution in the meantime – even if it's costly.

