Facebook’s software systems are constantly improving at detecting and blocking hate speech on the Facebook and Instagram platforms. But Facebook’s artificial intelligence software still struggles to spot some content that breaks the rules. For example, it has a harder time grasping the meaning of images with text superimposed on them, as well as cases involving sarcasm or slang. In many of these cases, humans can quickly determine whether the content in question violates Facebook’s policies. And some of those human moderators are warning that Facebook is putting them in unsafe working conditions.

About 95% of hate speech on Facebook is detected by algorithms before anyone can report it, Facebook said in its latest Community Standards Enforcement Report. The remaining 5% of the roughly 22 million pieces of content flagged in the last quarter were reported by users. The report also introduces a new hate-speech metric: prevalence. To measure prevalence, Facebook takes a sample of content and asks how often the thing being measured (in this case hate speech) is seen, as a percentage of all content viewed. Between July and September of this year, the figure was between 0.10% and 0.11%, or about 10 to 11 views per 10,000.
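The arithmetic behind that prevalence figure is straightforward. Here is a minimal Python sketch of the calculation as described above; the function name and the sample size are hypothetical, since Facebook has not published its actual sampling methodology:

```python
# Illustrative only: prevalence as the share of sampled content views
# that contained violating content (here, hate speech).

def estimate_prevalence(sampled_views: int, violating_views: int) -> float:
    """Return the fraction of sampled views that showed violating content."""
    return violating_views / sampled_views

# Example: 11 hate-speech views in a sample of 10,000 content views
# corresponds to 0.11%, the upper end of Facebook's reported range.
prevalence = estimate_prevalence(sampled_views=10_000, violating_views=11)
print(f"{prevalence:.2%}")  # -> 0.11%
```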

“One of the primary goals of Facebook AI is to deploy cutting-edge machine learning technology to protect people from harmful content. With billions of people using our platforms, we rely on AI to scale our content review work and automate decisions where possible. Our goal is to quickly and accurately identify hate speech, misinformation and other forms of policy-violating content, for every form of content, and for every language and community around the world,” said Mike Schroepfer, Facebook’s chief technology officer.

Facebook said it recently rolled out two new artificial intelligence technologies to help it tackle these challenges. The first, called “Reinforced Integrity Optimizer” (RIO), learns from real examples and metrics online rather than from an offline dataset. The second is an artificial intelligence architecture called “Linformer,” which lets Facebook use complex language-understanding models that were previously too large and “unwieldy” to deploy at scale. A sketch of the idea behind Linformer follows below.
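The published idea behind Linformer (Wang et al., 2020, “Linformer: Self-Attention with Linear Complexity”) is to project the keys and values of self-attention down to a fixed length k along the sequence dimension, cutting the cost from O(n²) to O(n·k). The toy NumPy sketch below illustrates that mechanism; it is not Facebook’s production implementation, and all shapes and names here are ours:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def linformer_attention(Q, K, V, E, F):
    """Q, K, V: (n, d) query/key/value matrices for a sequence of length n.
    E, F: (k, n) learned projections that compress the sequence axis."""
    d = Q.shape[-1]
    K_proj = E @ K                        # (k, d): keys compressed to length k
    V_proj = F @ V                        # (k, d): values compressed to length k
    scores = Q @ K_proj.T / np.sqrt(d)    # (n, k) score matrix instead of (n, n)
    return softmax(scores) @ V_proj       # (n, d) output, linear in n

# Toy example: a sequence of length 512 compressed to k = 64.
n, d, k = 512, 32, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
E, F = (rng.normal(size=(k, n)) for _ in range(2))
print(linformer_attention(Q, K, V, E, F).shape)  # (512, 32)
```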

“We are now using RIO and Linformer in production to analyze content from Facebook and Instagram in different regions of the world,” Schroepfer said.

Facebook also said it has developed a new tool to detect deepfakes and made some improvements to an existing system called SimSearchNet, an image-matching tool designed to spot disinformation on its platform.
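Facebook has not published SimSearchNet’s internals, but image matching of this kind is generally done by comparing learned image embeddings against embeddings of previously fact-checked images. The sketch below is a hypothetical stand-in for that general technique; the function names, threshold, and random vectors standing in for a real image encoder are all ours:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def matches_known_misinfo(embedding, known_embeddings, threshold=0.9):
    """Flag an image whose embedding is close to any embedding of a
    previously fact-checked image (near-duplicate detection)."""
    return any(cosine_similarity(embedding, k) >= threshold
               for k in known_embeddings)

# Toy usage with random 128-d vectors standing in for real embeddings.
rng = np.random.default_rng(1)
known = [rng.normal(size=128) for _ in range(5)]
query = known[0] + 0.01 * rng.normal(size=128)  # near-duplicate of a known image
print(matches_known_misinfo(query, known))       # True
```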

Facebook also pointed out that while its artificial intelligence systems are making progress across several content-enforcement categories, the COVID-19 pandemic continues to affect its ability to moderate content. “As the COVID-19 pandemic continues to disrupt our content review workforce, we are seeing some enforcement metrics return to pre-pandemic levels. Even with reduced review capacity, we still prioritize the most sensitive content for people to review, which includes areas like suicide, self-harm and child nudity,” the company said.
