Facebook’s artificial intelligence systems now report more offensive photos than humans do, marking a major milestone in the social network’s battle against abuse, the company tells me. AI could quarantine obscene content before it ever hurts the psyches of real people.
Facebook’s success in ads has fueled investments into the science of AI and machine vision that could give it an advantage in stopping offensive content. Creating a civil place to share, without the fear of bullying, is critical to getting users to post the personal content that draws in friends’ attention.
Twitter has been widely criticized for failing to adequately prevent or respond to claims of harassment on its platform, and last year former CEO Dick Costolo admitted, “We suck at dealing with abuse.” Twitter has yet to turn a profit and doesn’t have the resources to match Facebook’s investments in AI, but it has still been making a valiant effort.
To fuel the fight, Twitter acquired a visual intelligence startup called Madbits, and Whetlab, an AI neural networks startup. Together, their AI can identify offensive images, incorrectly flagging harmless ones just 7 percent of the time as of a year ago, according to Wired. This reduces the number of humans needed to do the tough job, though Twitter still requires a human to give the go-ahead before it suspends an account for offensive images.
Facebook shows off its AI vision technologies
A Brutal Job
When malicious users upload something offensive to torment or disturb people, it traditionally has to be seen and flagged by at least one human, either a user or a paid worker. Offensive posts that violate Facebook’s or Twitter’s terms of service can include hate speech, threats, or pornography; content that incites violence; or content that contains nudity or graphic or gratuitous violence.
For example, a bully, jilted ex-lover, stalker, terrorist, or troll could post offensive photos to someone’s wall, a Group, an Event, or the feed. They might upload revenge porn, disgusting gory images, or sexist or racist memes. By the time someone flags the content as offensive and Facebook reviews it and possibly takes it down, the damage is partially done.
Previously, Twitter and Facebook had relied extensively on outside human contractors from startups like Crowdflower, or companies in the Philippines. As of 2014, Wired reported that estimates pegged the number of human content moderators at around 100,000, with many making paltry salaries around $500 a month.
The occupation is notoriously terrible, psychologically injuring workers who have to comb through the depths of depravity, from child porn to beheadings. Burnout happens quickly, workers cite symptoms similar to post-traumatic stress disorder, and whole health consultancies like Workplace Wellbeing have sprung up to assist scarred moderators.
But AI is helping Facebook avoid having to subject humans to such a terrible job. Instead of making contractors the first line of defense, or resorting to reactive moderation where unsuspecting users must first flag an offensive image, AI could unlock active moderation at scale by having computers scan every image uploaded before anyone sees it.
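In very rough outline, that kind of active moderation amounts to a gating step between upload and publication: a classifier scores every image, high-confidence offenders are quarantined automatically, and only uncertain cases reach a human reviewer. The sketch below is purely illustrative — the thresholds, the score, and the `Decision` type are invented here, not details of Facebook’s actual system:

```python
from dataclasses import dataclass

# Hypothetical thresholds: everything here is an assumption for illustration.
BLOCK_THRESHOLD = 0.9   # score at or above this: quarantine automatically
REVIEW_THRESHOLD = 0.5  # score in between: hold the image for human review

@dataclass
class Decision:
    visible: bool              # may the image be shown to other users?
    needs_human_review: bool   # should a moderator look at it?

def moderate(offensiveness_score: float) -> Decision:
    """Gate an upload on a model-assigned offensiveness score in [0.0, 1.0]."""
    if offensiveness_score >= BLOCK_THRESHOLD:
        # High confidence: quarantine before any person ever sees it.
        return Decision(visible=False, needs_human_review=False)
    if offensiveness_score >= REVIEW_THRESHOLD:
        # Uncertain: keep the image hidden until a human moderator decides.
        return Decision(visible=False, needs_human_review=True)
    # Low score: publish normally.
    return Decision(visible=True, needs_human_review=False)
```

The point of the middle band is exactly what the article describes: the more uploads the model can confidently classify on its own, the fewer images a human ever has to see.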
Today we have more offensive photos being reported by AI algorithms than by people
Following his talk at the MIT Technology Review’s Emtech Digital conference in San Francisco this week, I sat down with Facebook’s Director of Engineering for Applied Machine Learning Joaquin Candela.
He spoke about the practical uses of AI for Facebook, where 25% of engineers now regularly use its internal AI platform to build features and do business. With 40 petaflops of compute power, Facebook analyzes trillions of data samples along billions of parameters. This AI helps rank News Feed stories, read aloud the content of photos to the visually impaired, and automatically write closed captions for video ads that increase view time by 12%.
Candela revealed that Facebook is in the research stages of using AI to build out automatic tagging of faces in videos, and an option to instantly fast-forward to when a tagged person appears in the video. Facebook has also built a system for categorizing videos by topic. Candela demoed a tool on stage that could show video collections by category, such as cats, food, or fireworks.
But a promising application of AI is rescuing humans from horrific content moderation jobs. Candela told me that “One thing that is interesting is that today we have more offensive photos being reported by AI algorithms than by people. The higher we push that to 100%, the fewer offensive photos have actually been seen by a human.”
Facebook, Twitter, and others must simultaneously make sure their automated systems don’t slip into becoming draconian thought police. Built wrong, or taught with overly conservative rules, AI could censor art and free expression that might be productive or beautiful even if it’s controversial. And as with most forms of AI, it could take jobs from people in need.
Sharing The Shield
Defending Facebook is an enormous job. After his own speaking gig at the Applied AI conference in San Francisco this week, I spoke with Facebook’s director of core machine learning Hussein Mehanna about Facebook’s artificial intelligence platform Facebook Learner.
Mehanna tells me 400,000 new posts are published on Facebook every minute, and 180 million comments are left on public posts by celebrities and brands. That’s why beyond images, Mehanna tells me Facebook is trying to understand the meaning of text shared on the platform.
AI could eventually help Facebook combat hate speech. Today Facebook, along with Twitter, YouTube, and Microsoft, agreed to new hate speech rules. They’ll work to remove hate speech within 24 hours if it violates a unified definition for all EU countries. That time limit seems a lot more feasible with computers shouldering the effort.
That same AI platform could protect more than just Facebook, and thwart more than just problematic images.
“Instagram is completely on top of the platform. I’ve heard they like it very much,” Mehanna tells me. “WhatsApp uses parts of the platform…Oculus use some aspects of the platform.”
The application for content moderation on Instagram is obvious, though WhatsApp sees a tremendous number of images shared too. One day, our experiences in Oculus virtual reality could be safeguarded against the nightmare of not just being shown offensive content, but being forced to live through the scenes depicted.
We don’t see AI as our secret weapon
But to wage war on the human suffering caused by offensive content on social networks, and the moderators who sell their own sanity to block it, Facebook is building bridges beyond its own family of companies.
“We share our research openly,” Mehanna explains, regarding how Facebook is sharing its findings and open-sourcing its AI technologies. “We don’t see AI as our secret weapon just to compete with other companies.”
In fact, a year ago Facebook began inviting teams from Netflix, Google, Uber, Twitter, and other significant tech companies to discuss the applications of AI. Mehanna says Facebook’s now doing its fourth or fifth round of periodic meetups where “we literally share with them the design details” of its AI systems, teach the teams of its neighboring tech companies, and receive feedback.
“Advancing AI is something you want to do for the rest of the community and the world because it’s going to touch the lives of many more people,” Mehanna reinforces. At first glance, it might seem a strategic misstep to aid companies that Facebook competes with for time spent and ad dollars.
But Mehanna echoes the sentiment of Candela and others at Facebook when he talks about open sourcing. “I personally believe it’s not a win-lose situation, it’s a win-win situation. If we improve the state of AI in the world, we will definitely eventually benefit. But I don’t see people nickel and diming it.”
Sure, if Facebook doesn’t share, it could save a few bucks others have to spend on human content moderation or other toil that AI can eliminate. But by building and offering up its underlying technologies, Facebook could help make sure it’s computers, not people, doing the dirty work.