“Our hope and belief is that Facebook will be just the first of many” companies to use what has proven to be highly effective technology, said Ernie Allen, chief executive of the National Center for Missing & Exploited Children. “Online services are going to become a hostile place for child pornographers and pedophiles.”
PhotoDNA is being used to find and remove only known images of sexual exploitation of prepubescent children, to avoid trampling on the privacy and free-speech rights of consumers of adult pornography, he said. The courts have ruled that pornographic pictures of children are child abuse, not legally protected free speech.
By focusing on images of children under 12, the initiative is battling “the worst of the worst” images, which are often shared over and over again, he said. Child pornography is growing increasingly violent and depicting ever-younger children, including infants and toddlers.
“These are crime scene photos, not porn,” Mr. Allen said. “This tool is essential to protect these victims and to prevent, to the greatest degree possible, the redistribution of their sexual abuse.”
Tests conducted on Microsoft’s SkyDrive, Windows Live and Bing services during the last year indicate a chillingly large trade in these images. A network that compares 10 million images to the center’s inventory of 10,000 illegal photos can expect to have about 125 hits a day, according to Hany Farid, a Dartmouth computer science professor and expert in digital imagery who worked with Microsoft to hone the technology. At least 50,000 child pornography images are being transmitted online every day, he estimated.
“This is not a tiny dark little world,” he said. “The problem is phenomenal.”
PhotoDNA works by creating a “hash,” or digital code, to represent a given image and find instances of it within large data sets, much as antivirus software does for malicious programs. However, PhotoDNA’s “robust hashes” can find images even if they have been altered significantly. In tests on Microsoft properties, it accurately identified images 99.7 percent of the time and set off a false alarm only once in every 2 billion images, and even those rare false alarms mostly pointed to nearly identical images, Dr. Farid said.
Until now, Facebook has relied primarily on abuse reports from its users, reviewed by trained employees, to find and eliminate offensive images. But with PhotoDNA, it can keep child pornography from making it onto the site in the first place. “We’ve found it to be a very powerful tool in identifying these images,” said Chris Sonderby, Facebook’s assistant general counsel.
The results of the yearlong pilot at Microsoft, he said, should provide “enormous reassurance to companies that this works, that this is something they should do, that it’s the responsible thing to do and that they can use it without fear of violating anybody’s rights.”