Facebook Sucks - Amazon Google Facebook Criminal Antitrust Conspiracy
Facebook asked users if pedophiles should be able to ask kids for 'sexual pictures'
Over 300 cases of child exploitation went unnoticed by Facebook – Reports show that as users on Facebook have increased, so has the number of child exploitation cases.
A report suggests the tech giant is not fully enforcing its own standards banning content that exploits or endangers children
Facebook failed to catch hundreds of cases of child exploitation on its platform over the past six years, a study published on Wednesday found.
The site was used as a medium to sexually exploit children in at least 366 cases between January 2013 and December 2019, according to a report from the not-for-profit investigative group Tech Transparency Project (TTP), which analyzed Department of Justice news releases.
Only 9% of the 366 cases were investigated because Facebook alerted authorities, while the rest of the investigations were initiated by authorities without prompting from the social media giant.
This suggests Facebook is not doing all it can to enforce its community standards, which ban “content that sexually exploits or endangers children,” said TTP executive director Daniel Stevens.
“The data shows Facebook is not doing as much as it should to address this very serious problem affecting many lives in this country,” Stevens said.
The reports analyzed by TTP include a Rhode Island man who allegedly posed as a teenage girl to lure boys into live-streaming sexual activity on Facebook Messenger, a Kentucky man accused of sending thousands of messages to multiple child targets over Facebook, and a convicted Missouri sex offender who authorities said used Facebook Messenger to communicate with a 13-year-old girl.
As users on Facebook have increased, so has the number of child exploitation cases. There were as many as 23 cases per quarter in 2019 compared to just 10 per quarter in 2013.
Facebook CEO Mark Zuckerberg has repeatedly noted the company’s efforts to address child exploitation on the platform. Facebook did not respond to a request for comment.
“Child exploitation is one of the most serious threats that we focus on,” CEO Mark Zuckerberg told lawmakers in October 2019. “We build sophisticated systems to find this behavior.”
The company also appears to have taken on more enforcement responsibility since the passage of FOSTA-SESTA, which allows law enforcement to hold companies liable for what occurs on their platforms. Though the legislation has been criticized for its adverse effects on sex workers and others, it has forced Facebook to address online sexual exploitation of children, the report showed.
One month after FOSTA-SESTA was passed, Facebook was sued by an alleged victim of sexual abuse who said that at age 15 she was targeted and groomed by sex traffickers using Facebook.
In the five years before the controversial bill’s passage, Facebook averaged less than one cyber tip per quarter, according to TTP’s analysis. Since the bill was passed in March 2018, it has averaged more than three reports per quarter. Facebook has made more reports to the National Center for Missing and Exploited Children in the two years since the passage of FOSTA-SESTA than in the prior five years combined.
Facebook has been criticized in the past for inaction in the face of reports regarding the exploitation of children on the platform. In February 2016 the BBC discovered Facebook groups where pedophiles swapped stolen images of children and reported 20 inappropriate images to Facebook as part of the investigation. The company took down only four. Following its report, TTP alerted Facebook to a public page hosting an inappropriate picture of a young girl aimed at pedophiles, but the company did not remove it.
Facebook has said it has “zero tolerance” for such images and uses a technology called PhotoDNA to scan each image and flag known child exploitative material, stopping uploads of such imagery on the platform.
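PhotoDNA itself is a proprietary perceptual-hashing system licensed from Microsoft, but the matching pattern it enables can be sketched with an ordinary cryptographic hash: fingerprint each upload and compare it against a database of fingerprints of known material. The sketch below is a toy illustration only; the database contents and image bytes are placeholders, and SHA-256 stands in for PhotoDNA's actual hash.

```python
import hashlib

# Stand-in database: fingerprints of known abusive images, as supplied in
# practice by clearing houses such as NCMEC (these bytes are placeholders).
KNOWN_FINGERPRINTS = {
    hashlib.sha256(b"bytes-of-a-known-flagged-image").hexdigest(),
}

def fingerprint(image_bytes: bytes) -> str:
    # PhotoDNA computes a perceptual hash; SHA-256 stands in for it here.
    return hashlib.sha256(image_bytes).hexdigest()

def should_block_upload(image_bytes: bytes) -> bool:
    # Block the upload before it is published if its fingerprint is known.
    return fingerprint(image_bytes) in KNOWN_FINGERPRINTS
```

One caveat the sketch glosses over: an exact cryptographic hash matches only byte-identical copies, whereas PhotoDNA's robust hash is designed so that resized or slightly edited copies still match, which is what lets it stop re-uploads of known imagery.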
The TTP report comes as US lawmakers prepare legislation to force tech giants to crack down on child exploitation on their platforms. A bipartisan bill from senators Lindsey Graham and Richard Blumenthal, called the Earn It Act, is expected to be introduced as early as Wednesday. Under the act, platforms would be required to address child sexual exploitation more aggressively or risk losing protections under Section 230, a measure that prevents platforms from being held responsible for content posted on them.
While bipartisan support grows for holding tech giants accountable for exploitative content, the digital rights not-for-profit the Electronic Frontier Foundation has called Section 230 “the most important law protecting internet speech”. Facebook has expressed concerns the Earn It Act would weaken those free speech protections and roll back privacy efforts like encryption.
“We share the Earn It Act sponsors’ commitment to child safety and have made keeping children safe online a top priority by developing and deploying technology to thwart the sharing of child abuse material,” Facebook spokesman Thomas Richards said in a statement. “We’re concerned the Earn It Act may be used to roll back encryption, which protects everyone’s safety from hackers and criminals, and may limit the ability of American companies to provide the private and secure services that people expect.”
The justice department will unveil its own action against child exploitation on Thursday, with a proposal of 11 “voluntary principles” for tech platforms to target the issue. It was co-authored with members of the tech industry and is already backed by leaders of five countries, the Washington Post reported.
Kari Paul - The Guardian
Ten Years of Protecting Our Children - Cracking Down on Sexual Predators on the Internet
A decade ago, a 10-year-old boy disappeared from his Brentwood, Maryland, neighborhood. Within weeks, the investigation would uncover two pedophiles and a larger ring of online child pornographers. Within two years, it would spawn a major national initiative that is now the centerpiece of the FBI’s efforts to protect children from predatory pedophiles in cyberspace.
Here’s how the events unfolded: When FBI agents and Prince George’s County police detectives went door-to-door to talk with neighbors following the boy’s disappearance in 1993, they encountered a pair of suspicious men who had been “befriending” local children, showering them with gifts and even taking them on vacation.
Evidence followed that the men had been sexually abusing children for a quarter century. More recently, they had moved online, setting up a private computer bulletin board service not only to “chat” with boys and set up meetings with them but also to share illicit images of child pornography.
That, in turn, led investigators to a larger ring of computer pedophiles. When a similar case with national reach turned up the following year, the FBI realized it was onto an alarming new trend: sexual exploitation of children via the Internet.
A Program is born. In 1995, the FBI created its Innocent Images National Initiative (IINI). Its goals: to break up networks of online pedophiles, to stop sexual predators from using the Internet to lure children from their families, and to rescue victims.
Today, 28 of the FBI’s 56 field offices have undercover Innocent Images operations. More than 200 FBI agents work these cases. Some pose as teenagers or pre-teens in chat rooms to identify “travelers” who seek to meet and abuse children.
Others focus on dismantling major child exploitation enterprises.
Since 1995, we’ve opened more than 10,000 total cases and helped secure nearly 3,000 convictions.
Keeping Safe. To report child pornography and/or potential cases involving the sexual exploitation of children, please contact the Crimes Against Children Coordinator at your local FBI Field Office. You can also file an online report at the National Center for Missing and Exploited Children’s CyberTipline at www.cybertipline.com; these reports are forwarded to the appropriate law enforcement authorities.
Reporting to authorities
Calling 800-4-A-Child (800-422-4453) connects you with the Childhelp National Child Abuse Hotline. — How to Report Suspected Pedophile Activity
Suspicions of internet-based child sexual abuse, like hosting child pornography websites, should be directed to the FBI.
Suspicions of a local adult who may be grooming and engaging in sexual contact with children should be reported to the local police.
Reports of suspected sexual abuse should include as much information as possible, including the names of the parties involved, the location where the alleged abuse has occurred or is currently occurring, whether the reporter believes the child is in immediate danger, the signs that led the reporter to believe the child is being abused and contact information for the child’s parent or legal guardian.
Some professions require legal reporting to the authorities:
•Teachers.
•Children’s librarians.
•Police officers.
•Clergy members.
•Medical professionals.
•Social services employees.
Paedophiles using secret Facebook groups to swap images
Paedophiles are using secret groups on Facebook to post and swap obscene images of children, the BBC has found.
Settings on the social network mean the groups are invisible to most users and only members can see the content.
Children's Commissioner for England Anne Longfield said Facebook was not doing enough to police the groups and protect children.
Facebook's head of public policy told the BBC he was committed to removing "content that shouldn't be there".
A BBC investigation found a number of secret groups, created by and run for men with a sexual interest in children, including one being administered by a convicted paedophile who was still on the sex offenders' register.
The groups have names that give a clear indication of their content and contain pornographic and highly suggestive images, many purporting to be of children. They also have sexually explicit comments posted by users.
We found pages specialising in pictures of girls in school uniform - accompanied by obscene posts.
Images appeared to be stolen from newspapers, blogs and even clothing catalogues, while some were photographs taken secretly, and up close, in public places. One user had even posted a video of a children's dance show.
Angus Crawford - BBC News
Facebook responsible for 94% of 69 million child sex abuse images reported by US tech firms
The figures emerge as the UK is among seven nations warning of the impact of end-to-end encryption on public safety online.
Facebook was responsible for 94% of the 69 million child sex abuse images reported by US technology companies last year.
The figures emerged as seven countries, including the UK, published a statement on Sunday warning of the impact of end-to-end encryption on public safety online.
Facebook has previously announced plans to fully encrypt communications in its Messenger app, as well as its Instagram Direct service - on top of WhatsApp, which is already encrypted - meaning no one apart from the sender and recipient can read or modify messages.
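For context, "end-to-end encrypted" means each message is encrypted and authenticated with keys held only by the two endpoints, so the platform carrying it can neither read it nor alter it undetected. Messenger's planned design is based on the Signal protocol; the standard-library sketch below (a hash-derived keystream plus an HMAC tag, a toy scheme and not a production cipher) only illustrates that read/modify guarantee.

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudorandom keystream from SHA-256 over key||nonce||counter.
    blocks, counter = [], 0
    while sum(len(b) for b in blocks) < length:
        blocks.append(hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest())
        counter += 1
    return b"".join(blocks)[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Encrypt, then append an HMAC tag so any tampering is detectable.
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message was modified in transit")
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, nonce, len(ct))))
```

Without the shared key, an intermediary can neither decrypt the ciphertext nor forge a valid tag; `decrypt()` rejects any tampered byte because the HMAC no longer verifies. That property is precisely what law enforcement says will cut off its visibility.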
The social media site said the changes are designed to improve user privacy on all of its platforms.
But law enforcement agencies fear the move will have a devastating impact on their ability to target paedophiles and protect children online.
The National Crime Agency (NCA) has warned the number of referrals could drop to zero if Facebook presses ahead with end-to-end encryption.
Millions of child sex abuse images have been shared on Facebook
Some 16.9 million referrals were made by US tech firms to the National Centre for Missing and Exploited Children (NCMEC) last year, including 69 million images of children being abused - up 50% on the previous year.
Some 94% of the reports, which include the worst category of images, came from Facebook, Home Office officials said.
Robert Jones, the NCA director responsible for tackling child sexual abuse, said of the plan: "The lights go out, the door gets slammed, and we lose all of that insight. It is as simple as that.
"And nothing, you know we're relying on the best technical expertise... in the UK, the same people that keep the UK safe against terrorists, hostile states, cyber attacks, are telling us there is no viable alternative. I believe them. And I am deeply concerned."
The NCA believes there are at least 300,000 people in the UK who pose a sexual threat to children, with 86,832 UK-related referrals to NCMEC last year, including 52% from Facebook and 11% from Instagram.
Mr Jones said industry reporting led to the arrest of more than 4,500 offenders and the safeguarding of around 6,000 children in the UK in the year to June 2020.
He continued: "The end-to-end encryption model that's being proposed takes out of the game one of the most successful ways for us to identify leads, and that layers on more complexity to our investigations, our digital media, our digital forensics, our profiling of individuals and our live intelligence leads, which allow us to identify victims and safeguard them.
"What we risk losing with these changes is the content, which gives us the intelligence leads to pursue those offenders and rescue those children."
Home Office officials say Facebook has not published credible plans to protect child safety a year on from Home Secretary Priti Patel's open letter to the firm's co-founder Mark Zuckerberg asking it to halt its end-to-end encryption proposals.
A statement signed by Ms Patel, along with the US, Australia, New Zealand, Canada, India and Japan - whose populations represent around a fifth of Facebook's two billion global users - is calling for tech companies to ensure they don't blind themselves to criminality on their platforms.
Ms Patel said: "We owe it to all of our citizens, especially our children, to ensure their safety by continuing to unmask sexual predators and terrorists operating online."
The statement calls for public safety to be embedded in systems, for law enforcement to be given access to content, and for engagement with governments.
It reads: "Encryption is an existential anchor of trust in the digital world and we do not support counter-productive and dangerous approaches that would materially weaken or limit security systems.
"Particular implementations of encryption technology, however, pose significant challenges to public safety, including to highly vulnerable members of our societies like sexually exploited children."
Sky News, Tom Gillespie - October 12, 2020
Facebook reported more than 20 million child sexual abuse images in 2020, more than any other company
The material was flagged to the NCMEC, a charity that fights child sexual abuse.
Facebook reported more than 20 million child sexual abuse images on its platform in 2020, according to a new report by the National Center for Missing and Exploited Children (NCMEC).
According to the report released Wednesday, Facebook recorded 20,307,216 instances of child sexual exploitation on its platforms in 2020. The figures cover Instagram as well as the main Facebook site.
Insider first reported the figures in January, when Facebook confirmed the number. The full report has figures for other companies, and shows that Facebook made more than 35 times as many reports as the next company on the list, Google.
Facebook's platforms account for the vast majority of all child sexual abuse content flagged to the NCMEC, and the 2020 figure represents a 31% increase on the 16 million images the platform reported in 2019.
Facebook highlighted its proactive policies and use of technology to detect and remove child exploitation material in response to the increase.
"Using industry-leading technology, over 99% of child exploitation content we remove from Facebook and Instagram is found and taken down before it's reported to us," a spokesperson told Insider in January.
Other sites remove material after it is found or flagged to them, but don't have proactive policies to find it.
Following Facebook, the platforms with the most reports were:
•Google with 546,704.
•Snapchat with 144,095.
•Microsoft with 96,776.
•Twitter with 65,062.
•TikTok with 22,692.
•Omegle (a video and text chat platform) with 20,265.
Mindgeek, the company that owns porn websites including Pornhub, logged 13,229 reports. Last year a series of credit card companies severed ties with Pornhub after The New York Times revealed that the site was hosting child sexual exploitation videos.
Facebook said in a blog post ahead of the release of the NCMEC report that it was building new tools to track down child sexual abuse material, and that most of the material it identified was old material being shared or re-sent.
"We found that more than 90 percent of this content was the same as or visually similar to previously reported content," said the post.
"And copies of just six videos were responsible for more than half of the child exploitative content we reported in that time period. While this data indicates that the number of pieces of content does not equal the number of victims, and that the same content, potentially slightly altered, is being shared repeatedly, one victim of this horrible crime is one too many."
The NCMEC told Insider in January that COVID-19 lockdowns were likely among the factors behind the overall increase in the amount of material reported to them in 2020.
Vulnerable children were less able to get help, and there was a new trend of abuse being livestreamed on demand, said the NCMEC at the time.
The 160 companies signed up to the NCMEC's child sexual abuse reporting mechanism voluntarily share the information, which is then used by law enforcement to investigate people committing the crimes.
Insider by Tom Porter Feb 26, 2021
Facebook reported more than 20 million child sexual abuse images on its platform in 2020, according to a new report by the National Center for Missing and Exploited Children (NCMEC).
According to the report released Wednesday, Facebook recorded 20,307,216 instances of child sexual exploitation on its platforms in 2020. The figures cover Instagram as well as the main Facebook site.
Insider first reported the figures in January, when Facebook confirmed the number. The full report has figures for other companies, and shows that Facebook made more than 35 times as many reports as the next company on the list, Google.
Facebook's platforms account for the vast majority of all child sexual abuse content flagged to the NCMEC, and the 2020 figure represents a 31% increase on the 16 million images the platform reported in 2019.
Facebook highlighted its proactive policies and use of technology to detect and remove child exploitation material in response to the increase.
"Using industry-leading technology, over 99% of child exploitation content we remove from Facebook and Instagram is found and taken down before it's reported to us," said a spokesperson to Insider in January.
Other sites remove material after it is found or flagged to them, but don't have proactive policies to find it.
Following Facebook, the platforms with the most reports were:
•Google with 546,704.
•Snapchat with 144,095.
•Microsoft with 96,776.
•Twitter with 65,062.
•TikTok with 22,692.
•Omegle (a video and text chat platform) with 20,265.
MindGeek, the company that owns porn websites including Pornhub, logged 13,229 reports. Last year a series of credit card companies severed ties with Pornhub after The New York Times revealed that the site was hosting child sexual exploitation videos.
Facebook said in a blog post ahead of the release of the NCMEC report that it was building new tools to track down child sexual abuse material, and that most of the material it identified was old material being shared or re-sent.
"We found that more than 90 percent of this content was the same as or visually similar to previously reported content," said the post.
"And copies of just six videos were responsible for more than half of the child exploitative content we reported in that time period. While this data indicates that the number of pieces of content does not equal the number of victims, and that the same content, potentially slightly altered, is being shared repeatedly, one victim of this horrible crime is one too many."
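Facebook's post does not say how it recognizes that content is "the same as or visually similar to previously reported content," but the standard industry approach is perceptual hashing: each image is reduced to a compact fingerprint, and fingerprints within a small distance of a previously reported hash are treated as re-shares of known material (Microsoft's PhotoDNA is the best-known production system). The sketch below is purely illustrative, using a toy average hash on an 8x8 brightness grid, not Facebook's actual technology:

```python
# Illustrative sketch (not Facebook's actual system): detect re-shared imagery
# by comparing perceptual hashes against hashes of previously reported material.

def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grid of brightness values (0-255).

    Each bit records whether a pixel is brighter than the image's mean, so
    small uniform edits (brightness, mild recompression) barely change the hash.
    Production systems use far more robust hashes (e.g. PhotoDNA).
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count the bit positions where the two hashes differ."""
    return bin(h1 ^ h2).count("1")

def is_near_duplicate(h1, h2, threshold=8):
    """Hashes within a small Hamming distance indicate visually similar images."""
    return hamming_distance(h1, h2) <= threshold

# A slightly brightened copy of an image still hashes close to the original:
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
altered = [[min(255, v + 10) for v in row] for row in original]
print(is_near_duplicate(average_hash(original), average_hash(altered)))  # True
```

This also explains the report's caveat that "the number of pieces of content does not equal the number of victims": millions of reports can match a handful of known hashes.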
The NCMEC told Insider in January that COVID-19 lockdowns were likely among the factors behind the overall increase in the amount of material reported to them in 2020.
Vulnerable children were less able to get help, and there was a new trend of abuse being livestreamed on demand, said the NCMEC at the time.
The 160 companies signed up to the NCMEC's child sexual abuse reporting mechanism voluntarily share the information, which is then used by law enforcement to investigate people committing the crimes.
Insider by Tom Porter Feb 26, 2021
FTC Sues Facebook for Illegal Monopolization
December 9, 2020
Agency challenges Facebook’s multi-year course of unlawful conduct
The Federal Trade Commission today sued Facebook, alleging that the company is illegally maintaining its personal social networking monopoly through a years-long course of anticompetitive conduct. Following a lengthy investigation in cooperation with a coalition of attorneys general of 46 states, the District of Columbia, and Guam, the complaint alleges that Facebook has engaged in a systematic strategy—including its 2012 acquisition of up-and-coming rival Instagram, its 2014 acquisition of the mobile messaging app WhatsApp, and the imposition of anticompetitive conditions on software developers—to eliminate threats to its monopoly. This course of conduct harms competition, leaves consumers with few choices for personal social networking, and deprives advertisers of the benefits of competition.
The FTC is seeking a permanent injunction in federal court that could, among other things: require divestitures of assets, including Instagram and WhatsApp; prohibit Facebook from imposing anticompetitive conditions on software developers; and require Facebook to seek prior notice and approval for future mergers and acquisitions.
“Personal social networking is central to the lives of millions of Americans,” said Ian Conner, Director of the FTC’s Bureau of Competition. “Facebook’s actions to entrench and maintain its monopoly deny consumers the benefits of competition. Our aim is to roll back Facebook’s anticompetitive conduct and restore competition so that innovation and free competition can thrive.”
According to the FTC’s complaint, Facebook is the world’s dominant personal social networking service and has monopoly power in a market for personal social networking services. This unmatched position has provided Facebook with staggering profits. Last year alone, Facebook generated revenues of more than $70 billion and profits of more than $18.5 billion.
Anticompetitive Acquisitions
According to the FTC’s complaint, Facebook targeted potential competitive threats to its dominance. Instagram, a rapidly growing startup, emerged at a critical time in personal social networking competition, when users of personal social networking services were migrating from desktop computers to smartphones, and when consumers were increasingly embracing photo-sharing. The complaint alleges that Facebook executives, including CEO Mark Zuckerberg, quickly recognized that Instagram was a vibrant and innovative personal social network and an existential threat to Facebook’s monopoly power.
The complaint alleges that Facebook initially tried to compete with Instagram on the merits by improving its own offerings, but Facebook ultimately chose to buy Instagram rather than compete with it. Facebook’s acquisition of Instagram for $1 billion in April 2012 allegedly both neutralizes the direct threat posed by Instagram and makes it more difficult for another personal social networking competitor to gain scale.
Around the same time, according to the complaint, Facebook perceived that “over-the-top” mobile messaging apps also presented a serious threat to Facebook’s monopoly power. In particular, the complaint alleges that Facebook’s leadership understood—and feared—that a successful mobile messaging app could enter the personal social networking market, either by adding new features or by spinning off a standalone personal social networking app.
The complaint alleges that, by 2012, WhatsApp had emerged as the clear global “category leader” in mobile messaging. Again, according to the complaint, Facebook chose to buy an emerging threat rather than compete, and announced an agreement in February 2014 to acquire WhatsApp for $19 billion. Facebook’s acquisition of WhatsApp allegedly both neutralizes the prospect that WhatsApp itself might threaten Facebook’s personal social networking monopoly and ensures that any future threat will have a more difficult time gaining scale in mobile messaging.
Anticompetitive Platform Conduct
The complaint also alleges that Facebook, over many years, has imposed anticompetitive conditions on third-party software developers’ access to valuable interconnections to its platform, such as the application programming interfaces (“APIs”) that allow the developers’ apps to interface with Facebook. In particular, Facebook allegedly has made key APIs available to third-party applications only on the condition that they refrain from developing competing functionalities, and from connecting with or promoting other social networking services.
The complaint alleges that Facebook has enforced these policies by cutting off API access to blunt perceived competitive threats from rival personal social networking services, mobile messaging apps, and other apps with social functionalities. For example, in 2013, Twitter launched the app Vine, which allowed users to shoot and share short video segments. In response, according to the complaint, Facebook shut down the API that would have allowed Vine to access friends via Facebook.
The lawsuit follows an investigation by the FTC’s Technology Enforcement Division, whose staff cooperated closely with a coalition of attorneys general, under the coordination of the New York State Office of the Attorney General. Participating Attorneys General include: Alaska, Arizona, Arkansas, California, Colorado, Connecticut, Delaware, the District of Columbia, Florida, Guam, Hawaii, Idaho, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Maine, Maryland, Massachusetts, Michigan, Minnesota, Mississippi, Missouri, Montana, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Carolina, North Dakota, Ohio, Oklahoma, Oregon, Pennsylvania, Rhode Island, Tennessee, Texas, Utah, Vermont, Virginia, Washington, West Virginia, Wisconsin, and Wyoming.
The Commission vote to authorize staff to file for a permanent injunction and other equitable relief in the U.S. District Court for the District of Columbia was 3-2. Commissioners Noah Joshua Phillips and Christine S. Wilson voted no.
NOTE: The Commission issues a complaint when it has “reason to believe” that the law has been or is being violated, and it appears to the Commission that a proceeding is in the public interest.
The Federal Trade Commission works to promote competition, and protect and educate consumers. You can learn more about how competition benefits consumers or file an antitrust complaint.
Betsy Lordan elordan@ftc.gov - Office of Public Affairs 202-326-3707
Facebook Just Gave 2.8 Billion Users A Reason To Quit Their Accounts - Forbes, April 15, 2021
Facebook has had a bad month and, having lost the data of 533 million users, new revelations may just make the social media giant’s 2.8 billion active users think about calling it quits.
The revelations come from two very different sources: a university student and a prominent UK newspaper, but both are likely to significantly undermine trust in the social network.
The first comes from a viral thread posted by student Zamaan Qureshi:
“So I decided to download my Facebook data after learning I was a part of the 533m breach,” he explained. “Clicked on a folder called ‘your_off_facebook_activity’ and was unsurprised to learn that Facebook is following me all over the internet.”
Qureshi attached a video showing what this means: hundreds of files recording his browsing activity, ranging from records of him ordering pizza to university applications and registrations on political sites. Moreover, Qureshi explains that this happened after he deleted his “off-Facebook activity” from the site and disabled off-site tracking (two Facebook privacy settings).
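Anyone can inspect their own export the same way Qureshi did. The sketch below shows one way to tally off-Facebook activity events per business from such a download; the JSON layout here is an assumption modeled loosely on Facebook's "Download Your Information" export and the real files may be structured differently:

```python
# Illustrative sketch: counting off-Facebook activity events in a data export.
# The JSON schema below is an assumption, not Facebook's documented format.
import json
from collections import Counter

sample_export = json.loads("""
{
  "off_facebook_activity": [
    {"name": "pizza-delivery.example",
     "events": [{"type": "PURCHASE"}, {"type": "PAGE_VIEW"}]},
    {"name": "university-portal.example",
     "events": [{"type": "PAGE_VIEW"}]}
  ]
}
""")

def events_per_business(export):
    """Return a Counter mapping each business name to its tracked-event count."""
    return Counter({
        biz["name"]: len(biz.get("events", []))
        for biz in export.get("off_facebook_activity", [])
    })

counts = events_per_business(sample_export)
print(counts.most_common())
```

Running this against a real export (after loading each JSON file from the `your_off_facebook_activity` folder) would surface exactly the kind of cross-site tracking Qureshi described.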
Serious Warning Issued For Millions Of WhatsApp Users - Forbes, April 17, 2021
Facebook has its own problems right now, but things just got a lot worse as a serious warning has now been issued for millions of WhatsApp users.
In a new report titled “How a WhatsApp status loophole is aiding cyberstalkers”, cybersecurity firm Traced has revealed that flaws in WhatsApp’s security are creating a growth industry in tracking and stalking the app’s users. Moreover, there’s nothing you can do about it.
“When someone comes online in WhatsApp (that is, they open the app or bring it to the foreground), an indicator changes, setting their status to ‘Online’”, Traced explains. “This indicator is public information, and can be used by anyone to build a service that watches out for this online status indicator.”
While Traced redacted the names of the services exploiting this (and I have respected this), it did publish some chilling examples of what they promise:
“Tracker1’s own marketing on their website: ‘If you suspect a cheating spouse, boyfriend or girlfriend… [Tracker1]’s WhatsApp last seen tracker online can help you to confirm whether or not your suspicions are really true.’”
Should Amazon and/or CEO Jeff Bezos, along with Google and/or Sundar Pichai, be prosecuted for criminal antitrust conspiracy in violation of the Sherman Antitrust Act or the Clayton Antitrust Act? And if found guilty, how many years should Jeff Bezos, Andy Jassy, Sundar Pichai, or any of the other high-ranking executives be incarcerated in federal prison?