TAKE ACTION: Apple Should Lead Big Tech in Protecting Children

This past August, Apple announced a series of measures designed to fight the exponentially increasing crime of child exploitation. The new protections featured three main changes:

  1. Application of cryptography to detect previously confirmed and hashed child sexual abuse material (CSAM) at the device level, before images are uploaded to iCloud (a conceptual sketch of this kind of hash matching appears after this list).
  2. A tool in iMessage that would blur nude images sent to minors, accompanied by warnings, helpful resources, and reassurance that it’s okay not to view the photos. Similar warnings would be triggered if a child tried to send a nude image. Critically, parents of children 12 and under were to receive an alert if a child chose to view or send a nude image anyway (keep reading for the update…).
  3. Updates to Siri and Search to provide parents and children with more resources if they encounter “unsafe” situations, and interventions if users try to search for CSAM-related topics.
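
For readers who want a sense of the mechanism behind item 1: the general idea of hash-based detection is to compare a compact digest of an image against a database of digests of previously confirmed abuse images, so the content itself never has to be viewed and no new judgment is made about a photo. The sketch below is purely illustrative, with hypothetical names and a simple exact-match SHA-256 lookup of our own choosing; Apple’s announced system reportedly relies on a perceptual hash (NeuralHash) and on-device cryptographic matching, which is considerably more involved than this.

```python
import hashlib

# Hypothetical, illustrative digest list. In a real system these values would
# come from a vetted clearinghouse of previously confirmed CSAM hashes and
# would use a perceptual hash, not a simple cryptographic digest like SHA-256.
KNOWN_IMAGE_DIGESTS: set[str] = set()


def matches_known_image(image_bytes: bytes) -> bool:
    """Return True if this image's digest appears in the known-image list."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_IMAGE_DIGESTS


def should_flag_before_upload(image_bytes: bytes) -> bool:
    # Conceptual on-device check run before a photo is queued for cloud upload.
    return matches_known_image(image_bytes)
```

The key property item 1 describes is that only images matching previously confirmed material would ever be flagged; the tool does not attempt to interpret new content.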

These new features were to be implemented while holding to Apple’s core value of protecting privacy and maintaining end-to-end encryption of messages. Child protection advocates were encouraged that the world’s most valuable corporation was taking proactive measures to stem online child exploitation at a time when it is reaching crisis levels. And we must give Apple credit for announcing this move before the recent wave of Senate hearings that placed Facebook, YouTube, Snapchat, and TikTok executives in the Hill hot seat, amplifying to the general public what so many of us have been saying for years: Big Tech is not only failing miserably at protecting kids, but it is preying on them, perpetuating the ills, and even profiting from the harms.

Big Tech has an immense opportunity—and responsibility—to protect kids. #Default2Safety

Just as importantly, Apple was making a critical point that privacy and child protection are not mutually exclusive—that this dichotomy is a “false choice” (to use Facebook whistleblower Frances Haugen’s term) that privacy-rights groups propagate. And in fact, the well-resourced privacy-rights groups did just that after Apple’s August announcement, unleashing a coordinated campaign against Apple. Tech experts like Hany Farid pushed back, noting that Apple’s steps (“modest and long overdue,” in his words) are necessary, not even all that new, and limited to only a portion of child sexual abuse material, since the system does not apply to videos.

Unfortunately, the outrage unleashed by privacy-absolutist groups, critics’ unfounded cries of potential abuse, and pushback from Apple’s Big Tech peers that Farid calls hypocritical (worried they’d be compelled to follow Apple’s principled lead?) forced Apple to pause implementation of these features pending further consultation with experts, delaying tools that could quite literally be saving children’s lives.


Apple has the power to implement tools that could quite literally be saving children’s lives. #Default2Safety



The National Center on Sexual Exploitation joined ECPAT and dozens of leading child safety organizations around the globe in expressing support for the industry-leading steps Apple was taking, encouraging the company to roll out these features as soon as possible, establish a clear timeline, and “go even further” in protecting children.

The Latest: Parents Won’t Be Alerted if Their Child Views or Sends Sexually Explicit Images

There had been no further updates since early September—until last week, when several news sources reported that the iMessage opt-in tool to blur nude images sent through the app to kids 17 and under (based on Apple ID) would be rolled out in beta, but with changes from the initial announcement: parents of children ages 13 and under would not receive an automatic alert if their kids viewed or sent a flagged (likely sexually explicit) image. In fact, that option won’t even exist, so parents will have no way of knowing whether their young child has been exposed to, or is engaging in, high-risk, potentially illegal and life-altering behavior.

NCOSE strongly believes in the need for parents to have greater control over what their young children experience online. We are disappointed that this critically important feature was removed from the initial plans.

We’re also dismayed that this tool will not be turned on automatically: parents will need to turn it on for their children, which inherently leaves children who don’t have the privilege of highly involved caretakers vulnerable to grooming and abuse. Putting all of the burden on young children to determine what is and isn’t appropriate to view presumes too much, given that critical brain development is still ongoing.


Child online exploitation is at crisis levels and many parents don’t know the risks. #Default2Safety



Though the iMessage feature is a step in the right direction, it also places an incredible burden on children to decide whether to accept, view, and/or send nude images. The growing trend of sexting is in itself very risky and can cause incredible harm to the child, and the resulting imagery of minors would likely be considered child sexual abuse material, which is a federal crime to possess or create. Disturbingly, among 9–12 year olds surveyed in 2020, 1 in 7 said they had shared their own nudes, up from 1 in 20 in 2019, according to a new report by the child safety organization Thorn. Even more terrifying, 50% of 9–17 year olds who reported sending a nude image sent it to someone they had never met in real life, and 41% believed they were sending the images to an adult. (Thorn’s blog on the report is a MUST-read for anyone with children in their lives.)

Furthermore, survivors of all forms of sexual abuse and exploitation, law enforcement, and child safety experts consistently warn that sharing sexually explicit imagery is a primary way predators groom children: they often pose as children themselves, or use nude images for sextortion, blackmail used to coerce children into doing what the predator wants. And even when children share images with each other, perhaps out of age-appropriate curiosity, it is common for those images to then be shared with other people and/or uploaded to the internet (onto porn sites or social media platforms).

It’s important to note that even Apple’s original plan to alert parents was a feature that would have been triggered after the fact, meaning only once the child had already viewed or sent a nude image.

Many parents don’t even know these risks or understand the growing crises of CSAM (including self-generated CSAM/sexting), sextortion, or image-based sexual abuse. How can teens and tweens understand it? How can an 8-year-old with a smartphone possibly understand? We also know that many children aren’t going to say anything because they’re too ashamed, or they have no idea who they can confide in. We urge Apple to reconsider its decision not to alert parents, or at the very least to let parents decide what is best for their young children by giving them the ability to block nude images altogether and/or to be notified.

Apple, We Urge You to Reconsider and Take Further Steps to Protect Kids

We also ask Apple to turn this tool on by default for Apple IDs under the age of 18, to help prevent potentially life-altering, even criminal, activity before it occurs. Parents are often overwhelmed and frustrated by the many steps already required to set up and use Screen Time, and so many parents simply don’t. Defaulting to safety would also further protect children who don’t have an adult in their life with the capacity or desire to provide the necessary oversight of their online activity. Turning this tool on by default could, at the very least, give children pause before making a terrible decision with high-risk consequences they literally don’t have the brain development to fully understand.


Apple should make safety measures that protect kids the default on its products. #Default2Safety



NCOSE and our ally Protect Young Eyes, together with other child safety organizations, have reached out to Apple about the new iMessage tool and CSAM scanning, and have encouraged the company to consider several other ways it could continue protecting kids on its products:

  1. Automatically engage age-based safety defaults during device setup, based on the Apple ID age (including defaults for school-issued iPads, as Google did for Chromebooks).
  2. Create an accurate, accountable, age-based app rating system with better, individualized descriptions and control over sexualized ads.
  3. Hold top social media apps to a high standard for privacy, content moderation, and parental controls, given their massive impact on children.

Please join NCOSE and Protect Young Eyes in thanking Apple for taking concrete measures to protect kids on its products, and in asking the company to take additional, critically necessary, common-sense steps, especially when the stakes are so high.


Apple can and should lead Big Tech in protecting children! Urge them to #Default2Safety!



EDITOR’S NOTE: This National Center on Sexual Exploitation column is republished with permission. © All rights reserved.

 
