2022 trend: Evolving eSafety

Karl Finn
3 min read · Sep 22, 2022


Can policymakers and big tech finally turn the tide on child exploitation online?

In 2022, Stanford University introduced a new course on internet trust and safety. Led by renowned cybersecurity expert Alex Stamos, the course covered online threats, including child sexual abuse material (CSAM).

According to Stamos, Stanford’s research into internet safety is two-pronged: “One of our goals is to inject a little more realism into the discussion of [tech] abuses. The second is to expand the discussion of what should be considered the responsibility of tech companies.”

Concerns over online child safety have intensified since 2020. In September 2022, Bloomberg cited research from the National Center for Missing & Exploited Children (NCMEC), showing that online CSAM had increased by 73% from 2019 to 2021.

One country is confronting the issue head-on. In 2022, Australia’s Online Safety Act came into force, enforced by the eSafety Commissioner, the country’s independent online safety regulator. Under the world-first legislation, the government can penalise tech companies that fail to protect users, and protecting minors is among its top priorities.

These problems converge on public platforms such as Twitter, where bad actors can connect with users all over the world and evade detection through “dog whistling” tactics. Our researchers uncovered sensitive material on Twitter by searching with coded emojis, and reported the posts manually. The content was taken down, but the episode raises serious questions about the platform’s content moderation systems.

In 2022, a report by The Verge revealed the scale of Twitter’s failings. It referenced a damning internal investigation, which admitted “Twitter cannot accurately detect child sexual exploitation and non-consensual nudity at scale.”

Other platforms are also struggling to filter harmful content. As TikTok grows, the company is expanding its content moderation workforce. However, Insider revealed that many of its overworked moderators have only seconds to assess each video, increasing the chance that graphic content slips through. With an estimated 25% of TikTok’s users aged 10–19, a leading UK child protection charity has issued safety guidance.

This is fuelling demand for AI detection tools. Google equips specialist human teams with automated detection tools to identify CSAM and support legal investigations. A high-profile case, in which a parent was permanently banned after photographing his son’s groin for medical purposes, shows just how far-reaching Google’s tools are, and how severe the consequences of a flag can be.

There are reasons to be hopeful. Recent breakthroughs in AI-based text-to-image technology from Google and OpenAI could one day be adapted to detect graphic content online with greater efficiency and accuracy. Australia’s world-leading legislation hints at a safer future, in which big tech companies and policymakers work proactively and do more to protect the internet’s most vulnerable users.

For more trend analysis and special reports, sign up to Predictedit’s regular newsletter.

Hero image courtesy of SHVETS at Pexels.


Karl Finn

Writer in London. Currently run events at Google, formerly V&A and Sotheby’s. Founder of Predictedit, a newsletter bringing together trends, research and ideas.