How ethical data sourcing and artificial intelligence can help tackle online child sexual abuse

When we envision where AI might go next, we often feel fear. After a Google engineer was fired in June for claiming that an artificial intelligence he had been working on was sentient, the idea that conscious machines able to think and feel could soon exist no longer seemed far-fetched. At the same time, there is concern that law enforcement is using artificial intelligence to invade our privacy: a 2019 survey by the Ada Lovelace Institute found that more than half of respondents (55 per cent) want the government to impose restrictions on the police's use of facial recognition technology.

But what is often overlooked is how AI can be used to improve our safety. One area where it could be beneficial is in tackling child sexual abuse and exploitation. The rise of such material online is alarming: according to the Internet Watch Foundation, compared with pre-pandemic levels there has been a 15-fold increase in child sexual abuse content, and a 374 per cent increase in the number of sexual images children have taken of themselves.

The permanence of material on the internet is another cause for concern. This insidious content persists in chat rooms, private messaging apps, illegal websites and cloud platforms. A 2017 survey by the Canadian Centre for Child Protection found that 67 per cent of abuse survivors said the distribution of their images continues to affect them. Nearly 70 per cent said they constantly fear being recognised by people who have seen images of their abuse.

Governments, law enforcement and child protection services need to play a leading role in solving this problem, and AI can help them. A project in Australia is using ethically sourced data to tackle child sexual abuse. The My Pictures Matter initiative was created by researchers at the AiLECS Lab, a collaboration between Monash University and the Australian Federal Police. It asks members of the adult public to voluntarily upload clothed photographs of themselves as children (aged 0-17) to the project's website, with the aim of crowdsourcing 100,000 "happy" childhood photos.

[See also: “AI is invisible – that’s part of the problem,” says Wendy Hall]

The pictures will then be analysed using machine learning algorithms to understand what children look like at different stages of childhood. In future, when a suspect's laptop is seized, the algorithm could be used to detect images of child sexual abuse, distinguishing abusive images from benign ones. Ultimately, the researchers say, it will be able to scan files and flag indecent images faster than humans can, streamlining referrals to police while minimising investigators' repeated exposure to indecent material.

Unlike most databases of child sexual abuse material, this data set consists of images that are safe and consensually provided. Project lead Nina Lewis told Spotlight that the aim is to "promote informed and meaningful consent" for the use of images of children in machine learning research.


The UK has its own Child Abuse Image Database (CAID), which helps police identify victims and offenders. Established in 2014, the database has, according to police, significantly accelerated the process of reviewing images. "Previously, a case of 10,000 images would typically take up to three days," said one of the first units to use it. "Now, after matching images against CAID, a case like this can be reviewed within an hour." The project received a further £7m in funding in 2020. To improve efficiency, the Home Office has partnered with the tech company Vigil AI to use an artificial intelligence tool that speeds up the identification and classification of abuse images.

While the UK database serves a similar purpose to the My Pictures Matter project, its ethical implications are different. Rather than inviting consenting adults to share childhood photos of themselves, CAID relies on images of child abuse to function: in the six months to January 2021, 1.3 million unique images were added to the database. Retaining these images can cause further distress and trauma to victims, as well as to the police officers and online moderators responsible for reviewing the content.


The pilot project in Australia shows how ethically developed artificial intelligence can be used to improve the process of identifying such material. In the UK, the Online Safety Bill, currently making its way through Parliament, aims to make the internet safer, especially for children and young people, by placing the onus on social media sites and technology platforms to deal with content such as child sexual abuse material. They will be obliged to monitor their platforms, remove harmful material and even prevent people from seeing it in the first place. The AiLECS Lab project shows how a more ethical approach to data collection can make that process easier for victims and moderators alike.

[See also: Is facial recognition tech in public spaces ever justified?]
