Can Tech Companies Keep Children Safe Online?

5 mins

Social media is a divisive issue wherever children are concerned. For all the benefits that social media brings to young people, real and serious dangers exist. Cyberbullying, exposure to inappropriate content, grooming, data privacy breaches, and mental health issues such as addiction and anxiety are all potential pitfalls for a generation that has grown up with social media.

Recent findings from the UK reveal some shocking statistics about social media usage among minors, and invite the question: who should be keeping them safe — parents, app stores, or the tech giants?

According to Ofcom, approximately a quarter of children aged 5–7 in the UK own a smartphone, and 37% use WhatsApp despite it having a minimum age of 13. While most social media platforms have the same age limit, Ofcom reports that more than half of the country’s children aged below 13 are active on social media. 

While parental responsibility is clearly a large part of the problem — it is, after all, difficult for children to buy smartphones without the help of their parents — industry insiders such as Mark Bunting, Strategy Delivery Director at Ofcom’s Online Safety Group, feel the industry needs to do more. “We've known for a long time that children, under the age limit on a lot of the most popular apps, are widely using those apps, and companies are now under a legal obligation to take steps to keep those children safe,” Bunting told BBC News.

Taming the Algorithm

Policymakers in the UK are stepping up their efforts to tackle the issue. Last year, the UK introduced the Online Safety Act — a set of laws aimed at protecting both adults and children online. In May this year, Ofcom proposed 40 requirements for social media companies aimed at improving online child safety. The proposals put the onus squarely on social media companies themselves to tackle the problem. According to Ofcom’s CEO Melanie Dawes, the proposals “firmly place the responsibility for keeping children safer on tech firms”.

There is, however, reluctance on the part of social media companies to shoulder this responsibility. Mark Zuckerberg, CEO of Meta (which owns Facebook, WhatsApp and Instagram), suggested this year that verifying parental consent should fall to app stores (i.e. Apple and Google) rather than to the social media apps themselves. “I don’t think parents should have to upload an ID to prove that they’re the parent of a child in every single app that their children use,” Zuckerberg told a congressional online safety hearing in January. “A place where it’d be actually very easy for it to work is within the app stores themselves,” he continued, adding that Apple already requires parental consent when a child makes a payment within an app.

Ofcom’s proposals, though, would make Meta — whose social media apps are used by 3.24 billion people daily — and other social media developers such as X (formerly Twitter) and Snapchat responsible for ensuring that minors cannot access inappropriate content. For example, the second of the 40 proposals states that social media companies must “ensure that algorithms which recommend content do not operate in a way that harms children”. 

These firms “will need to tame aggressive algorithms that push harmful content to children in their personalised feeds and introduce age-checks so children get an experience that’s right for their age,” said Dawes.

Which Companies are Improving Online Safety?

Despite shouldering much of the responsibility for improving online safety, big tech has cut resources dedicated to it. Google, X and Meta have all seen divisions dedicated to online trust and safety gutted in a string of job cuts over the past two years: Meta, for example, cut 200 content moderation jobs in January 2023 in addition to 100 trust and responsibility positions and 16 roles in Instagram’s wellbeing group. 

However, various (often smaller) tech companies are making child safety online their primary focus.

For example, US-based Bark Technologies is a leading player in the realm of child online safety. Bark’s software employs advanced algorithms and machine learning to monitor children’s social media activity, emails, and text messages for signs of cyberbullying, predators, and other potential threats. 

Parents receive alerts about concerning activities, along with recommendations on how to handle specific issues, ensuring they can intervene promptly while respecting their child’s privacy. 

Bark launched its own smartphone, the Bark Phone, in November 2022. The phone made Time’s list of the top 200 Inventions for 2023. In total, Bark’s software protects 6.8 million children in the US.

Qustodio, headquartered in the UK, offers robust parental control software designed to safeguard children online.

The application provides parents with tools to monitor their children’s social media usage, set screen time limits, and block inappropriate content. One of Qustodio’s standout features is its detailed activity reports, which give parents insight into how their children are spending time online, allowing for informed conversations about digital safety. 

Finally, Net Nanny is a suite of content control software that enables parents to monitor and manage their children’s online activity. Known for its dynamic content filtering, real-time alerts, and comprehensive monitoring capabilities, it helps protect children from explicit material, cyberbullying, and online predators across social media platforms. 

In 2020, Net Nanny was acquired by SafeToNet, a UK-based company specialising in safeguarding children by using AI to analyse and block harmful messages — including cyberbullying, sexting, abuse and aggression — before they are sent. 

At Oho, we believe that constant innovation will make the internet not only better for everyone to use, but also safer.  

If you are looking to partner with any of the innovators in this field — whether as a jobseeker or an employer — contact our team today.

  • 020 7622 6244
  • Tintagel House, 92 Albert Embankment, London SE1 7TY