If you have spent as much time online as I have, you’ll know that YouTube is legitimately one of the most fucked up places on the internet. Dig a little deeper beyond the movie trailers, vloggers, fail compilations and fan videos and you’ll find a genuinely depraved underbelly of surreal, algorithmically generated content aimed squarely at kids.
The New York Times published a big piece on the seedier side of kids’ YouTube over the weekend, pointing out that unsupervised children can easily stumble across some absolutely bizarre content while searching for their favourite things, like Peppa Pig or Disney movies. The Times writes:
But the app contains dark corners, too, as videos that are disturbing for children slip past its filters, either by mistake or because bad actors have found ways to fool the YouTube Kids algorithms.
In recent months, parents like Ms. Burns have complained that their children have been shown videos with well-known characters in violent or lewd situations and other clips with disturbing imagery, sometimes set to nursery rhymes. Many have taken to Facebook to warn others, and share video screenshots showing moments ranging from a Claymation Spider-Man urinating on Elsa of “Frozen” to Nick Jr. characters in a strip club.
It’s not hard to find this stuff, either. For example – on the milder end of the spectrum – a quick search for the Disney film Cars turned up… this video. It’s not disturbing, but it is fucking weird and unsettling, and it has 52 million views thanks to its word-salad title designed to game search algorithms.
Some of the content lurches towards the genuinely unsettling, as the Times notes. Like this really, really odd Peppa Pig video, which seems to be built around the idea of Peppa being tortured.
Or this one, for which the title should be somewhat self-explanatory, which appeared in the New York Times article. It has over 3 million views.
A post on Medium by writer James Bridle, which digs a little deeper into this problem, is going gangbusters on the internet right now. The piece – in my opinion – overcooks the problem a little, amplifying the message with some horror-like writing. But the issue is real, and Bridle makes a series of good points:
To expose children to this content is abuse. We’re not talking about the debatable but undoubtedly real effects of film or videogame violence on teenagers, or the effects of pornography or extreme images on young minds, which were alluded to in my opening description of my own teenage internet use. Those are important debates, but they’re not what is being discussed here. What we’re talking about is very young children, effectively from birth, being deliberately targeted with content which will traumatise and disturb them, via networks which are extremely vulnerable to exactly this form of abuse. It’s not about trolls, but about a kind of violence inherent in the combination of digital systems and capitalist incentives. It’s down to that level of the metal.
There’s a big conversation to be had here about the role of platforms like YouTube and Google in filtering or monitoring this kind of content. Of course, we don’t want a broad censorship regime – and it’s hard to justify the platforms acting as the arbiter of what people can and can’t watch.
Also, it’s worth remembering that our generation – millennials, or Gen Y, or whatever you want to call it – was undoubtedly exposed to some truly wild stuff online during the internet’s formative period. So there’s no doubt a bit of pearl-clutching going on right now. But YouTube operates on a much more industrial scale than the shock content we watched as kids, and this material is being purposefully directed at very young children searching for specific, innocent terms.
Malik Ducard, YouTube’s global head of family and learning content, told the NYT that the content described in its article is “the extreme needle in the haystack,” but that “making the app family friendly is of the utmost importance to [YouTube].”
Food for thought.