Spotify has sought to play down reports that its UK subscribers could have their accounts deleted as the streaming platform complies with the new Online Safety Act and the requirement to check a user’s age before they access certain kinds of content.
That legal requirement for websites to check a user’s age came into force last Friday, resulting in millions of British internet users logging onto online age verification services, and a simultaneous surge in people signing up for VPNs, which help users circumvent the UK-specific restrictions.
Although the new legal requirement most obviously applies to pornography sites, other websites and digital platforms that carry age-restricted content also need to comply.
And that includes Spotify, where some content may have been labelled as eighteen plus, for one reason or another, by the label, distributor or producer that supplied it. As a result, some UK-based Spotify users accessing that content have been told they now need to go through an age verification process.
An accompanying FAQ about that process states, “if you cannot confirm you’re old enough to use Spotify, your account will be deactivated and eventually deleted”. Which has resulted in news stories declaring that Spotify is threatening to delete the accounts of subscribers who fail an age verification process that involves having your face scanned by the streaming app.
In response, a Spotify spokesperson asked 404 to clarify that “we are not forcing users to go through our age assurance checks, these are voluntary”, and that “there are multiple ways that users can go through our age assurance checks (eg ID verification) – not just ‘face scanning’”.
The age checks are technically mandatory if a user wants to access age-restricted content. But as that’s a small minority of the content available on the platform, it is true that Spotify in general can still be used without having to go through the process of proving your age.
The UK government has published guidance on what content is harmful to children and therefore should be subject to age-checks under the Online Safety Act.
The list obviously includes sexually explicit content, but also includes content that encourages or promotes self-harm, eating disorders or suicide; which depicts or encourages serious violence; which encourages dangerous stunts or the use of harmful substances; or which is ‘hateful’ or constitutes bullying.
That means social media, user-upload, streaming and community platforms also need to be aware of what content requires age-checks and have systems in place to do those checks, with platforms that fail to comply with the new laws facing fines of up to £18 million or 10% of their worldwide revenues.
On Spotify – which officially doesn’t allow sexually explicit content on its platform – you’d expect podcasts to be the most likely source of other content that could be deemed harmful to children. However, the example Spotify gives in its guidance of content that may require a subscriber to verify their age is music videos that have been tagged by the releasing label as only being suitable for users over the age of eighteen.
In its FAQ, Spotify explains that, to access such content, a user will have to go through a ‘facial age estimation’ process, which uses technology from Yoti that takes a photo of the user and estimates their age. If that process estimates they are under the age of eighteen when they are not, they can then provide a passport or driving licence that proves their age.
If a user can’t get through that process, they won’t be able to access any content tagged as being eighteen plus. Account deletion should only happen if the user can’t prove they are over the minimum age required for using Spotify in general, which in the UK is thirteen years old.