A coalition of musicians and human rights groups is pressing music streaming company Spotify to rule out any use of a speech recognition tool it recently patented to suggest songs, painting the technology as ‘creepy’ and ‘invasive’.
Earlier this year, Sweden-based Spotify patented a technology which examines users’ speech and background noise to suggest songs based on factors like their mood, age, gender, accent or surroundings.
The company pointed out in a letter published in April that it had never implemented the tool in its product and had no plans to do so in the future.
However, in an open letter, over 180 artists and activists urged the firm to drop the project entirely and publicly commit never to use, license, sell or monetize it.
‘Any recommendation technology is dangerous, a violation of privacy and other human rights, and should not be implemented by Spotify or any other company,’ the letter said.
‘Any use of this technology is unacceptable.’
Signatories include American guitarist Tom Morello of Rage Against the Machine, rapper Talib Kweli, Laura Jane Grace of rock band Against Me! and advocacy groups Amnesty International and Access Now.
‘You can’t rock out when you’re under constant corporate surveillance,’ Morello said in a statement.
In the patent application, first filed in 2018, Spotify, which has 356 million active users, said it was not uncommon for a media streaming platform to include features that provide personalized recommendations to users.
But moulding suggestions around someone’s taste usually requires them to ‘tediously input answers to multiple queries’, it said.
The technology was intended to streamline song suggestions to fit a person’s mood or setting, with background noise used to infer, for instance, whether someone is listening to music alone or with others.
However, the signatories of the letter raised privacy concerns, as devices could collect private information and make inferences about people in the room who may not be aware they are being listened to.
According to them, using artificial intelligence to recommend music could also expand existing disparities in the music industry.
‘Claiming to be able to infer someone’s taste in music based on their accent or detect their gender based on the sound of their voice is racist, transphobic and just plain creepy,’ said musician Evan Greer.
Use of voice recognition software has shot up across a wide range of sectors, from customer service to automated transactions and digital assistants. However, it suffers from the same problems as facial recognition software in terms of potential discrimination, inaccuracy and surveillance, said Daniel Leufer, Europe policy analyst at Access Now.
‘When designing voice recognition systems, certain languages, dialects and even accents are prioritized over others,’ Leufer said.
‘This ends up effectively either excluding people who don’t speak those languages, dialects, or with those accents, or forcing them to adapt their speech to what is hardcoded into these systems as “normal”,’ he said in a statement.
By Marvellous Iwendi.