I know. I know. “What we need is a Spotify/Netflix/Uber/Deliveroo/Blockchain/AirBnB for learning”. It’s another one of those posts. Well…yes. It is, I guess. But with good cause, I hope.
I do not intend to argue that the runaway success of these apps offers a ready-made UX blueprint for learning products. Maybe it does. Maybe it doesn’t. (Although I am convinced that their design, and their designers, have lessons for all of us trying to find and keep users of our stuff.)
I write this post from my ponderings as I prepare my session at the Learning Technologies Summer Forum (session T2S4, for the curious and keen). In the session I will spend a short moment looking at how Spotify handles data in its music recommendation design and how it decides whether songs are similar or not. I do this because I think it is a long way ahead of the kinds of content analysis the L&D world is undertaking (a generalisation, I know, but not without reason).
Spotify takes three approaches:
Collaborative filtering examines an individual’s listening behaviour and compares it with other people’s tastes.
Natural Language Processing (NLP) analyses the text around each song: lyrics, titles and what is written about it online.
Audio modelling uses a song’s raw audio to understand its musical character and compare it with other songs.
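To make the first of these concrete: Spotify’s real systems are vastly more sophisticated, but the core idea of collaborative filtering can be sketched in a few lines. This is a toy illustration with invented users and play counts, not anything resembling Spotify’s actual implementation; it scores unheard songs by how much similar listeners play them.

```python
from math import sqrt

# Hypothetical play-count data: users mapped to song play counts.
plays = {
    "alice": {"song_a": 10, "song_b": 3, "song_c": 0},
    "bob":   {"song_a": 8,  "song_b": 4, "song_c": 1},
    "carol": {"song_a": 0,  "song_b": 1, "song_c": 12},
}

def cosine(u, v):
    """Cosine similarity between two users' play-count vectors."""
    songs = set(u) | set(v)
    dot = sum(u.get(s, 0) * v.get(s, 0) for s in songs)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user, plays):
    """Rank songs the user hasn't heard, weighted by how much
    similar listeners play them ('you liked this, they liked that')."""
    scores = {}
    for other, vec in plays.items():
        if other == user:
            continue
        sim = cosine(plays[user], vec)
        for song, count in vec.items():
            if plays[user].get(song, 0) == 0 and count > 0:
                scores[song] = scores.get(song, 0) + sim * count
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice", plays))
```

Here alice has never played song_c, but because her tastes overlap with bob’s (and faintly with carol’s), song_c surfaces as a recommendation.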
They look at content beyond their boundaries
So, they look at user behaviour (in detail, across 100-million-plus users) and find the important similarities between our listening repertoires. This is one source of the ‘you liked this, other people who liked it also liked that‘. That is incomplete though (shared accounts can really mess up a listening history, for example). To counter this they run language processing over lyrics and titles and, crucially, they analyse web data on blogs and social platforms to see how listeners describe and comment on songs. They look at content beyond their boundaries – not just ‘Spotify’ behaviour. In addition, they analyse the raw audio files to compare and categorise music genres and styles. Not all music has lyrics, you see (as the greatest jazz tunes will attest).
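The web-text side of this can also be sketched simply. Assume (hypothetically) we have scraped short snippets of how listeners describe tracks on blogs; treating each snippet as a bag of words, two tracks described in similar language score as similar even if we never touch the audio. The track names and descriptions below are invented for illustration.

```python
from collections import Counter
from math import sqrt

# Hypothetical snippets of how listeners describe tracks online.
descriptions = {
    "track_x": "dreamy lo-fi mellow beats mellow vibe",
    "track_y": "mellow lo-fi chill beats study vibe",
    "track_z": "aggressive fast metal riffs loud",
}

def term_vector(text):
    """Bag-of-words term counts for a description."""
    return Counter(text.lower().split())

def similarity(a, b):
    """Cosine similarity of two descriptions' term vectors."""
    va, vb = term_vector(a), term_vector(b)
    dot = sum(va[t] * vb[t] for t in va.keys() & vb.keys())
    na = sqrt(sum(c * c for c in va.values()))
    nb = sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb)

sim_xy = similarity(descriptions["track_x"], descriptions["track_y"])
sim_xz = similarity(descriptions["track_x"], descriptions["track_z"])
# track_x reads far more like track_y than track_z
```

Real systems would use far richer language models than word counts, but the principle is the same: the words people use around a song carry signal about what it sounds like.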
Spotify are intensely dedicated to gathering and analysing content and user data to target recommendations. They also use it to add serendipity in discovery and broaden our playlist behaviour. The more we discover, the more we listen and the more loyal we are.
Even smarter still. The big data (proper big data, not an LMS, content portal and collaboration platform) is supplemented with editorial inputs from what we call humans. These particular humans are experts in various music genres and local/national music scenes. They ‘get’ music culture in ways that machines are yet to grasp. They filter and suggest playlists – looking way beyond the boundaries of the organisation – for signals of interest and gatherings of similar ears. Backed by data, they can focus and hone the listings and to whom they are pushed. Many ex-BBC Radio producers are now at Spotify and Apple Music for this reason (and the salary too, I suspect).
As you can tell, I am a fan. This is mainly because I can see the benefits of this approach as a punter. It works for me. I have yet to see much of this kind of analysis and judgement in the L&D world. Or do I need to get out more and travel more widely?
If you feel moved to, come and tell me what you think next week.