Automated content creation – coming to your learning projects (maybe soon)

Over the last year or so, I have been looking at technology and other trends that are likely to have an impact on learning, through a few research projects for clients. This endeavour was given some wings for a couple of reasons. Firstly, not starting from the perspective of Learning Technologies (i.e. those created to be sold to L&D budget holders) is liberating. Additionally, an editorial view over a three or so year time horizon raises the sights above predictions of specific implementation. (Spoiler: AI is going to be big from tomorrow onwards. Even bigger than it already is).

One central theme of the research, and of subsequent workshops, is automation. Ranging from process automation, through NoCode/LowCode, and into content automation, there are some interesting implications for our industry. This article in Wired about the impact of DALL·E 2 on creative professions took me back to the theme. (This video is a good shorthand for DALL·E 2, what it does and how.) Whilst the dystopian possibility of ranks of photographers, editors, writers and composers put out of work as the robots take over their commissions is not on the immediate horizon, things are starting to shake up.

The ability to use DALL·E 2 as a workshopping tool and idea sketch pad is quite powerful. As someone who draws very badly, it gives me a fighting chance of visualising ideas and representations with less risk of ridicule. Most importantly, it does seem to disrupt the stock image library proposition – why bother with those pointing, shiny people when you can have exactly the avocado on a bicycle you want?

In a similar vein, AI tools that create video content from scripts are advancing rapidly. Synthesia is one such business. Draft a script, choose an avatar and watch an uncannily real talking head deliver it. You can create a quick free clip on the site to see what you make of it. No need for actors, no need for retakes and camera anxiety. Andrew Jacobs stirred my memory of the research with this Twitter thread. My own view is that Synthesia chose training videos as a customer segment because so much digital learning content is predictable and similar in treatment. We are a good target for a replicable tool, and this is a cheaper, more scalable approach.

There is much skill and knowledge required to use such a tool well for learning purposes. That is for us, as customers, to solve with our design insight. Synthesia is a production tool; it does not need to be a learning service to save us time and money without compromising quality.

These are just two examples of the content automation coming our way. Software will get very good at making words, pictures, video and sound very quickly. It will also, over time, help us weave those elements together. There is already a host of writing tools that can help us with all manner of drafting – we have come a long way from autocorrect. This can liberate us to perform other tasks and free our imagination. It can also undermine current design and production practices. What do you think?

For those interested, other topics covered in the research include synthetic media and experiences; the metaverse at work; sensing and monitoring the workforce; neural technology for learning; and remote working tools. Let me know if any of that sounds interesting for you and your teams.

[Image: Kismet, the smiling robot, on display in the MIT Museum. "Let me do that for you."]