ChatGPT makes me feel predictable and ordinary. Maybe that’s the point?

[Image: neon yellow lettering reading “blah blah blah”]

My fingers hesitate over the keyboard as I draft this. Does the world need another lukewarm take on ChatGPT? Obviously not. And yet…here I am…adding to the well of, erm, wisdom.

Something about the uncanny feeling of reading the tool’s output made me wonder. Particularly that first moment of watching the machine ‘type’ its response to a prompt. That visceral reaction matters in framing our emotional response to the idea of AI applications. For many of us, this was the first time we saw AI do something that appears so human.

Our anthropomorphic reflexes are stirred. We see a machine making sentences in much the same way we make them and with a very similar result. Not quite the same, but weirdly close. It is very easy to attribute human behaviour to what we witness.

As many experts have pointed out, what ChatGPT does is not human. Not even close. It is predicting the relationship and order of words and concepts at an utterly inhuman scale, and at a completely inhuman pace.
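
For the curious, here is a toy illustration of that word-prediction idea: a crude bigram model that counts which word most often follows each word in a tiny made-up corpus, then chains the likeliest continuations together. This is emphatically not how ChatGPT works internally; it is just a minimal sketch of what “predicting the order of words” means.

```python
from collections import Counter, defaultdict

# Toy bigram model: count which word most often follows each word in a
# tiny corpus, then chain the likeliest continuations together.
# The corpus is made up purely for illustration.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def continue_text(word, length=5):
    words = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]  # likeliest next word
        words.append(word)
    return " ".join(words)

print(continue_text("the"))  # -> "the cat sat on the cat"
```

Scale that statistical trick up by billions of parameters and a large slice of the internet, and the output starts to look like us.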

Still pretty uncanny, though. “That looks like what I might have written” – originality really is hard to come by.

“If bots can just do it, why should humans even bother?”

Ashley Sinclair, LinkedIn

It is easy to be drawn into concern about the negative impacts of these new technologies. Change will be far-reaching. The negative impacts are easier to discern because we can see the loss of what we know and understand. The positives, by contrast, offer only a vague glimpse of imprecise future benefits.

Amongst the concerns about misuse, misrepresentation, lack of accountability, job losses and bias, are we also worried that much of what we produce, in online content terms, is rather ordinary and predictable when viewed as a whole body of effort? This is what keeps me wondering.

The criticism that ChatGPT merely orders and organises documented concepts as expressed online, lacking any understanding and devoid of insight, seems to me to miss the point.

An ordinary ocean of content to choose from

I do not mean to denigrate the work of dedicated and intelligent people. It’s just that the ocean of content these generative AI systems are trained on is, by definition, largely unremarkable. From the complete works of Chimamanda Ngozi Adichie, as an exemplar of the pinnacle of written work, all the way down to broad swathes of drivel like the tweets of Laurence Fox, the majority of what is available for analysis is not going to further human endeavour. The fact that it seems so familiar when sifted and reflected back to us might actually be a signal of accuracy.

Most of what we spend our time writing and reading is not that special. Is it? It can’t be. We are motivated, preoccupied and intrigued by what we do, of course, but when it is laid out like this, those sentiments are gone.

‘Input not output’ and other L&D implications

As many commentators have already indicated, the results of tools like ChatGPT should be seen as inputs to our work rather than outputs to be used as they are. They can serve as inspiration or as a form of note-taking from research; they can help us order thoughts more quickly and make connections between ideas clearer. They are some way from a finished piece, requiring a sceptical editorial eye to make the most of them. At least, that is true of the current releases. I suspect that GPT-4 and other evolutions will start to erode that logic in the not-too-distant future.

One freelance writer on Twitter described being nudged aside by the arrival of AI writing services. Not replaced outright, but the client would now pay them, for less of their time, to “rewrite” what the tool creates for free. Input not output. But such an unpleasant way of going about it.

This is one implication for L&D. An industry built on content is going to change dramatically as the creation and management of that content is reconfigured from top to bottom. Replace freelance writer with instructional designer. The raw materials of so much online learning content are the documents and slide decks of subject matter experts. Many AI products already create learning content from these sources. That staple multiple-choice question bank is easy to replicate (there are so many of those to study), as the sketch below suggests.
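
To give a sense of how little glue that takes today, here is a minimal sketch of generating multiple-choice questions from a scrap of subject matter expert text, assuming the official OpenAI Python SDK. The model name, prompt wording and source text are all illustrative, not a recommendation.

```python
from openai import OpenAI  # assumes the official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative SME source text; in practice this would be extracted
# from a document or slide deck.
source_text = (
    "SCORM packages bundle learning content with a manifest file so "
    "that a learning management system can launch and track them."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": "Write two multiple-choice questions, each with four "
                   "options and the correct answer marked, based on this "
                   "text:\n" + source_text,
    }],
)

print(response.choices[0].message.content)
```

A handful of lines like these, wrapped in a product interface, is essentially what many of those AI authoring tools are doing.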

From there, the creation of audio, video and text-based first drafts is the next step. First through the AI editing tools currently on the market, then on to a fully fledged package ready for a final editorial polish and check. For those of us close to this kind of production work now, it seems impossible to believe that machines could replace it. From the distance of a generative AI system training itself on millions of SCORM objects and the like, I wonder how unique these things really are. Hence my wondering about how ordinary and predictable all this content really is. A couple of decades of leadership courses might not reveal the surprises and unique insights we might hope for.

Optimists need to try harder

An optimist will counsel that time can now be dedicated to consultation and to identifying the real problem a piece of learning design should address. I hope this is right. It should already be true, though: it should already be our bulwark against poor-quality output and inappropriate projects. Lower-cost and quicker routes to delivery will raise the tide against it further. This feels like an urgent change to make. It is not a technology problem to solve but a purpose and intent problem. Automation and simple cost reduction will be the easier sell unless our sights are set high.

This is an extract from my regular newsletter. You can sign up for it here.

