This autumn has been a season of data analytics work. I realise that we are not all fans of data analysis, but I have found this to be cause for some modest celebration. It is a largely absent topic in learning work – rarely something we choose to do, even whilst recognising that we should. So, it is good to receive requests to attend to a foundationally important topic. Perhaps the data worm is turning?
Data is one foundation of digital. Without the effective gathering and use of data, we cannot ‘do digital’. Or, we cannot do it to the greatest effect. We certainly can’t enact those transformations that have been such a popular theme of discourse. At the same time, the realisation grows that the design and delivery of content and events doesn’t matter so much, after all, if we are unable to describe their usefulness.
So, data analytics to the rescue, yes? Well, up to a point. I have bemoaned the lack of useful analytics about webinar behaviour on more than one occasion. This seemed like a weakness pre-COVID but is a real challenge given the predominance of the tool since March 2020. There is a strong tendency to rely on our judgement of whether a webinar went well, with ‘going well’ itself defined as a sense of fulfilment and engagement from participants. The volume of comments and staying to the end are the proxy measures applied. In the absence of any standard measures or comparative data, we rely on that sense of it feeling good. (If comments are disabled, one is quite simply doomed to darkness.)
I remember participating as a panellist a few years ago at a conference event on the theme of digital learning. The question arose of the quality of data available to describe the effectiveness of digital learning as compared to more traditional classroom delivery – the assumption being that classroom delivery is not required to bear the same burden of proof. This still makes me cross to think about. The standard of proof was then, and largely remains, attendance and appreciation. “Booked, showed up and said nice things” is proof of something, but not of efficacy. Requiring greater proof of digital learning simply because we can’t see people smile and nod is an odd posture to take. I think that posture is still held, however – if not universally then reflexively – as the profession gradually matures in this respect.
The great work from Ben Betts and the Learning Pool team in this area is a good step towards progress. Urging us to move beyond merely describing and analysing, to predicting and prescribing, stretches our ambition and our practice. The opportunity to gather significant volumes of data on individual behaviour and use the technologies now available, such as the LRS, should be revolutionary. The promise of that revolution will require us to overcome our fixation with making people feel good. That is the wrong flavour of engagement to analyse.
The narrative our funders are most interested in runs along the lines of usefulness and efficacy. They are most interested in our data story helping them take decisions – decisions beyond learning provision. The evidence required to tell a convincing story for the organisation needs to reach beyond attendance and appreciation. The former should be a given and the latter should only be a contributory piece of evidence to the greater efficacy story.
If our funders are not asking for more evidence than attendance and appreciation, perhaps they are not really interested in efficacy anyway?