Here in the summer of 2023, you’d expect a huge edtech conference of academic staff discussing teaching and learning to be dominated by ChatGPT and its mutant generative spin-offs.
The twin themes of opportunity to transform the teaching and learning experience, and alarmist threats to academic integrity and professional security, loom large in the biggest conference centre I’ve ever sat in.
As such, I’d perhaps cynically expected my sit-down chat with Ryan Lufkin, Vice President of Global Strategy at Instructure, to be full of bland platitudes about how his company and its partners might maximise the opportunities and neutralise the threats.
What I got was a refreshingly honest and reflective conversation with someone who’s clearly treading a fine line between supporting the staff who are worried about the future, and those whose embrace of it tends to leave others feeling left behind.
Hell in a handcart
The previous hour’s panel session had been interesting – centred on the idea that the pace of change around AI can be overwhelming, especially given the onslaught of catastrophising headlines.
The university leader who had described her personal embrace of AI, and the subsequent creation of discipline-based teams of staff and students to interrogate the issues, was also a world away from the AOB at the end of a senate or academic board meeting.
And the chats around the lunch station confirmed the sense that academic staff struggling with low engagement and cheating concerns barely understand how, for some time now, students have been using AI tools to automate almost all of the tasks that go into producing an essay that is somehow supposed to symbolise achievement.
One woman from Utah was especially depressed about the whole thing:
If the only thing they learn is how to prompt the tools to produce the assignment, what’s the point in me setting it?
Lufkin was hardly going to argue that the third-party plugins for his learning management system were dangerous – but as well as sharing concerns about false flags, he was pretty unequivocal that they could at least start a conversation about good writing:
As far as I’m concerned, if one of the academic integrity tools threw a red flag and said this was AI, all a student needs to say is “I didn’t, and you can’t prove it.” I have written things myself and actually got GPTZero to throw a false flag. And not just on one section – it was saying this whole thing was written by AI. It was badly written, but it was badly written by me. The conversation about the writing and improving it is more interesting and has more potential than the conversation about cheating.
Perhaps, we speculated, someone using a student-facing AI detector might at least start to learn what sorts of written work sound and look dull – encouraging them to augment it with more interesting observations or commentary. But in the end Lufkin’s view was that the tasks students are given are the problem:
So now let’s fix the model for assessment. Let’s go back in and say okay, instead of prompting software to generate an article about something, maybe the assessment should be about doing that and then telling me where it’s flawed, or could be argued better, or where there could have been more detail around something.
But eventually, couldn’t the tools do that too? Isn’t that just the next question you ask the next iteration of ChatGPT?
Maybe. But we’ve got to let the kids teach us something. Me and my 12-year-old Dexter were having a conversation where he was talking about his history class, and how he didn’t feel like he learned much about the Civil War – so I said look, there are some really great books, even kids’ books, focused on the Civil War that you could use to kind of self-study.
It hadn’t naturally occurred to him that that learning path was available. I think we as humans always have to struggle with not knowing what’s out there, not knowing what’s available, not getting the paths that are open to us. And that’s where educators come in – opening up the pathways rather than just teaching the curriculum.
Aiming higher
The “choose your own adventure” response is interesting, because it shifts the debate away from the individual assessment to something bigger – implying a shift towards higher-level learning at an earlier stage, on the basis that we humans don’t really need to be able to do or know what we currently think a first-year undergraduate needs to do or know:
I lived through the invention of the internet, when a whole spate of articles came out about how much it had decreased productivity for American workers. They’re so busy surfing the internet, they’re not getting any work done.
And I was like, I used to have to drive to the library to do research for work. I used to have to stand there and fax things and wait for a response. We are massively more productive than we were during the 70s or 80s. We can spend our time doing better things now.
Some of the disdain for the tools on offer appears to reflect age-old concerns about “kids today” – but is infused with a fear that subject mastery may no longer be necessary:
Think about it: for people our age, YouTube is a time suck – it’s cat videos and whatever else. But my son is big into basketball, and playing better basketball. He has an encyclopaedic knowledge of basketball players and how they play – and it’s all from YouTube.
Back in the day, he’d have learned that through trial and error, or by borrowing a book from a library. Now he can dive deep into the detail and learn from people who have mastered communication through video. We need to accept that, because that is what’s happening.
But doesn’t that raise all sorts of questions about who verifies the authenticity or accuracy of what’s online? And what role does so-called higher education play if those things are possible?
Campus antics
I put to Lufkin that once tools automate what was once a human necessity, maybe undergraduate study needs to be more like postgraduate taught (PGT) study, that PGT study ought to be more focused on the creation of new knowledge, and that maybe there was a way to refocus time on campus on interdisciplinary teamwork, the real-world application of knowledge, or what some still call “soft skills”:
Yeah, well, one of the things that colleges and universities are supposed to teach you is evidence-based decision making, empathy for people with different backgrounds, working in groups – all things we desperately need for the health of our society.
But in an age of mass higher education where debates about food insecurity and the complex lives of “full-time” students dominate, isn’t this all just a pipe dream? Won’t students just want to know what they have to do to pass and then do as little as possible to get there? And how can overworked staff get the support to redefine their role in facilitating something different?
We’re not creating content or defining pedagogy – that’s educators’ job, and we create tools to support them. But look, social-emotional learning in the US is kind of a hot topic because it gets criticised by the right as all about touchy-feely students. So I said, okay, before I write about this I want to make sure I’m looking at it in a balanced way and not just being blinded by the media that I read.
So I asked ChatGPT to tell me about social-emotional learning. Then I said tell me all the benefits of social-emotional learning. Then I said tell me why it is controversial. And it did such a good job – it summarised it in such a compelling way – that I felt okay, I can write an article on this in a much more knowledgeable way than I could have half an hour ago.
Students’ use of AI tools, and the way in which many of them go on to create compelling and ultimately educational content on platforms that some deride, is all around us. And the sense that good education meets students where they are, and not where they used to be, is also ever-present.
The question is whether we can get to a point where tools and development interventions can meet staff where they are – and take them to where they’d like to be, too.
I think we as humans will always have to struggle with not knowing what’s out there, not knowing what’s available, not getting the paths that are open to us – and with feeling confident in navigating all of that. And that’s where educators will still matter.
For many at the event, that message comes as some relief.