Everybody’s talking – an approach to growing AI literacy in higher education

When the whole world is talking about AI, separating the signal from the noise can be hard work. Martin Compton introduces a MOOC that could help

Martin Compton is Programme, Module & Assessment Design Lead at King's Academy at King's College London

Everybody’s talkin’ at me/ I don’t hear a word they’re saying.

Everybody’s Talkin’ lyrics by Fred Neil, 1966

From existential threats to damp squibs; from academic integrity meltdowns to minor modifications: we’ve heard a lot of talk this last year about AI, and generative AI in particular.

In a rapidly transforming study and employment landscape, fraught with unknowns, unknowables and significant education-related controversies and broader ethical debates, it is no surprise that it has taken a while to find positive (and collective) approaches, given the number of initially tempting blind alleys such as “detection” and “banning”. The Russell Group’s principles on the use of generative AI tools in education have helped build a growing consensus across UK higher education. The first of these principles states that “universities will support students and staff to become AI literate.”

On the surface, this sounds relatively straightforward: we need to work out what we mean by AI, then establish protocols to ensure that people are literate in the capabilities and limitations of those tools and in the implications of their use, whether by themselves or by their students. For those of us tasked with facilitating AI literacy development (mine is a faculty development perspective in the King’s Academy, coordinating cross-institutional staff and student guidance, support initiatives and investigatory collaborations), we must find manageable, coherent messaging within the babble and filter it in ways that are generalisable to the whole community, whilst navigating the treacherous path between clarity and over-simplification.

Cognisant of this, we are evolving an engagement strategy that centres opportunities to build literacy through a coherent, multi-faceted approach, enables degrees of departmental agency, and emphasises the responsibility to make informed decisions that involve students wherever possible.

I don’t hear a word they’re saying

One of the key things about all this noise around AI is that it’s everywhere. Some of it is measured, but a lot is anxiety-ridden, panicky or hyped. Who is driving these narratives? How can I trust anything I read if I’m keen to grow my own literacy? There’s no need for me to restate issues and causes of discord or signal academic integrity concerns, lest I add to the noise, but ways of challenging misinformation about generative AI, as well as misinformation produced by it, are worth highlighting.

The Russell Group definition of AI literacy is a good launch point and includes: the limitations and capabilities of tools; the accuracy, or otherwise, of the information generated; data privacy; and bias and misinformation. Literacy is also about practical application: understanding when use is appropriate and what the implications are, for example, for students’ cognitive development. But moving to application and a deeper understanding of implications is much harder. Despite considerable, necessity-driven upskilling during the Covid-19 university closures, our sector still provides essential digital skills guidance to only 39 per cent of staff. A one-size-fits-all approach to growing AI literacy was never on the cards, especially in a large multi-faculty institution like my own.

There is, of course, an ongoing drive to centre information literacy and critical thinking skills in the development of our students. At King’s we have an established digital skills framework and development opportunities for academic staff, professional services staff and students but, like everywhere, the key resources of time and prioritised motivation are anything but limitless. So how to grow that AI literacy?

Echoes of my mind

One of the things we have been keen to do is not ignore the lessons strongly reinforced during Covid-19 and the liminal time between locked-down and return-to-“normal” universities. Workshops on online and hyflex teaching, for example, were beneficial for raising awareness, offering reassurance and opening dialogue, but still tended towards the instrumental. I personally facilitated “Covid” workshops for over 800 colleagues, unprecedented in such a short time frame, yet even that reached barely one third of the academic staff who were teaching.

How do you reach wider and deeper? How do you grow and sustain that? Of course, there was an essential role for technical upskilling – this was a time when few people were Zoom or Teams competent – but now, as then, it is so much more than a skills-deficit issue. Change at an institutional level requires space to critically engage with ideas and genuine agency as implications unfold.

Goin’ where the sun keeps shinin’

With all these considerations in mind and in search of scalable, generalisable and wide-reaching opportunities to address some of the echoes cited above, at King’s we committed to producing an accessible, AI literacy-led resource to inspire critical engagement. The self-access, self-study MOOC on generative AI in higher education is designed for academic and professional services staff as well as students and has, in fact, already engaged participants from all areas of education and from across the globe.

The course draws on content contributions from academic and professional services staff as well as students, and the in-course interactions and feedback have been exceptional. The focus on issues, dialogue, discussion and debate, as well as application, can be seen in this example post, which asks: can generative AI be creative? The course summarises key information on the capabilities and limitations of generative AI; focuses on opportunities and challenges in teaching, learning, feedback and assessment; discusses potential impacts on the employment landscape; and considers key ethical debates.

At institutional level we are layering AI onto the King’s digital skills framework and, in collaboration with staff and students from across the faculties, we have produced college-wide guidance for staff and for students. These are designed to trigger department-level discussions and decisions about the nature of engagement with AI in teaching and assessment. Additionally, we are funding small-scale collaborative research projects. This year, we’ve had 40 submissions to this fund; every submission, by necessity, includes a degree of student input or collaboration, and several are student-led. A hackathon and two dissemination events have been scheduled to build cross-project collaboration and awareness. We also have the PAIR framework, devised by Professor Oguz Acar, which is being used to support the integration of AI into assessment design.

The course, guidance, funded research and integration framework are augmented by generalised workshops for staff and students, with work ongoing to enable dialogue through evolving communities of practice. We are also facilitating faculty- and departmental-level events where pedagogic practice and assessment design drive the conversations.

Finally, we have the wider interconnections with other institutions, which will be increasingly critical if higher education aims to drive AI narratives rather than be merely reactive. There is ongoing work led by Dr Caitlin Bentley on developing responsible AI education, designed to empower students and educators to address the failures and limitations of current AI design and implementation, and to encourage everyone to get involved in making the impacts of AI on our lives and societies more sustainable and just.

Through the pouring rain

There is so much that is yet to be known or understood about the AI landscape. But moratoria, outright bans, or putting our heads in the sand will not make anyone more AI literate. Yes, we need to be cautious, but being over-cautious risks failing our students at what is clearly a time of significant change. We need to acknowledge the limitations, but we also need to do what universities do: research the opportunities and potentials so that we can begin to drive narratives and shape developments. This is happening now, not six or 12 months down the line.

Finally, I’d like to reiterate the importance of a holistic approach beyond workshops, to ensure that AI literacies penetrate every corner of higher education institutions. We need to reduce the noise but also keep talking productively, and one of the places where the dialogue is incredibly rewarding is on the MOOC! To that end, I invite you to join the thousands of people from 118 countries who are already talking to each other and learning together.

Find out more about the Generative AI in higher education MOOC here. The course is free to anyone for two weeks from the time you enrol, or free indefinitely with a FutureLearn Unlimited account or if your institution has a FutureLearn Campus account.
