Perhaps like many adopters and enthusiasts of Generative Artificial Intelligence, my interest stemmed from the furore surrounding the launch of ChatGPT almost a year ago.
As a Learning Technologist at the University of Northampton, new to the role and with a background in primary education, any new tool for learning piques my interest, and this “new tool” certainly grabbed my full attention.
It has been exciting to be part of a higher education institution during the past year, where conversations around AI and its potential within education have proven to be multifaceted and often dichotomous.
A proactive approach was taken at the University of Northampton, as recommended by the Quality Assurance Agency. Within our Centre for Active Digital Education (CADE), an AI Special Interest Group was established in 2022 which allowed staff to engage in discussions and debates about generative AI, fostering digital literacy and exploring various aspects, from ethical use to data security.
In December 2022, as part of the Learning Technology team at UON, my colleague and I began documenting the use of generative AI technologies. We interviewed early adopters who incorporated GenAI into their teaching and everyday lives and started to create case studies of some of these examples on our Learning Technology blog.
Alongside the media’s focus on AI being used to cheat, and research into the potential benefits and limitations of AI in education, we began to wonder where the voice of the student was. How did they feel about this seemingly disruptive technology? Were they harnessing the power of AI for good or evil? Were they harnessing the power of AI at all?
Survey says
As a newcomer to academia, beginning a research project was interesting in itself: understanding the ethics of creating a survey, and carefully considering the questions we might ask in light of the analysis we would then carry out.
We decided upon a student survey that, apart from a few demographic questions, would begin with a branching question that simply asked the students if they had used any AI tools within their studies. This allowed us to ask more targeted questions depending on how the respondents answered, including questions on barriers to use, the perceived usefulness of AI tools, and students’ thoughts on staff use.
From the responses across all faculties, it was easy to ascertain, without any real analysis, that our students’ perceptions were divided and often polarised.
Ethical concerns
A more thorough analysis revealed that only a minority of students had adopted AI tools in their studies, which was quite surprising and is a useful statistic for staff to keep in mind.
What was perhaps even more surprising was how ethically aware many of our non-adopters were, citing concerns about cheating as the main reason they were not using AI tools. Other common factors were that students did not feel the need to use AI tools in their studies, or did not have the skills to use them.
Fear of the unknown
Students were also asked for their opinions regarding the usefulness of tools, availability of AI tools, equity in tool provision, the impact of AI on future opportunities, and their awareness of university guidelines regarding the use of AI. The most interesting finding to come out of the responses to these questions was how powerful the impact of prior engagement with these tools was on the user’s attitude towards them.
Students already using AI tools perceived them positively, found them useful in a variety of ways, were less anxious about the need for restrictions, and were less concerned about the impact on their future selves. Conversely, students who had not yet adopted AI expressed strong opinions about the need to restrict access and the unfairness of its use, and were more anxious about the impact on their future opportunities.
The qualitative findings offer a deeper look into the benefits and challenges of using generative AI in higher education.
While these tools provided assistance with idea generation, information synthesis, and text summarisation, ethical and academic integrity concerns loomed. Content generated by generative AI could lack personal perspective and sometimes contained inappropriate references. Students also raised concerns about bias, accuracy, and the possibility that using AI could diminish their degree and the skills they develop during their studies.
Benefits to teaching
The use of AI in assessment and feedback raised scepticism among students, who doubted its ability to provide the personalised and nuanced feedback on which they placed a lot of value.
On the other hand, some students saw the benefits of potentially quicker and more consistent feedback and highlighted how this efficiency could lead to an improved student experience.
Regarding AI-generated teaching content, opinions were again varied. Some students were enthusiastic about the potential for more engaging and innovative content. In contrast, others worried about its lack of personality and specificity, as well as its impact on the role of human educators.
This survey seemed to provide us with a much-needed narrative of the student in what has become quite a noisy landscape of voices. It helped us to further understand this complex landscape of generative AI in education and, in particular, to better understand and support our students by being in a position to address concerns, mitigate issues, and develop more transparent guidance that is fit for purpose.
This process has been a real eye-opener for me. A key takeaway is just how valuable research is in gathering the perceptions of all stakeholders. In a world where AI’s influence is rapidly growing, understanding student perceptions of generative AI is not just an academic exercise. It is a critical step in shaping the future of education and ensuring that AI technologies align with students’ educational needs and expectations.