There is a bit of impatience in the higher education sector to “do something” about generative AI.
In the last year, academics and leaders have been exhorted to bolster academic integrity in the face of new AI tools, prevent misconduct, integrate AI into the curriculum, and ensure assessments are AI-proof – all while reorienting their courses and curricula to exploit the many advantages generative AI offers.
The positive potential of generative AI is, of course, the same potential that directly links to concerns about assessment integrity – AI-authored essays, code snippets and software, graphics, audio, posters, and video.
As engineering educators, we are instinctively inclined to welcome new technology tools and to be progressive in assessment formats. The engineering curriculum offers a wealth of opportunity for innovative and authentic assessment formats including designs, portfolios, test reports, specifications, code, and business cases as well as traditional exams and the occasional essay.
However, at this genesis of accessible and widespread generative AI, we lack the authority, governance, and visibility into these tools needed to understand how they work. It is with this in mind that we urge special caution in the integration of generative AI, specifically in two main areas: creative work (which includes engineering), and work-based learning.
The question of creativity
In creative work, we ask students to innovate through objects, designs, and written works. It is one of the most fulfilling and rewarding activities of learning and assessment, and used well it can evidence high levels of competence and achievement, because it inherently recognises the steps taken along the creative journey to produce the end artefact.
When directed through generative AI, the work becomes embellished by the public corpus of work that the AI model has learnt from. This raises questions. Firstly, to what extent is generative AI truly inspiring and developing a learner’s creative skill if self-formulated ideas are being filtered, and possibly diluted, by the consensus of public work held by the model?
At present, these tools offer no transparency into the process they follow or how they reach the conclusions they present, effectively reducing the learner’s creative journey to a series of keyboard taps and a few strategic clicks.
Secondly, at the point at which work is uploaded to a generative AI tool, it may become part of the corpus from which the model learns – raising the question of who now owns the intellectual property in that work and in any further iteration of it.
There might be further ramifications of using AI-enabled assessment, particularly with regard to fairness and inherent biases. Since these AI tools have been trained on public corpora, they assimilate underlying biases – racial, gender-based, and other forms of discrimination present in the text – which can then trickle into grading and assessment.
Confidential matters
In engineering, the workplace is a powerful and frequently integral part of the learning experience, whether through an apprenticeship, an industrial placement year, or an industrially located project. Here at the Dyson Institute of Engineering and Technology, our students are comprehensively inducted into a broad range of project work spanning a wide range of engineering disciplines, through which they build and refine their workplace competencies.
Provision of opportunities to gain professional experience is considered best practice by professional, statutory and regulatory bodies (PSRBs), and greatly adds to students’ employability, their sense of direction, and their identity as engineers.
However, this activity may take place in a context with strong security or commercial sensitivities. Across different institutions, we have supervised students who have worked in – for example – nuclear reprocessing, commercial aeronautics, agriculture, and innovative materials.
When these students return to their course and seek to evidence their workplace learning, it is usual for a student to negotiate a non-disclosure agreement (NDA) in respect of what they can and cannot share with their university supervisor and the wider university community. It goes without saying that uploading any such material to a generative AI tool to – for example – aid compilation of a portfolio, technical poster, or reflective report would compromise the confidentiality the student had promised and could inadvertently place the institution in a precarious legal position.
These dual issues – of confidentiality and intellectual property relating to generative AI – are still in their infancy. In August this year, OpenAI introduced ChatGPT Enterprise in an attempt to mitigate some of the concerns around data and security, by allowing organisations to control how long internal data is retained by the model, to provide different levels of data access, and to prevent business-critical data from being used to train the model.
In the theoretical instances where a business does allow such data to be used with generative AI, you can foresee such configurations coming at significant expense, with responsible universities needing to pay to ensure appropriate safeguards are in place.
If institutions are to encourage students to use generative AI in the innovative ways currently being called for, this would likely involve commercial data. In reality, however, the prospect of a business willingly sharing cutting-edge commercial data is unfathomable until generative AI tools afford greater visibility and clarity of their internal workings, rather than the current black-box (input/output) experience they offer. This calls for distributed control, with industry taking ownership of its own models and formalising its own protocols and governance structures to support the confidential use of business data.
And what about students who have produced their own designs or creative work? It is still unclear whether ChatGPT (for example) has been trained on copyrighted material.
Cases are still working their way through the courts: bestselling household-name fiction and non-fiction authors John Grisham, Jodi Picoult, and Julian Sancton, for example, have taken legal action against OpenAI. OpenAI has outlined a legal strategy of paying the costs of customers sued for copyright infringement, rather than the arguably more sustainable solution of removing copyrighted content from its training corpus. In this scenario, universities embedding the use of AI in learning, teaching and assessment might have their legal costs covered by the OpenAI service provider – for as long as OpenAI commits to the pledge – but they would not be indemnified against fault.
Don’t just “lean in”
So, we urge caution. More progressive educators, educational developers, and institutions will want to lean into generative AI as an exciting teaching tool. But they should only do so when they know with clarity that generative AI is truly bolstering creativity, whose data is being input, what the expectations for intellectual property and confidentiality are, and whether those requirements can be met.
As it stands, generative AI services operate on legally unstable ground, with cost models that have yet to emerge and stabilise. These are fundamental questions that cannot yet be answered with any real degree of confidence, and the stakes are high.
Nobody wants to hear this, but it is why so much of higher education is racing into the arms of Microsoft: they have thought about, or tried to pre-empt, these questions in most areas.
For example: “In this scenario, universities embedding the use of AI in learning, teaching and assessment might have their legal costs covered by the OpenAI service provider”.
Microsoft are clear in their contractual documentation that there is a “Copilot Copyright Commitment”: if there is a legal challenge over copyright, they will assume responsibility.
The same goes for what happens to data used within the organisation: Microsoft are clear that it is not used for training datasets.
For most organisations, it is an extension of a solid existing relationship with a mature player.