Despite the ever-accelerating take-up of educational technology products in higher education, significant work remains on data privacy and control, the direction of innovation in edtech, and democratic governance.
Students and staff currently have little insight into, or say over, which technology they should use for their studies and work: with what functionalities, what user data is collected, for what purposes now and in the future, towards what kind of innovation, and so on.
Grappling with unicorns
It’s true that students and staff are generally asked to give user consent for the tools they use. But there is a clear power asymmetry here: terms and conditions and privacy policies are issued by universities and edtech companies, and individuals have little choice but to agree.
As in other sectors, student and staff personal data in higher education is legally protected by privacy legislation. While this legislation is important and relevant, it does not cover non-personal and de-identified user data. Nor does it address data enclosure (a platform owner capturing and controlling the user data the platform collects, and benefiting from its aggregation and processing), monopoly tendencies in the business models common in the digital economy, barriers to competition among providers of data-based services, potential redistribution of the value created by data innovation, or democratic discussion about the kind of innovation we want.
Additionally, there is a lack of evidence on the impact and effectiveness of particular edtech products and services, and of particular operations such as generative AI. Universities are mostly left to monitor the quality of the tools they use on their own, which is challenging – and sometimes impossible.
UK universities depend on digital technologies for teaching and learning, research, and institutional management. The expanding edtech industry brings new expectations for personalising the student experience and/or increasing institutional efficiency – to be achieved by analysing the user data collected as students and staff use digital platforms and applications for their studies and work.
It matters who collects, controls, accesses, processes, and innovates with user data in higher education – and with what aims and principles. While many edtech frameworks and much advice on digital transformation in higher education have already been published, some fundamental long-term challenges remain less debated, including individuals' and groups' ability to choose, fair competition and responsible innovation, and evidence on the impact of edtech.
Kean Birch, Sam Sellar and I unpack these key issues in a set of policy recommendations we published at the end of June, based on findings from our recently completed ESRC-funded research project, Universities and Unicorns: building digital assets in the higher education industry.
The question of consent
At the university level, there’s a need for collective deliberation and changes to user consent. Some universities already have committees that consult on digital strategy and procurement and include student and staff representatives.
But more generally, staff and students lack a clear understanding of how their data is being collected, and why. We suggest universities introduce more democratic and transparent ways of communicating with their constituents and including them in decision-making.
On the issue of consent, we propose introducing selective and collective user consent. Selective user consent could be fostered by separating a digital service from its analytics. For example, individuals could choose to use the service only (say, accessing and reading e-books) without the platform collecting and saving user data (such as data on their reading patterns). Alternatively, individuals could agree to their user data being collected and processed separately for each purpose, such as personalising the service, supporting institutional efficiency, or research. This could work in a similar way to the consent requested for web cookies.
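To make this concrete, here is a minimal sketch of how purpose-separated, selective consent might be enforced in software. The ConsentRecord structure, the purpose names, and record_reading_event are all hypothetical, invented for illustration rather than drawn from any actual platform:

```python
from dataclasses import dataclass, field

# Hypothetical purposes a platform might process user data for; each is
# consented to (or refused) separately, much like a web cookie banner.
PURPOSES = ("personalisation", "institutional_efficiency", "research")

@dataclass
class ConsentRecord:
    """One user's selective consent choices, stored per purpose."""
    user_id: str
    use_service: bool = True  # the primary service itself (e.g. reading e-books)
    purposes: dict = field(default_factory=lambda: {p: False for p in PURPOSES})

    def allows(self, purpose: str) -> bool:
        return self.purposes.get(purpose, False)

def record_reading_event(consent: ConsentRecord, event: dict, purpose: str):
    """Store an analytics event (e.g. a reading-pattern data point) only
    if the user selectively consented to this specific purpose."""
    if not consent.allows(purpose):
        return None  # the user still reads the e-book; no analytics are saved
    return {"user": consent.user_id, "purpose": purpose, **event}

# A user who agreed to research use only:
alice = ConsentRecord("alice", purposes={"personalisation": False,
                                         "institutional_efficiency": False,
                                         "research": True})
assert record_reading_event(alice, {"pages_read": 12}, "personalisation") is None
assert record_reading_event(alice, {"pages_read": 12}, "research") is not None
```

The key design point the sketch illustrates is that refusing analytics does not block use of the primary service: the two are decoupled, so consent is a genuine choice rather than a condition of access.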
Collective user consent responds to current practices that focus on individual consent but are inadequate for addressing the complex nature of data processing. Many data processing operations are inherently relational, involving continuous comparison and grouping of individuals in search of trends. These trends are then used for purposes such as predictive analytics.
Implementing models of collective consent would recognise the interconnectedness and interdependencies within data processes. Collective consent emphasises the importance of considering the impact on communities, as well as individuals. It ensures that decisions about data usage are made collectively, taking into account the potential risks and benefits that extend beyond individual users.
Digital technologies used by students and staff could be seen to operate at three levels. First is the “primary” service, such as reading an e-book, using a virtual learning environment, or attending a lecture via an online meeting platform. The second level is using the primary service with individual analytics functions, for example getting statistical feedback on behaviour while using the service, such as an overview of time spent reading, writing, or answering emails. Both of these levels work between the user and the platform without other users’ data.
The third level is using the primary service with analytics at the aggregate level of multiple users, such as an individual getting an automated recommendation based on comparison to their peers, or a university getting an overview of group trends, potential inferences and predictions. Selective user consent could be required for the first two levels, while collective consent could be required for the third level.
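A minimal sketch of how this mapping might be encoded, with level and consent names of our own invention rather than from any real system:

```python
from enum import Enum

class ServiceLevel(Enum):
    PRIMARY = 1               # e.g. reading an e-book, joining an online lecture
    INDIVIDUAL_ANALYTICS = 2  # e.g. feedback on one's own time spent reading
    AGGREGATE_ANALYTICS = 3   # e.g. peer comparisons, group trends, predictions

class ConsentType(Enum):
    SELECTIVE = "selective"    # individual, purpose-by-purpose opt-in
    COLLECTIVE = "collective"  # decided collectively, as others' data is involved

# Levels 1 and 2 use only the individual's own data, so selective consent
# suffices; level 3 compares and groups multiple users, so collective
# consent would apply.
REQUIRED_CONSENT = {
    ServiceLevel.PRIMARY: ConsentType.SELECTIVE,
    ServiceLevel.INDIVIDUAL_ANALYTICS: ConsentType.SELECTIVE,
    ServiceLevel.AGGREGATE_ANALYTICS: ConsentType.COLLECTIVE,
}

def consent_required(level: ServiceLevel) -> ConsentType:
    """Look up the consent type a given service level would require."""
    return REQUIRED_CONSENT[level]
```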
A sector data trust for trust in data
We propose that sector stakeholders discuss user data aggregation and governance to address the question of how non-personal and de-identified user data is used.
One way would be to set up a sectoral data trust: a legal structure that could separate control over user data from the data-driven services built on it. Run by trustees, the trust could establish rules on which data is collected, the purposes for which it can be accessed, used and processed, the levels of protection and security required, and potential compensation for accessing and analysing data.
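As a rough sketch only, the kinds of rules trustees might set could even be codified in machine-readable form. Everything below (the DataTrustRule fields and the example values) is hypothetical, intended to illustrate the idea rather than to describe any existing trust:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataTrustRule:
    """One trustee-set rule governing a category of user data.
    Field names and values are illustrative, not a real schema."""
    data_category: str         # e.g. "de-identified usage logs"
    permitted_purposes: tuple  # purposes for which access may be granted
    protection_level: str      # required security and protection tier
    compensation_terms: str    # terms for compensating data access

EXAMPLE_RULE = DataTrustRule(
    data_category="de-identified usage logs",
    permitted_purposes=("sector research", "service improvement"),
    protection_level="aggregate reporting only; no re-identification",
    compensation_terms="access fee paid into the trust for sector benefit",
)

def access_permitted(rule: DataTrustRule, purpose: str) -> bool:
    """Trustees' rules gate access: purposes outside the list are refused."""
    return purpose in rule.permitted_purposes

assert access_permitted(EXAMPLE_RULE, "sector research")
assert not access_permitted(EXAMPLE_RULE, "targeted advertising")
```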
Stakeholder coordination towards establishing a data trust could proceed in stages. For example, the first step could be for the sector to establish a forum that brings key actors together for a more coordinated discussion of the issues raised here, without establishing a formal organisation. Such a forum could provide advice on dealing with collective problems and on stimulating innovation. If stakeholders see it as desirable, it could be a precursor to establishing a data trust.
Edtech oversight at the national level
On the question of how the sector as a whole evaluates, monitors, and potentially accredits or certifies edtech products and operations, there are various options. The work might sit with an existing sector-level organisation with a clear stakeholder-supported mandate, with new data trusts as mentioned above, with existing research institutes at universities that are already undertaking research on edtech, or with a network of such universities.
Our proposals outline a set of possible directions to motivate stakeholder discussion of these key issues. They are medium- to long-term, and require time, dedication, and continuous stakeholder coordination.
Higher education stakeholders, edtech companies, investors in edtech companies, and policymakers need to work together to achieve a system in which edtech and user data are governed to support individual and collective rights, institutional progress, and fair market competition and innovation. We invite everyone to engage with our proposals and get in touch.