Professor Rebecca Eynon, from the Oxford Internet Institute (OII) and the Department of Education, sees the proposed reforms to the school curriculum as a positive step towards equipping students to respond to social and technological change. As the education community works to implement the recommendations of the review, she argues that we should not merely equip young people to be passive users of generative technologies, but also empower them to actively shape the future with artificial intelligence (AI).
The recent curriculum and assessment review recommends strengthening digital literacy at each key stage of the curriculum, to equip pupils for an era of rapid social and technological change. It includes recommendations to improve the clarity of the contents of the computing curriculum; to replace the much-critiqued GCSE Computer Science with a Computing GCSE that is broader in focus; and to map the teaching of digital literacy and the use of technology across all subject disciplines.
The concerns underlying these recommendations echo many of the findings from our ongoing ESRC Education project, Towards Equity Focused Edtech, in which we carried out rich ethnographic research in secondary schools in England. In significant contrast to the pervasive discourse around digitally savvy youth, we found students often had notable gaps in core digital skills such as word processing, dealing with files, or sending emails. In many schools there was a lack of clarity around who was responsible for teaching digital literacy, or where in the curriculum it should be taught. We also found significant variation in digital infrastructures across schools, leading to inequalities in access to and use of technologies for learning, as well as confusion and concern amongst educators and students alike in all schools about whether and how to use AI 'appropriately'.
Towards a proactive approach
The proposals put forward by the review will likely be welcomed by many schools that recognise these problems. However, as the government and the education community take forward the recommendations, it is important that they are not interpreted in ways that inadvertently promote a reactive approach to AI. The curriculum does need to 'equip young people for a world that is changing quickly', but it does not follow that we must simply prepare them for some kind of inevitable AI future.
We must instead recognise that young people (and indeed all of us) make the future, and are making it now. AI did not magically appear. It is made and used by people, and reflects past cultural, economic and political choices and values. Indeed, AI is not a straightforwardly good thing. Many are concerned about its large-scale use of personal data scraped from the web, its biases, its environmental costs and the commercial logics that are fuelling the current AI spring. Yet AI is also not fixed; it can be changed. AI is not, therefore, just something to react to, but something that people should actively shape in relation to the kinds of education, and indeed society, we want.
This requires a rejection of the idea that AI, or the future it brings, is inevitable or fixed, and a proactive, not reactive, response to AI in schools. An important strand of this response is the development of digital literacy for students. Three elements are needed to ensure this is a genuinely proactive response: one that promotes criticality, inclusion and responsibility.
Criticality
The review sets out important foundations of digital literacy (and, relatedly, media literacy), which can enable young people to gain the knowledge and skills to engage with learning, to participate in social life and to use technology safely. It is important that young people are not positioned as 'end users' of fixed AI technologies. Instead, they should be supported in becoming citizens who can use and engage with technology critically in the richest sense, including an awareness of economic, political and cultural issues.
For example, students should be taught not only to identify misinformation and disinformation, but also to understand the complex sociological, as well as technical, reasons why they occur and their social implications. Other areas of work could include the wider political economy of AI, which favours powerful companies in certain parts of the globe, the environmental costs of AI, or the implications of surveillance capitalism. This approach would support young people in becoming not just responsible and discerning users of AI, but people who can potentially change it through their use, through forms of refusal, or through the redesign of AI.
Inclusion
Design is a key aspect of digital literacy, offering students ways to reflect on and make visible social injustices while examining how technology's affordances and values can support or hinder inclusion. This might involve creating digital artefacts that express community realities, using coding to explore bias and discrimination in AI, or participating in design projects that address the needs of their school or local community. Such projects could improve students' sense of self and their awareness of inequities in school and society, and promote a stronger sense of social responsibility alongside an appreciation of the limits of technology in solving social problems. The review indicates that design will be part of the new, broader Computing GCSE, which is intended to be more inclusive and appealing to a wider group of students. It is important, however, that these principles are not reserved for the GCSE alone but are embedded across the computing curriculum at all key stages.
Responsibility
Generative AI is error-prone, often biased and can be inaccurate. Yet rather than the companies that build AI being held to account, individuals are tasked with developing the knowledge and skills to identify and deal with these problems themselves; responsibility thus shifts away from developers and onto individuals.
However, teaching students how to question and critique generative technologies should not be treated as a panacea for biased, unregulated and problematic AI. There is a risk that a focus on digital literacies 'responsibilises' young people, leaving them to deal with the significant problems of generative AI. Thus, beyond the national curriculum, there is a societal responsibility, one that does not fall solely on young people, to find ways to better govern, regulate and change AI in ways that account for the multiple environmental, legal and social costs of such technology.
Developing the agenda
The review is a productive basis for developing a digital literacy agenda that forms part of a proactive response to AI in schools. But in determining the new curriculum's details and how it will operate, it is important that varied voices and expertise help define and set the terms. This includes academic experts, those working in the third sector and, crucially, teachers. In the past, commercial voices have been too prominent.
Of course, the digital literacy curriculum can only go so far. It will be taken up and engaged with in varied ways by teachers and students, and is only one aspect of a proactive response to AI in schools. However, it is an important starting point in efforts to support young people and teachers towards social and educational change.