How can AI help with measuring what is important (social and emotional skills, creativity, collaboration, critical thinking) and not just what is easy to measure (literacy and numeracy)?
How do we promote equity between different groups and not discriminate against any learners?
Where could we be in 10 years with the right guidance, support and regulation for EdTech?
How can we maximise the benefits while reducing the risks of AI in education?
These are some of the questions that I discussed in the panel on “Ensuring Ethical Use of AI in Education” at the Education World Forum with Jean-Claude Brizard (CEO, Digital Promise), Robert Hawkins (Senior Education & Technology Policy Specialist and Global Lead for Technology and Innovation, World Bank) and Dr Stéphane Vincent-Lancrin (Senior Analyst and Deputy Head of the IMEP Division, OECD). We agreed that it is possible to develop and use AI in education in ways that are equitable, ethical, and effective while mitigating weaknesses, risks, and potential harm.
Some key takeaways were:
- Involve teachers, students and other end users as co-designers in the research and development process, to ensure solutions are useful and grounded in an understanding of the social context in which AI is used.
- Develop public-private partnerships between government, tech companies and researchers in universities to ensure widespread adoption and address bias.
- Do not aim to replace teachers; instead, require human oversight of the machine at different points and design for a hybrid human-AI system.
- Ensure algorithms are transparent and accurate so that their use is socially acceptable.
- Establish ethical practices and governance mechanisms to ensure privacy protection. Strike a balance between risk-taking and data usage/sensitivity to reap the benefits while minimising the risks.
Last week, during London Tech Week, I was at the launch of Unlock Digital at Upublic, a government-focused tech company. Unlock Digital shows a heatmap of the learning opportunities available to young people. To address the digital skills gap, we need more than information: we require hardware (laptops, tablets), software (content and apps), infrastructure (internet, data) and the motivation (among learners, teachers and parents) to use it.
In addition to procurement policies that fund devices and infrastructure for schools at affordable costs, the EdTech sector needs to develop more interoperable solutions that are co-designed with learners, teachers and parents. School leaders need to ensure teachers and staff have the learning opportunities to properly use the technology and digital resources at their disposal.
During London EdTech Week at the UCL EdTech Lab, MindCET, an EdTech innovation centre in Israel, hosted a workshop on the Unboxing School movement. It was clear that we need to start by identifying the problem we are trying to solve for learners, teachers and parents, not by fitting the available AI solution to it. We definitely need to work collaboratively to accelerate an agile transformation of educational systems so that every learner can achieve their potential.
AI can be beneficial in various ways
Technology has the potential to enhance equity by making education more inclusive and by providing additional learning opportunities for students from disadvantaged groups. There is the promise of cost-efficiency through automation as seen across many other sectors from retail to financial services. AI can make education more convenient, more enjoyable and more aligned with modern life.
The benefits of using AI in education range from helping students and supporting teachers to enabling the system itself to learn and improve.
- At a learner level: learning can be personalised and made more relevant and interesting by tailoring instruction and feedback to learners’ specific interests, needs and level.
- At a teacher level: AI can help reduce workload through automated assessment and monitoring, support easy diagnosis of learner differences and make teaching more learner-centred. Technology can improve the quality of teaching by:
- mediating discussions and analysing summaries through chatbots
- identifying learners who struggle and suggesting what support is needed using predictive algorithms
- monitoring learner engagement and performance using formative assessments
- At a system level: good data can optimise learning outcomes and reduce learning loss. For example, Victor Godoy Veiga, Brazil’s Minister of Education, mentioned that they are using predictive AI to reduce the dropout rate, with a call centre and community help to get kids back into school. Through better feedback loops, the system can support students to achieve improved outcomes and teachers to teach (and also learn) better.
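To make the idea of a predictive flag concrete, here is a minimal sketch of the kind of scoring a dropout-prevention system might use. This is purely illustrative: the feature names, weights and threshold are hypothetical, not those of any real ministry system, and in keeping with the human-oversight principle above, every flag goes to a person for follow-up rather than triggering an automatic decision.

```python
# Hypothetical sketch: flag learners at risk of dropping out so that
# staff (e.g. a call centre) can follow up -- a human acts on every flag.
# Feature names, weights, and the threshold are all illustrative.

def risk_score(attendance_rate, avg_grade, failed_courses):
    """Weighted score in [0, 1]; higher means higher dropout risk."""
    # Hand-tuned weights for illustration only; a real system would
    # learn these from historical data and audit them for bias.
    score = (0.5 * (1 - attendance_rate)        # missing school raises risk
             + 0.3 * (1 - avg_grade / 100)      # low grades raise risk
             + 0.2 * min(failed_courses, 5) / 5)  # failed courses raise risk
    return round(score, 3)

def flag_for_followup(students, threshold=0.4):
    """Return names of students whose score crosses the threshold, for human review."""
    return [s["name"] for s in students
            if risk_score(s["attendance"], s["grade"], s["failed"]) >= threshold]

students = [
    {"name": "A", "attendance": 0.95, "grade": 82, "failed": 0},
    {"name": "B", "attendance": 0.60, "grade": 45, "failed": 3},
]
print(flag_for_followup(students))  # → ['B']
```

Even a toy like this shows where the risks discussed later come in: the weights encode assumptions, so they need to be transparent and regularly checked against outcomes for different groups of learners.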
Challenges and issues related to AI
There are well-known risks related to data, such as privacy, security, bias, transparency and fairness. There are also design risks: poor design practices could unintentionally harm classes of users and fail to meet the real needs of end-users if those users are not involved in the design and testing of the solution. Challenges also exist around the ownership and usage of data. We need to ensure that data can be shared across platforms in an efficient, safe and appropriate way, and in a way that narrows the equity gap.
Better evidence is needed on whether specific uses of AI yield positive outcomes. We need to answer questions about how technology supports pedagogy, how it supports interventions in administrative processes, and how AI contributes to student equity and social inclusion.
One recent controversy relates to the use of an algorithm to decide A-level grades during the COVID lockdown in the UK. The algorithm exacerbated the inequalities that already exist in the education system. There was an uproar, and the government decided to use teacher-assigned grades instead because the use of the algorithm was unfair, opaque and undemocratic:
- It was unfair to individuals: 40% of students had their grades downgraded, disproportionately affecting the most disadvantaged.
- It was opaque: there were serious questions about the fairness of the statistical model, as grades were predicted not just from a student’s own performance but from the historical performance of other students and of their school.
- It was undemocratic because it favoured independent schools: the share of A grades and above at independent schools jumped 4.7% year-on-year, secondary comprehensives rose by 2%, selective secondary schools by 1.2%, while sixth form and further education colleges saw growth of only 0.3%.
Algorithms need to be transparent and accurate so that they do not widen the inequality encoded in historical data. It is very important to have human feedback loops and not let an algorithm decide the fate of young people.
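A toy model makes the unfairness easy to see. The sketch below is not the actual Ofqual algorithm; it simply blends a student’s own mark with their school’s historical average, which is enough to show how anchoring individuals to institutional history penalises strong students at historically low-scoring schools. The weight and marks are invented for illustration.

```python
# Toy illustration (NOT the actual Ofqual model): blending a student's
# own mark with the school's historical average pulls identical students
# apart based purely on where they studied.

def blended_grade(student_mark, school_hist_avg, weight=0.5):
    """weight = share of the final grade taken from school history."""
    return round((1 - weight) * student_mark + weight * school_hist_avg, 1)

# Two identical students (own mark: 85) at schools with different histories:
print(blended_grade(85, 55))  # weak-history school   → 70.0
print(blended_grade(85, 80))  # strong-history school → 82.5
```

The same mechanism works in reverse, lifting weaker students at historically high-scoring schools, which is one way historical advantage gets laundered into individual results.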
What does the future hold?
There are various scenarios from HolonIQ, OECD, World Bank and Nesta predicting the impact of AI in education. These depend on levels of regulation, affordability of solutions, use within or outside school, and scope of learning.
The skills that matter (creativity, collaboration, communication, critical thinking and emotional intelligence) are different from the narrow subjects being taught and assessed in our schools today. Systems that do well emphasise a broad set of skills, prepare children early on, reform continuously, and use the information for improvement and accountability.
There is no doubt that education will be reshaped by tech and connected devices. By creating a learning culture through iteration, controlled experimentation and nimble evaluations, we can transform the education system. Through evidence-based use of EdTech, we can address learning challenges and support diverse learner needs. I see AI as a tool to augment teachers, not replace them. The future is up to us to shape, based on the choices we make today.