Machines, algorithms, artificial intelligence, deep learning, machine thinking. There isn’t a day, perhaps not even an hour, in which our world (the one you create) doesn’t interact with a piece of technology. Have a look around you: what do you see? Connected heating controls, Bluetooth kettles, voice-controlled speakers, a mobile phone so powerful that it has six times as much RAM as our first family PC. If you are sitting on the fence, have a look at this wonderful Microsoft project in action. I’m not a betting person; however, I’d stake a small wager that you cannot fail to be impressed by the potential this has to alter lives.
We are in an age of disruption.
Definition – ‘The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages’.
The distinguished historian Roy Porter, through his research on the history of the Enlightenment, society and the development of medicine, spoke of the guillotine ushering in a new wave of social change, dispensing political medicine whilst industry across Britain and parts of Europe wrangled with issues of class, mechanisation and the creation of new wealth. Fast forward some two hundred and nineteen years to 2018, when the President of the French Republic, Emmanuel Macron, gave an interview to Wired magazine on France’s vision of ‘the fourth industrial revolution’, in which he said:
‘I think artificial intelligence will disrupt all the different business models. AI will raise a lot of issues in ethics, in politics, it will question our democracy and our collective preferences. If I manage to build trust with my citizens for AI, I’m done. If I fail building trust with one of them, that’s a failure’. (Thompson, 2018).
Elon Musk, meanwhile, has warned that ‘Competition for AI superiority at national level [is the] most likely cause of WW3’, and has called for AI to be regulated to keep us safe. Such a view seems odd, given that Musk’s company is at the forefront of engineering technology. Is that car he shot into space still floating up there?
It is interesting, then, that AI finds itself in the strange position of being both the technology of humankind’s future and the harbinger of its potential downfall.
What of Education?
[Image: Gerd Leonhard / Flickr Creative Commons]
AI remains a hot topic. Is it the new iPad, IWB, VR? There isn’t a day where someone doesn’t ask whether teachers and lecturers will be replaced in the near future. Whilst any technology has the capacity to develop (and there appears no reason why this isn’t the case with AI), I cannot see AI standing front and centre in a classroom. Rather, the power of AI lies in its ability to augment teaching and learning. The power of the educator is to acknowledge and understand the context in which it operates (the potential bias of algorithms, its limitations, ethics and application to teaching and learning).
One such branch of AI is the development of Learning Analytics (LA).
The most cited definition of LA was coined at the first International Conference on Learning Analytics and Knowledge in 2011, which describes the process as:
‘the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs’ (International Conference on Learning Analytics and Knowledge: 2011)
LA involves feeding in various datasets (VLE usage, attendance, assessment data, targets), aggregating the data into a model, and presenting the output through a graphical user interface, capable of interpretation, providing a rich source of data which can be used to make decisions, whether strategic or operational in nature. The most common use is early intervention for learners deemed to be at risk of withdrawing from, or not completing, their course of study. The concept of early intervention is nothing new in many regards: teachers, lecturers and support services are all aware that picking up problems early on is better than firefighting them further down the line.
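As a loose sketch of that pipeline, aggregating a few datasets into a single per-learner view might look like the following. The learner names, column names and figures here are entirely illustrative assumptions, not taken from any real LA product or institutional dataset:

```python
import pandas as pd

# Illustrative datasets only -- real LA systems draw on far richer sources
# (VLE logs, attendance registers, assessment records, target grades).
vle = pd.DataFrame({"learner": ["amy", "ben", "amy", "ben"],
                    "logins": [5, 1, 7, 0]})
attendance = pd.DataFrame({"learner": ["amy", "ben"],
                           "attendance_pct": [92, 61]})
assessment = pd.DataFrame({"learner": ["amy", "ben"],
                           "avg_grade": [68, 43]})

# Aggregate VLE activity per learner, then merge the datasets into one
# table -- the kind of "model input" a dashboard would then visualise.
usage = vle.groupby("learner", as_index=False)["logins"].sum()
profile = (usage
           .merge(attendance, on="learner")
           .merge(assessment, on="learner"))
print(profile)
```

In practice the aggregation step is where most of the design decisions live: which sources to trust, how to align them on a common learner identifier, and how often to refresh the merged view.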
Using a predictive model, learners can be brought to the attention of lecturers and support services at a much earlier stage, at which point interventions personalised to the learner can be put in place to help them get back on track. One school of thought is that LA tools simply allow for the more efficient use of staff time: staff would, in any event, be asked to undertake interventions based on the data collected. How many spreadsheets do you have flying around?
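To make the idea concrete, here is a minimal sketch of such a predictive model, using logistic regression over a tiny invented dataset. Every number, learner name and the 0.5 threshold are hypothetical assumptions for illustration; a real system would be trained on large institutional datasets and validated carefully before driving any intervention:

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: [attendance %, VLE logins/week, avg grade],
# with 1 = withdrew / did not complete and 0 = completed.
X = [[95, 6, 70], [88, 5, 62], [90, 7, 75], [55, 1, 40],
     [60, 2, 45], [50, 0, 35], [85, 4, 58], [45, 1, 30]]
y = [0, 0, 0, 1, 1, 1, 0, 1]

model = LogisticRegression().fit(X, y)

# Flag current learners whose predicted probability of withdrawal
# exceeds an (arbitrary, illustrative) threshold, so staff can
# intervene early rather than firefight later.
current = {"amy": [92, 5, 68], "ben": [61, 1, 43]}
THRESHOLD = 0.5
at_risk = [name for name, features in current.items()
           if model.predict_proba([features])[0][1] > THRESHOLD]
print(at_risk)
```

Note that the model only surfaces a probability; the decision about what an intervention looks like, and who delivers it, remains a human one.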
I’ll leave you with something to ponder.
A question of ethics
In an increasingly inter-connected world, building open and transparent LA systems in education should be at the core of any design team’s work. ‘They should act as a call to action rather than a restriction on action’ (Ferguson: 2016).
It is perhaps helpful at this juncture to consider some examples which illustrate the need for ethical implications to be paramount. Let’s consider some of the questions we must think about:
- Do educators and users of aggregated data from LA systems have a duty to act when they interpret the data to mean that a learner is at risk of not achieving?
- Does an LA system remove some of the agency and independence educators want to instil in their learners?
- What of the learner, on seeing information published from an LA system? Do they have a duty to act on the information themselves, and should they seek help knowing they are struggling?
- Input/Output – how good is the predictive model if core data is missing (assessment data, VLE usage, etc.)?
- How do educators ensure that they don’t fall into the trap of confusing correlation with causation, when causation might not exist?
- Could the system be gamed?
What do you think?
Keen to discuss and collaborate?
Get in touch.