Judith Good
University of Amsterdam, Netherlands; formerly at the University of Sussex, United Kingdom
OECD Digital Education Outlook 2021
6. Serving students with special needs better: How digital technology can help
Abstract
This chapter explores the role of technology in supporting students with special needs. The support ranges from helping disabled students to access the curriculum to providing disability-specific support so that students can participate in inclusive school settings. After highlighting the importance of supporting students with special needs, the chapter shows how technology can support a variety of special needs. It then focusses on three cutting-edge technologies which aim to: 1) support the development of autistic children’s social skills, 2) diagnose and support students with dysgraphia, and 3) provide access to graphical materials for blind and visually impaired students. The examples highlight the importance of involving students and stakeholders in the design of the solutions, and the need for developers to consider affordability as a key element of their development.
Introduction
The role of technology in providing support to students with special needs is widely recognised, and there is evidence for the effectiveness both of particular hardware platforms, such as mobile devices (Chelkowski, Yan and Asaro-Saddler, 2019[1]; Ok and Kim, 2017[2]), and of numerous specialised software packages and apps. Educational technologies are likely to play an increasingly pervasive role for students with special needs, with educators called to keep abreast of technology developments so as to make informed decisions on the use of such technologies in the classroom (McLeskey et al., 2017[3]).
Interestingly, however, despite the plethora of educational technologies for special needs (see, e.g., Cheng and Lai (2020[4]) and Erdem (2017[5]) for recent reviews), few could be considered to be “smart”. In parallel with this, there is a significant and longstanding body of work in the area of artificial intelligence and education (for recent reviews, see Alkhatlan and Kalita (2018[6]) and Chen, Chen and Lin (2020[7])), as well as clear evidence of the effectiveness of intelligent tutoring systems above and beyond other forms of computer-based learning (Kulik and Fletcher, 2016[8]). However, there is a dearth of educational artificial intelligence (AI) systems which target students with special needs. Indeed, a review of articles published in the International Journal of Artificial Intelligence in Education in the past five years did not uncover any which focused on inclusion, accessibility or special educational needs. As noted by Kazimzade et al. (2019[9]), although adaptive educational technologies and inclusion, in the broadest sense, are two key considerations in the current educational landscape, they intersect less often than one would expect.
It is difficult to say why there is so little overlap between these two areas. When considering educational provision for children with special needs, the World Health Organization (2011[10]) has highlighted the need to adopt more learner-centred approaches, which recognise differences in the way that people learn, and are able to flexibly adapt to individual learners. As such, this seems like the ideal opportunity to explore the ways in which already established and newly emerging methods and approaches in the field of AI and education could be extended and adapted to provide support for children with special needs.
The potential of smart technologies to serve and support students with special needs is particularly important in light of the likely increase in students identified with such needs. In 2000, the OECD estimated that at some point during their schooling, approximately 15 to 20% of young people would be considered to have a special educational need (OECD, 2000[11]). Twenty years on, this figure is likely to be even higher, given that the recognition of disability among children has been increasing steadily year on year (Houtrow et al., 2014[12]). Although the rate of physical disabilities has decreased over time, there have been significant increases in the rate of developmental disabilities (Zablotsky et al., 2019[13]) with the latter now estimated to affect 17.8% of children in the United States (Zablotsky and Black, 2020[14]).
There are a number of possible reasons for this increase, including changing definitions of what constitutes a particular disability (in the case of autism, Volkmar and McPartland (2014[15]) provide a detailed account of changing conceptualisations since its first formal description in 1943) as well as improved access to diagnostic services. An in-depth consideration of these phenomena is beyond the scope of this report; however, it is important to highlight two issues. Firstly, given that more than one in six children are now considered to have a developmental disability, it is extremely likely that any mainstream classroom will have at least one student, and probably more than one, who will require additional resources to support their learning. Secondly, the steady increase in diagnostic rates may continue as we uncover additional forms of disability and become more accurate at identifying those we already know, further increasing the number of children who will require additional support.
In an educational context, disabled children are at a disadvantage compared to their typically developing peers. According to the World Health Organization (2011, p. 208[10]), “despite improvements in recent decades, children and youth with disabilities are less likely to start school or attend school than other children. They also have lower transition rates to higher levels of education”, a trend which continues (UNESCO Institute for Statistics, 2018[16]). This, in turn, has long-term negative impacts on people’s futures, potentially affecting their integration within society and their career prospects. For example, in the UK, only 16% of autistic adults are in full-time paid employment, despite the fact that 77% have expressed a desire to work (National Autistic Society, 2016[17]). Furthermore, of the small minority who are in employment, over half feel that their jobs do not make use of the skills they actually possess.
A final important consideration is that, given this increasing rate of identified developmental needs, supporting students with special needs increasingly intersects with the broader equity agenda. The development of technologies that help diagnose and address student disabilities (e.g. dyslexia, dysgraphia, dyscalculia, some hearing or visual impairments) will help close the achievement gap and improve learning outcomes within countries.
In this report, I consider the near future of developing “smart” technologies for students with additional needs, by focusing on three case studies and drawing conclusions for future work. Before doing so, I consider definitions of disability and special needs, and their intersections with education and with technology, in more detail below.
Education, technology and special needs
Broadly speaking, special needs support in an educational context refers to support for needs that a disabled child may have that are different to those of their typically developing peers (see Box 6.1 for terminology). Providing effective support for special needs is complex, and requires careful thought and planning. Students’ needs change over time due to various factors (their individual developmental trajectory, previous support, etc.). Their needs may become more or less pronounced, requiring an ongoing assessment of what support is appropriate at any given time. Co-morbidity, defined as having more than one disability (also termed ‘multi-morbidity’ depending on the source), is another complicating factor.
Box 6.1. Conceptions of disability and special need support
There are no agreed definitions of disability or special needs and furthermore, the relationship between the two is not always straightforward. Definitions vary by country, and are categorised and classified in different ways. Diagnostic processes and pathways also vary, both within and across countries, and change over time. However, it is important to have a basic understanding of differing perspectives on the nature of disability, as they have significant implications for learning and education.
As our understanding of disability changes and develops, so too does the terminology used to describe it, which has led to different models of disability (Marks, 1997[18]). Traditional models, such as the medical model, focus on “impairment”, and locate the source of that impairment within the individual, often with a view to trying to provide a “cure”. In contrast, social models look at the intersection between individuals and their environments and, in particular, at the ways in which a particular environment might act to produce an impairment. For example, a wheelchair user within the medical model might be considered to have a prima facie impairment, whereas within the social model, the impairment could be considered to arise from the fact that a given building does not have ramps or lifts, rather than being an intrinsic characteristic of the individual.
Furthermore, certain terms can have negative connotations, such as “disabled”, which implicitly suggests that most people are “abled”. This can lead to stigma and exclusion (Sayce, 1998[19]). Proponents of the term “neurodiversity” (originally coined by Singer (1999[20]) in reference to autism, but now used more broadly for a number of conditions including attention deficit hyperactivity disorder (ADHD) and dyslexia) consider these conditions to be neurological variations which have both positive and negative aspects. They reject attempts at “normalisation” and instead argue for a deeper understanding of these different ways of being in, and interacting with, the world.
It is important to recognise the tensions inherent in these different views of disability, and the ways in which they shape our perspectives on the types of support that are necessary, and the types of technologies that are designed as a result.
The additional resources needed to provide this support can take a number of forms, including financial resources, personnel resources (e.g. additional teachers, or teaching assistants), or material resources. This chapter focuses on the latter type of resource, considering how technologies, and smart technologies in particular, can contribute to supporting students with additional needs.
Although there are numerous ways of further categorising this support, it can be useful to consider the aim of the support along a continuum. At one end are technologies designed to facilitate access to the curriculum and allow disabled children to participate in typical classroom learning activities. In this case, the technology allows children to access the same curricular content as their typically developing peers. As an example, providing blind or visually impaired (BVI) students with technologies that have text-to-speech capabilities will allow them to access (at least some of) the curricular materials used by their peers, making it easier for them to learn in an inclusive school setting.
At the other end of the continuum are technologies designed to address and provide support for issues related to the child’s disability. In this case, the content of the intervention is typically not part of the standard school curriculum. An example of this type of technology would be interventions for autistic students which are designed to support the development of their social and communication skills. Technologies at this end of the continuum tend to be more contentious: as mentioned above, differing perspectives on disability can lead to debates around the types of interventions and technologies that might best support disabled children. Often these views are implicit, but nonetheless drive the development of educational technology, influencing decisions about what types of support are needed, and why.
As an example, a recent review of technologies for autistic children found that the majority focus on social skills training (Spiel et al., 2019[21]), which implicitly suggests that this is the area of most concern for parents and educators (even though it may not be). The authors maintain that many of these technologies require children to
“learn the modes of interaction that are deemed as appropriate by neurotypical adults without the adults having to learn how the autistic children might want to engage...” (Spiel et al., 2019, p. 18[21]).
At the same time, the authors do acknowledge that having such skills might provide autistic children with strategies for coping in a neurotypical world. It may be that increased coping skills might, in turn, lead to improvements in mental well-being, and indeed, this seems to be the case, with interventions targeting social skills also reducing both depression and anxiety (Rumney and MacMahon, 2017[22]).
Finally, it is important to note that many of the technologies designed primarily to provide support for the specific aspects of a child’s disability may well have a secondary effect of improving access to the standard curriculum. For example, providing support for social and communication skills for autistic children (support for special needs) may well help them to participate more easily in curricular activities which involve group work and collaboration (access to curriculum). Similarly, technologies designed to support children with ADHD with self-regulation skills, such as the ones described in Box 6.2 (support for special needs), may well lead to them being able to engage with more of the topics being taught, or to engage with them in more depth (access to curriculum).
Box 6.2. Technologies supporting students with attention deficit hyperactivity disorder (ADHD)
Technologies designed for students with ADHD focus on different aspects of the condition, for example self-regulation (i.e. learning to manage one’s thoughts, behaviours and emotions). The technologies described below are not yet widely used, but are in test phases in various countries.
A key component of self-regulation is emotional regulation, which involves learning to recognise one’s emotions and manage them in a way that is appropriate to the situation. One way of regulating emotions and reducing stress is through breathing exercises. However, children may not find the exercises engaging or motivating. ChillFish, a Danish solution, is a biofeedback game in which children use a breath-based game controller to control the movements of a fish on the screen. The aim is to help the fish collect as many starfish as possible, achieved through slow, continuous breathing patterns (Sonne and Jensen, 2016[23]; Sonne and Jensen, 2016[24]). ChillFish’s impact was measured using heart rate variability (HRV) and electrodermal activity, and it was found to be as calming as regular relaxation exercises for ADHD students.
Researchers in the United States are currently developing a smartwatch/smartphone application called CoolCraig (Doan et al., 2020[25]) to support co-regulation (where parents and teachers provide support such as helping redirect a child’s attention, helping them initiate tasks, giving praise, etc.) of children with ADHD. Parents and teachers can use the CoolCraig app on their phone to set goals for the child, who can then select a goal from their smartphone. Once the goal is achieved, the parent or teacher receives a notification, and the child receives a set number of points determined by the adult, which can later be exchanged for a reward. CoolCraig also helps with emotional regulation by asking children to report on their current emotional state (using a colour-based system). The system can offer appropriate suggestions (e.g. “take a deep breath”) and also allows children and adults to see a visualisation of their moods over time, which can encourage reflection.
Although the researchers feel that the approach behind CoolCraig may be of benefit, their preliminary design work with children with ADHD also uncovered a number of challenges (Cibrian et al., 2020[26]). Among them were children’s hesitations about whether an app would be the best way of receiving support (rather than through a parent or teacher), their concern that the support might actually be a distraction, their fear of stigma and potential embarrassment (not wanting to receive alerts or notifications in front of friends), and their desire for privacy (not wanting to be obliged to share their personal information with their parents and/or teachers). Such issues make it crucial to listen to children and understand their lived experiences when designing technology to support them; otherwise, the technology runs the risk of not having the desired impact.
Some examples of learner-centred approaches to smart technologies
How can technology support the need for flexible, adaptable and learner-centred approaches for children with special needs (World Health Organization, 2011[10])? In this section, I describe three such approaches, focusing on autism, dysgraphia and visual impairment respectively.
The ECHOES environment
ECHOES (Porayska-Pomsta et al., 2018[27]) is a technology-enhanced learning environment designed to scaffold autistic children’s exploration and learning of social communication skills through a series of playful learning activities, some of which involve a virtual character with which the child can interact. The target group for ECHOES is children with a developmental age of between 4 and 7 years (note that in the case of autistic children, their chronological age may be much higher as a result of having additional learning difficulties).
The ECHOES environment (Figure 6.1) was designed to run on a large multi-touch screen with sound output. Children can sit or stand in front of the screen, and physically interact with the system by dragging, tapping and shaking objects.
Interactions are set within a “magic garden” where the objects within the garden have unusual properties designed to spark curiosity and encourage exploration. For example, touching and dragging a flower head detaches it from its stem and transforms it into a bouncy ball. The magic garden is also home to Andy, an intelligent agent, with whom the child can play and interact. Andy acts both as a guide to the child, explaining activities and providing support, and also as a peer, taking turns with the child in activities such as a sorting task.
Autism is a lifelong, neurodevelopmental condition that affects the way in which a person interacts and communicates with others, and the manner in which they experience the world around them (National Autistic Society, 2016[28]). Social and communication difficulties are one of the hallmarks of autism and, given that autism is a spectrum condition, these difficulties will manifest in different ways at different points on the spectrum (e.g. they might range from difficulties in understanding the give and take of a typical conversation to not initiating communication at all). There can also be marked differences between individuals considered to be at the same point on the spectrum, and even within individuals at different times (for example, in situations of stress or anxiety, these difficulties are likely to be exacerbated).
Social and communication difficulties can have a profound and sustained impact on an individual’s social and emotional well-being, leading to issues with making and maintaining friendships (Kuo et al., 2013[29]), resulting in loneliness (Locke et al., 2010[30]), isolation (Chamberlain, Kasari and Rotheram-Fuller, 2007[31]) and a significantly increased likelihood of being bullied (Cappadocia, Weiss and Pepler, 2012[32]). Over time, these difficulties can have a profoundly negative effect on children’s mental health (Whitehouse et al., 2009[33]), and their sense of self-worth and self-esteem (Bauminger, Shulman and Agam, 2004[34]). Furthermore, these difficulties persist throughout an individual’s life, with many autistic adults reporting a marked sense of isolation, despite their desire to be more engaged with others (Müller, Schuler and Yates, 2008[35]).
ECHOES is based on the SCERTS model (Social Communication, Emotional Regulation, Transactional Support) (Prizant et al., 2006[36]). One of the overall goals of SCERTS is to support autistic children in developing competence and confidence in social activities. A distinctive aspect of SCERTS is that it aims to identify children’s strengths and build on them when developing further skills.
Another interesting aspect of SCERTS is what it terms “transactional support”, which considers the role of the child’s environment, including the people in that environment, in supporting the development of the skills in question. Children will be more successful in developing social competence when the environment can adapt to their particular needs so as best to support them (and this includes social partners: children’s social competence increases when they are surrounded by partners who are understanding, supportive and enjoy interacting with them).
In terms of social communication, the SCERTS model focuses on two key foundational skills, namely joint attention and symbol use. Joint attention refers to the ability to share attention, emotion and intention with partners, and to engage in turn-taking and reciprocal social interactions. Symbol use includes using objects, pictures, words or signs to represent things and share intentions, and the ability to use objects in play.
In designing the virtual environment, the ECHOES team chose a participatory approach involving a wide range of stakeholders, including parents, carers, practitioners, teachers and, most importantly, autistic children (see, e.g., Frauenberger, Good and Keay-Bright (2011[37]) and Frauenberger et al. (2013[38])).
The aim of ECHOES was to create an environment in which children’s strengths and abilities can be discovered and built upon, and it comprised both exploratory and task-focused activities. The exploratory activities were designed to give children a sense of autonomy and agency within the environment, while the task-focused activities provided opportunities for the agent to model initiation and response behaviours to the child. An example of the latter would be taking turns with the virtual agent, Andy, to sort a fixed number of differently coloured balls into the appropriately coloured boxes. In the case of exploratory activities, there was no such fixed end point: examples include taking turns with the agent to shake clouds, which in turn causes it to rain, and flowers to grow (see Figure 6.2). In these cases, the practitioner can use the child’s interest and engagement in order to decide how long the activity should last, and when to move to a different activity.
The actions and behaviours of the intelligent virtual character, Andy, are underpinned by an autonomous planning-based agent. The planner works at both the reactive level and the deliberative level. The deliberative level is concerned with longer term plans related to a particular learning activity. For example, if the long-term goal within the activity is to encourage the child to pick up a basket, the deliberative level will focus on the set of actions that Andy needs to execute in order to encourage this to happen. By contrast, the reactive level refers to generating agent reactions to the child’s immediate interface actions (e.g. what the child is touching, for how long, whether it is the right object, etc.).
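To make the two levels concrete, the sketch below shows how a deliberative plan and a reactive layer might be combined in a single agent loop. It is a minimal illustration in Python, not the actual ECHOES planner: the class names, the touch-event format and the goal and action labels are all assumptions introduced for the example.

```python
# A minimal sketch of a two-level (deliberative/reactive) agent loop,
# loosely inspired by the architecture described above. All names and
# structures here are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List


@dataclass
class TouchEvent:
    target: str      # which on-screen object the child touched
    duration: float  # how long the touch lasted, in seconds


@dataclass
class Agent:
    # Deliberative level: an ordered plan of actions serving a longer-term
    # goal within the current learning activity (e.g. "pick up basket").
    plan: List[str] = field(default_factory=list)

    def deliberate(self, goal: str) -> None:
        """Build a longer-term plan of agent actions towards the goal."""
        if goal == "child_picks_up_basket":
            self.plan = ["look_at_basket", "point_at_basket", "ask_child_to_pick_up"]

    def react(self, event: TouchEvent, expected_target: str) -> str:
        """Reactive level: respond immediately to the child's interface actions."""
        if event.target == expected_target:
            return "praise_child"
        if event.duration > 5.0:
            return "offer_gentle_prompt"  # lingering on the wrong object
        return "redirect_attention"

    def step(self, event: TouchEvent, expected_target: str) -> str:
        # Immediate reactions take priority; otherwise advance the plan.
        reaction = self.react(event, expected_target)
        if reaction != "praise_child" or not self.plan:
            return reaction
        return self.plan.pop(0)


agent = Agent()
agent.deliberate("child_picks_up_basket")
print(agent.step(TouchEvent(target="basket", duration=1.2), "basket"))
```

The design point the sketch captures is simply that reactions to the child’s immediate actions take priority, while the longer-term plan resumes whenever no reaction is required.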
Although the system can perceive a child’s touch behaviours, and respond accordingly, it cannot detect other aspects of the child’s interaction with the system or general behaviours and, as such, is unable to determine when it might be appropriate to repeat a particular activity (because the child is finding it soothing or enjoyable), move to another activity (because the child is bored or frustrated), or stop the session.
Instead, these decisions are made by the practitioner accompanying the child by using a specially designed practitioner interface. The practitioner interface is accessed on a separate screen from the main ECHOES touch screen so as not to distract the child. The interface allows the practitioner to control the choice, duration and sequencing of the learning activities. The practitioner can also use this screen to prompt the agent to repeat a behaviour, or to move on, where necessary. Thus, action within the ECHOES environment is driven by a combination of practitioner/researcher expertise and the system’s intelligent planning.
As noted above, the team used a participatory design process, with the aim of involving end users in the design of the system as much as possible. One way of involving autistic children, for whom standard methods of gathering feedback such as focus groups and interviews would be inappropriate, was to quickly design prototypes and observe how children used them. In one such case, this led the team to completely reconceptualise the role of ECHOES within the broader context. It was initially envisaged that the primary social interactions would occur between the child and the social agent, and the study setup was arranged accordingly, with the researcher seated in the background, out of the child’s line of sight. However, from the initial prototype testing, the research team noticed that children often turned to the researcher or practitioner to share affect with them, or to initiate a conversation, typically about something happening in the ECHOES environment. This led to the reframing of ECHOES in a broader context, recognising the importance of the adult as another social partner, and arranging the setup to make it easier for the child to interact with the adult(s) in the room (see Figure 6.3).
ECHOES: Findings
The natural exchange of social interaction includes verbal and non-verbal initiations (e.g. asking a question, making an observation, pointing to something to draw another person’s attention to it) and responses to those initiations (e.g. answering a question, nodding, following the person’s pointing gesture to the object of interest). Although autistic children typically experience difficulties with both types of social interaction, initiations are typically more difficult than responses.
In evaluating the ECHOES environment, the team were interested in determining whether, when using ECHOES, autistic children showed an increase in initiations and responses, and whether these social interactions differed depending on whether they involved the agent or the human practitioner. Another research interest lay in understanding whether any increases transferred outside the ECHOES environment (in this case, to a free play session with a practitioner).
Interestingly, when children interacted with ECHOES, their initiations increased over time, towards both the human and the agent (although this increase was not statistically significant). Children’s responses to the human partner also significantly increased, but their responses to the intelligent agent decreased. These increases in initiation and response behaviours did not, however, transfer to the free play session.
These are interesting findings, with particularly interesting implications for the use of technology in an educational context. The increase in initiations is positive, and it is also interesting to note that, over time, children responded more to the human partner, but less to the agent. This suggests that they were aware of the limits of the technology, realising that Andy could only detect their response if it was expressed through touch, unlike the human partner.
Above and beyond the changes in initiations and responses, one of the interesting findings emerging from ECHOES was the fact that the children seemed to genuinely enjoy interacting with it. As noted above, although the team initially imagined that children would interact primarily with the virtual agent, they noticed that children often wanted to share their experience with someone in the room (as in the case of the child in Figure 6.3 expressing their enjoyment with one of the researchers).
This likely explains two aspects of the results: social communication may have increased within the ECHOES setting because the on-screen interactions provided the child with things they felt were “worth communicating about” (Alcorn, Pain and Good, 2014[40]), and the same mechanism would explain the decrease in social communication behaviours once children were no longer using ECHOES. Two implications follow. Firstly, we should think about how to design technologies which, rather than taking a drill-and-practice approach to skills learning, aim to provide experiences that are engaging and motivating. Ideally, where possible, we should be thinking about how to make these experiences enjoyable, where learning is embedded in, for example, environments which allow for playful, self-directed exploration (Mora-Guiard et al., 2016[41]) or playful multisensory learning experiences (Gelsomini et al., 2019[42]).
Secondly, rather than considering technology as a discrete entity from which the child transfers skills into the “real world”, it makes more sense to think about how children’s experiences with learning technologies can effectively be embedded and integrated within the broader educational environment. As such, it is important to understand human-computer systems as forms of social systems, and consider them in the round.
Dysgraphia: diagnosis and beyond
As noted above, the field of special needs and disability is vast and wide-ranging, encompassing physical disabilities as well as cognitive and neurodevelopmental disabilities. The diagnosis of any disability will require specialist input and assessment. However, certain disabilities, such as dyslexia or dysgraphia, may only become apparent in an educational setting; the first indication of a potential disability may therefore be a teacher’s concern.
The process of obtaining a diagnosis for a special need is typically lengthy, resource-intensive and stressful for both children and their families. At the same time, early intervention, specifically targeted to an individual’s needs, is often key to helping the child develop and progress. As such, any tools that might help teachers to recognise the early signs of a potential disability and support the child and their family in seeking a specialist diagnosis could have a huge impact on a child’s education, and their future. This is not to suggest in any way that teachers themselves should be carrying out a diagnosis. However, the teacher’s perspective is often sought as one input into the diagnostic process, usually in the form of a report, questionnaire, etc. As such, if tools were available to allow teachers to more quickly recognise differences in a child’s development within an educational setting, families could potentially initiate the diagnostic process more quickly. Furthermore, providing specialists with more detailed information than would otherwise be the case might also allow diagnoses to be made in a more timely fashion.
A team at the EPFL in Lausanne has developed a system for detecting dysgraphia in children which has shown very promising results (Asselborn et al., 2018[43]; Asselborn, Chapatte and Dillenbourg, 2020[44]; Zolna et al., 2019[45]). Dysgraphia refers to difficulties with handwriting, which can manifest as distorted writing and difficulties forming letters correctly, with letters sometimes written backwards and/or out of order. Associated problems with spelling may also be present.
Dysgraphia is typically diagnosed using one of a number of standardised tests. Although there are variations across the tests, all involve the child copying some text, which is then assessed by an expert in order to determine legibility (by measuring it against a set of criteria) and efficiency (by counting the amount of text which is produced within a given period of time) (Biotteau et al., 2019[46]).
The disadvantages of such tests are that they are subjective and expensive. Furthermore, their focus is primarily on the output, i.e. the written text, rather than on the processes used to arrive at that output. Asselborn and colleagues have developed a machine-learning algorithm which can detect dysgraphia, and which runs on a standard, commercially available tablet. In order to develop this tool, they first collected data from almost 300 children (both typically developing and those diagnosed with dysgraphia), who were asked to copy text onto a tablet with a sheet of paper overlaid on the surface (to mimic typical writing practices). They then used part of this data to train a machine-learning classifier to detect dysgraphia, and the remaining data to test the accuracy of the classifier. They were able to detect dysgraphia with a high degree of accuracy (approximately 96% in the study reported in Asselborn et al. (2018[43])).
In the process, they were able to extract 53 features which describe various aspects of a child’s handwriting, such as pen tilt, amount of pressure, speed and changes in speed. They were then able to determine which of these features were most discriminative, in other words, which ones distinguish handwriting from children with dysgraphia as compared to typically developing children.
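As a hedged illustration of this general approach (a sketch of the workflow only, not the team’s actual pipeline or feature set), the Python snippet below trains a standard classifier on a table of per-child handwriting features, measures accuracy on held-out data, and then ranks the features by how discriminative they are. The feature names and data are synthetic placeholders, not the 53 features from the study.

```python
# Illustrative sketch: train a classifier on per-child handwriting
# features, evaluate accuracy on held-out data, then inspect which
# features discriminate best. Synthetic data and placeholder features.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = ["pen_pressure", "pen_tilt", "speed", "speed_variability"]

# Fake data: 300 children, 4 features, binary label (1 = dysgraphia).
X = rng.normal(size=(300, len(features)))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=300) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
for name, importance in sorted(
        zip(features, clf.feature_importances_), key=lambda p: -p[1]):
    print(f"{name}: {importance:.2f}")
```

The feature-importance ranking in the final loop corresponds to the “most discriminative features” idea described above: it shows which measurements contribute most to separating the two groups.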
One of the real advantages of the system is the fact that it does not require any specialist hardware: it works on a commercial graphics tablet, meaning that it is low cost, and potentially usable by non-specialists. Furthermore, compared to traditional methods of diagnosing dysgraphia, the system can analyse the process of writing, rather than just the product. The features identified above, such as pen pressure, pen tilt, etc., can provide a more finely grained analysis of the difficulties the child is experiencing, rather than simply identifying whether or not a child is dysgraphic. This then means that the child’s support needs can be addressed in a much more specific and targeted way.
These findings have been taken forward in the form of Dynamico (www.dynamico.ch/), a tablet-based app which will soon be commercially available. Designed for an iPad and an Apple Pencil, Dynamico supports children with handwriting difficulties in multiple settings, and can be used at home, in the classroom and with therapists. The app includes tools which can analyse children’s writing in 30 seconds, and then allow therapists to create a personalised remediation programme for the child, based on the results of the analysis. Teachers are also able to use the tool to create individualised sequences of learning activities for each child in the classroom. The tool can also be used at home, with teachers and/or therapists able to monitor the child’s progress remotely. From the child’s perspective, the activities take the form of games which have been designed to be playful and engaging. Figure 6.4 shows a screenshot from the app where children use the stylus to practise their letter tracing skills: doing so moves the raccoon from start to finish, and rewards for precise tracing can be collected along the way.
In addition to dysgraphia, technologies are also being designed to support students with dyslexia and dyscalculia: Box 6.3 provides some examples.
Box 6.3. Smart technologies for dyslexia and dyscalculia
There are a number of technologies which can provide support for students with dyslexia, ranging from generic tools which can be used within an educational context, to those that are designed specifically for educational use. Good examples of the former are web browser plugins which aim to facilitate reading by giving users the ability to change aspects of the web page such as background colour, font size and word spacing. One such example, Help me read! (Berton et al., 2020[47]), also provides an “easy reading mode”, which highlights and magnifies a single word of text at a time, allowing users to focus on each word, and move to the next at their own pace.
The use of smart technologies to diagnose dyslexia is also gaining traction. Interestingly, the diagnosis of dyslexia can be more or less difficult depending on the language. Languages differ in terms of the consistency of the relationship between grapheme (letter) and phoneme (sound). In languages with less consistent relationships, such as English, children with dyslexia may struggle more when learning to read, whereas in languages with a more consistent relationship, such as Spanish, dyslexia may not be picked up until much later, thus reducing the possibility for early intervention. Researchers in Spain and the United States have developed Dytective, an online game using machine-learning algorithms which was able to correctly diagnose dyslexia in Spanish speakers with over 80% accuracy (Rello et al., 2016[48]), thus increasing the likelihood that children can get the support they need as early as possible.
In addition to detection and diagnosis, smart technologies can also support the development of skills in children with dyslexia. Such technologies include PhonoBlocks (Fan et al., 2017[49]), a system incorporating 3D tangible letters which children can physically manipulate to spell words. The letters have the ability to change colour depending on the sound that they make in a given word. For example, the colour of the letter A is yellow in the word “fad”, but changes to red in the word “fade”, thus providing support for the child to better understand the relationship between the letter and the sound that it makes, as well as the ways in which these relationships can differ.
In contrast to dyslexia, there are very few new technologies designed to support students with dyscalculia (a difficulty with the comprehension of numbers and mathematical calculations). However, a research team in Germany has recently developed a system called Hands-On Math, which aims to support children in learning to use their fingers to represent numbers and perform simple calculations (Erfurt et al., 2019[50]). Typically, this is taught in one-on-one sessions with a trained practitioner, meaning that children have limited access to such support. The system calls out numbers or simple mathematical calculations which the child needs to represent using their fingers. Because the child wears gloves with markers attached to each finger, the system can use a camera to track the child’s calculations and determine if they are correct. Hands-On Math is currently in development and has received very positive reviews.
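Once the vision component has detected which glove markers are visible, the final checking step in a system of this kind can be very simple. The toy Python sketch below illustrates that step only; it is an assumption-laden reconstruction, not the Hands-On Math implementation, and the camera-based marker detection is stubbed out.

```python
# Toy illustration of the checking step in a finger-counting system like
# Hands-On Math. The camera/vision step is stubbed out: in a real system,
# the set of visible markers would come from a tracking library.

def count_raised_fingers(detected_markers: set) -> int:
    """Each glove marker the camera can currently see counts as a raised finger."""
    all_markers = {f"{hand}_{i}" for hand in ("left", "right") for i in range(5)}
    return len(detected_markers & all_markers)

def check_answer(a: int, op: str, b: int, detected_markers: set) -> bool:
    """Compare the child's finger count against the called-out calculation."""
    target = a + b if op == "+" else a - b  # only simple sums in this sketch
    return count_raised_fingers(detected_markers) == target

# The system calls out "3 + 4"; the camera sees seven raised markers.
seen = {"left_0", "left_1", "left_2", "right_0", "right_1", "right_2", "right_3"}
print(check_answer(3, "+", 4, seen))  # True
```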
Visual impairment and interactive tactile graphics
Background
As with other disabilities, pupils who are blind or visually impaired (BVI) face gaps in educational attainment, decreased access to further and higher education, high unemployment rates, and lack of access to certain careers.
In many OECD countries, the overwhelming majority of BVI students attend, or are encouraged to attend, a mainstream school. This presents a number of challenges to ensuring their participation in everyday classroom activities. Metatla (2017[51]) provides an excellent overview of some of these challenges, such as facilitating collaborative work when BVI children and sighted children are using different materials which they cannot share between them, or the fact that classroom walls are typically covered in visual materials, such as posters, charts, etc., which are deemed important for learning, but which are inaccessible to BVI students. In this section, I will focus exclusively on access to educational material, specifically graphics.
Most educational materials have a significant visual component, whether they take the form of text or graphics (or, typically, some combination of the two). There are various options for accessing written text. Although not without usability issues, screen readers take written digital material and use text-to-speech engines to translate the written text into audio form. Screen readers are widely available for both PC and mobile devices (e.g. JAWS or NVDA for Windows, or VoiceOver for Mac and iOS). In situations where a BVI person prefers to read the text rather than listen to it, refreshable Braille displays offer a solution. These displays are separate pieces of hardware which take input from the screen reader and translate it into Braille output (using moveable pins) which can then be read by the user (note that these displays are also input devices, allowing BVI users to enter text into a computer) (see Box 6.4). Many of these devices can, however, be quite costly.
Box 6.4. Technology supporting blind and visually impaired students
A number of technologies support the educational needs of blind and visually impaired (BVI) students.
The first type is hardware which can help BVI students with both note-taking and reading. Such solutions are used in both low- and high-income countries. Portable refreshable Braille display devices, featuring refreshable six- or eight-dot Braille cells, allow BVI students to access written learning materials and books in Braille. These devices work with many languages: they can display texts already in Braille (as readers), translate text from a variety of applications into Braille (as Braille translators), and allow for printing on Braille printers. They can connect to other devices using Bluetooth or USB, allowing teachers to interact with their BVI students using compatible apps on their smartphone, computer or tablet – providing teachers with real-time text translation of the Braille being read or written by the students on the device (or vice versa). Braille Me, BrailleRing, and Orbit Reader are examples of such devices. For example, eKitabu, EdTech Hub and the Leonard Cheshire foundation used the Orbit Reader 20 in the Kenyan Nyanza region during the COVID-19 school closures, in conjunction with a teacher-training programme, to ensure that blind and visually impaired students continued to learn (see https://edtechhub.org/2021/01/08/using-innovative-methods-to-train-teachers-of-blind-children-what-we-learned/). Another type of device, based on AI technology and designed to support BVI students (and people more broadly), is the FingerReader, a device worn as a ring around the index finger that reads aloud whatever the user points it at, thanks to its camera, text recognition and text-to-speech algorithms (Shilkrot et al., 2015[52]).
Other forms of technology are software-based and use regular computers to boost the independence of visually impaired students. For example, SuperNova allows users to magnify the screen, have punctuation announced, replace difficult colours, increase verbosity, hear webpages read aloud, and turn on Braille output. The system reads characters and words aloud as the user types, and reads aloud web pages, applications, documents and emails. Programmes that support reading in this manner are a key way for visually impaired students to be more self-sufficient in a regular classroom setting and are used throughout the world for such purposes. Beyond SuperNova, examples of such programmes are JAWS, Microsoft Narrator, NVDA, Orca and Window-Eyes. Besides aiding blind or visually impaired students, this technology can also support students with dyslexia or related forms of learning difficulties.
However, both text-to-speech capabilities and refreshable Braille displays are designed to work with text-based materials. There is no analogous low-cost solution for graphical representations, meaning that access to graphical content for BVI individuals remains a challenge. On web pages, alt-text (alternative text) can be used to provide a textual description of an image, which is then read out by the screen reader. However, these alt-text descriptions are sometimes missing (as they are reliant on the content creator of the web page to include them in the page’s source code) and often vary in quality (as the content creator has free rein in deciding how best to describe the image).
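To make this dependence on content creators concrete, the short sketch below audits a web page for images whose alt-text is missing or empty, using the widely available requests and BeautifulSoup libraries. It is a simple illustrative sketch, not a tool from the studies cited here, and the URL is a placeholder.

```python
# Simple sketch: audit a web page for images whose alt-text is missing
# or empty, which a screen reader would then be unable to describe.
# Requires the requests and beautifulsoup4 libraries.

import requests
from bs4 import BeautifulSoup

def audit_alt_text(url: str) -> None:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    images = soup.find_all("img")
    # An image with no alt attribute, or a blank one, is inaccessible.
    missing = [img for img in images if not (img.get("alt") or "").strip()]
    print(f"{url}: {len(missing)} of {len(images)} images lack usable alt-text")
    for img in missing:
        print("  missing alt:", img.get("src", "<no src>"))

audit_alt_text("https://example.com")  # placeholder URL
```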
Recent research, although not education-specific, is both relevant and timely, and suggests a further reason for such difficulties. In a study on access to public health information about the COVID-19 pandemic, Holloway et al. (2020[53]) found that over 70% of the websites they surveyed used graphical representations to convey information about the pandemic (for example, visualisations of real-time statistical data). However, fewer than a quarter included alt-text information for these graphics, not because the web page creators chose to omit it, but because graphics which are interactive, or automatically updated, do not support alt-text. This means that important information which is present in these visualisations is inaccessible to BVI users.
Unfortunately, this gap in access to graphical information seems to be widening, as graphics are increasingly favoured over text as a way of exchanging information (Gorlewicz et al., 2018[54]), and providing information in a digital format is both easier and more cost-effective than paper-based alternatives. Paradoxically, novel interactive technologies for learning may further decrease potential access to information for BVI children, given their reliance on visual content and interactions such as drag and drop (Metatla et al., 2018[55]). Particular curriculum subjects also present challenges. In some subjects, graphical information might be purely illustrative, or supplement textual information; in others, however, it is difficult to present some types of information in any other way. This is particularly the case for STEM subjects, with their heavy reliance on representations such as charts and graphs.
One approach to graphical access for BVI users relies on touch, similarly to Braille. Tactile graphics can be created by using an embosser, which raises the elements of the graphic so that they can be perceived through touch, or swell paper, which uses specialised paper and heating machines to cause the printed parts of an image to swell, again becoming perceivable through touch. However, the output of such systems is static, meaning that a new graphic would need to be created if there were any changes or updates. Given the importance of dynamic graphics in education (e.g. understanding how changes to m affect the slope of a line in the equation y = mx + b), it is clear that these types of graphics represent only a partial solution.
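The slope example can be made concrete with a few lines of code: each new value of m produces a different set of points, i.e. a different “frame” of the graphic, and a static embossed sheet can show only one such frame. This is a minimal illustration of the arithmetic only.

```python
# Why static tactile graphics are only a partial solution for dynamic
# content: each change to m in y = m*x + b produces a different line,
# so an embossed sheet would need to be re-produced for every value a
# lesson explores, whereas a refreshable display could simply re-render.

def line_points(m, b, xs):
    """Sample the line y = m*x + b at the given x values."""
    return [(x, m * x + b) for x in xs]

for m in (0.5, 1.0, 2.0):  # three successive "frames" of a dynamic graphic
    print(f"m = {m}:", line_points(m, b=1.0, xs=range(4)))
```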
In order to create dynamic graphical displays, similar technologies to refreshable Braille displays have been developed using pin arrays which allow BVI users to explore the graphic through touch. However, these solutions rely on bespoke hardware and are extremely expensive. Even newly released technologies, e.g. Graphiti (https://www.orbitresearch.com/product/graphiti/), which uses an array of pins which can be set to varying heights to convey topographical information, still take the form of specialised hardware. Although the aim is to eventually be able to reduce the price of purchasing Graphiti to USD 5 000 as a result of bulk orders, this still represents a significant expense for a school, particularly for a piece of hardware that can only be used for one purpose.
In summary, providing access to dynamic graphics in a way which does not require specialised hardware is a difficult yet incredibly important challenge. Solving it could be transformative not just in the educational sector, but in almost every aspect of the daily life of a BVI individual.
Promising approaches
In trying to address the issue of graphical access in a way which benefits the greatest number of users, Gorlewicz et al. (2018[54]) make the case for using touchscreen-based smart devices such as phones or tablets as the hardware platform.
There are a number of advantages to such an approach. Firstly, the hardware platform is low in cost, readily available, and is already widely used by a significant proportion of the intended user group.
Furthermore, these devices already have the inbuilt capacity to provide information through multiple modalities, including visual, auditory and tactile modalities. In the case of BVI users, the fact that almost all touchscreen devices include a sound card and speaker, and text-to-speech capabilities, means that they can provide auditory access to information. In addition, the fact that many touchscreen displays also have vibratory capabilities gives them the capacity to provide vibrotactile feedback. Although this type of feedback is not typically used as a primary form of interaction with a device, there is no reason it could not be and, in the case of BVI users, it offers an additional modality through which to provide information.
As such, this approach offers advantages over current solutions such as tactile graphics displays, which only present information through a single modality: touch. And the fact that these features are already present in most touchscreen devices potentially eliminates the need to create expensive, specialised hardware which can only be used for a single purpose.
Gorlewicz et al. (2018[54]) envisage a display such as the one shown in Figure 6.5. In this case, textual information in the bar graph can be converted to auditory information, while the graphical, spatial elements of the graph (in this case, the bars) can be conveyed through vibratory feedback. This represents an obvious advantage over systems which can only convey both types of information inherent in a graphic through a single modality.
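A minimal sketch of the interaction logic this implies is shown below: touch coordinates are hit-tested against the graph’s elements, with spatial elements routed to vibration and textual elements to speech. The speak and vibrate functions are placeholders for platform text-to-speech and haptics APIs, and the layout and dispatch rules are assumptions introduced for illustration, not the design of the system in Figure 6.5.

```python
# Sketch of a multimodal touchscreen bar graph: touches landing on a bar
# produce vibrotactile feedback, touches above a bar speak its label and
# value. speak() and vibrate() are placeholders for platform APIs.

from dataclasses import dataclass

@dataclass
class Bar:
    x0: float
    x1: float
    height: float
    label: str

BARS = [Bar(0, 10, 40, "2019"), Bar(15, 25, 65, "2020"), Bar(30, 40, 55, "2021")]

def speak(text: str) -> None:
    print("SPEAK:", text)  # stand-in for a text-to-speech engine

def vibrate(intensity: float) -> None:
    print(f"VIBRATE: intensity {intensity:.2f}")  # stand-in for haptics

def on_touch(x: float, y: float) -> None:
    for bar in BARS:
        if bar.x0 <= x <= bar.x1:
            if y <= bar.height:
                vibrate(min(1.0, bar.height / 100))  # finger is on the bar
            else:
                speak(f"{bar.label}: value {bar.height}")  # above the bar
            return
    speak("blank area")

on_touch(5, 20)   # on the first bar -> vibration
on_touch(20, 80)  # above the second bar -> spoken label
```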
In this example, although the technology exists to provide a potential solution to a key issue, further research is needed before such a system can become a reality. In particular, Gorlewicz et al. (2018[54]) note the need for research into understanding how people encode, interpret and represent graphical information which is presented via non-visual channels. Although they note that previous research has considered these issues in relation to tangible (i.e. raised) graphics, the results may not be applicable as the touchscreen does not offer a similar tangible experience and is therefore likely to trigger different sensory receptors. At the same time, a recent study comparing the use of touchscreen-based graphics to embossed graphics showed that there was no significant difference in performance (Gorlewicz et al., 2020[56]), suggesting that this is a promising area to continue to explore.
Looking to the future
The research described in this chapter suggests three promising avenues for developing and deploying smart technologies with the greatest potential impact on the lives of learners with special educational needs. The first is a more holistic development of smart systems, considering the need, the user, and the context of use. The second is creating smart systems using technologies which have the highest chance of being available to the intended users (“smart systems for all”). Finally, systems which incorporate a blend of human and artificial intelligence have great promise, and it is important to consider how to achieve the optimum blend. I consider these in turn below.
Holistic smart systems
If we want to see a real impact of smart technologies to support children with special needs in the short to medium term, we should prioritise the development of “holistic smart systems” by which I mean technologies that 1) address a real need, and are designed with an understanding of both 2) the end users and 3) the context of use. Neglecting any of these aspects is likely to lead to technologies that are either not adopted in the first place, or are quickly abandoned.
Address a real need
As we have seen above, there is a pressing need to support learners with special needs. There are vast pockets of need, and some types of disabilities receive less research and development focus than others (e.g. dyscalculia). Although it is clear that technology plays, and will continue to play, an increasing role in providing support for learners with special needs, it is important to take the time to understand what is really needed, rather than rely on carers’ or vendors’ sense of what is needed. Equally, it is important to consider where the greatest potential impact lies: access to graphical content for BVI users is one such example, where providing a readily available and low-cost solution could have a substantial impact, both within and beyond the classroom.
Design for users
The World Health Organization (2011[10]) highlights the importance of ensuring that the voices of disabled children are heard, while acknowledging that, unfortunately, this is frequently not the case; the same holds true for the design of new technologies.
Involving children with disabilities in design activities leading to smart technologies supporting their education may present additional challenges (e.g. how to ensure that the needs and desires of non-verbal children can be expressed during the design process). However, the fact that their needs and desires are likely to be very different from those of the adults actually designing the technology makes it all the more important to do so. Fortunately, there is an extensive body of work on participatory design with children with various types of disabilities: Benton and Johnson (2015[57]) provide a good overview.
In addition to the voices of the children, it is important to involve teachers in the design of any educational technology. For a start, they have unique insights and expertise in supporting children with special educational needs, and will be able to provide their views on what is likely to work, and what is not (see, e.g., Alcorn et al. (2019[58])).
Design for context
In addition to designing for users, it is important to design with an understanding of the context of use. Any type of learning technology exists in a wider pedagogical ecosystem (typically comprising the classroom and the school), and introducing advanced technologies into such a setting requires an awareness of the constraints within which the technology must operate. These include a number of practical considerations such as school budgets and funding priorities, fit with the curriculum, the robustness of the technology, its cost, and potential integration with existing technologies. Ease of use (for both teachers and children) is also a major concern, as are issues of maintenance and support.
Ease of use is something that can be addressed through the design process (ideally, by keeping end users involved throughout). However, the initial choice of hardware is a decision that will have a critical influence on whether the technology will actually be used in the first place.
When considering the potential uptake of technology for learning, there are two extremes (and a continuum in between). At one extreme are technologies being developed which are unlikely to make it into the classroom, either because the technology is too expensive, or the hardware or software is too difficult for non-specialists to set up, use, and/or maintain (such as ECHOES). At the other are systems which use existing, low-cost technologies, and which are designed to be easily usable by non-specialists (such as the Dynamico app and, once they are developed, systems which make use of touchscreen displays for vibrotactile graphics).
Although not usable in classrooms, ECHOES provided us with a much deeper understanding of how to build environments with the potential to provide playful, motivating interactions. These insights were then taken forward in a system which aimed to be technologically simpler while also allowing non-specialists to author their own content (Porayska-Pomsta et al., 2013[59]). The ECHOES project also challenged accepted conceptions about the nature of autism, in particular the purported “need for sameness”, leading to further research and exploration which made an important theoretical contribution to the field (Alcorn, 2016[39]). We need to research and develop both types of system but, at the same time, researchers need to be clear about where on this continuum their systems are situated.
Smart systems for all
Part of the issue in designing for the context of use involves understanding issues around cost and availability. Many assistive technologies are prohibitively expensive for public schools and require specialist hardware that, in many cases, is limited to a single purpose. This means that many children are not able to access the technologies which could support their learning. If we really want to see the potential positive impact of smart technologies for special educational needs in the short term, then we need to think about how best to bring the latest advances in AI into schools in ways that are affordable and readily available.
One very promising way of doing so is likely to be through the use of touchscreen devices, i.e. smartphones and tablets, for two reasons. Firstly, these devices are commercially available, reasonably low cost, and are multi-functional. This gives them obvious advantages over specialised, bespoke hardware systems which are developed for a single use and are typically very expensive (such as many of the solutions for BVI students). Secondly, above and beyond their processing power, modern touchscreen devices incorporate numerous inbuilt sensors which offer multiple possibilities for multimodal input and output. This opens up opportunities for creating novel and innovative learning experiences for students with special needs, while still ensuring that they remain relatively low in cost and are widely available.
Two of the case studies presented in this report (for dysgraphia and access to graphical information respectively) offer excellent examples of how this can be achieved. Dynamico builds on robust scientific research and makes use of complex AI algorithms which can run on commercially available tablets. It uses the tablet’s inbuilt sensors (e.g. pressure) to analyse a child’s handwriting and detect dysgraphia. Although at a much earlier stage of development, the researchers investigating access to graphical information are taking much the same approach. They are currently carrying out foundational research in order to better understand how BVI students understand and use graphics presented on a multisensory tablet (Hahn, Mueller and Gorlewicz, 2019[60]), and stress that much more research will be needed before a commercial product can be envisaged. However, their ultimate aim is to use the existing capabilities of touchscreen devices in innovative ways in order to provide multimodal output to BVI students (in this case, a combination of auditory and vibrotactile feedback).
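To make the general approach concrete, the sketch below illustrates, in highly simplified form, the kind of pipeline such a system might use: kinematic and pressure features are extracted from the pen samples recorded by the tablet and passed to a standard classifier. This is a minimal sketch of the general technique, not Dynamico’s actual algorithm; the features, data and labels shown are entirely hypothetical.

```python
# Minimal, illustrative sketch of tablet-based handwriting analysis.
# NOT Dynamico's actual algorithm: features, data, and labels are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(samples: np.ndarray) -> np.ndarray:
    """samples: array of shape (n, 4) with columns [x, y, pressure, t].
    Returns a small feature vector summarising pen dynamics."""
    x, y, pressure, t = samples.T
    dt = np.diff(t)
    speed = np.hypot(np.diff(x), np.diff(y)) / dt    # pen speed between samples
    accel = np.diff(speed) / dt[1:]                  # change in speed
    return np.array([
        pressure.mean(), pressure.std(),             # pressure control
        speed.mean(), speed.std(),                   # fluency
        np.abs(accel).mean(),                        # jerkiness
    ])

# Hypothetical training set: one feature vector per child, with a label
# (1 = dysgraphia, 0 = typical) that would come from a standard clinical test.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = rng.integers(0, 2, size=40)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# At assessment time, a new child's pen samples are summarised the same way.
new_samples = np.cumsum(rng.normal(size=(200, 4)), axis=0)
new_samples[:, 3] = np.linspace(0.0, 2.0, 200)       # make time monotonic
print(clf.predict_proba([extract_features(new_samples)]))
```

In a real system, the classifier would of course be trained on clinically labelled handwriting data rather than the random placeholders used here, and validated against standard diagnostic tests.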
Investigating how the sophisticated capabilities of modern touchscreen devices could be leveraged to support students with a wider range of special needs, and via new and innovative forms of input and output, seems like a very promising way forward.
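As an indication of how such multimodal output might be produced, the sketch below maps a finger position on a touchscreen to a vibration intensity, so that the device vibrates when the finger is on or near a rendered line of a graphic. The rendering scheme and the (stubbed-out) haptics call are assumptions for illustration, not the researchers’ implementation.

```python
# Schematic sketch of vibrotactile rendering of a line graph on a touchscreen.
# The actual haptics call is platform-specific and is stubbed out below.
import numpy as np

WIDTH, HEIGHT = 800, 600

def rasterise_polyline(xs, ys) -> np.ndarray:
    """Render the data points as a one-pixel polyline in a boolean raster."""
    raster = np.zeros((HEIGHT, WIDTH), dtype=bool)
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        n = max(abs(x1 - x0), abs(y1 - y0)) + 1
        for t in np.linspace(0.0, 1.0, n):
            row = int(round(y0 + t * (y1 - y0)))
            col = int(round(x0 + t * (x1 - x0)))
            raster[row, col] = True
    return raster

def vibration_amplitude(raster: np.ndarray, x: int, y: int, radius: int = 8) -> float:
    """Full vibration on the line, fading to zero within `radius` pixels of it."""
    line_ys, line_xs = np.nonzero(raster)
    if len(line_xs) == 0:
        return 0.0
    d = np.min(np.hypot(line_xs - x, line_ys - y))   # distance to nearest line pixel
    return float(max(0.0, 1.0 - d / radius))

graph = rasterise_polyline([50, 300, 700], [500, 120, 300])
for touch in [(300, 120), (305, 130), (400, 50)]:
    amp = vibration_amplitude(graph, *touch)
    # device.vibrate(intensity=amp)   # hypothetical platform haptics call
    print(touch, round(amp, 2))
```

A real implementation would combine this vibrotactile channel with auditory feedback (e.g. speaking axis values at the finger position), as the researchers describe.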
Blending human and artificial intelligence
Baker (2016[61]) points out that, initially, the grand vision of intelligent tutoring systems was to develop intelligent tutors that were as skilled as human tutors. They would be able to use the same strategies as expert human tutors, incorporating knowledge about the domain and how to teach it. And while there are now a number of intelligent tutoring systems being used at scale, with hundreds of thousands of students benefiting from them (for example, Cognitive Tutor, ALEKS, Mindspark, and Alef), these systems are considerably simplified versions of that initial vision.
As Baker notes, one issue with automated interventions is that they are brittle, in the sense that if an intervention is not working, it is difficult for the system itself to recognise this and react accordingly. And these breakdowns in interaction do not go unnoticed by learners. In a study of children’s perceptions of a humanoid empathic robot tutor, Serholt (2019[62]) found that children who had interacted with the robot previously were more critical of the concept of emotion recognition in robots than those who had not. One child graciously allowed that perhaps social robots “may become more useful in the future, when they [researchers/developers] have had a few more years to develop them and make them more human” (Serholt, 2019, p. 95[62]). This was similar to the ECHOES experience: although the children were not able to verbalise this, the fact that they responded to Andy less and less over time suggests that they were aware of the limits of his interaction skills. Similarly, a number of their initiations to Andy were concerned with offering him help when, due to planner issues, he did not behave as expected (for example, making incorrect moves in the sorting task or walking off screen).
On a related note, some aspects of teaching are simply easier for humans to perform, at least currently. This point is particularly relevant to students with special needs. Supporting their learning is a subtle and individual art, where skilled teachers and teaching assistants will have spent many hours and considerable effort becoming attuned to each child’s particular skills, needs, and ways of communicating.
This point can be illustrated using the ECHOES environment. The original conception of ECHOES was much more complex, aiming to recognise the children’s emotional state using facial emotion recognition, and to track their gaze and respond appropriately. In reality, there were problems with both. Children moved around, quite energetically in some cases, meaning that the tracking system could not function correctly. However, it seemed counterintuitive, in a system designed to encourage playful exploration, to require the child to maintain a fixed position. Furthermore, research suggests that in the same way that autistic individuals can struggle to understand the facial expressions of others, neurotypical individuals often struggle to understand autistic facial expressions (Brewer et al., 2016[63]). Therefore, it seemed unwise to rely on an automated system, which would necessarily be built from a neurotypical perspective, to detect the child’s emotional state. This was also important more broadly: if the child became particularly distressed, they would be likely to move away from the screen, where they could no longer be detected, even though appropriate intervention would be required immediately. Therefore, it was important to ensure that nothing included in the environment had the potential to cause distress as a result of a breakdown in the interaction.
The research team therefore decided to use the intelligence of the ECHOES system to provide the playful, motivating interaction sequences with an intelligent agent, which children seemed to appreciate, while the system relied on human support to interpret the meaning of children’s behaviours and ensure their overall well-being during the sessions.
This situation corresponds to a new vision of intelligent educational systems, where the best of human and artificial intelligence are combined in a way that most effectively supports learners.
Artificial intelligence, in its current form, excels at finding patterns in data, something humans are rather less good at, and access to these patterns has the potential to improve teaching and learning. This is the vision which Baker (2016[61]) offers, where intelligent educational systems are used to provide data to humans, who can use this data to guide their pedagogical decisions. The use of AI in this way was evidenced in Dynamico, which provides practitioners with access to previously unavailable data, thus leading, in this case, to improvements in the way that dysgraphia can be diagnosed, and to the ways in which learners with dysgraphia can best be supported.
On the other hand, human intelligence is better suited to situations where the data might not be so clear-cut. For example, teachers working with autistic children have very specialised knowledge of autism, and will have devoted many hours to developing a nuanced and in-depth understanding of each individual child over time. They are able to interpret cues from the child’s behaviours in ways which are likely to be impossible for people who do not know the child. This might include understanding particular triggers that may lead to a child becoming distressed. Teachers will probably also be sensitive to the specific gestures, behaviours or utterances that indicate that the child is becoming distressed. And they are also likely to know how to intervene in a way which de-escalates the situation, and provides support and reassurance to the child. However, although teachers might be best placed to provide this type of support, that is not to say that artificial intelligence could not provide insight into the child’s unique way of interacting, thus allowing the human practitioner to intervene in the most effective way.
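One way of operationalising this division of labour, sketched below with entirely hypothetical data and thresholds, is for the system to flag statistically unusual patterns in a child’s interaction log and leave their interpretation, and any intervention, to the practitioner who knows the child.

```python
# Hypothetical sketch: surface unusual patterns in a child's interaction log
# for the practitioner to interpret; the system itself takes no action.
import numpy as np

def flag_unusual_sessions(metric_history, z_threshold=2.0):
    """Return indices of sessions that deviate markedly from the child's own
    baseline (a simple z-score test; the threshold is illustrative)."""
    values = np.asarray(metric_history, dtype=float)
    spread = values.std()
    if spread == 0:
        return []
    z = (values - values.mean()) / spread
    return [i for i, score in enumerate(z) if abs(score) > z_threshold]

# e.g. the number of initiations the child made towards the agent per session
initiations = [6, 7, 5, 6, 8, 1, 7]   # session 5 looks atypical for this child
for i in flag_unusual_sessions(initiations):
    print(f"Session {i}: atypical for this child - flag for the practitioner")
```

The key design choice is that the system reports a deviation from the child’s own baseline rather than attempting to interpret it; deciding whether the drop reflects distress, fatigue or simply boredom remains a human judgement.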
In line with this, a positive vision of smart technologies for learning could be one in which they are used to enhance the complex orchestration involved in working with a child with special educational needs. Although teachers would remain responsible for this orchestration, smart technologies could support them in at least three ways:
1. by offering support for recognising the needs in the first place (as in the dysgraphia example discussed earlier);
2. by providing teachers and teaching assistants with additional knowledge and insight about the child, which may help them support the child in a more appropriate way (as discussed above);
3. by providing adaptable support at multiple levels (described in more detail below).
In-depth adaptivity and personalisation
One of the promises of the use of artificial intelligence in the field of education is adaptivity: being able to adapt not only to the student’s current level of knowledge and/or skill but also, in some cases, to their current level of motivation and/or affective disposition. In most cases, this adaptivity works at the level of the individual learner.
In addition to adaptivity at the individual level, supporting learners with special needs offers a unique space in which to consider additional types of customisation and personalisation, which could be achieved through a combination of human and artificial intelligence, as explained below.
When describing ECHOES earlier in the report, it was explained that the system planner operated at the level of the individual learning activities, while the practitioner worked at a higher level to structure the overall session, determining the choice of particular activities, their duration, and their sequencing. Although this worked well, it was sometimes necessary to customise the environment before it could be used in a given school. Andy, the AI agent, used speech and gesture to communicate with children. However, different schools used different key phrases and gestures, for example, to signal to children the end of an activity and prepare them, emotionally and cognitively, to move on to the next. Therefore, the agent’s speech and gestures needed to be changed prior to use in each school. The research team, as system developers, was able to do this; however, it would be better if schools had these options themselves.
In thinking about personalisation and adaptation in the context of support for disabled learners, it is likely that it could happen not only at the individual level, which is typically the case for intelligent systems, but also at the level of the disability and of the particular school context. I describe these three levels below, using examples from ECHOES to illustrate; a schematic sketch of how the levels might be combined follows the list:
1. Disability level customisation: this level involves customising and adapting the interaction based on what we know about working with particular types of disabilities. In the case of ECHOES, designed for autistic children, this meant thinking about the pacing of the interaction (slowing it down to give children time to reflect), the language used for instructions (i.e. using direct, simple language, not rephrasing instructions), and deciding how much time to allow for a child to process an instruction before re-issuing it. However, these decisions resulted in fixed parameters, built into the environment, and it would be better if these could be customised by the practitioner.
2. School level customisation: as explained above, customisation to the school’s particular use of language, symbols and signs.
3. Child level customisation: for example, do not use particular sounds, turn off agent facial expressions, remove certain keywords or phrases that may be triggering.
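One schematic way of structuring these three levels, assuming entirely hypothetical parameter names and values, is as layered settings that are merged in order, with each more specific level overriding the last:

```python
# Schematic sketch of three-level customisation as layered settings.
# All parameter names and values are hypothetical illustrations.
disability_level = {                    # defaults for working with autistic children
    "pacing_seconds": 8,                # slow the pacing to give time to reflect
    "rephrase_instructions": False,     # direct, simple language; no rephrasing
    "reissue_after_seconds": 15,        # wait before re-issuing an instruction
}

school_level = {                        # this school's key phrases and signs
    "end_of_activity_phrase": "Finished! Time to choose.",
    "transition_gesture": "hands_together",
}

child_level = {                         # adjustments for one particular child
    "agent_facial_expressions": False,  # turn off the agent's facial expressions
    "blocked_sounds": ["bell"],         # sounds known to cause this child distress
    "pacing_seconds": 12,               # this child needs even more time
}

# More specific levels override more general ones when a session is configured.
session_settings = {**disability_level, **school_level, **child_level}
print(session_settings["pacing_seconds"])   # 12 - the child level wins
```

Crucially, each layer would need to be editable by practitioners themselves, rather than only by system developers, for the approach to work in practice.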
Providing this finer-grained customisation and adaptation, activated through a combination of human and artificial intelligence and encapsulating extant knowledge of a particular disability, the broader school context and the individual child, increases the chances that the environment will be truly useful across a range of situations.
To conclude, smart technologies hold real promise for providing targeted and more sophisticated support to learners with special needs. By embedding the latest advances in artificial intelligence, as well as the most up-to-date understanding of special needs and disability, in readily available, low-cost technologies, there is a real opportunity to make a difference to learners across the globe.
Acknowledgements
The ECHOES system, presented here, results from a large, inter-institutional and interdisciplinary project. In addition to the author, the team included Kaska Porayska-Pomsta (project leader), Alyssa Alcorn, Katerina Avramides, Sandra Beale, Sara Bernardini, Mary Ellen Foster, Christopher Frauenberger, Karen Guldberg, Wendy Keay-Bright, Lila Kossyvaki, Oliver Lemon, Marilena Mademtzi, Rachel Menzies, Helen Pain, Gnanathusharan Rajendran, Tim Smith, and Annalu Waller. The ECHOES project was funded by the ESRC/EPSRC, TRLP TEL programme grant number: RES-139-25-0395.
References
[39] Alcorn, A. (2016), Embedding novel and surprising elements in touch-screen games for children with autism: creating experiences “worth communicating about”, PhD thesis, The University of Edinburgh.
[58] Alcorn, A. et al. (2019), “Educators’ Views on Using Humanoid Robots With Autistic Learners in Special Education Settings in England”, Frontiers in Robotics and AI, Vol. 6, https://doi.org/10.3389/frobt.2019.00107.
[40] Alcorn, A., H. Pain and J. Good (2014), “Motivating children’s initiations with novelty and surprise”, Proceedings of the 2014 conference on Interaction design and children, https://doi.org/10.1145/2593968.2610458.
[6] Alkhatlan, A. and J. Kalita (2018), Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments, arXiv preprint arXiv:1812.09628, https://arxiv.org/abs/1812.09628 (accessed on 26 February 2021).
[44] Asselborn, T., M. Chapatte and P. Dillenbourg (2020), “Extending the Spectrum of Dysgraphia: A Data Driven Strategy to Estimate Handwriting Quality”, Scientific Reports, Vol. 10/1, https://doi.org/10.1038/s41598-020-60011-8.
[43] Asselborn, T. et al. (2018), “Automated human-level diagnosis of dysgraphia using a consumer tablet”, npj Digital Medicine, Vol. 1/1, https://doi.org/10.1038/s41746-018-0049-x.
[61] Baker, R. (2016), “Stupid Tutoring Systems, Intelligent Humans”, International Journal of Artificial Intelligence in Education, Vol. 26/2, pp. 600-614, https://doi.org/10.1007/s40593-016-0105-0.
[34] Bauminger, N., C. Shulman and G. Agam (2004), “The Link Between Perceptions of Self and of Social Relationships in High-Functioning Children with Autism”, Journal of Developmental and Physical Disabilities, Vol. 16/2, pp. 193-214, https://doi.org/10.1023/b:jodd.0000026616.24896.c8.
[57] Benton, L. and H. Johnson (2015), “Widening participation in technology design: A review of the involvement of children with special educational needs and disabilities”, International Journal of Child-Computer Interaction, Vol. 3-4, pp. 23-40, https://doi.org/10.1016/j.ijcci.2015.07.001.
[47] Berton, R. et al. (2020), “A Chrome extension to help people with dyslexia”, Proceedings of the International Conference on Advanced Visual Interfaces, https://doi.org/10.1145/3399715.3399843.
[46] Biotteau, M. et al. (2019), “Developmental coordination disorder and dysgraphia: signs and symptoms, diagnosis, and rehabilitation”, Neuropsychiatric Disease and Treatment, Vol. 15, pp. 1873-1885, https://doi.org/10.2147/ndt.s120514.
[63] Brewer, R. et al. (2016), “Can Neurotypical Individuals Read Autistic Facial Expressions? Atypical Production of Emotional Facial Expressions in Autism Spectrum Disorders”, Autism Research, Vol. 9/2, pp. 262-271, https://doi.org/10.1002/aur.1508.
[32] Cappadocia, M., J. Weiss and D. Pepler (2012), “Bullying Experiences Among Children and Youth with Autism Spectrum Disorders”, Journal of Autism and Developmental Disorders, Vol. 42/2, pp. 266-277, https://doi.org/10.1007/s10803-011-1241-x.
[31] Chamberlain, B., C. Kasari and E. Rotheram-Fuller (2007), “Involvement or Isolation? The Social Networks of Children with Autism in Regular Classrooms”, Journal of Autism and Developmental Disorders, Vol. 37/2, pp. 230-242, https://doi.org/10.1007/s10803-006-0164-4.
[1] Chelkowski, L., Z. Yan and K. Asaro-Saddler (2019), “The use of mobile devices with students with disabilities: a literature review”, Preventing School Failure: Alternative Education for Children and Youth, Vol. 63/3, pp. 277-295, https://doi.org/10.1080/1045988x.2019.1591336.
[4] Cheng, S. and C. Lai (2020), “Facilitating learning for students with special needs: a review of technology-supported special education studies”, Journal of Computers in Education, Vol. 7/2, pp. 131-153, https://doi.org/10.1007/s40692-019-00150-8.
[7] Chen, L., P. Chen and Z. Lin (2020), “Artificial Intelligence in Education: A Review”, IEEE Access, Vol. 8, pp. 75264-75278, https://doi.org/10.1109/access.2020.2988510.
[26] Cibrian, F. et al. (2020), “Supporting Self-Regulation of Children with ADHD Using Wearables”, Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, https://doi.org/10.1145/3313831.3376837.
[25] Doan, M. et al. (2020), “CoolCraig: A Smart Watch/Phone Application Supporting Co-Regulation of Children with ADHD”, Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, https://doi.org/10.1145/3334480.3382991.
[5] Erdem, R. (2017), “Students with special educational needs and assistive technologies: A literature review.”, Turkish Online Journal of Educational Technology-TOJET, Vol. 16/1, pp. 128–146.
[50] Erfurt, G. et al. (2019), “Hands-On Math”, Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, https://doi.org/10.1145/3290607.3313012.
[49] Fan, M. et al. (2017), “Why Tangibility Matters”, Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, https://doi.org/10.1145/3025453.3026048.
[38] Frauenberger, C. et al. (2013), “Conversing through and about technologies: Design critique as an opportunity to engage children with autism and broaden research(er) perspectives”, International Journal of Child-Computer Interaction, Vol. 1/2, pp. 38-49, https://doi.org/10.1016/j.ijcci.2013.02.001.
[37] Frauenberger, C., J. Good and W. Keay-Bright (2011), “Designing technology for children with special needs: bridging perspectives through participatory design”, CoDesign, Vol. 7/1, pp. 1-28, https://doi.org/10.1080/15710882.2011.587013.
[42] Gelsomini, M. et al. (2019), “Magika, a Multisensory Environment for Play, Education and Inclusion”, Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, https://doi.org/10.1145/3290607.3312753.
[54] Gorlewicz, J. et al. (2018), “The Graphical Access Challenge for People with Visual Impairments: Positions and Pathways Forward”, in Interactive Multimedia - Multimedia Production and Digital Storytelling, IntechOpen, https://doi.org/10.5772/intechopen.82289.
[56] Gorlewicz, J. et al. (2020), “Design Guidelines and Recommendations for Multimodal, Touchscreen-based Graphics”, ACM Transactions on Accessible Computing, Vol. 13/3, pp. 1-30, https://doi.org/10.1145/3403933.
[60] Hahn, M., C. Mueller and J. Gorlewicz (2019), “The Comprehension of STEM Graphics via a Multisensory Tablet Electronic Device by Students with Visual Impairments”, Journal of Visual Impairment & Blindness, Vol. 113/5, pp. 404-418, https://doi.org/10.1177/0145482x19876463.
[53] Holloway, L. et al. (2020), “Non-visual access to graphical information on COVID-19”, The 22nd International ACM SIGACCESS Conference on Computers and Accessibility, https://doi.org/10.1145/3373625.3418015.
[12] Houtrow, A. et al. (2014), “Changing Trends of Childhood Disability, 2001-2011”, PEDIATRICS, Vol. 134/3, pp. 530-538, https://doi.org/10.1542/peds.2014-0594.
[9] Kazimzade, G., Y. Patzer and N. Pinkwart (2019), “Artificial Intelligence in Education Meets Inclusive Educational Technology—The Technical State-of-the-Art and Possible Directions”, in Artificial Intelligence and Inclusive Education, Perspectives on Rethinking and Reforming Education, Springer Singapore, Singapore, https://doi.org/10.1007/978-981-13-8161-4_4.
[8] Kulik, J. and J. Fletcher (2016), “Effectiveness of Intelligent Tutoring Systems”, Review of Educational Research, Vol. 86/1, pp. 42-78, https://doi.org/10.3102/0034654315581420.
[29] Kuo, M. et al. (2013), “Friendship characteristics and activity patterns of adolescents with an autism spectrum disorder.”, Autism, Vol. 17/4, pp. 481–500, https://doi.org/10.1177/1362361311416380.
[30] Locke, J. et al. (2010), “Loneliness, friendship quality and the social networks of adolescents with high-functioning autism in an inclusive school setting”, Journal of Research in Special Educational Needs, Vol. 10/2, pp. 74-81, https://doi.org/10.1111/j.1471-3802.2010.01148.x.
[18] Marks, D. (1997), “Models of disability”, Disability and Rehabilitation, Vol. 19/3, pp. 85-91, https://doi.org/10.3109/09638289709166831.
[3] McLeskey, J. et al. (2017), High-leverage practices in special education., Arlington, VA: Council for Exceptional Children & CEEDAR Center.
[51] Metatla, O. (2017), “Uncovering Challenges and Opportunities of Including Children with Visual Impairments in Mainstream Schools”, https://doi.org/10.14236/ewic/hci2017.102.
[55] Metatla, O. et al. (2018), “Inclusive Education Technologies”, Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, https://doi.org/10.1145/3170427.3170633.
[41] Mora-Guiard, J. et al. (2016), “Lands of Fog”, Proceedings of the The 15th International Conference on Interaction Design and Children, https://doi.org/10.1145/2930674.2930695.
[35] Müller, E., A. Schuler and G. Yates (2008), “Social challenges and supports from the perspective of individuals with Asperger syndrome and other autism spectrum disabilities”, Autism, Vol. 12/2, pp. 173-190, https://doi.org/10.1177/1362361307086664.
[17] National Autistic Society (2016), The autism employment gap: Too Much Information in the workplace, https://www.basw.co.uk/resources/autism-employment-gap-too-much-information-workplace.
[28] National Autistic Society (2016), What is autism?, https://www.autism.org.uk/advice-and-guidance/what-is-autism (accessed on 29 July 2020).
[11] OECD (2000), Inclusive Education at Work: Students with Disabilities in Mainstream Schools, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264180383-en.
[2] Ok, M. and W. Kim (2017), “Use of iPads and iPods for Academic Performance and Engagement of PreK–12 Students with Disabilities: A Research Synthesis”, Exceptionality, Vol. 25/1, pp. 54-75, https://doi.org/10.1080/09362835.2016.1196446.
[27] Porayska-Pomsta, K. et al. (2018), “Blending Human and Artificial Intelligence to Support Autistic Children’s Social Communication Skills”, ACM Transactions on Computer-Human Interaction, Vol. 25/6, pp. 1-35, https://doi.org/10.1145/3271484.
[59] Porayska-Pomsta, K. et al. (2013), “Building an Intelligent, Authorable Serious Game for Autistic Children and Their Carers”, in Lecture Notes in Computer Science, Advances in Computer Entertainment, Springer International Publishing, Cham, https://doi.org/10.1007/978-3-319-03161-3_34.
[36] Prizant, B. et al. (2006), The SCERTS model: A comprehensive educational approach for children with autism spectrum disorders, Vol. 1, Paul H. Brookes Publishing.
[48] Rello, L. et al. (2016), “Dytective: Diagnosing Risk of Dyslexia with a Game”, Proceedings of the 10th EAI International Conference on Pervasive Computing Technologies for Healthcare, https://doi.org/10.4108/eai.16-5-2016.2263338.
[22] Rumney, H. and K. MacMahon (2017), “Do social skills interventions positively influence mood in children and young people with autism? A systematic review”, Mental Health & Prevention, Vol. 5, pp. 12-20, https://doi.org/10.1016/j.mhp.2016.12.001.
[19] Sayce, L. (1998), “Stigma, discrimination and social exclusion: What’s in a word?”, Journal of Mental Health, Vol. 7/4, pp. 331-343, https://doi.org/10.1080/09638239817932.
[62] Serholt, S. (2019), “Interactions with an Empathic Robot Tutor in Education: Students’ Perceptions Three Years Later”, in Artificial Intelligence and Inclusive Education, Perspectives on Rethinking and Reforming Education, Springer Singapore, Singapore, https://doi.org/10.1007/978-981-13-8161-4_5.
[52] Shilkrot, R. et al. (2015), “FingerReader”, Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, https://doi.org/10.1145/2702123.2702421.
[20] Singer, J. (1999), “Why can’t you be normal for once in your life? From a problem with no name to the emergence of a new category of difference.”, Disability Discourse, pp. 59-70.
[23] Sonne, T. and M. Jensen (2016), “ChillFish: A Respiration Game for Children with ADHD”, Proceedings of the TEI ’16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction, https://doi.org/10.1145/2839462.2839480.
[24] Sonne, T. and M. Jensen (2016), “Evaluating the ChillFish Biofeedback Game with Children with ADHD”, Proceedings of the The 15th International Conference on Interaction Design and Children, https://doi.org/10.1145/2930674.2935981.
[21] Spiel, K. et al. (2019), “Agency of Autistic Children in Technology Research—A Critical Literature Review”, ACM Transactions on Computer-Human Interaction, Vol. 26/6, pp. 1-40, https://doi.org/10.1145/3344919.
[16] UNESCO Institute for Statistics (2018), Education and disability: Analysis of data from 49 countries. Information Paper No. 49., http://uis.unesco.org/sites/default/files/documents/ip49-education-disability-2018-en.pdf.
[15] Volkmar, F. and J. McPartland (2014), “From Kanner to DSM-5: Autism as an Evolving Diagnostic Concept”, Annual Review of Clinical Psychology, Vol. 10/1, pp. 193-212, https://doi.org/10.1146/annurev-clinpsy-032813-153710.
[33] Whitehouse, A. et al. (2009), “Friendship, loneliness and depression in adolescents with Asperger’s Syndrome”, Journal of Adolescence, Vol. 32/2, pp. 309-322, https://doi.org/10.1016/j.adolescence.2008.03.004.
[10] World Health Organization (2011), World report on disability 2011, https://www.who.int/disabilities/world_report/2011/report.pdf.
[14] Zablotsky, B. and L. Black (2020), “Prevalence of children aged 3–17 years with developmental disabilities, by urbanicity: United States, 2015–2018”, National Health Statistics Reports, No. 139, pp. 1-7, https://pubmed.ncbi.nlm.nih.gov/32510313/.
[13] Zablotsky, B. et al. (2019), “Prevalence and Trends of Developmental Disabilities among Children in the United States: 2009–2017”, Pediatrics, Vol. 144/4, p. e20190811, https://doi.org/10.1542/peds.2019-0811.
[45] Zolna, K. et al. (2019), “The dynamics of handwriting improves the automated diagnosis of dysgraphia”, arXiv preprint arXiv:1906.07576, https://arxiv.org/abs/1906.07576.