Sign up for SmartBrief on EdTech to get more news and best practices on technology use in schools.
Insights is a SmartBrief Education Originals column that features perspectives from noted experts and leaders in education on hot-button issues affecting schools and districts. All contributors are selected by the SmartBrief Education editorial team.
During this past year, school districts depended on technology more than ever to support learning and family engagement. Educational technology and tools have the power to support high-quality teaching by boosting efficiency, engaging students, and supplying data to inform instruction. Proven, mastery-based programs can support teachers as they provide one-to-one or small-group targeted lessons. For organizations and schools dedicated to continuous improvement, technology provides opportunities to refine content, increase efficiency, and test outcomes.
Part of this continuous learning and refinement must include assessments of bias, both in the content and in the algorithms that determine success. Have you ever read a paragraph in which you understood most of the words but, without experience or context to draw upon, could not fully grasp the author’s message? This happens often between generations using social media apps or terminology associated with younger users. It’s certainly not that older folks are less intelligent; rather, they are missing the context younger folks have from the latest TikTok video or Instagram story. Even more unsettling, we are all likely to have a news feed tailored to our own familiar context, which limits our chances of encountering other ideas and perspectives.
How edtech can be biased
For some students, assessment content has a similar effect. They have the skills, such as decoding, but they are decoding words about topics with zero connection to their lived or learned experiences. For example, if a passage is about a tropical island but students have never lived outside an urban setting, or vice versa, it may contain vocabulary the students do not fully understand. Small nuances can entirely change the meaning of a paragraph, and suddenly children are labeled as low readers when, in reality, they simply do not have a broad understanding of geography. Since assessment algorithms are built on patterns in data, if the data are inaccurate or biased, then the resulting AI algorithms are not objective either.
Educational technology providers of both instructional and assessment solutions must spend time learning from students so they can build in the contextual details, or add content, that gives all learners access to the ideas they need to fully comprehend material and accurately demonstrate their abilities. Data from these programs can directly affect a child’s placement and intervention options, so it is imperative that providers audit their content and continue to refine assessment experiences.
Not only can content be biased, but the algorithms themselves can be built on incomplete data sets and shaped by developer bias as well. Take, for example, the disparity between the images returned for “three white teenagers” and “three Black teenagers” in a Google Images search a few years ago. The former returned images of happy young people engaged in a variety of activities, while the latter returned predominantly mugshots. Google said the difference was the result of disparities in online content, and it has taken steps to account for some of them.
Ways to avoid edtech bias
One way to minimize bias is to commit to building from data sets that sample diverse groups representing the population of students whom providers intend to serve. Another is to question outcomes and perform usability tests with students from a wide variety of backgrounds—with an intentional focus on Black students, who have traditionally been marginalized by the educational system. It is often said that one does not know what one does not know; therefore, providers must seek out advice, input, and frank feedback from others with perspectives different from their own, as Montgomery County Public Schools is doing through its antiracist audit. This could mean running focus groups, diversifying development teams, or hiring consultants for deeper audits and support.
Technology providers, educators, and families must see themselves as partners in a journey to continuously improve learning for children. Providers must be willing to accept feedback and make changes based on input from families and classroom experts. Educators, in turn, must demonstrate an ongoing commitment to analyzing program efficacy and to interpreting and acting on the data a program provides. Most importantly, all stakeholders should make the effort to listen to the students using the solution and to value their ideas and suggestions.
Building effective educational technology is going to take an ongoing commitment to minimizing bias, to discovering missing perspectives, and to engaging in continuous learning. We don’t yet know what we don’t know, but we can agree to be on the lookout together.
Jenni Torres is Waterford.org’s senior vice president of curriculum and instruction. During 15 years in the classroom, Jenni was selected by the U.S. State Department as a Fulbright Exchange Teacher for Uruguay and was awarded Teacher of the Year honors at the school, county and district levels. Jenni earned her master’s degree from the Harvard Graduate School of Education and her bachelor’s degree from the University of Maryland, College Park. You can connect with her on LinkedIn or follow her on Twitter.