Most of America’s public schools have closed during the global coronavirus pandemic and U.S. districts are engaged in an unprecedented shift to online education — at least until the crisis is over.
Along with obvious concerns about this vast, rapid shift to digital education — such as whether students have technology and Internet access, and what materials can quickly be put online — there is another that gets less attention. It’s student data privacy.
In 2018, the FBI issued a warning to the public about cyberthreat concerns related to K-12 students. It said that the growth of education technology, along with the widespread collection of student data, could have serious privacy and safety implications if that data were compromised or exploited by criminals.
The FBI said the types of data that can be collected on students include personally identifiable information; biometric data; academic progress; behavioral, disciplinary and medical information; Web browsing history; students’ geolocation; IP addresses used by students; and classroom activities.
This post looks at some of the current issues around student data privacy as millions of students are now trying to learn online at home. This was written by Roxana Marachi, an associate professor of education at San Jose State University, and Lawrence Quill, a professor of political science at San Jose State University.
By Roxana Marachi and Lawrence Quill
Hundreds of universities and colleges are transitioning from face-to-face courses to “online” and “distributed” modes of instruction as a result of the coronavirus pandemic, which has shut down American schools coast to coast.
The architecture is already in place to respond to the new learning environment created by the pandemic. Platforms such as the Canvas Learning Management System (LMS), which operates across the country in thousands of colleges, universities and K-12 schools, already have access to data from millions of faculty and students and will gain access to many more.
Zoom Video Communications, a California-based remote conferencing-services company, is also likely to see experienced users turning to its videoconferencing system, along with new users employing the software for the first time.
Adoption of these technologies comes at a pivotal moment for educational institutions. As schools move to these online spaces, it’s important that we not lose sight of ongoing controversies associated with these platforms.
Instructional technologists and educators recently penned a letter of protest to leadership at Instructure — a Utah-based educational-technology company that is Canvas’s parent company — over concerns that the pending sale of the company for an estimated $2 billion to Thoma Bravo (a private equity firm) would compromise student data.
The letter requests legally binding statements from the company specifying what would be done with the data, what protections would be enacted, who would have access under new ownership, and how students would be able to opt out of data collection and retention.
The letter also cites a speech by Dan Goldsmith, then Instructure’s chief executive, at a March 2019 investor conference, in which he noted that “given that information that we have [on student behaviors], no one else has those data assets at their fingertips to be able to develop those algorithms and predictive models.”
The algorithms and predictive models Goldsmith was referring to are part of a broader plan for Instructure and other educational-technology products and services. Funding streams for many such companies rely on the ability to monetize student data, and to extend the surveillance of students far beyond college to employee placements and corporate training programs.
At some point in the not too distant future, long-term profiles of students from pre-K and grade school through college — including grade point averages, aptitude assessments, and behavioral data from interactions with online platforms — will be available for college admission committees and employers to scrutinize.
Many recent educational initiatives — including piecemeal digital badges, “interoperable learning records” and skills registries — also involve attempts to create digital trails that will follow a student over their entire educational path, ostensibly making it easier for employers to find hires with the specific skills they need.
These programs may sound student-centric, yet they are not designed with students in mind. Nor do they account for issues of access, opportunity, or equity. There are structural, hidden costs and stark inequities baked into the outcomes that would result from the long-term datafication of students’ lives.
Records of children’s early challenges stored in such digital trails may never be deleted, or may be unfairly and inappropriately fed into flawed algorithmic predictions that limit opportunities later in life. Privacy International has described the discriminatory consequences of this kind of data exploitation, in which data collected about an individual at one point in time can foreclose opportunities later on.
Researchers from the Data Justice Lab at Cardiff University have documented a host of data harms resulting from big data analytics that include, among others, targeting based on vulnerability, misuse of personal information, discrimination, data breaches, social harm and political manipulation.
As for Zoom, the videoconferencing application has quickly established itself as the education solution for synchronous meetings, and in March it was the most-downloaded app in Apple’s App Store.
It has achieved this reach in part through subsidization, offering its service free to gain market share. Not surprisingly, such free access has also given the company a vast capacity for data extraction.
In 2019, the Electronic Privacy Information Center, a Washington D.C.-based independent nonprofit research center, filed a complaint with the Federal Trade Commission concerning security problems with Zoom’s video conferencing service. The complaint said Zoom had engaged in “unfair and deceptive business practices” in the design of the platform that permitted Zoom to “bypass browser security settings and remotely enable a user’s web camera without the knowledge or consent of the user.” The complaint also indicated that when Zoom was informed of the vulnerabilities, it did not act until the risks were made public, several months after the matter was brought to the company’s attention.
As Shoshana Zuboff notes in her work on surveillance capitalism, the suppression of privacy is at the heart of this business model with a built-in tendency to test the limits of what is socially and legally acceptable in terms of data collection.
E-learning platforms pursue an imperative to collect more and more data. Every action a user performs may be recorded and scanned for information, which can then be used to reconfigure algorithms and optimize processes.
It is precisely this model that is being used by Facebook’s Oculus VR system, which is also making inroads into education. The VR software is able to collect a wide range of data on its users’ emotional and physiological experiences within virtual spaces — data that can then be sold to advertisers or in human capital performance markets.
What we’re seeing can be described as the “apology model of data extraction.” First, companies collect as much data as possible. Then, if there is an outcry, they respond with an apology, offer to consult with users, and walk back the most egregious policies.
Online technologies undoubtedly have the capacity to perform useful services. But an easy-to-use interface shouldn’t give companies free rein to take as much data as they wish, especially when users are not allowed options to opt out. Within education, we need stronger systemic guardrails in place to protect against exploitative practices of tech companies vying for lucrative contracts.
We also need to recognize that current emerging technologies, including Blockchain identity systems and ledger-based badging programs, are part of much broader trends designed to both co-opt and upend public institutions that have been struggling to maintain standards with decreased funding and increasing demands.
With data as the new oil, there is every reason to suspect that the world of education will not return to “business as usual” after the current coronavirus pandemic passes, precisely because education has been identified as an “industry” ripe for disruption.
We are experiencing a watershed moment with these shifts. At no other time in history have we seen such an epic, massive move from in-person learning experiences to online instruction.
The idea that nothing can be done about the kinds of technological disruption we are now witnessing in education must itself be resisted. It betrays a misunderstanding of how technologies develop in the first place, ignores power dynamics in the shaping of education policies, and too readily sacrifices the social commitments that have held our society together — values, moreover, that are now being tested as a result of the coronavirus pandemic.
Inevitability arguments reject the past by spuriously claiming to possess the requisite knowledge necessary to reform society. We would do well to remember this as we shift to using technologies that challenge some of the core values of our liberal democracy.
—
This piece raises issues about Zoom Video Communications, a remote conferencing-services company based in California; Instructure is an educational-technology company based in Utah. I asked both for comment on the content of this post in regard to their activities, and the responses follow this post.
Zoom said in a statement:
EPIC’s 2019 complaint was regarding a bug in the Zoom platform that could potentially enable a bad actor to force a Mac user to join a Zoom room with video enabled. EPIC raised this issue in July of 2019 and Zoom promptly addressed it, fully resolving the matter.
Instructure did not immediately respond to queries.