AI has hit schools ‘like a ton of bricks,’ report says

A survey by Keeper Security found that most education leaders are concerned about AI-related incidents, but few school districts have formalized policies governing the tech.
As schools reckon with artificial intelligence’s effects on students, administrators and faculty, they are also wrestling with the technology’s drawbacks, including phishing, misinformation and deepfakes, according to a report released last week.
Keeper Security, a cybersecurity software company, found in a recent survey that 41% of schools have experienced those AI-related incidents, while nearly 30% of schools reported instances of deepfakes and other harmful content being generated by students. Of those surveyed, 90% of education leaders expressed some level of concern about AI-related cyber incidents.
Despite those concerns, and even though 86% of institutions allow students to use AI tools and 91% permit faculty use, most schools have only informal guidelines rather than formalized policies. And just a quarter of respondents said they felt “very confident” in recognizing AI-enabled threats like deepfakes and phishing.
As schools grapple with their cybersecurity vulnerabilities, the growth of AI presents several challenges, including students using generative AI to cheat their way through assignments and exams or to produce deepfake content that harms peers, teachers and district officials. Meanwhile, teachers are wrestling with how best to use the technology to do their jobs, while administrators have yet to formalize policies governing AI.
It all creates a series of headaches for the education sector, which is also chronically underfunded at the federal and state levels.
“There is a resource gap in this space, and that comes out to the reality of their budgets,” said Jeremy London, Keeper’s director of engineering, AI and threat analytics. “Most K-12 districts don't generally have a dedicated fund for cybersecurity, and now AI has hit them like a ton of bricks, and they still don't know how to research that space, so they'll rely back on their districts, or maybe even the state level, to give them guidance, and those are getting weaker budgets.”
The Federal Communications Commission has tried to close that funding gap with a $200 million pilot program to help schools and libraries invest in their cybersecurity, an effort that would also help curb some of the risks from AI. But London noted that the program is already massively oversubscribed to the tune of several billion dollars in applications, showing how keenly aware school districts and libraries are of the budgetary and technological challenges they face.
Budgeting for future threats is challenging, too: London said many districts must produce spending plans a year or two in advance to comply with local laws. It’s a tough balance, but school districts must be willing to educate themselves about the threats they may face and try to predict what comes next.
“To get ahead of that is all about the education of what they're going to need,” London said. “They're going to need to know that they need this in the next six months, otherwise they start to open the gates to some risks in their environment.”
One of the biggest threats from AI concerns students’ use of the technology, which London said can range from using generative AI to complete assignments, to spreading misinformation, to producing deepfakes and other material, whether as meme content designed to amuse peers or as what could be seen as a new form of cyberbullying. London said that results in a “cat and mouse game” administrators must play to stay ahead of students’ misuse of AI.
Complicating matters further, while there are tools available to check for AI use in essays, assignments and other classwork, schools and education leaders may not trust them. Some have suggested that the use of em dashes is an indicator of AI, but the reality is more nuanced, leaving educators feeling as if they are on their own when checking for cheating.
“They have to pay for another tool to detect if [an AI tool] is used, and they don't have trust in those systems,” London said. “It comes down to a professor or administrator showing up into that environment saying, ‘I don't think this was human written,’ and there's no foundation for the most part. There are key signs of AI writing and whatnot that we've seen through that research, but a lot of educators aren't perfectly aware of all those ins and outs, so they have to make a judgment call.”
In the immediate future, London said, vendor partnerships will be key as school districts around the country navigate AI’s continued growth. Training students and staff on basic cyber hygiene, such as using strong passwords and multifactor authentication, will also help, as will making them aware of how bad actors can use AI. That way, he said, the “confidence gaps” the survey revealed in detecting AI can start to close.