New tool aims to help schools vet AI tech in education

Tech companies aren’t always transparent about their AI products. A new report offers a way for education leaders to examine them more closely.
Education leaders are increasingly exploring what role artificial intelligence technology can play in assisting with school operations, but a new report raises concerns about a lack of transparency in the AI products being marketed to school leaders.
AI in education technology is a particularly high-stakes use case because schools play a major role in students’ outcomes. Technology that could negatively affect student performance, such as AI systems that falsely accuse students of cheating on assignments, poses a serious threat, said Hannah Quay-de la Vallee, senior technologist at the nonprofit Center for Democracy and Technology. Poor-performing AI tools can also waste resources and erode public trust in school leaders and education agencies.
“As we're seeing this kind of surge … of AI in the education context, there's a real push towards the adoption of AI tools to either improve the quality of education or solve what are seen as problems in the education context,” she said. But that leaves education leaders to determine not only what AI-enabled tools they need amid a rush of companies marketing their AI tools and products, but also how to “separate the wheat from the chaff,” she said.
That’s why transparency in how an AI product works — and how it was built and vetted — is critical for officials to consider before adopting and implementing the technology, according to a report released Tuesday from CDT.
However, researchers found that, when it comes to technology in education, there is ambiguity around what transparency means, including what types of product-related information should be available and how that information should be shared, Quay-de la Vallee said. There is also a lack of transparency in practice.
To help education officials, like school administrators and education agencies, CDT developed a rubric that they can use as a “framework for what information they should demand before purchasing and using edtech tools that incorporate AI,” the report stated.
The rubric lays out eight pillars for leaders to consider, based on publicly available information from companies offering AI-enabled tech products for K-12 schools. The categories are use and context limitations; training data and methodologies; testing and evaluation; information accessibility; data governance; domain adaptation; underlying AI models; and governance structures. Each pillar is worth up to two points, and the pillar scores are summed into a company’s transparency score, which ranges from 0 to 16.
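The report describes the rubric only at a high level, but the arithmetic is simple to illustrate. Below is a minimal Python sketch of how an eight-pillar, two-points-each rubric could be tallied into a 0-to-16 score; the pillar keys and the scoring function are hypothetical stand-ins based on the categories named above, not CDT’s actual methodology.

```python
# Illustrative only: models the scoring scheme as the report describes it
# (eight pillars, 0-2 points each, 0-16 total). Pillar names are paraphrased
# from the report's categories; this is not CDT's code.

PILLARS = [
    "use_and_context_limitations",
    "training_data_and_methodologies",
    "testing_and_evaluation",
    "information_accessibility",
    "data_governance",
    "domain_adaptation",
    "underlying_ai_models",
    "governance_structures",
]

def transparency_score(pillar_scores: dict[str, int]) -> int:
    """Sum per-pillar scores (each 0, 1 or 2) into a 0-16 total."""
    total = 0
    for pillar in PILLARS:
        score = pillar_scores.get(pillar, 0)  # nothing published = 0
        if score not in (0, 1, 2):
            raise ValueError(f"{pillar}: expected 0, 1 or 2, got {score}")
        total += score
    return total

# Hypothetical vendor: fully documents intended uses (2), partially
# describes data handling (1), publishes nothing else -> 3 of 16.
print(transparency_score({
    "use_and_context_limitations": 2,
    "data_governance": 1,
}))  # -> 3
```

Treating missing information as a zero mirrors the report’s premise: the rubric is scored from publicly available information, so if a company publishes nothing about a pillar, buyers have nothing to evaluate.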
Researchers also applied the rubric to more than 100 such companies and found that, on average, they received a transparency score of just 4 out of 16. For 65% of the companies, the most common score across the eight categories was 0, according to the report.
The highest-scoring category was use and context limitations, which covers information on the tasks a particular AI tool was designed to perform and the kinds of processes it may not be suited for, according to the report. That category received an average score of 1.2.
Quay-de la Vallee said these findings make sense, as companies are likely to promote their products’ capabilities. Researchers also found that what companies publish for the information accessibility category, which had an average score of 0.7, and the use and context limitations category often overlaps with their broader marketing efforts.
Companies scored lowest in testing and evaluation and in governance structures, which the report defines as a company’s approach to soliciting customer feedback, leveraging advisory boards or implementing policies that support ethical use of AI. Those categories received average scores of 0.28 and 0.26, respectively.
The low scores could mean that companies simply don’t prioritize publicly communicating more technical details, like their privacy policies or testing processes, Quay-de la Vallee said. But the scarce information could also indicate that some companies never did the foundational work in the first place.
“If you build a tool and haven’t meaningfully tested it, you can’t be transparent about those things because there’s nothing to say,” she explained.
While company transparency plays a critical role in the procurement and adoption process, the report underscores other ways school leaders can evaluate which AI products will be effective, Quay-de la Vallee said.
Education leaders should, for example, first identify narrow use cases that an AI tool could help address, rather than adopting a tool with broader applications, she said. AI products built with a specific purpose in mind are more likely to meet leaders’ operational needs and to come with more robust information about how they work.
Ultimately, companies’ lack of transparency about their products puts schools at a disadvantage because it makes it harder to differentiate between effective and ineffective options, Quay-de la Vallee said.
But evaluating companies’ transparency efforts more closely can “help make this glut of AI tools much more manageable for school districts and administrators,” she said.