How a national lab is using data and AI to try to speed up permitting

The work to get better environmental permitting data began under Biden and is continuing under Trump.
The Energy Department’s Pacific Northwest National Laboratory is pushing forward on its project to use better data and artificial intelligence to speed up infrastructure permitting. Last week, the team released an updated dataset of more than 120,000 permitting documents and is continuing to test AI tools to help users make sense of the data.
“Our vision here is to use and develop AI tools and technologies to streamline environmental review and the permitting process,” Sameera Horawalavithana, a senior data scientist for PNNL and principal investigator for the project, told Nextgov/FCW during an interview.
The work began under the Biden administration as an effort to speed up permitting and enable expanded clean energy. It is continuing under Trump 2.0 as part of the White House’s plan to modernize permitting technology and accelerate reviews in support of its push for oil and gas development and the buildout of data centers.
“At the end of the day, we want to build faster, and that's what we're here for,” said Pranava Raparla, a presidential innovation fellow with Energy’s Office of Policy and Office of Critical and Emerging Technology, who called the project a “flagship” for Energy.
How the broader permitting push may be affected by regulatory rollbacks and decreased staffing under Trump 2.0 isn’t yet clear.
The first version of this corpus came out about a year ago. This time, there’s more data, and new types of it, supplied by additional agencies. The latest dataset, dubbed NEPATEC 2.0, contains public documents related to the National Environmental Policy Act spanning 60,000 projects.
NEPA is a flagship environmental law that requires agencies to scrutinize and document the potential environmental impacts of major actions, a process that includes issuing permits.
The team used a custom AI algorithm to sort through documents scattered across siloed systems at multiple agencies and extract metadata. That metadata makes it easier to filter the documents once search or chat apps are layered on top of them, said Sai Munikoti, a data scientist at the lab and co-principal investigator for the project.
The intention is to put historical data at the fingertips of federal permitting employees, for whom old documents may be relevant to new environmental reviews. Researchers and technologists building tools can also use the data to inform their work.
The group is currently beta testing three AI tools with users across several agencies.
SearchNEPA, launched late last year, is meant to help users within agencies make sense of the massive database with a plain language interface and an integrated chatbot, dubbed ChatNEPA.
Another tool, CommentNEPA, is meant to help with the currently manual process of sorting through public comments during permitting reviews. The team designed it to go through comment letters, identify concerns and sort them. Users can also see where the tool, an “auditable agentic AI framework,” is pulling from across comments and make changes as needed, said Munikoti.
Plans for future tools include a drafting assistant for NEPA environmental impact statements; public engagement tools to help people search for notices and meetings by time and location; and a data analytics tool to dive deeper into the historical documents.
The team wants to add more agencies to the dataset moving forward, and to release updated data snapshots on a more regular, iterative basis.
For now, the dataset pulls past environmental impact statements, environmental assessments, categorical exclusions and other NEPA documents from the Departments of the Interior and Agriculture, the U.S. Army Corps of Engineers and the National Institute of Standards and Technology.
The first iteration of the dataset only had information from the Environmental Protection Agency.
The project is also being billed as a proof point for a new data standard released by the Council on Environmental Quality earlier this spring, which is meant to standardize permitting data and make it easier to coordinate across agency boundaries.
Currently, the permitting technology and data landscape is disconnected, even though the process often involves multiple agencies.