The good, the bad and the unknown: The future of AI in North Carolina

People walk on the campus of the University of North Carolina Chapel Hill on June 29, 2023, in Chapel Hill, North Carolina. Eros Hoagland via Getty Images
Leading experts in research, government and industry innovators gathered at the University of North Carolina at Chapel Hill campus this week to discuss how artificial intelligence can be responsibly designed and used for the public good.
This story was originally published by NC Newsline.
UNC-Chapel Hill Provost Magnus Egerstedt said one of the greatest challenges in higher education is producing an AI-ready workforce in a world that is changing faster than curricula can adapt. Egerstedt said while students are already using some generative AI tools, they must have a baseline of AI literacy and understand the constraints that come from using this evolving technology in their chosen field.
“You’ve got to be able to use the tools,” said Egerstedt. “It’s going to take people that can see around corners, who can make new connections, who can keep a finger on the pulse of the future with very limited and oftentimes contradictory information.”
AI and Medical Advances
Ashok Krishnamurthy, director of the Renaissance Computing Institute, said AI is currently being used to provide more rapid cancer identification as well as individualized treatment.
That can be especially important in a state like North Carolina where 41% of colon cancer patients are diagnosed when they go to the emergency department.
Krishnamurthy said that rather than relying on a manual process that can take months or years to reach a diagnosis, AI-based tools will enable doctors to quickly evaluate clinical notes and arrive at a diagnosis in hours or days.
UNC School of Medicine Professor Melissa Haendel said AI will also help advance clinical trials of new drugs that are often slowed down or delayed by the failure to recruit and enroll enough patients. All of this will require greater investments in data infrastructure and privacy protections, Haendel said.
Not All Good News As AI Models Evolve
OpenAI is a leader in the artificial intelligence industry. Its chief economist Ronnie Chatterji said part of his job is to understand how AI is changing the labor market.
“One of my jobs is to watch those numbers every month and try to figure out how long the job market will look the way it does today,” Chatterji said. “The other piece of my job is a lot harder, which is trying to tell people what to do with the future.”
Chatterji, who also teaches at Duke University’s Fuqua School of Business, said he has been asked to give a graduation speech this year, and is struggling with the message.
“The research shows if you graduate in a time of economic recession or great technology disruption, the legacy of that will affect your job 15 years from now — your wages, how much you earn over a lifetime, and your career trajectory,” said Chatterji.
A report by outplacement and executive coaching firm Challenger, Gray & Christmas found artificial intelligence was a top reason U.S.-based employers cut jobs in March.
“Companies are shifting budgets toward AI investments at the expense of jobs. The actual replacing of roles can be seen in Technology companies, where AI can replace coding functions. Other industries are testing the limits of this new technology, and while it can’t replace jobs completely, it is costing jobs,” wrote workplace expert Andy Challenger in the April report.
For now, Chatterji says he plans to tell the Class of 2026 that leadership still matters, “because you’re going to need a human at the end of the day to make the decision, make the call, and from a legal perspective, be accountable.”
Anthropic’s Claude Mythos made headlines this month over concerns that the powerful new AI model could exploit security flaws and vulnerabilities in every industry. Some observers have suggested that it shouldn’t be released at all.
In an online blog post, Anthropic cautioned: “Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely.”
Thompson Paine, Anthropic’s head of geopolitics, told the Chapel Hill audience that the company is trying to be transparent about what it is seeing and learning, so there can be an open dialogue with policymakers and other experts about the capabilities of this technology.
Paine said this new frontier underscores why the United States must stay on the cutting edge of developing these intelligence models.
“Let’s say a Chinese lab had come up with Mythos and these incredible cyber capabilities ahead of an American lab,” said Paine. “Do you think they would have reached out to the U.S. government to figure out how we patch as much software as possible? Do you think that they would have reached out to ten American companies to make sure that we’re covering as many vulnerabilities as possible before this technology goes public?”
Seeing Through ‘A Fog’ of Financial Data
State Treasurer Brad Briner told the conference that his agency would expand its use of artificial intelligence.
A pilot project using ChatGPT found it to be a time-saver, Briner said, adding that he believes AI can also be useful in combing through volumes of data from more than 1,100 municipal governments in the state to find errors or problems, like the inadequate budget oversight the agency flagged in Rocky Mount this month.
“It’s just so enormous, you can’t see through the fog,” said Briner. “We can see through the fog now. Now we can have a predictive ability to tell the citizenry, your municipality is doing this. We know how this ends.”
There’s No Free Burrito
In a separate session on AI infrastructure and privacy, Martha Wewer, the state’s Chief Privacy Officer, said consumers are increasingly interacting with AI without ever realizing it.
She asked panelists what users should take into account before clicking on an email offer for a free Chipotle burrito.
Ogzun Ataman, the Chief Technology Officer at Well, said consumers need to look at the fine print and see whether a company is retaining personal data, for how long, and whether it will be used to further train the AI model.
“You can choose to allow them to do that, but you should at least be aware of that choice,” said Ataman.
DJ Sampath, a senior vice president for AI software at Cisco, said at the very least consumers can also use artificial intelligence to read the fine print, something most people never take the time to do.
“That should give you an instant understanding of what am I giving away to be able to get that free burrito,” said Sampath.
“From a privacy perspective, I think what becomes really, really important is you have to understand that trade-off. What is okay by me, and what is not okay by me?” said Sampath.
NC Newsline is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501(c)(3) public charity. NC Newsline maintains editorial independence. Contact Editor Laura Leslie for questions: info@ncnewsline.com.