Piloting AI: Mastering ‘AI-manship’

COMMENTARY | Just as pilots sense when something is wrong in flight, local tech leaders should trust their instincts on AI, especially when systems produce unfair or suspicious results.
A cartoon shows two airline counters at an airport, one labeled “Legacy Air” and the other “Air AI.” The line at Legacy Air is rather long, and, unsurprisingly, no one is at the Air AI counter. Two robots walking by lament, “I’m certainly not ready for that!”
Human or machine, it’s safe to say no one is ready to fly autonomous airlines. Today, public managers not only wrestle with citizen trust, which has never been lower, but are also being asked to place their trust in artificial intelligence applications. Piloting AI is very much akin to airmanship and seamanship.
Airmanship and seamanship are the disciplined ability to safely, skillfully and confidently operate an aircraft or ship by applying technical knowledge, situational awareness, sound judgment and a professional attitude in all conditions. This definition highlights both the technical skills and the mental discipline a competent captain needs. It also aligns with how experienced captains view the term: not just a skill, but a mindset and a commitment to excellence.
A captain’s situational awareness is the continuous perception and understanding of the craft’s position, flight and sea conditions, surrounding environment, and potential hazards to make informed decisions and maintain safe operations.
Most local government chief information officers and department heads weren’t trained on AI use and deployment. One does not need to understand every algorithm, but one does need to cultivate good judgment, a strong ethical compass, and a clear-eyed view of risks.
And just like a seasoned pilot knows when something “feels wrong” in the cockpit or on the bridge, experienced tech leaders can — and should — trust their instincts when AI systems produce suspicious or unfair results. Don’t be afraid to hit pause.
“AI-manship” refers to the skilled, responsible and ethical operation of artificial intelligence systems. This term mirrors airmanship and seamanship, and emphasizes human oversight, judgment and discipline in navigating AI.
In the recent past, public sector tech leaders could not only understand every aspect of their network operations but also explain them. Today, AI systems perform some rather sophisticated operations. They are already augmenting our work, making us more productive and creative. And every output, right or wrong, looks convincingly real and correct.
But we also know that AI systems can behave in ways that cannot be easily explained, especially when they deliver absolutely wrong information in perfect sentences and citations. These false outputs have come to be known as “hallucinations.”
Commercial ships and aircraft are required to carry what is referred to as a “black box,” which records all movements and voice communications, internal as well as external. AI has a black box, too, but despite the familiar name, it is a metaphor for the space where users’ input meets the AI system and the algorithmic magic occurs. It is also the space where even AI scientists are often at a loss to explain some of the outputs they see.
Given AI’s growing complexity in all its forms, how can tech leaders achieve parity with captains who strive for airmanship and seamanship? It is well documented that many accidents have been caused by a lack of both, with too much emphasis placed on following strict checklists, protocols and automated procedures without a “feel” for the overall situation at hand. Had the crews perceived the actual cause of a looming disaster, they might have taken steps to disengage from a troubled system.
Just as captains maintain “situational awareness” — knowing their position, environment, and system status — AI-era leaders must do the same, but with digital tools. Here’s how that can look:
- Know Your Systems’ Capabilities and Limits: Before using an AI tool, understand what it is designed to do — and not do. Can it explain its reasoning? Has it been tested for bias? Is it trained on relevant data? Every system has a purpose and limitations. Leaders must ensure their teams document and regularly review these.
- Use Checklists and Protocols: Pilots rely on standard operating procedures. Similarly, AI governance frameworks — covering procurement, risk assessment, auditability, and accountability — should be part of everyday operations. Don't assume the vendor has done this for you.
- Monitor for Drift: Models can degrade or “drift” over time as environments change. A predictive policing tool trained on old crime data may become less accurate and more unfair. Just like weather patterns shift in the skies, AI models must be monitored and recalibrated regularly; one simple way to check is sketched after this list.
- Communicate Across the Crew: In both aviation and maritime settings, communication breakdowns are often fatal. In tech leadership, silos between departments, IT staff, data scientists, and end users can lead to poor outcomes. Make sure your whole team understands how the AI system works and why.
- Train Continuously: Pilots never stop training. Neither should public servants who work with AI. Continuous learning — on ethics, bias, data privacy, and technical fluency — should be part of your agency’s DNA.
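To make one of these concrete: below is a minimal Python sketch of drift monitoring using the population stability index (PSI), a common screening statistic that compares the distribution of a model’s scores at training time with its scores in live use. The data, function names and the 0.2 alert threshold are illustrative assumptions, not a vendor standard:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a baseline score distribution with a live one. A PSI
    above roughly 0.2 is a common rule of thumb for drift worth
    investigating (the exact cutoff is a policy choice)."""
    # Bin edges come from the baseline (training-time) distribution.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values
    expected_pct = np.histogram(expected, edges)[0] / len(expected)
    actual_pct = np.histogram(actual, edges)[0] / len(actual)
    eps = 1e-6  # avoid division by zero on empty bins
    expected_pct = np.clip(expected_pct, eps, None)
    actual_pct = np.clip(actual_pct, eps, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Illustrative use: scores saved at training time vs. last month's live scores.
baseline_scores = np.random.default_rng(0).normal(0.50, 0.10, 5000)  # stand-in data
live_scores = np.random.default_rng(1).normal(0.58, 0.12, 5000)      # stand-in data
psi = population_stability_index(baseline_scores, live_scores)
if psi > 0.2:  # illustrative threshold; tune to your risk tolerance
    print(f"PSI={psi:.3f}: drift detected, schedule a recalibration review")
```

A check like this can run on a schedule. The point is not the particular statistic but that someone is watching the gauges.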
Despite what many so-called experts say, AI is not inherently trustworthy. Trust must be earned and engineered. Taking a deeper dive, here are some concrete steps local and state governments can take toward greater trustworthiness and fairness:
- Insist on Explainability: Favor systems that can explain their decisions in human terms. This may mean sacrificing some performance for the sake of transparency, and that’s often worth it in the public sector.
- Create Human-in-the-Loop Models: For high-stakes decisions — such as benefits eligibility or criminal justice — you need humans involved in reviewing or verifying outcomes. Don’t let the system make final decisions without a way for people to intervene.
- Apply Risk Tiers: Not all AI use cases are created equal. A chatbot giving tourism advice is low-risk; an algorithm allocating housing assistance is high-risk. Tailor your oversight and scrutiny to the stakes involved.
- Test for Bias and Fairness Early and Often: AI can replicate and magnify existing inequalities. Work with partners who proactively test their models for fairness and have processes for independent audits or red teaming; a basic screening metric is sketched after this list.
- Maintain Audit Trails: Good recordkeeping isn’t just for compliance — it’s part of maintaining digital airmanship. When something goes wrong, you want to know what inputs, decisions, and actions were taken; a minimal decision record also appears after this list.
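On fairness testing, a simple screening metric can be a starting point. The sketch below computes a demographic parity gap, the difference in approval rates across groups. The groups and numbers are invented for illustration, and a real audit would pair several metrics with proper statistical review:

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """decisions: list of (group, approved) pairs. Returns the largest
    difference in approval rates between groups, plus the per-group
    rates. A large gap is a signal to investigate, not proof of bias."""
    approvals, totals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative data: model decisions tagged with an applicant attribute.
sample = (
    [("district_a", True)] * 80 + [("district_a", False)] * 20
    + [("district_b", True)] * 55 + [("district_b", False)] * 45
)
gap, rates = demographic_parity_gap(sample)
print(f"approval rates: {rates}, gap: {gap:.2f}")  # 0.80 vs 0.55 -> gap 0.25
```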
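And to tie risk tiers, human review and audit trails together, here is a minimal sketch of a decision record. The system names, tier labels and log file are hypothetical, and a production system would write to a tamper-evident store rather than a local file:

```python
import json
import uuid
from datetime import datetime, timezone

# Hypothetical risk tiers, echoing the examples above; tailor to your inventory.
RISK_TIERS = {"tourism_chatbot": "low", "housing_assistance_model": "high"}

def record_decision(system, inputs, model_output, reviewer=None, final_action=None):
    """Append one decision record to a JSON-lines audit log. For
    high-risk systems, refuse to record a final action without a
    named human reviewer (a simple human-in-the-loop gate)."""
    tier = RISK_TIERS.get(system, "high")  # unknown systems default to high risk
    if tier == "high" and final_action and not reviewer:
        raise ValueError(f"{system} is high-risk: a human reviewer must sign off")
    entry = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "risk_tier": tier,
        "inputs": inputs,
        "model_output": model_output,
        "reviewer": reviewer,
        "final_action": final_action,
    }
    with open("ai_audit_log.jsonl", "a") as log:  # one record per line
        log.write(json.dumps(entry) + "\n")
    return entry["id"]

# Illustrative use: the model recommends denial, but a person makes the call.
record_decision(
    system="housing_assistance_model",
    inputs={"applicant_id": "A-1042", "household_size": 3},
    model_output={"recommendation": "deny", "score": 0.34},
    reviewer="caseworker_jsmith",
    final_action="approved_after_review",
)
```

The design choice worth copying is not the file format but the gate: for high-risk systems, no final action gets recorded without a named human reviewer.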
You don’t have to be an AI expert, but you must be a capable captain. Many local government CIOs and department heads weren’t trained as data scientists, and that’s okay. What matters, as noted earlier, is good judgment, a strong ethical compass and a clear-eyed view of risks, along with the confidence to trust your instincts and hit pause when an AI system produces suspicious or unfair results.
AI is here to stay. The real question is: can we navigate it responsibly? The best government tech leaders won’t treat AI like magic — or panic over its complexity. They’ll lean into transparency, continuous learning, and shared responsibility. Like a pilot scanning the horizon, they’ll stay alert to what’s coming next, aware of the mission, and ready to change course when needed.
Because in the end, it’s not about blindly trusting the machine. It’s about becoming the kind of leader who knows when to trust, when to question, and how to steer through the fog.
Alan R. Shark is a senior fellow at the Center for Digital Government, as well as an associate professor at the Schar School of Policy and Government, George Mason University, where he also serves as a faculty member at the Center for Human AI Innovation in Society (CHAIS). Shark is also a senior fellow and former executive director of the Public Technology Institute (PTI). He is a Fellow of the National Academy of Public Administration and founder and co-chair of the Standing Panel on Technology Leadership. Shark is the host of the podcast Sharkbytes.net.