Artificial intelligence is no longer an “emerging technology.” Instead, it’s a daily reality shaping how we hire, evaluate, recommend, and even discipline. For tech leaders like me, this shift brings more than engineering challenges; it brings ethical ones as well. And the most critical decisions we face in this new era are not about capability but about boundaries.
Yes, AI can process vast datasets at blinding speed. It can also outperform humans on specific tasks. However, the core question for leadership isn’t what AI can do; it’s what we should allow AI to do.
Having spent the better part of two decades building and leading technical teams, I’ve seen AI improve accuracy and efficiency in incredibly sensitive areas, especially testing and evaluation environments. But I’ve also seen the other side: systems that overreach, make mistakes, and cause real harm, exposing a deeper problem we have yet to address.
Decisions that used to be made by humans are now quietly being made by machines, often without clear rules or anyone realizing it’s happening. As Norbert Wiener, the founder of cybernetics, warned: “Progress imposes not only new possibilities for the future but new restrictions.” Wiener understood, as early as 1950, that as we hand over more decision-making power to machines, we must also take on new moral obligations.
When Optimization Ignores Ethics
Frequently, organizations pursue AI integration as part of a race for optimization. We want faster outcomes, leaner processes, and sharper predictions. However, I believe that improving speed or accuracy without careful thought can create systems that seem smart but act unfairly.
For example, AI might mark someone as suspicious just because of their facial expression or how they move their mouse. The model might be 95% accurate, but the remaining 5% isn’t statistical noise. It’s real people. Screen 100,000 candidates with that model, and roughly 5,000 of them receive the wrong verdict. Careers. Reputations. Lives.
What happens when a perfectly innocent person is caught in the crossfire of automation? This is not hypothetical. The 2018 Gender Shades study from the MIT Media Lab found that commercial facial-analysis tools misclassified the gender of darker-skinned women up to 34.7% of the time, while error rates for lighter-skinned men stayed under 1%. These are real mistakes by advanced machines. This stark disparity highlights how datasets and training that are not designed with equity in mind lead to biased outcomes and compounding harm.
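Catching this kind of skew doesn’t require a research lab. A short audit comparing error rates across groups will surface it, if someone decides to look. Here is a minimal sketch in Python; the data, group labels, and column names are all hypothetical, and a real audit would use verified outcomes at far larger scale.

```python
# A minimal per-group error audit. All data, group labels, and column
# names here are hypothetical; a real audit would use verified outcomes.
import pandas as pd

# Each row: the model's verdict, the verified truth, and a demographic
# group used only for auditing, never as an input to the decision.
results = pd.DataFrame({
    "predicted_flag": [1, 0, 1, 0, 1, 0, 1, 1],
    "actual_flag":    [1, 0, 0, 0, 1, 0, 0, 1],
    "group":          ["A", "A", "B", "B", "A", "B", "B", "A"],
})

# Overall accuracy can look reassuring while hiding group-level harm.
overall = (results["predicted_flag"] == results["actual_flag"]).mean()
print(f"Overall accuracy: {overall:.0%}")

# False-positive rate per group: how often innocent people in each
# group get wrongly flagged. Disparities like the ones Gender Shades
# documented show up here, not in the headline number.
innocent = results[results["actual_flag"] == 0]
print(innocent.groupby("group")["predicted_flag"].mean())
```

The exercise takes minutes, and its lesson is the point of this whole section: a healthy overall number can coexist with one group quietly absorbing most of the errors.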
In these cases, the machine is not failing because it’s inaccurate. It’s failing because it’s been given the final word. That’s where leadership must step in, not to fight AI, but to frame it.
Good AI leadership begins by following widely accepted ethical principles like those outlined by the European Commission’s High-Level Expert Group on AI and the OECD AI Principles. Two of the most essential pillars are:
- Transparency and explainability: People should know when a system is making decisions that affect them. Those decisions should be easy to explain and understand. If no one can explain how a decision was made, it shouldn’t be trusted.
- Fairness and human agency: Treat people like people, not just numbers. Protect their privacy, avoid unfair treatment, and give them a chance to speak up if something feels wrong. The sketch below shows how small that starting point can be.
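In practice, those two pillars can begin with something as simple as a routing rule: the model recommends, a person decides anything adverse, and every decision carries an explanation a candidate could actually read. What follows is a minimal sketch; the class, threshold, and verdict labels are hypothetical, not a description of any real system.

```python
# A minimal human-review gate. The class, threshold, and verdict
# labels are hypothetical, not a description of any real system.
from dataclasses import dataclass

@dataclass
class Decision:
    verdict: str       # what the model recommends ("flag" or "clear")
    confidence: float  # how sure the model is, from 0.0 to 1.0
    explanation: str   # a plain-language reason a person can read

REVIEW_THRESHOLD = 0.90

def route(decision: Decision) -> str:
    """Adverse or low-confidence calls go to a person; the model
    never turns its own verdict directly into an outcome."""
    if decision.verdict == "flag" or decision.confidence < REVIEW_THRESHOLD:
        return "human_review"  # a person decides; the candidate can appeal
    return "auto_clear"        # routine, high-confidence clearances proceed

# Even a confident adverse call gets a second, human look.
print(route(Decision("flag", 0.97, "unusual mouse-movement pattern")))
# -> human_review
```

The design choice that matters is the default: routine clearances may flow automatically, but the machine never delivers harm on its own.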
When to Say “No” to AI
One of the hardest parts of being a technologist is resisting the allure of “just because we can.” There’s always a newer tool, a faster model, a more granular metric. But leadership is about prioritization. It’s also about remembering that every decision is a tradeoff and that efficiency gained at the cost of fairness is never worth it.
In my own career, I’ve learned that it’s not enough to ask whether an AI system works. We must ask what it means. Does it reinforce trust or erode it? Does it empower or disempower? Does it serve people, or use them?
I’ll never forget one particular case. An AI tool flagged a candidate as “high risk” based on so-called suspicious behavior. Nothing illegal or even unusual, just something the system didn’t “like.” We reviewed it by hand, and it turned out the person had a nervous tic linked to a medical condition. Harmless, human, and totally misunderstood by the machine. Without that second look, that person might’ve lost out on a job they were qualified for. That moment hit hard. It reminded me that no matter how polished or promising our systems are, you still need a human at the wheel when it counts.
How Leaders Can Apply Human-Centered AI Today
Leading with a human-centered approach to AI means understanding where machines make decisions and asking whether they should. I’ve seen that AI often works quietly in the background, shaping results that affect people’s lives. Good leadership involves slowing down and making sure someone is accountable when things go wrong.
People must be able to question what a system decides. A human review should never be treated as optional. If leaders speak openly about mistakes and ethical concerns, teams will feel empowered to do the same. Saying “no” isn’t weakness; it’s responsibility.
In the end, what matters isn’t just whether a system works, but whether it works for people.
Human-Centered Tech Starts with Human-Centered Leadership
There’s no shortcut here. The tech we create mirrors the people behind it, including our focus, our values, and our blind spots. If leadership becomes obsessed with speed or performance, those priorities seep into every layer of what we build. But if we lead with curiosity, care, and courage, our systems reflect that too.
Human-centered leadership does not mean turning away from AI. It means guiding it, knowing when to let it help, and recognizing when to step in. It also means asking uncomfortable questions, listening to edge cases, and giving your team room to think before they build. The best leaders I’ve worked with don’t chase the newest tech just because it’s shiny. They ask, “What does this do to, and for, people?”
Be willing to lead with both your head and your gut. Because no matter how clever our systems get, the hard calls still fall on us.
Responsible AI Is a Leadership Test
Artificial intelligence gives us power. But power without principles is dangerous. The real risk isn’t that AI will become smarter than us. It’s that we’ll stop asking questions. If you’re in a position to shape technology, whether as a CTO, a product owner, or a team lead, you’re also in a position to shape its impact.
Transparency, fairness, and human oversight aren’t just technical guidelines; they’re leadership commitments. Practicing them means building systems that both work and earn trust. And that starts with asking the right questions long before any line of code is written.
Otto Silva is the CIO of Kryterion, leading technology strategy to support business objectives and drive ROI in close collaboration with executive leadership.