In early March of this year, I decided to get some formal training on how to better adopt AI as a business leader. I did what most of us do: I opened far too many tabs, compared syllabi from the usual prestigious universities in the US and abroad, and tried to decode what was marketing and what was substance. In the end, I chose something that wasn’t too far from home: Stanford Online’s “AI-Driven Leadership: Strategies for the Future.”
As I shared in an earlier blog post, I used to approach AI mostly as a capability problem:
what tools were available
what they could do
how quickly they could be deployed
I saw the work as selecting tools, implementing them, and measuring results. The course made it clear that this framing was incomplete. The biggest shift for me was realizing that AI changes the quality of decisions far more than it changes the availability of technology. I now spend less time asking whether something can be automated or augmented, and more time asking whether introducing AI will actually improve judgment and outcomes.
I also became much more honest with myself about readiness. I no longer assume that enthusiasm equals preparedness. Just because a team is excited about AI does not mean they are ready to change how they work. I pay closer attention to the conditions surrounding any AI initiative:
clarity of goals
shared understanding across teams
whether people actually trust the inputs and outputs they are being asked to rely on
I’m more skeptical of impressive demos and more interested in whether a tool genuinely changes how people work and decide, especially when things get ambiguous or messy.
Another shift is how I think about data. I used to see data quality as something that could be fixed downstream with better tooling or more process. Now I see it as a leadership responsibility that reflects culture, incentives, and pressure. When speed or alignment is rewarded over rigor, standards erode quietly. Dashboards still look clean, but the signal underneath starts to blur. I am more conscious now of the long-term consequences when those tradeoffs go unexamined, especially in AI-supported decisions.
The course also changed how I think about experiments. I used to treat pilots as proofs of concept: the goal was to “show it works.” Now I treat them more as proofs of learning.
Did we clarify where AI helps and where it does not?
Did we understand who felt empowered and who felt sidelined?
Did we surface new risks instead of hiding them?
That mindset shift makes it easier to hold onto both ambition and caution at the same time.
Honestly, the most important change is how I see my role as a leader. I am less focused on advocating for AI adoption and more focused on shaping the environment in which it is used. I expect friction now, and I don’t interpret it as failure. There is real truth in the idea that there is always a signal in the noise. The signals I look for are what is changing meaningfully in how people decide, collaborate, and serve customers. I left the course with fewer certainties, but with better judgment. That feels like the most durable outcome of all.
Disclaimer: This reflection is based on my personal experience as a learner. It is not sponsored content, and it is not intended as a formal review or endorsement of any specific program.