The biggest AI risk in your company has a corner office

    21 Mar 2026

    We spend a lot of time worrying about employees clicking email links and reusing passwords. Both are valid concerns in our new world of AI. But this approach misses a larger (and more shocking) reality. The single biggest AI-related risk in most organizations isn’t at the help desk or buried somewhere in accounting or IT. It’s sitting in the C-suite, operating with authority and speed, and with little to no friction.

    A January 2026 study from La Fosse surveyed more than 2,000 UK tech workers and revealed something interesting. Nearly three-quarters (73%) of C-suite executives admitted to uploading confidential company data into AI tools. That rate is almost double what is seen among entry-level employees. At the same time, 78% of business leaders said they rely on AI for work they do not fully understand, and 93% acknowledged making AI-informed decisions based on inaccurate data. These are not edge cases. This is normal operating behavior at the highest levels of many organizations. And that’s just what people admit to! What’s really happening out there?

    So, the people making strategic decisions about the business (including IT/security/AI strategies and governance frameworks) are feeding sensitive data into tools they cannot fully explain, trusting outputs they cannot independently validate, and doing so at a pace that far exceeds the rest of the workforce. Got it.

    A separate study from Pluralsight reinforces the point: 91% of C-suite executives admitted to overstating or faking their AI knowledge. Interesting…

    Compliance Week’s recent analysis found similar results. I’m starting to see a trend.

    Get this: Section’s recent AI Proficiency Report found that 90% of people don’t know how to protect sensitive information in AI tools, 86% can’t officially access AI at work, 88% have received no training, and 82% say their managers discourage AI use or are silent on it.

    Funny how it all works!

    Anyone who has spent time in IT and information security has seen this pattern before. Executives have been the ones abusing mobile device policies over the past two decades. They’re the ones who don’t want complicated passwords because they’re too taxing. They also resist multifactor authentication because it slows them down. They move sensitive data into personal systems because it’s more convenient. Authority has always created distance from controls. We see this especially in healthcare, where the doctors leading the charge know best.

    Now, AI has introduced a faster and less visible way for that behavior to create exposure. The difference is scale and permanence. When sensitive information is sent through traditional channels, there is at least a chance of tracing it, containing it, or recovering from it. When that same information is entered into an AI application, especially one that is externally hosted, the organization often loses visibility and control immediately and indefinitely. The data may be retained, processed, or used in ways that are not fully transparent. And your very own executives are facilitating these things!

    I’m seeing a lot of businesses build AI acceptable use policies with the general workforce in mind: help desk staff, developers, customer service reps. The assumption, whether spoken or not, is that executives operate with better judgment and therefore require fewer constraints. In practice, that assumption creates a blind spot – and it’s just downright dangerous. An entry-level employee mishandling data is a problem. An executive doing the same thing can introduce regulatory exposure, legal consequences, and reputational damage at a whole different level.

    If your AI governance policy does not explicitly address executive behavior, you’re not finished…and your AI program is not ready for primetime. Your starting point should be visibility. You cannot secure what you do not acknowledge. Most organizations, it seems, have little to no understanding of which AI tools are being used, especially by leadership – who wants to question them, after all? They also have no real sense of what data are being shared or how the AI outputs are influencing business decisions. Everything is simply guesswork. That’s not how you run a business.
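
    If you want a concrete place to start on visibility, a basic pass over your web proxy or DNS logs can show who is reaching known AI services. Here’s a minimal sketch in Python – the CSV log format (“user” and “host” columns) and the domain list are illustrative assumptions on my part, so adapt both to whatever your proxy or DNS tooling actually exports:

        # Flag outbound requests to common AI tool domains in a proxy log export.
        import csv
        from collections import Counter

        # Illustrative (and intentionally incomplete) list of AI service domains.
        AI_DOMAINS = {
            "chatgpt.com",
            "claude.ai",
            "gemini.google.com",
            "copilot.microsoft.com",
            "perplexity.ai",
        }

        def ai_usage_by_user(log_path: str) -> Counter:
            """Count requests to known AI domains, per user."""
            hits = Counter()
            with open(log_path, newline="") as f:
                for row in csv.DictReader(f):
                    host = row["host"].strip().lower()
                    # Match the domain itself or any subdomain of it.
                    if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                        hits[row["user"]] += 1
            return hits

        for user, count in ai_usage_by_user("proxy_log.csv").most_common(10):
            print(f"{user}: {count} requests to AI services")

    It won’t catch everything (personal devices, mobile networks), but even a rough count per user – executives included – beats guesswork.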

    You must treat AI tools the same way you would treat third-party vendors or even insider threats. Identify what is being used and shared, evaluate the risk, define acceptable use, and require verification where decisions have material impact. Then monitor what’s going on, and when bad things happen, do something about them. This is not about getting in the way of executives or slowing the business down. It’s about ensuring that decisions are grounded in reality and good information rather than the overconfidence the Dunning-Kruger effect (I got this, I know what I’m doing!) describes.
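
    To make that vendor-style treatment tangible, here’s one way you might sketch an AI tool register in Python – the field names and risk tiers are my own illustrative assumptions, not any standard:

        # Track AI tools the way you'd track third-party vendors.
        from dataclasses import dataclass, field
        from datetime import date

        @dataclass
        class AIToolRecord:
            name: str                   # e.g., "ChatGPT (enterprise tenant)"
            business_owner: str         # who is accountable for its use
            data_allowed: list[str]     # e.g., ["public", "internal"]
            data_prohibited: list[str]  # e.g., ["PII", "PHI", "customer data"]
            risk_tier: str              # "low" / "medium" / "high"
            approved: bool = False
            last_review: date = field(default_factory=date.today)

        # Executives get a record and a review date too -- no implicit
        # exceptions at the top of the org chart.
        registry = [
            AIToolRecord(
                name="ChatGPT (enterprise tenant)",
                business_owner="CIO",
                data_allowed=["public", "internal"],
                data_prohibited=["PII", "PHI", "customer data"],
                risk_tier="medium",
                approved=True,
            ),
        ]

    The exact fields matter less than the discipline: every tool has an owner, a data boundary, and a review date – no matter who uses it.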

    At least there’s hope. The La Fosse study found that 80% of business leaders believe a dedicated AI specialist is needed at the board level. This creates a great opportunity because when leadership acknowledges a gap, it becomes possible to address it in a structured and defensible way…that’s properly supported over the long haul.

    AI is not going away, and neither is executive overconfidence. The real question is whether organizations are willing to apply the same level of discipline and oversight to leadership that they expect from everyone else. A rules-for-thee-not-for-me approach to AI oversight is not sustainable, nor is it defensible. Policies that focus only on the lower levels of the organization miss where the real exposure often begins (and ends).

    Trust but verify. Of course, you already knew that…it’s one of those “basics” concepts I’ve been harping on for decades. Especially when the person asking for an exception is the one making the critical decisions that shape the business!