Keynotes and Data Protection in the Age of AI

Over the last year, I’ve had the privilege of presenting keynotes and technical sessions to both the Oracle and SQL Server communities on a subject that’s no longer optional: data protection in the age of AI.

Whether I’m speaking at regional events or large conferences, the topic resonates because the risk is no longer theoretical. AI is here, it’s evolving rapidly, and the data it consumes (and leaks) can and will define the fate of your organization.

Shadow AI is Unseen, Unapproved, and Dangerous

“Shadow IT” has been a longstanding challenge, but Shadow AI is its much more dangerous sibling. Employees, often with good intentions, are using free AI tools like ChatGPT, Gemini, or open-source AI assistants to streamline tasks, generate content, or even write code.

But what’s being entered into these public LLMs? Customer PII, employee records, authentication secrets, and proprietary business logic. Research from Harmonic covering Q4 2024, along with more recent updates, shows that over 8.5% of all AI prompts contain sensitive data. This data leaves your control the moment it’s typed into a web interface that’s not sanctioned by your IT organization.
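
To make the risk concrete, here is a minimal, hypothetical sketch of the kind of scan a data loss prevention (DLP) check might run against prompt text before it leaves your network. The patterns and names are illustrative assumptions for this sketch, not any vendor’s actual detection logic:

    import re

    # Illustrative patterns only (an assumption for this sketch); real DLP
    # engines layer ML classifiers and data fingerprinting on top of regex.
    SENSITIVE_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
        "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    }

    def scan_prompt(prompt: str) -> list[str]:
        """Return the categories of sensitive data detected in a prompt."""
        return [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]

    findings = scan_prompt("SSN 123-45-6789, contact jo@example.com")
    if findings:
        print("Blocked: prompt contains " + ", ".join(findings))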

Most organizations haven’t kept pace with AI: policies are outdated, guidelines are vague, and enforcement is practically nonexistent. Shadow AI isn’t just a compliance issue; it’s a strategic data leak happening in plain sight, and a hacker’s dream.

Enterprise Tools Exist, So Use Them

The good news is that large-scale enterprises already have tools at their disposal; they just need to be properly configured and supported by leadership. Products like Microsoft Defender for Cloud Apps, Zscaler, and Cloudflare Zero Trust are increasingly incorporating AI usage monitoring and policy enforcement capabilities.

These tools can:

  • Detect when users access unsanctioned AI tools.
  • Block data transfers to known LLM APIs (see the sketch after this list).
  • Apply granular policies to define what data is allowed in AI workflows, and what’s not.
  • Provide reporting and alerting to quickly identify misuse.
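
As a simplified illustration of the blocking capability above, here is a hypothetical sketch of the allow/block decision an egress gateway could make by combining the destination host with the results of a payload scan. The host lists and policy are invented for the example and are not any product’s configuration:

    # Hypothetical policy data for this sketch: which LLM endpoints are
    # sanctioned by IT, and which are known public services to block.
    SANCTIONED_LLM_HOSTS = {"llm-gateway.internal.example.com"}
    KNOWN_PUBLIC_LLM_HOSTS = {"api.openai.com",
                              "generativelanguage.googleapis.com"}

    def gateway_decision(host: str, findings: list[str]) -> str:
        """Decide what to do with an outbound AI request."""
        if host in KNOWN_PUBLIC_LLM_HOSTS:
            return "block"   # unsanctioned public LLM endpoint: block outright
        if host in SANCTIONED_LLM_HOSTS and not findings:
            return "allow"   # approved tool and a clean payload
        if findings:
            return "alert"   # sensitive data in the payload: flag for review
        return "allow"

    print(gateway_decision("api.openai.com", []))                         # block
    print(gateway_decision("llm-gateway.internal.example.com", ["ssn"]))  # alert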

But tools alone don’t solve the problem. Education, clear communication, and cultural shifts are required. If you don’t provide a safe and sanctioned way for employees to leverage AI, they will continue to use risky alternatives. Policies and training are as important as tools in stopping shadow AI use.

Who is the Customer?

Here’s a hard truth: if your organization is only consuming publicly available AI and not building or fine-tuning its own models, you are likely the product. My friend Steve Karam posted on LinkedIn about this recently, and I’m glad to see more people discussing these concerns; they are real, even when the risk to user data is unintentional and stems from a lack of understanding.

Public LLMs learn from interactions, especially when prompts and payloads are not protected by paid APIs or encrypted transport layers. You’re giving away IP and business context with every query. And even if the vendor claims otherwise, the lack of transparency in training data and model behavior means you can’t be sure where your data ends up.

I’ve been in infrastructure and data for three decades. I’ve lived through the rise of ransomware, the explosion of unstructured data, the rush to the cloud. This is different. The data risks with AI are so deeply embedded, so deceptively easy to miss, that even tech-savvy organizations are struggling.

Courts Can’t Keep Up

Regulators are lagging years behind; by my estimate, more than three years at a minimum. By the time data privacy laws catch up to the reality of AI, many organizations will have already suffered the consequences. The biggest challenge is that some will still consider it worth the risk, given the promise of AI-driven revenue growth: the penalties are small compared to the potential gains.

That’s why your response can’t wait. Data governance must become AI governance. Security teams must collaborate with data teams. Developers must understand the value and the liability in every training set.

Leadership must recognize that data is the fuel of AI, and it’s also the vector of risk.

Summary

The AI revolution is as much about survival as it is about innovation. If your organization isn’t preparing for the data risks that come with it, you’re already behind. If you’re interested in the content I’m speaking on, my slide decks can be found on my GitHub page.

Use your voice, set clear policies, adopt the right tools, and start thinking long-term. If you’re not controlling your data in the age of AI, someone else is.


Kellyn

http://about.me/dbakevlar