I’m hearing the same question in more and more conversations with public sector leaders right now. It usually comes after the budget discussion and before the vendor demo. It sounds like this:

“We know we need to do something with AI. Where do we start?”

It’s the right question. But many organizations are being pushed toward answers before they’ve had space to define the real problem they want to solve.


The AI Pitch Public Sector Doesn’t Need

If you’ve sat through an AI vendor pitch in the last year, and I imagine most of you have, you’ve probably heard a familiar story. AI will transform your operations. Agents will handle your workflows. Just point the model at your data and watch the magic.

Christian Klein, CEO of SAP, was asked about this recently on ASUG Talks (Season 5, Episode 11). His answer was refreshingly honest:

“They are lacking the context of the data. Not all of these LLMs should have access to your mission-critical data.”

I believe he’s right. And for public sector organizations, where the stakes include legislative compliance, access-to-information obligations, and public accountability, this is not a side consideration. It shapes whether AI will be useful, defensible, and safe to scale.

Klein described three capabilities that separate useful enterprise AI from generic wrappers:

  1. Workflow context: agents that understand your business processes end to end
  2. Data context: structured and unstructured data connected so agents can reason over real information
  3. Governance: authorization profiles for agents, not just people, so sensitive data doesn’t circulate without control

I want to walk through what each of these means for a Canadian public sector organization running SAP, not in theory, but in practice.


1. Workflow Context: Your Processes Are the Platform

The problem with dropping a generic AI assistant into a government ministry is simple. It doesn’t know that a capital procurement above $500K requires Treasury Board approval. It doesn’t know that an FOI request triggers a specific legislative clock. It doesn’t know that a building permit has different routing in one municipality than another.

The process knowledge lives in your people and your systems. Not in the model.

This is where the concept of workflow context becomes tangible. The idea isn’t to replace your processes with AI. It’s to ground AI inside the processes you already have, so an agent can operate within the rules, not guess at them.

For organizations running SAP with an integrated content platform, there’s a structural pattern worth understanding: the Business Workspace, a process- or object-centric container tied to SAP master and transactional data. A workspace anchors to a vendor, purchase order, contract, asset, or case record. It’s created from SAP context, not assembled manually. Documents, correspondence, approvals, and related content all live inside that workspace, surfaced directly inside SAP Fiori.

Why does this matter for AI? Because it gives an agent a bounded, process-aware environment to work in. Not a raw document dump. Not a general-purpose chatbot. An agent that can see the vendor’s contract, the current PO, the exception history, and the approval chain, and operate within that context.
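To make the pattern concrete, here is a minimal, purely illustrative sketch of a workspace anchored to an SAP business object. The names (BusinessWorkspace, BusinessObject, context_for_agent) are my own illustration, not a real SAP or vendor API; the point is that an agent receives a bounded, object-centric context rather than a raw document dump.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class BusinessObject:
    object_type: str   # e.g. "PurchaseOrder", "Vendor"
    object_id: str     # e.g. "4500012345"

@dataclass
class BusinessWorkspace:
    # Created from SAP context (the anchor object), not assembled manually.
    anchor: BusinessObject
    documents: list[str] = field(default_factory=list)
    approvals: list[str] = field(default_factory=list)

    def context_for_agent(self) -> dict:
        """The bounded, process-aware view an agent is allowed to reason over."""
        return {
            "anchor": f"{self.anchor.object_type}/{self.anchor.object_id}",
            "documents": list(self.documents),
            "approvals": list(self.approvals),
        }

# Everything related to this PO lives in one container the agent can see.
ws = BusinessWorkspace(anchor=BusinessObject("PurchaseOrder", "4500012345"))
ws.documents.append("vendor-contract.pdf")
ws.approvals.append("Treasury Board approval on file")
```

The design choice worth noting is that the agent never queries "all documents"; it queries one workspace, keyed to one business object.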

A generic LLM can describe a procurement process. An agent grounded in workflow context can operate one. For a ministry processing thousands of procurement transactions, or a municipality managing building permits and inspections, that distinction is the difference between a pilot and a production system.


2. Data Context: The 80% Problem

One number should shape how you think about AI readiness. Roughly 80% of enterprise information is unstructured: contracts, emails, invoices, correspondence, regulatory filings, meeting notes.

SAP holds the structured data: vendor numbers, purchase orders, GL accounts, asset records, employee files. But the documents that give those records meaning, the contracts, the correspondence, the compliance evidence, live somewhere else. Often in many somewhere elses. SharePoint. Shared drives. Email. Legacy systems. File cabinets, if we’re being honest.

This is the gap that matters for AI. An agent that can query your vendor master but can’t read the underlying contract is operating with one hand tied behind its back. An agent that can see a capital project budget but can’t access the treasury submissions, council reports, and change orders is making decisions on incomplete information.

The foundational work here isn’t glamorous. It is connecting your unstructured content to your structured business data in a way that’s consistent, maintained, and governed. Business Workspaces that key off SAP master and transactional data, such as customer ID, document category, and organizational unit, provide that structure. Documents inherit metadata tied to the same business object context. The result is a context model where SAP objects, documents, correspondence, and records are linked through explicit relationships, not ad hoc file names.
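The inheritance idea above can be sketched in a few lines. This is an assumption-laden illustration, not a vendor API: the field names (object_type, object_id, org_unit, document_category) stand in for whatever keys your SAP master and transactional data actually use.

```python
def inherit_metadata(business_object: dict, document_name: str, category: str) -> dict:
    """Derive document metadata from the owning SAP business object,
    so the linkage is explicit rather than encoded in ad hoc file names."""
    return {
        "document": document_name,
        "document_category": category,
        # Inherited from the business object context:
        "object_type": business_object["object_type"],
        "object_id": business_object["object_id"],
        "org_unit": business_object["org_unit"],
    }

po = {"object_type": "PurchaseOrder", "object_id": "4500012345", "org_unit": "Ministry-A"}
doc = inherit_metadata(po, "signed-contract.pdf", "Contract")
# The document is now discoverable by the same keys as the SAP record itself.
```

Once documents carry the same keys as the structured records, an agent (or an FOI search) can traverse from a PO to its contract without guessing at folder names.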

Once that foundation exists, AI becomes materially more useful. Natural language queries against your actual document corpus. Summarization of case files or project documentation. Classification of incoming correspondence by type, urgency, and applicable legislation. These aren’t futuristic capabilities. They’re available today. But they depend on the content being connected in the first place.

For public sector organizations under MFIPPA, FIPPA, or equivalent access-to-information legislation, this work is essential. If you can’t find the document, you can’t respond to the request. If the document exists but isn’t connected to the business context, AI won’t help much either.


3. Governance: The Part Nobody Wants to Talk About at the Demo

This is where Klein made his most important point for public sector:

“You don’t want to have your financial data flying around the whole company.”

Now imagine that statement at government scale. Not financial data flying around a company. Citizen data, health records, social services files, law enforcement information, cabinet confidences, accessible to an AI agent without proper authorization boundaries.

When you deploy an AI agent, you’re creating a new user. One that operates at machine speed and machine scale. Without governance, you may gain speed in one area while creating avoidable risk in another.

The governance capabilities that matter here aren’t new. Records management. Retention and disposition. Role-based access control. Legal holds. Audit trails. These are the tools public sector organizations have been building, often imperfectly and often across fragmented systems, for decades.

What’s new is that AI makes the cost of getting governance wrong dramatically higher.

An employee who accesses a document they shouldn’t creates a policy issue that can usually be investigated and contained. An AI agent that surfaces restricted information in a generated response, at scale, to the wrong audience, creates a broader operational and reputational problem.

The practical implication is straightforward. AI agents need the same governance as human users: identity, role membership, bounded scope, confidentiality boundaries, retention obligations, and auditable activity. Treat agents as first-class principals in your security model, with roles, groups, and narrowly scoped access, and route their activity through the same governed systems and audit trails that apply to everyone else.

For organizations already running enterprise content management alongside SAP, the good news is that the building blocks exist. Role-based access that can inherit SAP authorizations. Document-level security with confidentiality classifications. Retention policies that follow content across systems. Audit trails that capture who accessed what, when, and why. Legal hold capabilities that can preserve evidence when needed.

These aren’t obstacles to AI adoption. They’re the prerequisites.


What This Means in Practice

If you’re a public sector leader thinking about AI and SAP, this is where I’d start:

Define the outcome first. Not “implement AI.” Something specific. Faster FOI response times. More efficient invoice processing. Better contract discovery for legal. Smarter case management for social services. Klein was clear about this: start with the outcome, not the technology.

Connect your content before you automate it. If your documents are scattered across five systems with no consistent metadata, no amount of AI will fix that. The work of connecting unstructured content to structured SAP business data, through business workspaces, consistent classification, and maintained metadata, is the foundation everything else sits on.

Governance is the starting point, not the afterthought. Before deploying any AI agent that touches citizen data, financial records, or privileged information, ensure the authorization model, access controls, retention rules, and audit capabilities are in place. Not because it’s bureaucratic. Because without it, you can’t demonstrate compliance, defend a decision, or pass an audit.

Start small, but start from the right place. A pilot that demonstrates AI value on a single, well-governed process is worth more than a broad deployment built on a shaky foundation. Pick one workflow. One data domain. Get the content connected, the governance right, and the outcome measured. Then expand.


The Uncomfortable Truth

Many AI vendors are still building from the outside in. Start with the model. Add features. Hope the enterprise catches up.

The organizations that will actually get value from AI in public sector are building from the inside out. Start with the process. Connect the data. Establish the governance. Then layer on AI where it creates measurable improvement.

That’s not the exciting pitch. It doesn’t make for a great conference keynote. But it’s the sequence that turns a pilot into a program, and a demo into operating value.

Klein said it plainly:

“Start with the outcome first. You have to redefine your whole process.”

For public sector organizations, that process starts with the foundation. The workflows, the data connections, and the governance framework that make AI safe, useful, and defensible.

If you’re running SAP, you already have much of the system foundation in place. The next step is to connect and govern what you already have, then apply AI where it can improve a real outcome.


If you’re working through that now, I hope this gives you a clearer place to start. Begin with one process, one outcome, and one governed set of information. That approach is more likely to hold up in the real world, and to earn trust as you expand.

Michael

Christian Klein’s full interview is available on ASUG Talks, Season 5, Episode 11. If you’d like to compare notes on AI readiness in your organization, I’d welcome the conversation.


Source: ASUG Talks Season 5 Episode 11, Christian Klein, CEO of SAP, interviewed by Jeff Scott (March 2026)