Stop Organizing AI Around Yourselves

Every department has two kinds of work. The work that makes it irreplaceable - the judgment, the domain authority, the decisions only that function can make with credibility. And the execution work around it: writing the email, running the eligibility check, filing the handoff note, sending the update. The second category exists to support the first. It does not define it.

Most organizations have never separated those two categories explicitly. They did not need to. Both types of work required people, so both types of work landed inside headcount.

AI changes that. And the organizations still organizing their AI deployments around org chart boundaries are missing the actual question - which is not who owns this task, but whether the task is core work or overhead in disguise.

Two Axes, One Diagnostic

In Part 1, the problem was fragmentation: inference points multiplying without shared architecture above them. The fix starts by changing the question you ask before deploying AI anywhere.

The wrong question: which team owns this task?
The right question: where does this task sit in the customer journey, and what does the department uniquely contribute to it?

That gives you two axes. Horizontal: the customer journey - the sequence of stages a customer moves through from first contact to retention. Vertical: the business functions that touch those stages. Plot them against each other and you get a clear picture of where AI absorbs the overhead without touching what the department actually exists to deliver - and where handing something to AI means giving away the thing that makes the function worth having.
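As an illustration, the two-axis diagnostic can be sketched as a simple lookup: journey stages on one axis, business functions on the other, each cell carrying a classification. This is a sketch, not a prescribed implementation; the stage and function names follow the matrix in this article, and the cell values shown are a small excerpt.

```python
# Illustrative sketch: the journey-by-function matrix as a lookup table.
# The data here is an excerpt, not a complete or authoritative classification.

STAGES = ["Awareness", "Consideration", "Purchase", "Post-purchase",
          "Support", "Return / Refund", "Loyalty"]

# (business function, journey stage) -> classification
MATRIX = {
    ("Customer Communication", "Awareness"): "AI",
    ("Customer Communication", "Support"): "Guided",
    ("Pricing & Eligibility", "Support"): "Supported",
    ("Department Authority Layer", "Loyalty"): "Human",
}

def classify(function: str, stage: str) -> str:
    """The diagnostic question is not 'who owns this task' but
    'where does it sit in the journey, and what does the function
    uniquely contribute there?'"""
    return MATRIX.get((function, stage), "unclassified")

print(classify("Customer Communication", "Support"))  # Guided
```

The point of the structure is that the org chart never appears in the key: ownership is a property of the cell, not the index.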

The Business Functions on the Vertical Axis

These are the nine layers assessed in the matrix below. Each is present, to some degree, at every stage of the customer journey:

- Customer Communication
- SOP / Procedure Execution
- Personalization & Recommendations
- Pricing & Eligibility
- Escalation & Complaint Handling
- Cross-department Coordination
- Process Improvement
- Department Authority Layer
- AI Oversight & Control

How to Read the Matrix

Each cell combines a classification with a concrete example of what that looks like at that journey stage. The four classifications:

- AI: runs autonomously, with no human in the loop
- Guided: AI produces the output, a human reviews it before it takes effect
- Supported: AI assembles context or drafts, a human makes the decision and owns the outcome
- Human: the work stays with people entirely

One distinction worth making explicit before reading the table. Customer Service refers to the operational team handling tickets, complaints, refunds and direct interactions - the people in the queue. CX refers to the broader Customer Experience discipline: the function that owns journey design, loyalty strategy, quality standards and the end-to-end customer relationship. In many organizations these sit inside the same department. Strategically, they are doing different work - and the matrix treats them differently.

| Business Function | Awareness | Consideration | Purchase | Post-purchase | Support | Return / Refund | Loyalty |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Stage Owner | Marketing + Product | Product + Marketing | Product + Revenue | Ops + Product | Customer Service + Product | Customer Service + Finance | CX + Marketing |
| Customer Communication | AI: Campaign reply handling | AI: Product Q&A chat responses | AI: Order confirmation & receipt | AI: Shipping updates & delays | Guided: Draft reply, agent reviews & sends | Guided: Empathetic message, reviewed before send | AI: Re-engagement sequences |
| SOP / Procedure Execution | AI: GDPR consent & cookie logic | AI: Inventory availability check | AI: Address validation, payment retry | AI: Tracking dispatch & status sync | AI: Ticket classification & routing | AI: Refund eligibility check | AI: Points balance calculation |
| Personalization & Recommendations | AI: Lookalike audience targeting | AI: Browse-based product recommendations | AI: Upsell & cross-sell at checkout | AI: Next purchase suggestions | Guided: Retention offer based on history, reviewed | Guided: Recovery offer, agent approves before send | AI: Tier-based reward messaging |
| Pricing & Eligibility | Guided: Promo rules applied, Revenue reviews | Guided: Dynamic pricing display, approved | Guided: Discount eligibility, human confirms | Supported: Price adjustment claim, agent decides | Supported: Goodwill credit amount, human approves | Guided: Refund amount calculated, reviewed | Guided: Reward tier threshold applied |
| Escalation & Complaint Handling | AI: Spam / abuse auto-filter | AI: FAQ deflection before agent queue | Guided: Payment failure escalation path | Guided: Delay complaint routed, agent reviews | Supported: Complex case summary, agent owns outcome | Supported: Dispute context compiled, human leads | Guided: Churn risk flagged, retention team reviews |
| Cross-department Coordination | Supported: Campaign brief shared across teams | Supported: Product & marketing handoff summary | Supported: Order exception alert to Ops | Supported: Fulfillment delay flagged to Customer Service | Supported: Case context passed to Finance | Supported: Return status synced across systems | Supported: Segment data shared with Product |
| Process Improvement | Supported: Drop-off pattern surfaced | Supported: Conversion gap flagged | Supported: Checkout friction identified | Supported: Delay root cause report generated | Supported: Resolution time analysis produced | Supported: Return reason clustering | Supported: Churn predictor model output |
| Department Authority Layer | Human: Brand positioning decisions | Human: Assortment & merchandising strategy | Human: Commercial terms & pricing strategy | Human: Carrier & SLA commitments | Human: Policy exceptions & claims judgment | Human: Dispute resolution authority | Human: Loyalty program design & rules |
| AI Oversight & Control | Human: Campaign AI prompt governance | Human: Recommendation model review | Human: Checkout logic accountability | Human: Fulfillment AI audit ownership | Human: Supervisor flag review & response | Human: Refund logic sign-off | Human: Retention AI decision rules |

Classifications are indicative. Exact placement depends on industry, regulatory context, and risk appetite.

Why the Top Rows Are So Green

Customer Communication and SOP Execution are AI across almost the entire journey. It is worth being precise about why - because it is not simply that AI is "good at" these tasks in some general sense. It is that the skills these roles have historically required are exactly what large language models are architecturally built around.

Think about what a strong customer communication role historically needed:

- sustained attention across hundreds of near-identical messages
- a consistent tone and register, message after message
- fluency across every language the customer base speaks
- reliable recall of current procedures, including the policy that changed last week

These are not human strengths. They are human constraints that organizations built processes around. AI does not just replicate them - it removes the constraints entirely. A model does not get tired at message 400. It does not lose consistency across languages. It does not miss a procedure because it was on holiday last week when the policy changed.

The green cells are green not because the work is unimportant but because the characteristics that made humans necessary for this work are no longer the binding constraint. Keeping people in those roles is not a quality decision - it is a habit carried over from a different set of technical conditions.

The Columns That Should Make You Uncomfortable

Look at the Support and Return columns. They carry more Guided and Supported cells than any other stage. That is not coincidental. These are the moments where a customer has already had something go wrong - where the emotional stakes are higher, the liability exposure is more direct, and the consequences of an AI error are not an inconvenience but relationship damage or legal exposure.

The journey does not just tell you what functions AI can handle. It tells you where the risk profile changes, and therefore where unsupervised automation needs to give way to human judgment. The org chart never showed you that. It showed you who owned the function - not what the function cost when it went wrong at a specific point in the customer experience.
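One way to make the column pattern concrete is to count human-in-the-loop cells per stage. A minimal sketch, assuming an abbreviated encoding of a few matrix rows (A = AI, G = Guided, S = Supported) - the rows shown are an excerpt, not the full matrix:

```python
# Illustrative sketch: where does the risk profile change? Count the
# Guided + Supported cells per journey stage; higher counts mark the
# stages where unsupervised automation gives way to human judgment.

STAGES = ["Awareness", "Consideration", "Purchase", "Post-purchase",
          "Support", "Return / Refund", "Loyalty"]

ROWS = {
    "Customer Communication":    ["A", "A", "A", "A", "G", "G", "A"],
    "SOP / Procedure Execution": ["A", "A", "A", "A", "A", "A", "A"],
    "Personalization":           ["A", "A", "A", "A", "G", "G", "A"],
    "Pricing & Eligibility":     ["G", "G", "G", "S", "S", "G", "G"],
    "Escalation & Complaints":   ["A", "A", "G", "G", "S", "S", "G"],
}

# Per-stage count of cells that require a human in the loop.
oversight_load = {
    stage: sum(row[i] in ("G", "S") for row in ROWS.values())
    for i, stage in enumerate(STAGES)
}
print(oversight_load)  # Support and Return / Refund carry the highest counts
```

Even on this excerpt, the Support and Return columns dominate - the same pattern the full matrix shows.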

The Bottom Two Rows Do Not Move

Department Authority Layer and AI Oversight are Human across the entire journey. This is not caution. It is the logical end of the same reasoning that makes the top rows green.

If AI absorbs all execution work from a Revenue team - eligibility checks, discount logic, pricing display - what remains is the work that defines what a Revenue function is for: deciding where to hold margin, where to sacrifice it, how to respond to competitive pressure. That judgment cannot be delegated to an inference point without the organization losing its ability to make deliberate commercial decisions. The function no longer exists as a strategic asset - it becomes a governance wrapper around machine outputs.

The same logic applies to AI Oversight. As the green area grows, so does the need for humans whose job is not to execute within the AI layer, but to own its behavior - setting decision rules, reviewing supervisor flags, absorbing accountability when automated systems produce consequential outputs at scale. That role does not disappear as AI matures. It becomes more important.

Where This Gets Tricky

The Consistency Supervisor sits at the handoff points the matrix makes visible - specifically the transitions between AI and Guided cells, and between Guided and Supported ones. Each of those transitions is a potential context collapse: a point where one inference point hands off to another and semantic state either carries forward or restarts from scratch. Part 3 goes into how to build and operate the supervisor at those exact points. For now the important observation is this: the journey map tells you precisely where to deploy it. You do not need to monitor everything - you need to monitor the transitions.
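Those transition points can be read mechanically off the matrix. A minimal sketch, assuming an abbreviated encoding of two matrix rows (A = AI, G = Guided, S = Supported): it yields every adjacent-stage pair where a function's classification shifts, which is where a supervisor would watch for context collapse.

```python
# Illustrative sketch: locate the handoff points - adjacent journey stages
# where a function's classification changes (e.g. AI -> Guided). These are
# the transitions to monitor; everything else can run unwatched.

STAGES = ["Awareness", "Consideration", "Purchase", "Post-purchase",
          "Support", "Return / Refund", "Loyalty"]

ROWS = {
    "Customer Communication":  ["A", "A", "A", "A", "G", "G", "A"],
    "Escalation & Complaints": ["A", "A", "G", "G", "S", "S", "G"],
}

def transitions(rows, stages):
    """Yield (function, from_stage, to_stage, shift) wherever the
    classification changes between adjacent stages."""
    for func, row in rows.items():
        for i in range(len(row) - 1):
            if row[i] != row[i + 1]:
                yield (func, stages[i], stages[i + 1], f"{row[i]}->{row[i + 1]}")

for t in transitions(ROWS, STAGES):
    print(t)
```

On this excerpt, Customer Communication produces two monitoring points (entering and leaving the Support/Return stretch) and Escalation & Complaints produces three - a short, finite list rather than blanket surveillance.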

What the Framework Is Actually Asking

Every department has a version of the same question to answer: of all the work we currently do, what is the work that makes us irreplaceable? What would be lost - not in headcount terms, but in value terms - if everything else were handled by AI?

The matrix is a tool for making that question concrete rather than philosophical. When you plot your functions against the journey and start classifying cells, you will find that the Human area is smaller than most departments expect, and the AI area is larger than most leaders are comfortable admitting. That discomfort is worth sitting with. It is pointing at the gap between the structure the organization has and the structure the technology now makes possible.

What sits in the Human cells - Department Authority Layer and AI Oversight - is not overhead. It is the core. Everything above it, in the green and yellow cells, is the overhead that has historically been inseparable from it. AI is separating them. The organizations that recognize that early are the ones that will redesign around their actual value rather than defend the structure that used to contain it.

What Part 3 Covers

The matrix answers where AI should operate and where it should not. Part 3 addresses what happens once it is running across all those cells simultaneously - how the Consistency Supervisor threads through the transition points, how to govern the Guided cells without creating a human bottleneck that defeats the purpose of automation, and how to detect when the architecture is quietly degrading before the damage reaches the customer.

Part 2 of a three-part series. Read Part 1 here or the full series overview here.
