Was this newsletter forwarded to you? Sign up to get it in your inbox.

#21 | Security | 8 min

The client you expose

Your clients aren't yet asking how you handle their data inside ChatGPT. They will. Here's the answer to have ready before they do.

If I had seen sooner how casually AI adoption erodes consultant-client trust, I would have published this edition six months ago. What tipped me over was a conversation with a peer who proudly showed me his ChatGPT workflow: three clients sharing a single thread. He thought he had solved the problem because his account was on Plus. This edition documents the five blind spots and the minimal protocol that closes them.

Francis Beaulieu

Why this matters to you right now

On March 9, 2026, CodeWall announced that its autonomous AI agent had compromised McKinsey's internal platform, Lilli, in under two hours for $20 in API usage. According to The Register, the agent obtained full read and write access to 46.5 million messages covering strategy, M&A, and client engagement work, 728,000 confidential files, and 57,000 user accounts. The vulnerability came from 22 unauthenticated API endpoints and a SQL injection via an unsanitized JSON field. Not a solo consultant caught slipping. The firm that symbolizes global consulting itself.

The tipping point has passed. Most consultants look at AI through an Instagram and YouTube lens, where everything seems effortless. But public demos are proofs of concept. Enterprise use is a different animal. The question of what happens to the data flowing behind the scenes is rarely addressed, or addressed only in the most superficial terms. That gap creates a false sense of confidence in consumer-grade tools (free, Plus, and Pro plans) whose security posture has yet to be demonstrated for professional use with client data. The 2026 GitGuardian Secrets Sprawl Report, published three days ago, documents 29 million secrets leaked in 2025 and identifies AI agent credentials as the least controlled category.

If your practice touches client data and you use AI without a written protocol, you don't have a productivity advantage. You have a hidden debt compounding with every prompt. The day a client discovers it, the trust that holds your practice together collapses at once.

Pricing: documented confidentiality as a price lever

The action: Document your AI hygiene protocol on one page and include it explicitly in every proposal from now on. Not as a legal appendix. In a section titled "Data governance and AI tool usage," at the same level of prominence as deliverables and timelines.

Why it works: Blair Enns, in Pricing Creativity, argues that the price of a professional service reflects what the client is reassured to buy, not what they consume. In 2026, the assurance that a consultant isn't leaking data has become a component of the value proposition. Because it's rarely articulated, it isn't monetized. The consultant who documents it turns it into a pricing justification. The 2025 Source Global Research report already showed that mature clients pay a premium for documented peace of mind. In AI security, that premium is accelerating.

The trap: The marketing protocol without substance. If you write "we respect confidentiality" without specifying how, you join the noise. Worse: if you sell discipline and don't deliver it, the first client who asks to see your ChatGPT configuration exposes the bluff. The rule: never promise in a proposal a practice you aren't already executing in your daily workflow.

This week: Open your proposal template. Add a "Data governance and AI tool usage" section with five operational commitments condensed from the seven guardrails in the AI section below (the long version is your internal reference, the short version fits in the proposal). Use it in your next proposal.

Sales and business development: preempt the question your prospect hasn't asked yet

The action: In your next discovery call, before getting into the prospect's situation, run a 90-second sequence: "Before we dig into your context, let me show you how I'll handle your data." Then present your protocol in three simple points: data isolated per engagement, anonymization before each prompt, no training on your data.

Why now: David Maister, in The Trusted Advisor, points out that the party who opens the conversation about trust controls the frame of the relationship. Bruce Schneier, in Data and Goliath, shows that in any professional relationship with high information asymmetry, the practitioner who makes the invisible visible takes the ethical high ground. By talking about data before the client does, you flip the dynamic. You're no longer a vendor to be questioned. You're an advisor who has already thought through the problem.

The hidden benefit: Prospects who have already been burned (by another consultant, by an internal employee, by their own vendor experience) choose you immediately at that sentence. The sequence works as a filter. It attracts serious clients and repels the ones who would have been problematic. As in edition #19 on prospecting, demonstration converts, not argument. While drafting this edition, I tested the 90-second script with three peers. All three said the same thing: they had never thought of it as a commercial argument.

This week: Fifteen minutes today to write the script and read it aloud three times. Use it in your next discovery meeting. Watch the reaction. Attentive silence or a follow-up question. That's your signal.

Collaboration networks: the blind spot of third-party tools

The action: Map every tool that touches your client data without your having explicitly decided so. Four categories to audit before next Friday: (1) meeting transcribers (Otter, Fireflies, Read.ai, Fathom, Granola), (2) email assistants (Superhuman AI, Fyxer, Shortwave AI), (3) browser extensions (Monica, MaxAI, Merlin, Harpa), (4) note-taking and productivity tools (Notion AI, Mem, Obsidian with AI plugins). For each: signed data processing agreement, training opt-out enabled, or outright deactivation.

The mechanism: Helen Nissenbaum, in Privacy in Context, formalized the concept of contextual integrity. Data captured in one context (a confidential client meeting) flows into another context: a third-party AI vendor's servers, sometimes its training pipeline. Contextual integrity is violated, even with no malicious intent. The consultant who "doesn't paste anything into ChatGPT" but lets Otter transcribe their client meetings into an opaque pipeline commits exactly the same kind of breach. They simply don't see it.

The format: A four-column inventory table. Tool name, type of data touched (transcripts, emails, documents, full screen), agreement signed yes or no, required action. This 45-minute exercise typically surfaces six to ten tools you had activated without thinking. There's also the client's own shadow AI: even if you're impeccable, your sponsor might paste your deliverable into their personal ChatGPT to summarize it for their team. Client education is part of the engagement. One line in your final report is enough.

This week: Block 45 minutes on Friday to fill the inventory table. You'll find at least three tools you hadn't thought of.
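For illustration, here is what the first rows of a filled-in table might look like. The entries are hypothetical examples built from the tools named above, not an assessment of any vendor; verify each tool's actual settings and agreements yourself.

  Tool               | Data touched        | Agreement signed | Required action
  Otter              | Meeting transcripts | No               | Enable training opt-out or deactivate
  Notion AI          | Notes, documents    | Yes              | Keep; revalidate at quarterly audit
  Monica (extension) | Full screen         | No               | Deactivate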

Value creation: discipline as intellectual property

The action: Turn your AI hygiene protocol into a named, documented component of your proprietary methodology. Give it a name (Trust Protocol, HYGIE-6 Framework, Client Data Charter). Position it at the same level as your other diagnostic frameworks, inside your deliverables and on your site.

Why it changes everything: David C. Baker, in The Business of Expertise, documents that a consulting practice without codified intellectual property has almost no resale value, while a practice with codified IP commands a significant multiple of annual revenue. Documented confidentiality is a new frontier of codification. See also edition #6 on methodology as product: what competitors see as friction, you sell as a value component.

The contrarian turn: Carissa Véliz, in Privacy Is Power, argues that confidentiality isn't a burden but a form of structural power that disciplined professionals accumulate silently. The Deloitte Australia case (Fortune, October 2025) illustrates the opposite. A government report worth about AU$440,000 (roughly US$290,000) delivered with AI-hallucinated citations and no disclosure to the client that AI had been used. The partial refund wasn't the real cost. The real cost was the posture it lost. A giant that confused "we use AI" with "we use it correctly."

The test: If a prospect asks "tell me about your approach to client data inside your AI workflows" and you answer from memory, you don't have a protocol. You have an intention. The operational test: can you email them the PDF tonight? I wrote the first version of my own document in an hour, using the skeleton described further down. Not perfect. Not exhaustive. But a PDF, sendable. That's what matters at version 1.

This week: Write version 1 of your protocol on one page. A reusable skeleton follows; 30 minutes of personalization is enough:

  • Opening paragraph: your commitment in three sentences ("Client data confidentiality is a precondition of our relationship. Here is how we protect it in our AI workflows. This protocol is reviewed quarterly.").
  • Five operational commitments: isolation per engagement; systematic anonymization before each prompt; no training on your data; data processing agreements with our third-party AI vendors; documented quarterly audit.
  • Signature line: your name, the date, a version number.

Change the practice name and the date. Export as PDF. That's version 1.

AI: the minimal viable protocol in seven guardrails

The action: Deploy this protocol, ordered by impact-over-effort ratio. It takes 5 to 10 hours of initial setup, then costs 30 seconds of friction per prompt. Exactly the marginal cost that the "guardrails kill my productivity" objection imagines to be far bigger than it is.

1. Account configuration. Turn off training and persistent memory on every AI account you use (ChatGPT, Claude, Gemini, Perplexity, Copilot). On free, Plus, and Pro plans, both are often on by default. Time: 20 minutes total.

2. One workspace per engagement. One Claude Project, one Custom GPT, or one separate workspace per client. Never a thread shared between two clients. The 46.5 million messages in the McKinsey case sat in a single database, with no isolation per engagement. As documented in edition #20 on client follow-up, the project-per-client setup is already a retention tool. It becomes your first confidentiality guardrail too.

3. Systematic anonymization before prompting. A substitution template: names replaced with roles, figures with ranges, locations with regions. A browser extension or a three-keystroke macro is enough (a minimal sketch follows this list). Time per prompt: 30 seconds.

4. Data processing agreements with your AI vendors. Transcribers, email assistants, extensions. No agreement: deactivation. Not negotiable.

5. Audit of extensions and email assistants. The inventory covered above. Disable anything that fails the audit.

6. Client education. A one-page appendix in every engagement. What I do with your data. What you should not do with my deliverables inside public AI tools. Protects the client from themselves and protects your reputation.

7. Quarterly audit. Thirty minutes per quarter to review your conversations, purge, revalidate configurations, and check new tools you adopted without thinking about security posture. Put it on the calendar now.
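To make guardrail 3 concrete, here is a minimal Python sketch of the substitution step. Everything in it is a hypothetical example (the mapping, the names, the figures), and a browser extension or text-expander macro does the same job with less friction. The point is only that the mechanics are trivial.

```python
import re

# Hypothetical per-engagement substitution map: real term -> neutral placeholder.
# Build one per client; never reuse it across engagements.
SUBSTITUTIONS = {
    "Acme Corp": "[CLIENT]",   # company name -> role
    "Jane Doe": "[SPONSOR]",   # person -> role
    "Montreal": "[REGION]",    # location -> region
}

# Figures -> ranges: a blunt rule that masks exact dollar amounts.
MONEY = re.compile(r"\$[\d,]+(?:\.\d+)?\s?[MKBmkb]?")

def anonymize(prompt: str) -> str:
    """Apply the substitution map, then mask money figures, before a prompt leaves your machine."""
    for term, placeholder in SUBSTITUTIONS.items():
        prompt = prompt.replace(term, placeholder)
    return MONEY.sub("[AMOUNT]", prompt)

if __name__ == "__main__":
    raw = "Acme Corp's Montreal plant lost $1.2M last quarter, per Jane Doe."
    print(anonymize(raw))
    # -> [CLIENT]'s [REGION] plant lost [AMOUNT] last quarter, per [SPONSOR].
```

The 30 seconds per prompt is the time it takes to reread the masked output before sending it, not to run the substitution.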

Simon Willison, at simonwillison.net, continuously documents prompt-injection and exfiltration vectors, especially in third-party integrations. Ethan Mollick, in Co-Intelligence, notes that AI discipline doesn't slow expert practitioners. It speeds them up by removing the constant second-guessing about what they can and can't put into a prompt.

The warning: Enterprise plans (ChatGPT Enterprise, Claude for Teams or Enterprise, Gemini Enterprise) reduce the risk but don't eliminate it. The McKinsey case proves that a sophisticated internal platform can also leak: 22 unauthenticated API endpoints and a SQL injection via an unsanitized JSON field name were enough. The responsibility remains yours, regardless of the vendor. This edition complements the positive angle opened by edition #9 on AI as infrastructure and edition #14 on AI for senior consultants: adopting AI without discipline leads to a broken trajectory.

I did step 1 while writing this edition. Twenty-two minutes. Nothing more. And the odd feeling, at the end, of having closed a window I had left open by default for months.

This week: Start a 30-minute timer today. Do step 1: turn off training on every AI account you use. It's the easiest guardrail, it moves the needle the most, and it's the one most consultants have never done.

Like what you read? Get this in your inbox every Tuesday.
