AI Governance Worksheet.
A practical checklist for deciding whether an AI chatbot, workflow, feature, or agent is ready to launch. No scoring. No risk tiers. Just the things you need before you ship.

- 6 hard launch blockers
- 7 must-have checklists
- 3-way launch decision
You can launch when you can answer “yes” to every must-have — not when you hit a score.
Ready to launch doesn’t mean risk-free. It means governable.
Two steps. No math.
Walk the blockers first. If any are true, you stop. If none are, walk the must-haves. The unchecked boxes are your gaps.
01. Check the hard blockers
Six conditions that mean don't launch — full stop. No score buys you out of one of these.
02. Walk the must-haves
Seven groups of things you need to be able to say "yes" to. Anything you can't is a gap to close before launch.
Do not launch if any of these are true.
Any one of these is enough on its own. Not launching today doesn’t mean the use case is bad — it means it isn’t governable yet.
No named owner
No business or product owner. No technical owner. No escalation owner.
Unknown data access
You don't know what data the AI touches, where it goes, or whether it's retained or used for training.
Sensitive data, unclear vendor terms
Customer, PHI, financial, legal, or confidential data is involved — but the vendor's terms are unclear.
Auto-action in a high-impact workflow
AI sends external messages, issues refunds, changes records, approves or denies requests, moves money, or touches regulated data — without controls.
No audit trail
You can't see what the AI did, what it accessed, or what output it produced.
No rollback or escalation path
Nobody knows how to stop it, escalate failures, or handle incidents.
Seven groups. Every box must be checked.
Walk each group with the team. Anything you can’t check is a specific, concrete gap to close — not a number to chase.
Data access
Before you launch, you need to be able to say exactly what data flows in, where it lands, and what happens to it.
- You know what data the AI reads from
- You know what data the AI writes to
- You know whether any of it is customer, confidential, or regulated
- You know whether the data is retained, and for how long
- You know whether your data is used to train the vendor's models
- User-level permissions are respected end-to-end
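"Respected end-to-end" means the filtering happens before any text reaches the model, not after. A minimal sketch of that idea — the `acl` field and permission sets are hypothetical, not part of the worksheet:

```python
def fetch_context(docs: list[dict], user_perms: set[str]) -> list[dict]:
    """Return only the documents the requesting user may read.

    Each doc carries an ``acl`` set of required permissions (hypothetical
    schema). Filtering happens BEFORE the text reaches the model, so the
    AI can never quote data the user couldn't access directly.
    """
    return [d for d in docs if d["acl"] <= user_perms]
```

The point of the design: if the model never sees the document, no prompt trick can leak it.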
Vendor and data terms
The vendor's contract decides what they can do with your data. Read it before you ship, not after.
- The vendor is approved (or there's a clear approval path)
- Training-on-your-data terms are understood
- Retention terms are understood (or zero-retention is enabled)
- BAA, DPA, or equivalent is in place if the data requires it
- Security review is complete
Ownership
Every AI use case needs named humans. If something goes wrong, you should already know who picks up the phone.
- A business or product owner is named
- A technical owner is named
- A risk, legal, or security owner is named (for sensitive use cases)
- An escalation owner is named for incidents
Human controls
Decide what the AI can do on its own and what needs a human in the loop. Be explicit.
- It's clear whether the AI suggests, drafts, queues, or acts
- Any auto-action is narrow in scope and reversible
- A human approval gate exists for high-impact actions
- Exceptions and edge cases route to a human
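"Suggests, drafts, queues, or acts" can be made explicit in a routing rule. A minimal sketch, assuming illustrative action names and tiers (none of these come from the worksheet):

```python
from dataclasses import dataclass, field

# Hypothetical action tiers -- illustrative only.
AUTO_ALLOWED = {"draft_reply", "tag_ticket"}     # narrow in scope, reversible
NEEDS_APPROVAL = {"send_email", "issue_refund"}  # high-impact: human gate

@dataclass
class ProposedAction:
    name: str
    payload: dict = field(default_factory=dict)

def route(action: ProposedAction, human_approved: bool = False) -> str:
    """Decide whether a proposed AI action executes, queues, or escalates."""
    if action.name in AUTO_ALLOWED:
        return "execute"  # AI may act on its own
    if action.name in NEEDS_APPROVAL:
        return "execute" if human_approved else "queue_for_approval"
    return "escalate_to_human"  # unknown or edge case: default to a human
```

Note the default branch: anything not explicitly allowed routes to a person, which is the "exceptions and edge cases" box above expressed in code.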
Monitoring
You can't manage what you can't see. Decide how you'll know it's working — and how you'll know when it isn't.
- "Working" is defined in concrete terms (accuracy, refusal rate, latency, etc.)
- Outputs were tested before launch
- Inputs and outputs are logged
- There's a feedback loop for users to flag bad output
- A review cadence is set (weekly, monthly, etc.)
Incident response and rollback
Plan for failure before launch. The middle of an incident is a bad time to design the response.
- Failure modes are documented
- There's a named on-call or escalation owner
- A pause or kill switch exists
- A rollback path exists
- Incidents are logged and reviewed
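A "pause or kill switch" does not need to be elaborate — a process-wide flag that the escalation owner can flip, checked before every AI call, is enough. A minimal sketch (the class and its API are illustrative, not a prescribed implementation):

```python
import threading

class KillSwitch:
    """A process-wide pause flag the escalation owner can flip."""

    def __init__(self):
        self._paused = threading.Event()
        self._reason = ""

    def pause(self, reason: str):
        """Stop all AI actions, recording why for the incident log."""
        self._reason = reason
        self._paused.set()

    def resume(self):
        self._paused.clear()

    def check(self):
        """Call before every AI action; raises while paused."""
        if self._paused.is_set():
            raise RuntimeError(f"AI feature paused: {self._reason}")
```

Testing that the switch actually stops the feature belongs in the pre-launch checks above, not in the middle of an incident.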
Policy and accountability
Tie the use case back to your company's AI policy. If you don't have one yet, this is the moment.
- The use case is classified under your AI policy
- It's allowed under that policy (with documented restrictions if any)
- Residual risk is formally accepted by the business owner
Three outcomes. Pick one — honestly.
Ready to launch
Every must-have is checked. No hard blocker is true.
Needs more controls
Solid use case. Specific boxes are still unchecked. Close those, then re-walk.
Do not launch yet
A hard blocker is true. The use case isn't governable in its current form.
Ready to launch means understood, bounded, owned, controlled, monitored — and reversible. Not risk-free. Governable.
Bring a use case. We’ll walk it with you.
A 30-minute Roadmap Call — we’ll run your AI use case through the worksheet live and leave you with a clear next move.