How to use this checklist
The questions below are organized by six dimensions of readiness. Score yourself honestly on each line. The output is not pass/fail. It is a map of where the workflow is ready and where it is not.
A workflow that scores low on most dimensions is not "bad." It is not yet ready to automate. That is a useful conclusion. It tells you what to fix before you spend money on a tool, not after.
A workflow that scores high on most dimensions is a strong candidate for a 30-day sprint. The rest of this article walks through how to interpret each dimension and what to do at each readiness level.
The cost of automating an unready workflow is not just the tool. It is the credibility cost of an AI rollout that does not produce results, which makes the next rollout harder.
Dimension 1: Workflow clarity
This is the most fundamental dimension: if the workflow is not clear, nothing else matters.
- Can you describe the workflow in writing in fifteen minutes?
- Are inputs, outputs, owners, and approval points named?
- Would two members of your team describe the workflow the same way independently?
- Are exceptions documented, or are they handled ad hoc?
- Is there a single human responsible for the workflow today?
If the answer to most of these is no, the workflow is not yet documented. Adding AI to an undocumented workflow makes the undocumented parts run faster. It does not produce operational improvement.
What to do if you score low here: Spend ninety minutes on a whiteboard with the team that runs the workflow. Document inputs, outputs, owners, and the three most common exceptions. That alone is often the highest-leverage ninety minutes you will spend this quarter.
Dimension 2: Data availability
AI cannot work without inputs, and inputs are only as good as the data behind them: data that exists, is accessible, and is trustworthy.
- Is the data the workflow depends on stored in tools you control?
- Do you have admin access to those tools, including the ability to export?
- Is the data accurate enough today to be trusted by an automated step?
- Do you have a sample of twenty real, recent cases you could use for validation?
- Are there data fields the workflow depends on that are inconsistent across your records?
The most common failure mode here is not "we have no data." It is "we have data, but the fields are inconsistent." A CRM where Lead Source is sometimes a dropdown value and sometimes a free-text note will produce AI output that looks good and is silently wrong.
What to do if you score low here: Pick the three fields the workflow depends on most. Audit them across the last 200 records. Standardize. Then come back to the AI conversation.
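A minimal sketch of what that audit can look like, assuming the records can be exported to CSV. The file name, field names, and column layout below are placeholders, not a real schema; substitute your own:

```python
# Audit a CRM export for inconsistent field values.
# "crm_export.csv" and the field names are illustrative placeholders.
import csv
from collections import Counter

FIELDS_TO_AUDIT = ["lead_source", "status", "region"]

with open("crm_export.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))[-200:]  # the last 200 records

for field in FIELDS_TO_AUDIT:
    # Normalize lightly so "Referral" and "referral " count as one value.
    values = Counter((row.get(field) or "").strip().lower() for row in rows)
    print(f"\n{field}: {len(values)} distinct values across {len(rows)} records")
    for value, count in values.most_common(10):
        print(f"  {count:4d}  {value or '(blank)'}")
```

A healthy dropdown field shows a handful of distinct values. A field that shows dozens of near-duplicates ("referral", "web referral", "ref - site") is exactly the inconsistency this dimension is asking about.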
Dimension 3: Repetition and bounded variation
AI is most useful where the work is repetitive enough that small per-task improvements compound, and where the variation between cases is bounded enough that the model has something to learn from.
- Does this workflow happen often enough that a small improvement compounds? (Daily or weekly volume is a good signal.)
- Is the variation between cases bounded? (A small set of categories, not infinite uniqueness.)
- Are there obvious patterns the existing team already uses to handle different categories?
- Is the workflow stable, or has it changed substantially in the last six months?
Workflows that score high on repetition and have bounded variation are excellent first candidates. Workflows where every case is genuinely different (high-end consulting, bespoke contract negotiation, novel medical cases) are usually not first candidates.
What to do if you score low here: Look for a sub-workflow inside the larger one. The larger work may be unique; specific sub-tasks (drafting the first version of a section, summarizing a meeting, classifying inbound communication) are often highly repetitive.
Dimension 4: Tool maturity
The systems your workflow already runs on need to be stable enough to integrate with.
- Are the systems you use stable enough to integrate? (They have APIs, exports, or at least consistent web behavior.)
- Do you have admin access to the tools required?
- Have your existing tools been in place long enough that staff use them consistently?
- Are there systems-of-record conflicts, where the same data lives in two tools and people disagree about which one is correct?
Tool maturity is often the silent blocker. A business where staff prefers email over the CRM will see AI workflow value erode quickly because the data the AI needs is in email, not the CRM.
What to do if you score low here: Resolve the systems-of-record question before the AI question. Pick the one source of truth for each entity (lead, customer, quote, invoice). Make staff use it. Then come back.
Dimension 5: Staff adoption
AI workflows are adopted by humans or they are not adopted at all.
- Will the team that owns the workflow actually use the new system?
- Are there explicit objections to address before launch?
- Is there a staff member who will champion the rollout, or is leadership the only advocate?
- Is the team currently overloaded to the point where any new tool will be ignored?
- Are there past failed tool rollouts that staff remembers?
Past failed rollouts matter. If staff has seen three CRM migrations that did not stick, they will treat the AI rollout as a fourth one. The validation work has to include staff confidence-building, not just output accuracy.
What to do if you score low here: Before any tool selection, surface the objections. Ask the team what would have to be true for them to use a new system. Address those things explicitly. The cost of doing this up front is much lower than the cost of a rollout that fails for adoption reasons.
Dimension 6: Validation feasibility
This is the dimension most teams skip. It is also the dimension that determines whether the rollout produces value or new failure modes.
- Can you define what acceptable output looks like, in writing, in advance?
- Can you collect twenty known-good cases to compare AI output against?
- Is there a human approval point for any high-risk step?
- Can you measure one quantitative metric that tells you the workflow is still working a month later?
- Do you have a re-validation trigger, a defined event that prompts a fresh validation pass? (A sketch of one way to do this follows this list.)
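One way to make the last two questions concrete is a small weekly check: compute a single metric from logged outcomes and trigger a fresh validation pass when it drifts past a threshold. A minimal sketch, assuming you log whether a human had to edit each AI draft; the metric, baseline, and tolerance are illustrative, not prescriptive:

```python
# Weekly re-validation trigger: flag a fresh validation pass when the
# human-edit rate on AI drafts drifts above an assumed baseline.
def needs_revalidation(edited_flags: list[bool],
                       baseline_rate: float = 0.10,
                       tolerance: float = 0.05) -> bool:
    """edited_flags: one True per draft a human had to edit this week."""
    if not edited_flags:
        return True  # no data is itself a re-validation trigger
    edit_rate = sum(edited_flags) / len(edited_flags)
    return edit_rate > baseline_rate + tolerance

# Example: 4 edited drafts out of 20 is a 20% edit rate, above the 15% line.
print(needs_revalidation([True] * 4 + [False] * 16))  # True
```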
Validation feasibility correlates strongly with implementation success. We have not seen a workflow score high here and fail. We have seen many score low, and every one produced predictable rollout problems.
What to do if you score low here: Write the acceptance criteria first. "Reply addresses the customer by name, references the specific product, includes our standard refund disclosure, and contains no claims about pricing" is an acceptance criterion. "Reply sounds professional" is not. If you cannot write the criteria, the workflow is not yet ready.
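Criteria written at that level of concreteness can often be turned into mechanical checks and run against the twenty known-good cases from the earlier question. A sketch of the idea; every rule, name, and the disclosure string below are invented for illustration, not taken from a real workflow:

```python
# Turn written acceptance criteria into mechanical checks.
# The rules, names, and disclosure text are illustrative assumptions.
import re

REFUND_DISCLOSURE = "Refunds are available within 30 days of purchase."

def check_reply(reply: str, customer_name: str, product: str) -> dict[str, bool]:
    return {
        "addresses customer by name": customer_name in reply,
        "references the specific product": product.lower() in reply.lower(),
        "includes standard refund disclosure": REFUND_DISCLOSURE in reply,
        "makes no pricing claims": not re.search(r"\$\d|price|discount", reply, re.IGNORECASE),
    }

draft = ("Hi Dana, thanks for reaching out about the Model X widget. "
         "Refunds are available within 30 days of purchase.")
for rule, passed in check_reply(draft, "Dana", "Model X widget").items():
    print(f"{'PASS' if passed else 'FAIL'}  {rule}")
```

A criterion that cannot be checked this way, by a script or by a reviewer holding the written rule, is a sign it is still too vague to validate against.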
Scoring guidance
There is no formal score. The pattern is what matters.
- High on all six dimensions: Strong candidate for a 30-day sprint. Pick the bottleneck and start.
- High on workflow, data, and validation, low on adoption: Solve the adoption problem first. AI is downstream of trust.
- High on most, low on data: Audit the three fields the workflow depends on. Standardize. Then come back.
- Low on workflow clarity: This is not yet an AI conversation. It is a documentation conversation.
- Low on validation feasibility: Either write the acceptance criteria, or recognize that this workflow is too judgment-heavy to automate today. Both are honest answers.
What this checklist is not
It is not a maturity model dressed up to sell a longer engagement. It is a forcing function for honesty about whether a specific workflow is ready.
It is also not a one-time exercise. Workflow readiness changes. A workflow that was not ready in February might be ready in November because the team has matured, the data has been cleaned up, or the bottleneck has moved. Re-running the checklist quarterly on candidate workflows is a reasonable practice.
Where Geaux Digital Media fits
The structured version of this checklist is the AI Workflow Review. We score the workflow against the same six dimensions, identify the bottleneck, and recommend a practical first step, or tell you plainly that the workflow is not yet ready to automate.
If you score yourself honestly above and find the workflow is not yet ready, the right move is usually not to hire someone to push through. It is to fix the underlying readiness gap (documentation, data, adoption) and come back. The AI rollout will work better when the workflow is ready, and the readiness work pays dividends regardless of whether AI ever gets added.
See the full implementation process for how readiness flows into a 30-day sprint, validation, and a scale-or-stop decision.
Brent Dorsey is the founder of Geaux Digital Media and a Senior Systems & Software Engineer with 20+ years across Marine Corps technical systems and DO-178C avionics software for Boeing, GE Aviation, BAE Systems, and RTX. Geaux Digital Media helps Louisiana small businesses implement AI workflows that are defined, validated, and measured before they scale. Request an AI Workflow Review →