If I use AI to produce a document, I am still accountable for it.

That is the bit people keep muddling up. AI can be responsible for doing the work. It can draft, structure, format, tidy, test, and produce. But it cannot be accountable for whether the thing should go out in my name.

That bit is mine.

My Words

I want to talk about RACI (responsible, accountable, consulted, informed) and how it relates to AI.

First, though, a small aside. I am no longer going to remove em dashes from things I create. I think em dashes work really well. They are a much underused feature, and going through everything to replace them with a standard dash is not something I think we need to do any more.

People are okay with information being produced with AI if the information is good. Does it matter whether I crafted every character myself, whether somebody else crafted it with me, or whether I used AI to help create it?

Here is the thing. I am accountable for the content I create with AI. I have read it. I have gone through it. I have reviewed it. I have made sure it is okay. I have made loads and loads of changes.

I may not have typed all those changes myself. Sometimes I have. Other times I have spoken them. Have I spent hours tab-indenting and formatting documents? No, I have not. I have got AI to do it. You do not want me formatting documents. You just do not. That is not a good use of my time as a leader.

So yes, I will use AI. But I am accountable.

My AI is responsible for executing what I need. We still have consulted and informed, obviously, but the part I want to focus on is the R and the A. AI is responsible for producing the document. I am accountable for the document.

I have spent the time making sure the process is repeatable. I have built something that works well. I have read it. I approve it.

Some people will say they are not going to spend their human time on a document produced by a machine. I would say: I am accountable for that document. There is no real difference between that and asking one of my team to produce something, reviewing it, and then the company standing behind it.

I have delegated the task. I have not abdicated the task. That is the difference.

[Image: RACI infographic showing AI as responsible for execution while the human remains accountable for the published result.]

The RACI Mistake In AI Work

RACI is useful here because it separates execution from ownership.

Responsible is the person, team, or system doing the work. Accountable is the person who owns the outcome. Consulted are the people or sources whose judgement is needed. Informed are the people who need to know what has happened.

AI fits into that model more neatly than people often admit.

  • AI can be responsible for producing a draft, formatting a document, turning spoken notes into structure, or checking consistency.
  • I am accountable for whether the final thing is accurate, appropriate, useful, and fit to publish.
  • Other people, policies, evidence, and subject matter experts may still need to be consulted.
  • Stakeholders may still need to be informed.

The presence of AI does not dissolve accountability. It just changes the shape of responsibility.

The Wrong Test

The wrong test is: "Did AI touch this?"

That question is becoming less useful by the day. AI is going to touch more documents, more analysis, more code, more summaries, more meeting notes, more briefs, and more decisions around the edges. Pretending otherwise is a very expensive form of nostalgia.

The better test is: "Who stands behind this?"

Can they explain it? Have they reviewed it? Can they defend the judgement? Can they correct it if it is wrong? Do they understand what was delegated to AI and what was not?

If the answer is no, the problem is not the tool. The problem is accountability.

Delegation Is Not Abdication

I do not personally need to do every mechanical part of producing a document. I do not need to spend my leadership time aligning tabs, adjusting paragraph spacing, or turning a spoken argument into clean HTML. Honestly, nobody wants that version of me loose on a formatting toolbar.

That does not mean I have walked away from the work.

If I ask a member of my team to prepare a document, and I review it, amend it, approve it, and send it out, I am accountable for it. I delegated the task. I did not abdicate the task.

The same principle applies when I use AI.

AI may be responsible for execution. It may be very good at execution. It may even produce a cleaner version than I would have produced by hand. But I still have to read it, challenge it, edit it, and decide whether I am prepared to put my name to it.

That is not a small distinction. It is the distinction.

A Small Note On Em Dashes

There is also some minor punctuation theatre happening around AI writing.

Em dashes have become one of those lazy tells people point at when they want to say, "AI wrote this." I do not buy it. An em dash — used well — is just punctuation. It gives a sentence a useful turn. It lets a thought breathe without turning everything into a procession of short, chopped-up lines.

If a piece is bad, fix the writing. If a piece is wrong, fix the thinking. But removing perfectly good punctuation because somebody has decided it looks suspicious is not governance. It is cosmetic compliance.

Come on.

What Responsible AI Actually Means

Responsible AI is not only about warning labels, procurement policies, and acceptable-use documents. Those things matter, but they are not enough.

Responsible AI also means knowing where the responsibility sits inside the work.

For me, the rule is simple:

  • AI can create the first version.
  • AI can improve the structure.
  • AI can handle formatting and repeatable production work.
  • I still own the judgement.
  • I still own the final decision.
  • I still own the consequences of putting it into the world.

That is the adult version of using AI. Not pretending I typed every character. Not pretending the machine is accountable. Not hiding the collaboration. Just being clear about the operating model.

The machine can be responsible for the production task.

The human remains accountable for the published result.

Practical Next Step

When someone gives you an AI-assisted document, do not make "Did AI touch this?" your first question.

Ask who is accountable for it.

If nobody can answer that clearly, do not trust the document yet. If someone can answer it, and they have actually done the review, then the questions become much more useful: is it good, is it true, and is it fit for the decision in front of us?

That is where the human time belongs.