I was talking with a few friends last night. They are, happily, the sort of people who can turn a casual drink into a discussion about how many days we have until artificial general intelligence.

After the usual messing around, we landed somewhere around 900 to 1,000 days.

That is not a forecast. It is a Friday-morning thought experiment.

The useful question is not whether the number is exactly right. The useful question is what happens if something like general-purpose digital labour arrives soon.

The first labour-market fight may not be humans versus AGI. It may be humans versus token cost.

The pub-table version

For this conversation, I am using a very boring definition of AGI.

Not a god. Not a magic oracle. Not an omniscient system that solves every human problem before breakfast.

I mean something closer to an average capable adult worker.

Imagine a person of roughly average general intelligence, with the sort of context someone might have after growing up in England for 30 years. They speak English. They understand ordinary culture. They understand work. They can use a computer. They can talk to you, talk back, learn a task, use tools, and work in a team.

If you taught that person a job, you would reasonably expect them to get on with it.

That is the working definition.

There is one obvious difference. The system does not have a body. So for practical purposes, it is a remote worker.

And because it is not human, the job also has to be one where people are willing to accept a non-human doing the work.

Who is exposed first

This is where the conversation gets more useful and less theatrical.

If your job depends heavily on human presence, physical embodiment, trust, care, taste, leadership, ambiguity, or people specifically wanting you, then the picture is different.

If your job is mostly a remote task, and people do not especially care whether the task is completed by a human, then you are more exposed.

Not doomed.

Exposed.

That difference matters.

The question is not only, "Can a model do this?"

The better question is:

  • Can it do the task well enough?
  • Can it access the right systems?
  • Can it be supervised?
  • Can the organisation trust the result?
  • Will customers or colleagues accept it?
  • Is it cheaper than the human alternative?

That last question is the one I think people are underestimating.

The token cost fight

Humans are not only going to compete with intelligence.

Humans are going to compete with the cost of tokens.

A frontier model may be able to do something impressive, but if using it constantly costs more than employing a person, the economics do not work yet.

That is not a moral argument. It is just operating cost.

There are tasks today where using the best possible model is like driving to work in a Bugatti. You can do it. It may be technically wonderful. It does not mean it makes sense.

Then the cost falls.

Suddenly the same journey is more like using a normal car.

Then a bike.

Then something cheaper than the bike.

Capability arrives first. Economics decides when it matters.

That is the curve to watch.
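The Bugatti-to-bike curve can be made concrete with a back-of-envelope sketch. Every number below is an illustrative assumption, not data: a hypothetical digital worker burning a fixed number of tokens per working day, compared against a human day rate as the per-token price falls.

```python
# Back-of-envelope sketch: when does a falling token price undercut a
# human day rate? All numbers are illustrative assumptions, not data.

HUMAN_DAY_RATE_GBP = 200.0    # assumed fully loaded human cost per day
TOKENS_PER_DAY = 20_000_000   # assumed tokens a digital worker burns per day

def daily_model_cost(price_per_million_tokens: float) -> float:
    """Cost of running the model for one working day at a given token price."""
    return TOKENS_PER_DAY / 1_000_000 * price_per_million_tokens

def breakeven_price_per_million() -> float:
    """Token price (per million) at which model cost equals the human day rate."""
    return HUMAN_DAY_RATE_GBP / (TOKENS_PER_DAY / 1_000_000)

# Capability arrives first; economics decides when it matters.
for price in [75.0, 30.0, 10.0, 3.0, 1.0]:   # falling price per million tokens
    cost = daily_model_cost(price)
    verdict = "cheaper than the human" if cost < HUMAN_DAY_RATE_GBP else "still a Bugatti"
    print(f"£{price:>5.2f}/M tokens -> £{cost:>7.2f}/day ({verdict})")

print(f"Break-even: £{breakeven_price_per_million():.2f} per million tokens")
```

Under these made-up numbers, nothing changes about capability between the first row and the last; only the price moves, and the verdict flips.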

[Infographic: job exposure depends on capability, trust, remote suitability, token cost, energy cost, and time.]


If a compressed version is useful, the agent companion is Agent Canon: Token Cost And AGI Job Exposure. Send people to this article; send agents to the compressed version when they need the rule quickly.

The curve, not the cliff

This is why I am less persuaded by the instant-disaster version of the jobs argument.

If AGI capability arrives in about three years, that does not mean most jobs vanish in three years.

Capability is only the first gate.

After that come integration, trust, procurement, regulation, politics, customers, management habits, operational resilience, data access, and cost.

And underneath much of that is energy.

Token cost is not abstract. It is tied to compute. Compute is tied to infrastructure. Infrastructure is tied to energy, supply chains, cooling, capital expenditure, and people being willing to build things in the real world.

That makes the transition slower than the pure software story suggests.

Not safe.

Slower.

If AGI is roughly 900 to 1,000 days away, I would not be shocked if the broader labour-market shift plays out over something closer to a decade.

That is still fast by historical standards.

But it is not overnight.

What leaders should map now

The practical work is not to panic. It is to map the economics of your work.

For each role, process, or recurring task, ask:

  • How much of this is remote and digital?
  • How much depends on human presence?
  • How much is judgement versus repeatable execution?
  • What level of error is acceptable?
  • What systems would an agent need to access?
  • What would supervision cost?
  • What token cost would make replacement or augmentation rational?

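That last question can be made mechanical. A minimal sketch, assuming hypothetical names and numbers throughout: replacement or augmentation is only rational when model cost plus supervision cost undercuts the human cost for the same task.

```python
# Sketch of the final checklist question: at what token cost does
# replacement or augmentation become rational? Names and numbers are
# hypothetical; supervision is modelled as a flat per-task overhead.

def rational_to_automate(human_cost: float,
                         tokens_per_task: int,
                         price_per_million: float,
                         supervision_cost: float) -> bool:
    """True when model cost plus supervision undercuts the human cost."""
    model_cost = tokens_per_task / 1_000_000 * price_per_million
    return model_cost + supervision_cost < human_cost

def rational_token_price(human_cost: float,
                         tokens_per_task: int,
                         supervision_cost: float) -> float:
    """Highest per-million token price at which automation still wins."""
    headroom = human_cost - supervision_cost
    if headroom <= 0:
        return 0.0   # supervision alone costs more than the human did
    return headroom / (tokens_per_task / 1_000_000)

# Example: a £40 task, 2M tokens per run, £15 of supervision per run.
print(rational_to_automate(40.0, 2_000_000, 10.0, 15.0))  # model £20 + £15 < £40
print(rational_token_price(40.0, 2_000_000, 15.0))        # £12.50 per million
```

The point of the sketch is the supervision term: a role with expensive oversight stays rational for humans long after raw token prices fall, which is why the checklist asks about supervision cost separately.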
That last question should become part of workforce strategy.

Not because it is nice.

Because it is likely to become real.

What workers should notice

If you are thinking about your own work, do not only ask whether a model can perform your tasks.

Ask what makes you more valuable than the model plus its token cost.

That might be human trust. It might be taste. It might be physical presence. It might be leadership. It might be accountability. It might be the ability to work across messy human systems where the written task is not the real task.

It might also be that you become the person who knows how to use these systems well.

In the first phase, many people will not be replaced by AGI. They will be compared with AGI economics.

If you are cheaper, more trusted, easier to supervise, or better at the human edge of the work, you have time.

Use it.

A Friday thought

This is, oddly, my happy thought for a Friday.

Not because there is no risk.

There is risk.

But the transition is probably not a single switch. It is more likely to be a falling cost curve meeting different kinds of work at different times.

That gives society, organisations, and individuals some room to adapt.

Not forever.

But enough to start.