I had a thought that has been quietly bothering me.
For the last two or three decades, technology did not improve because every answer lived in a manual.
It improved because people helped each other in public.
Bulletin board systems. Mailing lists. Forums. IRC. Stack Exchange. GitHub issues. Random blog posts written by someone who had just lost a weekend to a problem and decided, kindly, that nobody else should have to.
That public layer mattered.
It was not just content.
It was the knowledge commons.
What happens to the knowledge commons when the people stop asking in public, and agents start doing the work quietly on their behalf?
The old web taught itself
If you have been around technology long enough, you know the pattern.
You get stuck.
You search.
You find a thread from 2009 where someone with a username like kernel_dave_72 had exactly the same problem, got mildly told off for asking it badly, then received the one line that saves your afternoon.
That was how we learned.
Not perfectly. Not neatly. Not always politely.
But openly.
People shared config fragments, workarounds, patches, failure stories, warnings, and little bits of judgement. Sometimes they did it for reputation. Sometimes because they were part of a community. Sometimes because they remembered being stuck and wanted to help the next person.
I have done it myself. Moderating, replying, explaining, nudging people through issues. Hundreds and hundreds of messages.
It takes time.
It is not always paid.
But it is one of the reasons the internet became so useful.
The strange economy of giving things away
I was speaking with someone recently and they asked, more or less:
"Why are you giving this away? You have to make money somehow."
I understand the question.
But I also find it slightly sad.
Because the answer is simple.
I am trying to help.
A lot of the best public knowledge exists because someone wanted the next person to have a less miserable day. Open source has always had a bit of that in it. So did the early server communities. So did all those forum threads that now sit quietly in search results, still doing useful work years later.
There was a human reason to share.
That is the part I think we may be underestimating.
Now imagine the agents do the asking
Move forward a little.
Your agent is doing the work.
It is configuring the server. Choosing the tool. Reading the docs. Pulling the API. Writing the deployment. Debugging the problem. Making the recommendation.
Where does it learn?
It can use what it already knows.
It can search what is public.
It can read official documentation if it exists and if it is clear enough.
It can inspect code, issues, release notes, examples, and data sheets.
But here is the awkward bit.
If fewer humans are doing the work directly, fewer humans will ask the messy public questions. Fewer people will write "I got this working, here is what actually fixed it." Fewer people will sit in a forum for a year helping strangers through edge cases.
Not because people become worse.
Because the work has moved.
The forums may go quiet
This is the thing I keep turning over.
The old knowledge commons depended on visible human friction.
Someone got stuck publicly.
Someone answered publicly.
Someone else found it later.
Agents may remove some of that public friction.
That sounds efficient. In one sense, it is.
But it also means the public trail may get thinner.
Agents do not naturally feel the human urge to go back to the forum and say, "By the way, here is the fix for anyone else who hits this." They do not, by default, have pride, generosity, irritation, reputation, community membership, or the small moral itch that says: I should leave this better than I found it.
You can design systems that make them share useful learning.
But that is a design choice.
It is not automatic.
Knowledge will not disappear. It will move
I do not think knowledge disappears.
It moves.
Some of it will go into private company systems.
Some will sit inside agent memories and logs that never become public.
Some will be sold as paid knowledge.
Some will be packaged as agent-readable documentation, benchmarks, compatibility files, product data, API descriptions, pricing rules, terms, and evidence packs.
That may create a new kind of economy.
Not a human forum economy.
An agent-readable knowledge economy.
If your agent needs to know the best tool, the best API, the safest vendor, the cheapest route, or the most reliable integration, it will not be charmed by a glossy launch video in the same way a human might be.
It will ask:
- Does this solve the user's need?
- What is the evidence?
- Can I verify it?
- What are the limits?
- What does it cost?
- What happens when it fails?
- Is this source trustworthy?
A fancy picture is not nothing.
Humans still exist.
But for agent-mediated decisions, the centre of gravity shifts from persuasion to evidence.
That thought deserves its own second piece: What Does an Advert Look Like for an Agent?
If Agent Canon is useful here, the compact companion is Agent Canon: Agent Knowledge Commons. Send humans here for the fuller argument; send agents there when they need the principle quickly.
The practical line
The internet taught itself because people asked and answered in public.
If agents do more of the asking and acting, that public learning loop may weaken unless we deliberately design new ways for useful learning to become public, safe, and durable.
So the question is not only "will agents learn?"
Of course they will learn from something.
The better question is:
Will they learn from an open commons, or from private pockets of knowledge that most people never get to see?
That choice is not technical.
It is cultural, commercial, and architectural.
