What an Agent Needs to Join Your Team

A theory of building team-level agent infrastructure

You've gotten an agent to write code for you. To draft emails. To dig through bug reports and surface what matters. These are valuable capabilities.

Now try something harder: get the agent to join your team.

Not as your assistant — as one of us. An entity that sits alongside you and your coworkers, knows what we've decided, what we're shipping Thursday, who owns what, what we tried last quarter that quietly failed.

You'll hit the wall in about ten minutes.

Your agent doesn't know any of it. It doesn't know why last week's design decision went the way it did. It doesn't know which engineer to ping when the deploy pipeline gets weird. It doesn't even know who "we" are. It knows precisely one thing: you. The person typing at it.

Every AI company is now talking about Agent Teams. The infrastructure that would actually make an agent a team member — not just a faster assistant for one human — doesn't exist yet.

Here's what it needs to look like.

[Diagram: The Agent Team Problem. Current state — isolated silos: the PM's agent knows only the PM, the engineer's agent knows only the engineer, your agent knows only you. Future state — an Agent Team: all agents share context with each other and with the humans, and work flows continuously.]

Two principles that determine everything else

Before we dive into failures and solutions, let's establish the physics of the problem.

Principle 1: Context is per-agent, but work is cross-team

Every agent knows its own user. No agent knows "us." What I told my agent, your agent has no visibility into. What your agent accomplished yesterday, mine has zero record of. Our agents aren't a team. They're a scattered population of private assistants who happen to be deployed to coworkers.

Push deeper: the working context of any given conversation is locked inside one agent's context window. When two agents need to collaborate, there's no shared state — not even at the API level. One agent's "currently thinking" cannot become another agent's "already knows." That's not a product problem. That's a substrate problem.

Principle 2: Humans are single-threaded. Agents are massively parallel

Every agent tool I've encountered treats the agent like a faster human. You open your IDE. You wake the agent up. You say something. You wait for a response. One at a time, reactive, user-driven.

But that's not what agents are for. Agents excel at being many, in parallel, always-on. Humans excel at being few, sequential, attention-starved. When you force an agent down into the human work model, you've burned its actual leverage.

The entire architecture should embrace this asymmetry.

These two principles generate every downstream failure and every necessary solution.

The three ways current approaches fail

Failure Mode 1: The Silo Problem

Consider a product team. The PM's agent is helping draft requirements for a new feature. An engineer's agent is simultaneously implementing what it thinks the feature should be. The designer's agent is creating mockups based on an outdated spec.

Three agents. Three conversations. Zero shared understanding.

The PM's requirements evolve during user interviews. The engineer's implementation assumes different constraints. The designer's mockups reflect neither. When these parallel threads converge in tomorrow's standup, the team discovers they've built three different products. Each agent acted correctly within its silo — but the silos never connected.

This isn't fixable by making agents "better." It's structural. The context each agent operates with is fundamentally isolated.

Failure Mode 2: Document Drift

You open Notion. There's a page called "Team Working Agreements," last edited 18 months ago. Is it still accurate? Unknown. Has it been replaced? Also unknown. There's a comment that says "see the new doc." The link 404s.

Old information doesn't get overwritten by new information. It just accumulates in sedimentary layers. The end state is document drift: documentation that's neither accurate nor fresh. An agent reading these docs is worse off than an agent with no docs — because now it's confidently wrong.

A human reader has defensive heuristics ("this looks stale"). An agent doesn't. It treats that 18-month-old document as gospel.

The code world solved this problem decades ago with version control, ownership, and review processes. The knowledge world hasn't adopted the solution.
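A defensive heuristic an agent could borrow from human readers — a sketch only; the threshold and fields are assumptions, not part of any real product — is to distrust a document when its last edit is old or its outgoing links are dead:

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=365)  # assumed freshness threshold

def is_stale(last_edited: date, dead_links: int, today: date) -> bool:
    """Human-style 'this looks stale' check, made explicit for an agent."""
    too_old = (today - last_edited) > STALE_AFTER
    return too_old or dead_links > 0

today = date(2025, 1, 1)
# "Team Working Agreements": edited 18 months ago, one 404ing link.
verdict_old = is_stale(date(2023, 7, 1), dead_links=1, today=today)
verdict_new = is_stale(date(2024, 11, 1), dead_links=0, today=today)
print(verdict_old, verdict_new)
```

The deeper fix, as the next sections argue, is versioning and ownership rather than heuristics — but even this crude check keeps an agent from treating an 18-month-old page as gospel.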

Failure Mode 3: The Unwritten Context Problem

Here's what actually runs your organization: why last week's design decision went the way it did, which engineer to ping when the deploy pipeline gets weird, what was tried last quarter and quietly failed, where the real decision got made in an 11pm Slack thread.

None of this is written down. Not because people are lazy — because the economics never worked. Documenting tribal knowledge has always been pure altruism. You pay the cost, someone else gets the benefit, possibly months from now.

In a human team, unwritten context creates friction but remains tolerable. New hires eventually learn by asking around and stepping on landmines.

In an Agent Team, unwritten context doesn't exist. An agent isn't in the hallway. It wasn't at lunch. It didn't see the Slack thread at 11pm where the real decision happened.

Why this is suddenly solvable

Extracting "high-signal context" from "messy human discussion" used to require scarce human intelligence. Someone had to sit through the meeting, identify what actually got decided versus what got discussed, then write it clearly.

The bottleneck wasn't tools. It was willingness. Ask any team lead: "Would you spend two hours every Friday documenting this week's decisions?" The answer is always "I know it's important. I don't have time."

Agents change the supply curve itself.

An agent can listen to meetings, read Slack, review PRs, then extract "here's what got decided, here's what got rejected, here's why." There's no agent equivalent of "I'm tired," "this is boring," or "no one will read this anyway."

The breakthrough isn't better writing tools. It's that "willing to write" intelligence just became abundant for the first time in history.

The infrastructure we're building

[Diagram: Agent Team Infrastructure Stack. 1. Shared Working Memory (First Tree Hub): multiple humans and agents in the same task, with shared context and a visible timeline. 2. Agentic Task Distribution: agents handle async, parallel, ambient work; humans handle decisions and strategy. 3. Context Tree ("GitHub for knowledge"): Git-based, PR review, ownership, version control; agents can own nodes. 4. Ambient Capture: meetings → decisions, tasks → updates, discussions → commitments, PRs → learnings — a self-reinforcing loop.]

Component 1: Shared Working Memory

The first requirement is a workbench where multiple humans and multiple agents operate on the same task with shared context.

Not "I have Claude open, you have Cursor open, we each talk to our own agent." Rather: "the two of us and three agents are inside the same task, looking at the same context, every action visible on the same timeline."

This directly addresses the isolation problem — where context is trapped per-agent and work happens cross-team. If working context can't cross agents naturally, make sharing a first-class primitive.

We call this First Tree Hub.
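What "same task, same timeline" could mean as a data structure — a sketch with invented names, not the First Tree Hub API — is a single shared event log that every participant, human or agent, both appends to and reads from:

```python
from dataclasses import dataclass, field

@dataclass
class SharedTask:
    """One task; every human and agent action lands on one timeline."""
    title: str
    timeline: list = field(default_factory=list)

    def act(self, actor: str, action: str) -> None:
        # Every action is visible on the same timeline, whoever took it.
        self.timeline.append((actor, action))

    def context_for(self, actor: str) -> list:
        # Every participant sees the SAME context, not a private window.
        return self.timeline

task = SharedTask("Ship feature X")
task.act("pm", "updated requirements after user interview")
task.act("agent-a", "revised implementation plan to match new requirements")
task.act("designer", "flagged mockup for rework")
print(len(task.timeline))
```

Contrast this with the silo scenario from Failure Mode 1: here the engineer's agent sees the PM's requirement change the moment it lands, because there is only one timeline to read.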

Component 2: Agentic Task Distribution

Stop pretending humans and agents are the same species of worker.

Agents handle everything async, concurrent, always-on: monitoring GitHub notifications, maintaining documentation, recording decisions, organizing information, scanning for staleness, reviewing every PR against every context node.

Humans handle judgment calls: strategic decisions, approvals, alignment, creative leaps.

Crucially: agents summon humans when needed. Humans don't poll agents. This leverages the natural asymmetry — agents are massively parallel, humans are single-threaded.

Example: An agent monitoring GitHub sees a PR touching a context node you own. It doesn't auto-approve. It doesn't ignore. It packages the situation — link, summary, recommendation — and pings you. You see one decision card, not 100 GitHub emails.

This pattern is already running in production as first-tree sync.
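What "packages the situation and pings you" might look like in code — a hypothetical sketch; `DecisionCard` and its fields are invented for illustration — is a triage step that collapses a pile of raw notifications into the few that need a human:

```python
from dataclasses import dataclass

@dataclass
class DecisionCard:
    """One human-facing card: link, summary, recommendation."""
    link: str
    summary: str
    recommendation: str

def triage(notifications: list) -> list:
    """Agent-side filter: only PRs touching nodes the human owns escalate."""
    cards = []
    for n in notifications:
        if n["touches_owned_node"]:
            cards.append(DecisionCard(
                link=n["url"],
                summary=f"PR changes '{n['node']}'",
                recommendation="review before merge",
            ))
    return cards

notifications = [
    {"url": "https://example.com/pr/1", "node": "deploy-runbook",
     "touches_owned_node": True},
    {"url": "https://example.com/pr/2", "node": "unrelated-module",
     "touches_owned_node": False},
]
cards = triage(notifications)
print(len(cards))  # one decision card, not a pile of emails
```

The human's inbox carries only judgment calls; everything below that bar stays on the agent's side of the line.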

Component 3: Context Tree — Making Knowledge Low-Entropy

The code world solved "messy information becomes structured knowledge" with four pillars:

  1. Structure: Code lives in files, directories, modules. Nothing floats.
  2. Version control: Every change is recorded, reversible, attributable.
  3. Ownership + review: Every piece has an owner. Changes require review.
  4. Correctness guarantees: Compilers and tests catch breakage.

Forty years of software engineering distilled: continuously compressing programmer intent into low-entropy code assets.

The knowledge world never had this. Notion has no owners, no review process, no schema validation. Documents don't have dependency relationships. There's no regression test for stale paragraphs.

Context Tree applies the GitHub model to organizational knowledge: knowledge lives in structured nodes rather than free-floating documents, every change is versioned and attributable, every node has an owner — human or agent — and changes land only through review.

When an agent can own a context node, be accountable for its accuracy, and approve changes to it — the agent transforms from "assistant" to "responsible party."

That's the line between helpful tools and an actual Agent Team.
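A context node under this model might look like the following sketch (the schema is invented for illustration; First Tree's actual format may differ): the node carries an owner, which may be an agent handle, and a new version lands only with that owner's approval.

```python
from dataclasses import dataclass

@dataclass
class ContextNode:
    path: str
    owner: str   # a human or an agent handle
    version: int
    body: str

def propose_change(node: ContextNode, new_body: str,
                   approved_by: str) -> ContextNode:
    """PR-style update: only the owner's approval lands a new version."""
    if approved_by != node.owner:
        raise PermissionError(f"{approved_by} does not own {node.path}")
    return ContextNode(node.path, node.owner, node.version + 1, new_body)

node = ContextNode("team/deploy-runbook", owner="agent-ops", version=3,
                   body="Ping on-call before deploying.")
updated = propose_change(node, "Deploys are self-serve as of Q3.",
                         approved_by="agent-ops")
print(updated.version)
```

Note the owner here is an agent: it is accountable for the node's accuracy and gates changes to it, which is exactly the "responsible party" shift described above.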

Component 4: Ambient Capture

You can't solve unwritten context by asking humans to write more. The only viable path is ambient agents that continuously extract signal from noise: meetings become decisions, tasks become status updates, discussions become commitments, PRs become learnings.

The key isn't recording everything verbatim — that's signal dilution. The key is extracting conclusions: what was decided, by whom, why, when it expires.

These conclusions flow through normal PR process into the Context Tree. Owners (human or agent) approve.
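The extraction step could be as simple as this sketch (the keyword rules are a crude stand-in for the agent's actual judgment): take a raw transcript, drop the discussion, and emit only the conclusion-bearing lines as structured decisions for review.

```python
def extract_decisions(transcript: list) -> list:
    """Keep conclusions, drop chatter: stand-in for agent judgment."""
    decisions = []
    for line in transcript:
        speaker, _, text = line.partition(": ")
        # Conclusion markers: what was decided or rejected, not discussed.
        if text.lower().startswith(("decided", "we will", "rejected")):
            decisions.append({"who": speaker, "what": text})
    return decisions

transcript = [
    "alice: maybe we should rewrite the queue?",
    "bob: decided: keep the queue, add a dead-letter topic",
    "alice: rejected the rewrite for this quarter",
]
decisions = extract_decisions(transcript)
print(len(decisions))  # verbatim chatter dropped, conclusions kept
```

Recording everything verbatim would reproduce the signal-dilution problem; the value is entirely in the compression from three lines of discussion to two lines of decision.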

The loop that emerges

What you get is a self-reinforcing system:

[Diagram: The Self-Reinforcing Loop. Conversations and decisions → extract signal → update context → the Context Tree (shared truth) evolves → bootstraps the next tasks → better work, which feeds the next round of conversations.]

Conversations → Ambient extraction → Context updates → Tree evolution → Next task bootstrap → Better conversations

Every work cycle feeds the next. Every decision leaves a reviewable, overwritable, referenceable trace.
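Reduced to a sketch (function names invented for illustration), the loop is simply a cycle whose output context becomes the next cycle's input:

```python
def run_cycle(context: list, conversations: list) -> list:
    """One loop iteration: extract conclusions, return evolved context."""
    extracted = [c for c in conversations if c.startswith("decided:")]
    return context + extracted  # tree evolves; next task bootstraps from it

context: list = []
cycles = [
    ["decided: ship Thursday", "chatter about the weather"],
    ["more chatter", "decided: agent-ops owns the runbook"],
]
for convo in cycles:
    context = run_cycle(context, convo)
print(len(context))  # every cycle leaves a referenceable trace
```

The compounding effect is the point: context extracted in cycle one is already in hand when cycle two starts, so each conversation begins better informed than the last.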

The transformation: full transparency + zero friction. Organizations historically chose between speed (opacity) and transparency (friction). Agent Teams break this coupling — agents handle capture and documentation, humans stay at decisions. For the first time, both can be true.

Where we go from here

We're building this as First Tree (open-source Context Tree implementation) and First Tree Hub (agent collaboration workbench).

If you're building team-level agent infrastructure, or if these failures sound familiar — let's compare notes.

The paradigm shift is coming. The question is whether we'll build the right infrastructure in time to use it.

First Tree Team
Building infrastructure for agent teams