Day 24

Agent Swarms

// agent orchestration

One prompt. N agents.
Done in parallel.

Stop running tasks one by one. Spawn a swarm of agents, each tackling a piece of the problem simultaneously. Orchestrate, fan out, collect results. This is how you scale intelligence.

6 Swarm Patterns · 10x Throughput · 0 Manual Coordination
Section 01

Swarm vs Subagent vs Agent Team

Three orchestration patterns. Each with different tradeoffs in control, context sharing, and scale.

Subagent

1:1 delegation
  • Reports to 1 parent agent
  • Isolated context per task
  • Sequential or parallel execution
  • Parent controls scope and prompt
  • Best for: discrete, bounded tasks

Swarm

1:N fan-out
  • N workers spawned simultaneously
  • Orchestrator coordinates all
  • Fan-out pattern with result collection
  • Workers are independent, no peer comms
  • Best for: parallel bulk work

Agent Team

N:N collaboration
  • Full sessions per agent
  • Peer-to-peer communication
  • Shared or partitioned context
  • Experimental, harder to control
  • Best for: complex multi-step projects
Diagrams: Subagent (Parent → Child → Result) · Swarm (Orchestrator → W1/W2/W3/W4 → Merged) · Agent Team (A, B, C, D with peer comms → Consensus)
Section 02

The 6 Swarm Patterns

Six patterns. Each includes a flow visualization, an example, and code you can use today.

Fan-Out: one task → N parallel agents → collected results

The simplest swarm pattern. Take one prompt, spawn N agents with different inputs, collect all results. Each agent is independent and has no awareness of the others.

Flow: Orchestrator → 5 agents (Client A–E) → Dashboard

Example: “Analyze 5 clients simultaneously”

#!/bin/bash
# Fan-out: one agent per client folder
clients=("ArtOfYou" "Earleads" "Kateb" "Neoday" "xGrowth")

for client in "${clients[@]}"; do
  claude "Read 01_Projects/Clients/Active/$client/
    Return JSON: {blockers:[], next_actions:[], health: 1-10}" \
    --output "reports/${client}.json" &
done

wait  # All 5 finish in parallel

claude "Merge reports/*.json into a single dashboard summary"
MapReduce: N items → process each → aggregate into final report

Inspired by the classic distributed computing pattern. List items, map an agent to each, then reduce all outputs into a single result. The reduce step is where intelligence compounds.

Flow: List Files → Map (scan file 1 … file N in parallel) → Reduce → Report

Example: Security scan all API route files

#!/bin/bash
# MapReduce: security audit on all route files

# MAP phase: spawn one agent per file
files=$(find src/routes -name "*.ts" -type f)
for f in $files; do
  claude "Audit $f for security issues.
    Check: SQL injection, auth bypass, XSS, rate limiting.
    Return JSON: {file, issues:[], severity, score}" \
    --output "audit/$(basename $f).json" &
done
wait  # all agents finish in parallel

# REDUCE phase: aggregate all findings
claude "Read all audit/*.json files. Produce a security report:
  - Critical issues first
  - Total score across all files
  - Top 3 recommendations" \
  --output "SECURITY-REPORT.md"
Pipeline: Agent A → Agent B → Agent C (sequential handoff)

Each stage has a dedicated agent with a specific role. Output of one becomes input of the next. Unlike fan-out, this is sequential by design — but each agent is a specialist.

Flow: Research → Draft DM → Review → Send

Example: Research prospect → write DM → review → send via LinkedIn

# Pipeline: each stage = a different agent with a different role

# Stage 1: Research agent
claude "Research this LinkedIn profile: $PROFILE_URL
  Extract: role, company, pain points, recent posts.
  Save to /tmp/prospect-research.json"

# Stage 2: Writer agent (uses research as input)
claude "Read /tmp/prospect-research.json
  Write a personalized DM using the Twins Method.
  Max 300 chars, value-first, no pitch.
  Save to /tmp/draft-dm.txt"

# Stage 3: Critic agent (fresh eyes)
claude "Review /tmp/draft-dm.txt against these rules:
  - No generic openers
  - Must reference specific detail from research
  - Must end with question
  Save approved version to /tmp/final-dm.txt"

# Stage 4: Send (deterministic, no agent needed)
send_linkedin_dm "$(cat /tmp/final-dm.txt)"
Competing Hypotheses: 3 agents, same problem, different assumptions → first to find the root cause wins

When you are stuck on a hard problem, the worst thing is tunnel vision. This pattern forces diversity of thought by giving each agent a different hypothesis. No anchoring bias. No groupthink.

Flow: Bug Report → three parallel investigations (Race Condition? / Memory Leak? / Auth Expiry?) → Root Cause

Why this works: No single agent anchors on its first guess. Each investigates from a completely different angle. The one that finds evidence wins.

#!/bin/bash
# Competing hypotheses: 3 agents, 3 theories

claude "Investigate the intermittent 500 error in /api/checkout.
  ASSUME the root cause is a race condition.
  Look for: shared state, missing locks, concurrent writes.
  Return: {hypothesis, evidence:[], confidence:0-100}" \
  --output "/tmp/h1-race.json" &

claude "Investigate the intermittent 500 error in /api/checkout.
  ASSUME the root cause is a memory leak.
  Look for: unbounded caches, event listener buildup, streams.
  Return: {hypothesis, evidence:[], confidence:0-100}" \
  --output "/tmp/h2-memory.json" &

claude "Investigate the intermittent 500 error in /api/checkout.
  ASSUME the root cause is auth token expiry.
  Look for: token refresh logic, cache TTL, session handling.
  Return: {hypothesis, evidence:[], confidence:0-100}" \
  --output "/tmp/h3-auth.json" &

wait

claude "Compare /tmp/h1-race.json, /tmp/h2-memory.json, /tmp/h3-auth.json.
  Which hypothesis has the strongest evidence?
  Recommend a fix for the winning hypothesis."
Writer-Critic Loop: Writer implements → Critic reviews (fresh context) → Writer fixes → repeat N times

The key insight: the critic has fresh eyes. No sunk cost fallacy, no attachment to the code, no bias from having written it. Each iteration improves quality. Stopping condition prevents infinite loops.

Flow: Writer → Critic → Writer → Critic → Ship
#!/bin/bash
# Writer + Critic loop with stopping condition
MAX_ITERATIONS=3
iteration=0

# Initial implementation
claude "Implement a rate limiter middleware for Express.
  Requirements: sliding window, per-user, Redis-backed.
  Save to src/middleware/rateLimiter.ts"

while [ $iteration -lt $MAX_ITERATIONS ]; do
  # Critic: fresh agent, no bias
  claude "Review src/middleware/rateLimiter.ts with fresh eyes.
    Check: edge cases, error handling, performance, security.
    If no issues found, respond ONLY with: APPROVED
    Otherwise list specific issues as JSON:
    {issues:[], severity:critical|major|minor}" \
    --output "/tmp/review.json"

  # Check if approved
  if grep -q "APPROVED" /tmp/review.json; then
    echo "Approved after $iteration iterations"
    break
  fi

  # Writer: fix issues from review
  claude "Read the review at /tmp/review.json.
    Fix ALL listed issues in src/middleware/rateLimiter.ts.
    Do not introduce new features, only fix the issues."

  iteration=$((iteration + 1))
done
Bash Loop Swarm: 200 files → list → loop → spawn agent per file → all parallel

The brute force pattern. When you have a large number of items that need the same transformation, loop through them and spawn an agent for each. The & operator in bash runs each in the background. wait blocks until all complete.
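The two primitives can be demonstrated without any agent CLI. In this toy script, sleep stands in for an agent call; the three jobs finish in about one second of wall time, not three:

```shell
# Toy demo of bash fan-out: 'sleep' stands in for an agent invocation.
for i in 1 2 3; do
  ( sleep 1; echo "job $i done" ) &   # '&' puts each subshell in the background
done
wait                                   # blocks until every background job exits
echo "all jobs finished"
```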

Flow: find *.py → for f in … → agent & (×200) → wait → Done

Example: Migrate 200 Python files from Python 2 to Python 3

#!/bin/bash
# Bash Loop Swarm: mass code migration
# 200 Python files → Python 2 to Python 3
MAX_PARALLEL=10  # control concurrency
count=0

for f in $(find src/ -name "*.py" -type f); do
  claude "Migrate $f from Python 2 to Python 3.
    - Fix print statements
    - Fix integer division
    - Fix unicode handling
    - Fix dict.keys()/values()/items()
    - Run: python3 -c 'import ast; ast.parse(open(\"$f\").read())'
      to verify syntax is valid after migration." &

  count=$((count + 1))
  # Throttle: wait every MAX_PARALLEL jobs
  if [ $((count % MAX_PARALLEL)) -eq 0 ]; then
    wait
  fi
done

wait  # wait for remaining jobs
echo "Migration complete: $count files processed"
Visualization

Swarm in Motion

An orchestrator dispatches work to N agents. Each runs independently, returns results to the center.

Section 03

The /batch Skill

Built-in swarm orchestration. One command decomposes, plans, spawns, and manages parallel agents for you.

/batch migrate all React class components to functional in src/
01
Decompose

Automatically scans the codebase and identifies all units of work (files, components, modules)

02
Plan

Presents the full plan for your approval: which files, what changes, estimated scope per unit

03
Spawn

One agent per unit, each in its own worktree. No conflicts, no merge issues, full isolation

04
Execute

All agents run in parallel. Each makes its changes, runs tests, validates output independently

05
Deliver

Each agent opens a PR. 20 PRs in parallel, clean diffs, ready for your review. You just approve.

# What /batch does under the hood:

# 1. Scan and decompose
files=$(find src/ -name "*.tsx" -exec grep -l "extends React.Component" {} \;)

# 2. Present plan (you approve)
# "Found 20 class components. Migrate each to functional + hooks."
# [approve / modify / cancel]

# 3. Spawn agents in isolated worktrees
for f in $files; do
  # Each agent gets its own git worktree
  git worktree add "/tmp/batch-$(basename $f)" -b "batch/migrate-$(basename $f)"
  claude --worktree "/tmp/batch-$(basename $f)" \
    "Convert $f from class component to functional.
     Replace lifecycle methods with hooks.
     Run tests. Open PR when done." &
done
wait

# Result: 20 PRs ready for review
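The worktree isolation /batch relies on is plain git. A minimal sketch, with a throwaway repo and an illustrative branch name, shows two fully independent checkouts of the same repository:

```shell
# Minimal git worktree demo: two checkouts of one repo, fully isolated.
repo=$(mktemp -d)
git init -q "$repo"
git -C "$repo" -c user.email=demo@example.com -c user.name=demo \
  commit -q --allow-empty -m "init"

# Add a second working tree on its own branch. Edits there never
# touch the main checkout, which is why parallel agents don't conflict.
git -C "$repo" worktree add -q "$repo-wt" -b batch/demo

git -C "$repo" worktree list   # lists both checkouts
```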
Section 04

Interactive Swarm Builder

Select your task, scale, and output format. Get an orchestrator prompt you can copy and run.

Task: Code Migration · Security Audit · Content Generation · Client Reports · Research
Scale: 2-5 agents · 5-10 agents · 10-20 agents · 20+ agents
Output: JSON · Files · PRs · Slack Message
#!/bin/bash
# Swarm: Code Migration
# Scale: 5-10 agents | Output: JSON
MAX_PARALLEL=10
count=0

# Discover items
items=$(find src/ -name "*.{ext}" -type f)

for item in $items; do
  claude "Migrate {file} from {from} to {to}.
    Preserve all functionality. Run tests after migration.
    Return: {status: \"success\"|\"failed\", changes: number, errors: []}" \
    --output "results/{name}.json" &

  count=$((count + 1))
  if [ $((count % MAX_PARALLEL)) -eq 0 ]; then
    wait
  fi
done

wait
echo "Swarm complete: $count items processed"

# Reduce phase: merge all results
claude "Read all results in results/ directory.
  Merge into a single summary report.
  Highlight: top issues, patterns, recommendations."
Section 05

Real Use Cases

Patterns that have been deployed in production. Real numbers, real workflows.

Mass JS to TS Migration

200 JavaScript files converted to TypeScript with full type annotations. One agent per file, all running in parallel. Types inferred from usage patterns, imports rewritten, tests updated.

fan-out · 200 files · 20 min total

Competitive Analysis

One agent per competitor. Each researches pricing, features, positioning, reviews, tech stack. All results merged into a single comparison matrix with scoring and recommendations.

mapreduce · 8 competitors · merged report

Multi-Language Translation

Marketing copy translated into 10 languages simultaneously. Each agent is prompted with language-specific cultural context and terminology. Native-quality output, not machine translation.

fan-out · 10 languages · parallel

Client Weekly Reports

One agent per client reads their project folder, communication log, and progress file. Generates a formatted weekly report with blockers, wins, and next actions. 12 reports at once.

fan-out · 12 clients · 12 reports
Section 06

Cost Calculator

Compare swarm cost vs sequential. Swarms cost the same in tokens but save massive time.

Tasks: 20
Total Token Cost: $1.25
Swarm Time (parallel): ~5 min
Sequential Time: ~40 min
Time Saved: 35 min
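The arithmetic behind the calculator is simple. A back-of-envelope sketch, where per-task time and the concurrency cap are assumptions (the calculator's exact model may differ slightly):

```shell
# Back-of-envelope swarm timing. Assumed: 20 tasks, ~2 min each,
# concurrency cap of 10. All numbers illustrative.
TASKS=20
MIN_PER_TASK=2
MAX_PARALLEL=10

SEQ=$((TASKS * MIN_PER_TASK))                          # 40 min sequential
WAVES=$(( (TASKS + MAX_PARALLEL - 1) / MAX_PARALLEL )) # ceil(20/10) = 2 waves
PAR=$((WAVES * MIN_PER_TASK))                          # ~4 min in parallel

echo "sequential: ${SEQ} min, swarm: ~${PAR} min, saved: $((SEQ - PAR)) min"
```

Token cost is unchanged either way: the same prompts run in both modes, so only wall-clock time differs.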
Section 07

Knowledge Check

5 questions. Test your understanding of agent swarm patterns.

Question 01 / 05
What is the key difference between a swarm and an agent team?
Question 02 / 05
In a MapReduce pattern, what does the "reduce" phase do?
Question 03 / 05
What bash operator runs processes in parallel?
Question 04 / 05
What does /batch do automatically that you would have to do manually?
Question 05 / 05
Which swarm pattern is best for a hard bug you cannot reproduce?

AY Automate / 30 Days of Claude Code / Day 24: Agent Swarms