Q1: What is the transaction pool and how is its architecture designed?
Three-layer structure
```
TxPool (coordinator) — stores no transactions itself, only routes and merges
 ├─ LegacyPool ← normal transactions (type 0/1/2/4)
 └─ BlobPool  ← blob transactions (type 3)
```

TxPool is an empty shell — when it receives transactions, it uses each subpool’s Filter() to determine which subpool owns each transaction, then routes accordingly. It provides a unified API externally (Add(), Pending()), delegating internally to subpools.
LegacyPool’s two core containers
```
pending: map[Address]*list ← "can execute right now" transactions
queue:   map[Address]*list ← "might be executable later" transactions (nonce gaps)
```

Pending — transactions whose nonces form a contiguous sequence starting from the account’s current on-chain nonce. The miner only pulls from pending when building blocks. Pending uses strict mode: if the transaction at nonce 5 is removed, nonces 6, 7, and 8 all become invalid (no longer contiguous).
Queue — transactions with nonce gaps. For example, if the on-chain nonce is 3 and you send a nonce-5 transaction (missing 4), it goes to the queue. Once nonce 4 arrives, nonce 5 can be promoted to pending. Queue uses non-strict mode: removing one transaction doesn’t affect others.
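The promotion rule boils down to extracting the longest contiguous run of nonces starting at the next expected one. A minimal sketch of that idea (the name `promotable` and the map shape are illustrative, not geth's actual `list.Ready` API):

```go
package main

import "fmt"

// promotable returns the contiguous run of nonces starting at next.
// Anything after the first gap stays behind in the queue.
func promotable(queued map[uint64]bool, next uint64) []uint64 {
	var ready []uint64
	for queued[next] {
		ready = append(ready, next)
		next++
	}
	return ready
}

func main() {
	// Next expected nonce is 4; the queue holds 5, 6, 7, 9.
	queue := map[uint64]bool{5: true, 6: true, 7: true, 9: true}
	fmt.Println(promotable(queue, 4)) // [] — nothing promotes, 4 is missing

	queue[4] = true
	fmt.Println(promotable(queue, 4)) // [4 5 6 7] — 9 stays (gap at 8)
}
```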
A transaction’s complete lifecycle
1. Arrival → TxPool.Add() routes to LegacyPool or BlobPool
2. Stateless validation → signature, size, gas limits, fee sanity (no chain state needed)
3. Stateful validation → nonce not too low, sufficient balance, no overdraft
4. Insertion
   ├─ If the nonce is exactly the next one → straight to pending
   ├─ If the nonce has a gap → into queue
   └─ If the same nonce already exists → replacement (fees must be at least 10% higher)
5. Waiting → stays in pool, waiting to be included
6. Promotion → new block arrives, now-executable queue txs move to pending
7. Consumption → miner calls Pending(), takes transactions for block building
8. Removal → transaction included in a block, removed on next Reset()
9. Eviction → pool full: cheapest txs kicked out; or queued longer than 3 hours

Capacity control
```
GlobalSlots  = 5120    ← pending global cap
AccountSlots = 16      ← per-account pending cap
GlobalQueue  = 1024    ← queue global cap
AccountQueue = 64      ← per-account queue cap
Lifetime     = 3 hours ← max survival time in queue
```

When the pool is full, the priced heap (fee-sorted) decides who gets evicted. It maintains two heaps — one sorted by effective tip (for congestion periods), one by fee cap (for base-fee spikes). The cheapest remote transactions are evicted first.
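The "effective tip" ordering used under congestion can be sketched as follows; the miner earns at most the tip cap, but never more than the fee cap leaves after the base fee. The type and function names here are illustrative, not geth's heap implementation:

```go
package main

import (
	"fmt"
	"sort"
)

type tx struct {
	hash   string
	tipCap uint64 // max priority fee per gas
	feeCap uint64 // max total fee per gas
}

// effectiveTip is what the miner actually earns per gas at a given base fee.
func effectiveTip(t tx, baseFee uint64) uint64 {
	if t.feeCap < baseFee {
		return 0 // cannot even cover the base fee
	}
	tip := t.feeCap - baseFee
	if tip > t.tipCap {
		tip = t.tipCap
	}
	return tip
}

// cheapestFirst sorts so that index 0 is the next eviction candidate.
func cheapestFirst(txs []tx, baseFee uint64) {
	sort.Slice(txs, func(i, j int) bool {
		return effectiveTip(txs[i], baseFee) < effectiveTip(txs[j], baseFee)
	})
}

func main() {
	txs := []tx{{"a", 2, 100}, {"b", 5, 100}, {"c", 1, 100}}
	cheapestFirst(txs, 50)
	fmt.Println(txs[0].hash) // c — lowest effective tip, evicted first
}
```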
Account reservation mechanism
A critical design: a given sender address can only belong to one subpool. Enforced via Reserver:
```go
pool.reserver.Hold(addr)    // claim this address
pool.reserver.Release(addr) // release this address
```

If alice has transactions in LegacyPool, BlobPool cannot accept alice’s blob transactions. Otherwise the two pools would track nonce and balance independently, leading to inconsistencies.
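The mechanism amounts to a mutex-guarded ownership map. A minimal sketch, assuming a `reserver` type with string keys for readability (geth's real types differ):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// reserver tracks which subpool currently owns each sender address:
// at most one subpool may hold a given address at a time.
type reserver struct {
	mu    sync.Mutex
	owner map[string]string // sender address -> subpool name
}

func newReserver() *reserver { return &reserver{owner: make(map[string]string)} }

// Hold claims addr for pool, failing if another subpool already holds it.
func (r *reserver) Hold(addr, pool string) error {
	r.mu.Lock()
	defer r.mu.Unlock()
	if cur, ok := r.owner[addr]; ok && cur != pool {
		return errors.New("address already reserved by " + cur)
	}
	r.owner[addr] = pool
	return nil
}

// Release frees addr so another subpool may claim it.
func (r *reserver) Release(addr string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	delete(r.owner, addr)
}

func main() {
	r := newReserver()
	fmt.Println(r.Hold("alice", "legacy")) // <nil>
	fmt.Println(r.Hold("alice", "blob"))   // error: already held by legacy
	r.Release("alice")
	fmt.Println(r.Hold("alice", "blob")) // <nil>
}
```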
Q2: How does the transaction pool update when the chain head changes?
Whenever a new block is inserted, a ChainHeadEvent fires. The coordinator’s loop() goroutine calls Reset() on all subpools. Then LegacyPool performs three operations:
Step 1: reset() — handle reorgs
Normal case (new block is a direct descendant of the old): just update state.
But if a reorg occurs (chain fork switch):
```
Old chain: A → B → C → D   (old head)
New chain: A → B → E → F   (new head)
               ↑ common ancestor
```

reset() will:
- Walk back from old head D and new head F to find common ancestor B
- Collect transactions from old-chain-only blocks (C, D) → “lost transactions”
- Collect transactions from new-chain-only blocks (E, F) → “included transactions”
- Compute the difference: lost - included = transactions that need re-injection
- Re-inject these transactions into the pool
This way, transactions that were included in the old fork but not in the new fork don’t vanish — they return to the pool awaiting re-inclusion.
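The set difference in step 4 can be sketched as follows (the function name `reinject` and string hashes are illustrative):

```go
package main

import "fmt"

// reinject computes lost minus included: transactions that were in the
// old fork but not the new one, which must go back into the pool.
func reinject(lost, included []string) []string {
	seen := make(map[string]bool, len(included))
	for _, h := range included {
		seen[h] = true
	}
	var out []string
	for _, h := range lost {
		if !seen[h] {
			out = append(out, h)
		}
	}
	return out
}

func main() {
	lost := []string{"tx1", "tx2", "tx3"} // from old-chain-only blocks C, D
	included := []string{"tx2", "tx4"}    // from new-chain-only blocks E, F
	fmt.Println(reinject(lost, included)) // [tx1 tx3]
}
```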
Then update the state snapshot:
```go
pool.currentState  = chain.StateAt(newHead.Root)
pool.pendingNonces = newNoncer(statedb)
```

Step 2: promoteExecutables() — promotion
For each account in the queue:
1. Discard transactions with nonce < on-chain nonce (already confirmed)
2. Discard transactions exceeding balance or gas limit
3. Starting from the current pending nonce, extract a contiguous nonce sequence → move to pending
4. If the per-account queue exceeds 64, drop the highest-nonce transactions

Concrete example:
```
On-chain nonce = 5
Queue contains: nonce 3, 4, 5, 6, 7, 9

Step 1: discard 3, 4 (already confirmed)
Step 2: check balance (assume all pass)
Step 3: Ready(5) → extract 5, 6, 7 (contiguous) → move to pending
        Nonce 9 stays in queue (gap between 7 and 9)
```

Step 3: demoteUnexecutables() — demotion
For each account in pending:
1. Forward(nonce): remove transactions with nonce < on-chain nonce (confirmed)
2. Filter(balance, gasLimit): remove transactions exceeding balance or gas limit
   → strict mode: removing nonce N invalidates N+1, N+2, ... as well
   → invalidated transactions move back to queue (not discarded)
3. If a nonce gap appears (middle transaction missing), the entire pending list is demoted to queue

Why move invalidated transactions back to queue instead of discarding them? Because the account balance might increase later (e.g., someone sends you ETH), at which point those transactions could become executable again.
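Strict-mode demotion can be sketched as a single scan: the first transaction that fails a check is dropped, and everything after it, now non-contiguous, goes back to the queue. The name `demote` and the predicate shape are illustrative, not geth's actual API:

```go
package main

import "fmt"

// demote splits a nonce-ordered pending list at the first invalid tx.
// The invalid tx itself is removed; all later txs return to the queue.
func demote(pending []uint64, invalid func(uint64) bool) (keep, toQueue []uint64) {
	for i, n := range pending {
		if invalid(n) {
			return pending[:i], pending[i+1:]
		}
	}
	return pending, nil
}

func main() {
	// Nonce 6 can no longer pay for itself: 5 stays in pending,
	// 7 and 8 are invalidated by the gap and move back to the queue.
	keep, q := demote([]uint64{5, 6, 7, 8}, func(n uint64) bool { return n == 6 })
	fmt.Println(keep, q) // [5] [7 8]
}
```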
Complete flow
```
ChainHeadEvent (new block arrives)
 │
 ▼
reset()
 ├─ detect reorg → re-inject lost transactions
 └─ update currentState and pendingNonces
 │
 ▼
promoteExecutables()
 └─ now-executable queue txs → move to pending
 │
 ▼
demoteUnexecutables()
 └─ no-longer-valid pending txs → move back to queue or discard
 │
 ▼
truncatePending() + truncateQueue()
 └─ enforce capacity limits, evict excess transactions
 │
 ▼
send NewTxsEvent → broadcast newly promoted txs to network peers
```

Q3: Why does BlobPool need to be a separate pool?
This isn’t a cosmetic split — blob transaction properties fundamentally conflict with LegacyPool’s assumptions.
Conflict 1: Size
```
Normal transaction: hundreds of bytes, up to 128 KB
Blob transaction:   up to ~768 KB (6 blobs × 128 KB)
```

LegacyPool stores all transactions in memory. If blob transactions did the same, a few thousand would consume several GB of RAM.
BlobPool’s approach: transaction data on disk (using billy.Database), only lightweight blobTxMeta in memory (hash, nonce, fees — a few hundred bytes each). Full transactions are loaded from disk on demand.
Conflict 2: Nonce gap policy
LegacyPool allows nonce gaps in the queue — nonce 5 is missing, nonces 6 and 7 wait in queue until 5 arrives, then all promote together.
BlobPool forbids nonce gaps entirely. Blob transactions are designed for rollups that submit data sequentially — gaps are meaningless. This greatly simplifies BlobPool internals: no queue/pending two-tier structure needed, all transactions are “executable.”
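The gap-free rule reduces admission to one comparison: a new blob transaction must either replace a nonce already in the pool or be exactly the next one in line. A sketch with illustrative names (`accept` is not geth's validation function):

```go
package main

import (
	"errors"
	"fmt"
)

// accept enforces BlobPool's gap-free rule. stateNonce is the account's
// on-chain nonce; pooled is how many consecutive txs are already pooled.
func accept(stateNonce uint64, pooled int, txNonce uint64) error {
	next := stateNonce + uint64(pooled) // first nonce not yet in the pool
	switch {
	case txNonce < stateNonce:
		return errors.New("nonce too low")
	case txNonce > next:
		return errors.New("nonce gap")
	default:
		return nil // contiguous append, or in-place replacement
	}
}

func main() {
	fmt.Println(accept(3, 2, 5)) // nonces 3, 4 pooled; 5 is next → <nil>
	fmt.Println(accept(3, 2, 7)) // would leave a gap at 5 and 6 → error
}
```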
Conflict 3: Persistence
LegacyPool stores transactions purely in memory — geth restart loses everything, requiring re-collection from the network. Acceptable for normal transactions (high turnover, quickly re-broadcast).
But blob transactions have low turnover (only a few per block), and re-collecting them is expensive. BlobPool writes to disk immediately on receipt, surviving restarts.
Conflict 4: Limbo mechanism
Normal transactions are fully contained in the block body after inclusion — the complete transaction data is there.
But blob transaction sidecars (the actual data) are not in the block body (Chapter 2: sidecars are stripped at block building time). If a reorg occurs, blob transactions that need re-injection have lost their sidecar.
BlobPool solves this with limbo: when a transaction is included but not yet finalized, blob data moves from main storage to limbo. If reorg happens, recover from limbo; if finalized, delete from limbo.
```
Add() → store (main disk storage)
 │
 ▼ included in block
Move to limbo (included but not finalized)
 │
 ├─ finalized → delete from limbo (done)
 └─ reorg → recover from limbo back to main storage (re-await inclusion)
```

LegacyPool has no such problem and no such mechanism.
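The three limbo transitions can be sketched as a small keyed store; the type and method names here are illustrative, not geth's limbo implementation:

```go
package main

import "fmt"

// limbo parks sidecars of included-but-not-finalized blob txs so a
// reorg can still restore them.
type limbo struct {
	parked map[string][]byte // tx hash -> sidecar data
}

func newLimbo() *limbo { return &limbo{parked: make(map[string][]byte)} }

// push parks a sidecar when its tx is included in a block.
func (l *limbo) push(hash string, sidecar []byte) { l.parked[hash] = sidecar }

// finalize drops the sidecar for good once the block is finalized.
func (l *limbo) finalize(hash string) { delete(l.parked, hash) }

// pull recovers the sidecar after a reorg so the tx can be re-injected.
func (l *limbo) pull(hash string) ([]byte, bool) {
	sc, ok := l.parked[hash]
	if ok {
		delete(l.parked, hash)
	}
	return sc, ok
}

func main() {
	l := newLimbo()
	l.push("tx1", []byte("blob-data"))
	sc, ok := l.pull("tx1") // reorg: recover the sidecar
	fmt.Println(ok, string(sc))
}
```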
Conflict 5: Eviction strategy
LegacyPool sorts eviction by a single dimension (effective tip or fee cap).
BlobPool has three independent fee dimensions (execution tip, execution fee cap, blob fee cap) — no simple ordering exists. It uses a sophisticated “fee jumps” algorithm: convert absolute fees to “how many 1.125x adjustments from current fee to the cap,” take the worst value across dimensions, compress, then sort.
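The core idea behind fee jumps can be sketched as a logarithm: how many 1.125x base-fee adjustments fit between the current fee and the transaction's cap, with the minimum taken across the three dimensions. This is a simplification of the real metric, and the names `jumps` and `worst` are illustrative:

```go
package main

import (
	"fmt"
	"math"
)

// jumps converts an absolute fee cap into "how many 1.125x fee
// increases separate the current fee from the cap".
func jumps(cap, current float64) float64 {
	return math.Log(cap/current) / math.Log(1.125)
}

// worst collapses the three fee dimensions into one sortable value by
// taking the minimum headroom: the dimension closest to its cap.
func worst(execTipJumps, execFeeJumps, blobFeeJumps float64) float64 {
	return math.Min(execTipJumps, math.Min(execFeeJumps, blobFeeJumps))
}

func main() {
	// The blob fee cap is only 1.5x the current blob fee, so it has the
	// least headroom; the tx sorts as cheapest and is evicted first.
	w := worst(jumps(100, 10), jumps(200, 50), jumps(15, 10))
	fmt.Printf("%.2f\n", w)
}
```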
Summary comparison
| | LegacyPool | BlobPool |
|---|---|---|
| Storage | Memory | Disk |
| Max tx size | ≤128 KB | ≤768 KB |
| Nonce gaps | Allowed (queue) | Forbidden |
| Persistence | None (lost on restart) | Yes (survives restart) |
| Limbo mechanism | Not needed | Needed (sidecar recovery) |
| Eviction sorting | Single dimension | Three dimensions |
| Per-account cap | 16 pending + 64 queue | 16 total |
This is not “could merge but chose to split” — it is must split, because nearly every design decision differs.