Geth(15) Tying It All Together

This guide has been driven by a single question: “What happens when a transaction enters geth and becomes part of the permanent chain?” Over fourteen chapters, we examined every subsystem that participates in that journey. This final chapter traces the complete path in one continuous narrative — from the moment a user submits a transaction to the moment it is sealed in a finalized block — then provides a reference map of the entire architecture.


The Life of a Transaction#

The diagram below shows the full path. Each numbered stage is explained in detail afterwards, with references to the chapter where the subsystem was covered.

User / DApp
|
| eth_sendRawTransaction (JSON-RPC)
v
+------------------+
| RPC Server | <-- Ch 13: transport, dispatch, method reflection
| (rpc/server.go) |
+--------+---------+
|
v
+------------------+
| TransactionAPI | <-- Ch 13: SendRawTransaction, SubmitTransaction
| (ethapi/api.go) |
+--------+---------+
| tx.UnmarshalBinary -> types.Transaction
| txPool.Add()
v
+------------------+
| Transaction Pool | <-- Ch 08: validation, pending/queued maps, blob pool
| (core/txpool/) |
+--------+---------+
|
| NewTxsEvent via event.Feed
v
+-------------------+
| Handler | <-- Ch 12: txBroadcastLoop, txAnnounceLoop
| (eth/handler.go) |---> broadcast to peers (eth/68 protocol)
+-------------------+
|
| Consensus client calls engine_forkchoiceUpdatedV*
| with payloadAttributes (triggers block building)
v
+-------------------+
| Miner | <-- Ch 09: BuildPayload, generateWork
| (miner/worker.go) |
+--------+----------+
| fillTransactions: pull from pool, sort by tip, execute each
|
v
+--------------------+
| State Transition | <-- Ch 06: preCheck, buyGas, EVM dispatch, gas refund
| (core/ |
| state_transition) |
+--------+-----------+
|
| evm.Call() or evm.Create()
v
+--------------------+
| EVM | <-- Ch 07: Run loop, opcodes, gas accounting
| (core/vm/) |
+--------+-----------+
|
| state mutations (SSTORE, CREATE, SELFDESTRUCT, etc.)
v
+--------------------+
| StateDB | <-- Ch 04: stateObject, dirty maps, journal, snapshots
| (core/state/) |
+--------+-----------+
|
| Miner calls FinalizeAndAssemble
v
+--------------------+
| Block Assembly | <-- Ch 09: header finalization, receipt root, state root
| (consensus/beacon) |
+--------+-----------+
|
| CL calls engine_getPayloadV* -> engine_newPayloadV*
| then engine_forkchoiceUpdatedV* (sets new head)
v
+--------------------+
| BlockChain | <-- Ch 10: InsertChain, processBlock, writeBlockWithState
| (core/blockchain) |
+--------+-----------+
|
| writeBlockAndSetHead -> ChainHeadEvent
v
+--------------------+
| Storage Stack | <-- Ch 03, 04, 05: trie commit, triedb, ethdb (LevelDB/Pebble)
| (trie/ + ethdb/) |
+--------------------+
|
| Block propagated to peers
v
+--------------------+
| P2P Network | <-- Ch 11, 12: NewBlockMsg broadcast, snap sync for lagging peers
| (p2p/ + eth/) |
+--------------------+

Stage 1: RPC Arrival#

A user (or DApp) submits a signed transaction by calling eth_sendRawTransaction over HTTP, WebSocket, or IPC. The RPC server (Chapter 13) receives the request, splits the method name on _ to find the eth namespace, and dispatches to TransactionAPI.SendRawTransaction().

SendRawTransaction binary-decodes the raw bytes via tx.UnmarshalBinary() into a types.Transaction (Chapter 02) and calls SubmitTransaction(), which adds the transaction to the pool.
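The namespace split and method lookup can be sketched with a plain map standing in for geth's reflection-based service registry. Names like registry and dispatch below are illustrative, not geth's actual identifiers:

```go
package main

import (
	"fmt"
	"strings"
)

// handler stands in for a reflected API method. In geth, the rpc package
// discovers methods by reflecting over registered service structs; here we
// register closures by hand for illustration.
type handler func(raw string) string

// registry maps namespace -> method name, the lookup that happens after the
// JSON-RPC method string is split on "_".
var registry = map[string]map[string]handler{
	"eth": {
		"sendRawTransaction": func(raw string) string {
			return "submitted " + raw
		},
	},
}

// dispatch splits the method name on the first "_" into namespace and method,
// then resolves the handler, mirroring the routing described above.
func dispatch(method, raw string) (string, error) {
	ns, name, ok := strings.Cut(method, "_")
	if !ok {
		return "", fmt.Errorf("invalid method %q", method)
	}
	svc, ok := registry[ns]
	if !ok {
		return "", fmt.Errorf("unknown namespace %q", ns)
	}
	h, ok := svc[name]
	if !ok {
		return "", fmt.Errorf("unknown method %q", name)
	}
	return h(raw), nil
}

func main() {
	out, err := dispatch("eth_sendRawTransaction", "0x02f8...")
	fmt.Println(out, err)
}
```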

Stage 2: Transaction Pool#

The transaction enters the TxPool coordinator (Chapter 08), which routes it to the appropriate sub-pool — LegacyPool for regular transactions, BlobPool for EIP-4844 blob transactions. The sub-pool validates the transaction: signature recovery, nonce check, balance check, gas limit check, and pool-level limits (account slots, global slots, price floor).

If validation passes, the transaction lands in the pool’s pending map — it is ready for inclusion in a block. The pool publishes a NewTxsEvent via an event.Feed (Chapter 14), notifying the handler.
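The admission flow, validate first, then land in the pending map, can be sketched with a toy pool. The types and error names below are simplified stand-ins for core/txpool, not its real API:

```go
package main

import (
	"errors"
	"fmt"
)

// Tx is a stripped-down transaction for illustration; the real
// types.Transaction carries far more fields.
type Tx struct {
	From  string
	Nonce uint64
	Cost  uint64 // value + gas * price, the worst-case balance draw
}

type account struct {
	nonce   uint64
	balance uint64
}

// Pool mimics the validation-then-pending flow described above.
type Pool struct {
	state   map[string]account
	pending map[string][]Tx
}

var (
	errNonceTooLow       = errors.New("nonce too low")
	errInsufficientFunds = errors.New("insufficient funds")
)

// Add runs the basic checks (nonce, balance) and, on success, appends the
// transaction to the sender's pending list.
func (p *Pool) Add(tx Tx) error {
	acct := p.state[tx.From]
	if tx.Nonce < acct.nonce {
		return errNonceTooLow
	}
	if tx.Cost > acct.balance {
		return errInsufficientFunds
	}
	p.pending[tx.From] = append(p.pending[tx.From], tx)
	return nil
}

func main() {
	p := &Pool{
		state:   map[string]account{"alice": {nonce: 5, balance: 100}},
		pending: map[string][]Tx{},
	}
	fmt.Println(p.Add(Tx{From: "alice", Nonce: 5, Cost: 50})) // accepted
	fmt.Println(p.Add(Tx{From: "alice", Nonce: 3, Cost: 10})) // nonce too low
}
```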

Stage 3: Peer Broadcast#

The handler in eth/handler.go (Chapter 12) picks up the NewTxsEvent and broadcasts the transaction to connected peers. For each peer, the handler decides between two strategies:

  • Direct broadcast — send the full transaction to a small fraction of peers (square root of total).
  • Hash announcement — send only the transaction hash to the remaining peers, who can request the full transaction if they don’t have it.

This is the eth/68 wire protocol at work. The transaction propagates across the network, landing in the pools of other nodes.
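The square-root split can be sketched in a few lines. This is a simplification: the real handler also tracks which peers already know the transaction before choosing recipients:

```go
package main

import (
	"fmt"
	"math"
)

// splitPeers divides peers into a direct-broadcast set of roughly sqrt(n)
// and a hash-announce set for the remainder, as described above.
func splitPeers(peers []string) (direct, announce []string) {
	n := int(math.Sqrt(float64(len(peers))))
	if n < 1 && len(peers) > 0 {
		n = 1 // always broadcast to at least one peer if any exist
	}
	return peers[:n], peers[n:]
}

func main() {
	peers := make([]string, 16)
	for i := range peers {
		peers[i] = fmt.Sprintf("peer-%d", i)
	}
	direct, announce := splitPeers(peers)
	fmt.Println(len(direct), len(announce)) // 4 peers get the full tx, 12 get the hash
}
```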

Stage 4: Block Building#

Block building is triggered by the consensus client. When the CL calls engine_forkchoiceUpdatedV* with payloadAttributes, geth’s Engine API (Chapter 09) tells the miner to start building a payload.

The miner’s generateWork() method:

  1. Prepares a block header with the parent hash, timestamp, and other consensus fields.
  2. Creates a StateDB snapshot at the parent block’s state root.
  3. Calls fillTransactions() — pulls pending transactions from the pool, sorts them by effective tip (highest-paying first), and executes each one.
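The effective-tip ordering in step 3 can be sketched as follows. The real fillTransactions uses a nonce-aware heap per sender; this flat sort only illustrates the pricing rule:

```go
package main

import (
	"fmt"
	"sort"
)

// pendingTx carries just the fee fields needed to compute an effective tip.
type pendingTx struct {
	Hash   string
	TipCap uint64 // max priority fee per gas
	FeeCap uint64 // max fee per gas
}

// effectiveTip is min(tipCap, feeCap - baseFee): what the block producer
// actually earns per gas on top of the burned base fee.
func effectiveTip(tx pendingTx, baseFee uint64) uint64 {
	if tx.FeeCap < baseFee {
		return 0 // underpriced; would be filtered out in practice
	}
	if tip := tx.FeeCap - baseFee; tip < tx.TipCap {
		return tip
	}
	return tx.TipCap
}

// orderByTip sorts highest-paying first, the ordering described above.
func orderByTip(txs []pendingTx, baseFee uint64) {
	sort.Slice(txs, func(i, j int) bool {
		return effectiveTip(txs[i], baseFee) > effectiveTip(txs[j], baseFee)
	})
}

func main() {
	txs := []pendingTx{
		{"a", 2, 50}, {"b", 10, 45}, {"c", 1, 100},
	}
	// At base fee 40, effective tips are a: 2, b: min(10, 5) = 5, c: 1.
	orderByTip(txs, 40)
	for _, tx := range txs {
		fmt.Println(tx.Hash)
	}
}
```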

Stage 5: Transaction Execution#

For each transaction, the miner calls core.ApplyTransaction(), which creates a stateTransition and runs its execute() method (Chapter 06). The execution pipeline:

  1. preCheck() — verify the sender’s nonce and balance, then call buyGas() to deduct the upfront gas cost and reduce the block’s GasPool.
  2. EVM dispatch — call evm.Call() for message calls or evm.Create() for contract creation.
  3. Gas refund — calcRefund() computes the refund (capped at 1/5 of gas used since EIP-3529), then returnGas() credits the remaining gas back to the sender.
  4. Fee distribution — the priority fee (tip) goes to the coinbase; the base fee is burned.
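The arithmetic in steps 3 and 4 can be worked through with concrete numbers. The settle function below is a hypothetical condensation of what stateTransition does across several methods:

```go
package main

import "fmt"

// settle computes post-execution gas accounting: the EIP-3529 refund cap
// (gasUsed/5), the net gas charged, and the fee split between the coinbase
// tip and the burned base fee. The function name and shape are illustrative.
func settle(gasLimit, gasLeft, refundCounter, baseFee, tipPerGas uint64) (gasUsed, refund, tipToCoinbase, burned uint64) {
	gasUsed = gasLimit - gasLeft
	refund = refundCounter
	if maxRefund := gasUsed / 5; refund > maxRefund { // EIP-3529 cap
		refund = maxRefund
	}
	gasUsed -= refund // the sender is only charged for net gas used
	tipToCoinbase = gasUsed * tipPerGas
	burned = gasUsed * baseFee
	return
}

func main() {
	// 100k gas limit, 20k left over, 30k accumulated refund counter,
	// base fee 10 wei/gas, priority fee 2 wei/gas.
	used, refund, tip, burned := settle(100_000, 20_000, 30_000, 10, 2)
	fmt.Println(used, refund, tip, burned)
}
```

Note how the refund counter (30,000) is clipped to 16,000 because it exceeds one fifth of the 80,000 gas consumed.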

Stage 6: EVM Execution#

Inside evm.Call(), the EVM.Run() loop (Chapter 07) takes over:

  1. Fetch the next opcode from the contract’s bytecode.
  2. Look up the opcode in the fork-specific jump table to get its gas cost and handler function.
  3. Deduct gas. If insufficient, abort with an out-of-gas error.
  4. Execute the handler — arithmetic (opAdd), storage (opSstore, opSload), calls (opCall, opDelegateCall), or any other EVM operation.
  5. Repeat until STOP, RETURN, REVERT, or an error.
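The fetch/lookup/charge/execute loop can be sketched as a tiny stack machine. The opcodes reuse real EVM numbering (PUSH1 = 0x60, ADD = 0x01, STOP = 0x00), but the gas costs and jump table here are toy values, not the fork-specific tables in core/vm:

```go
package main

import (
	"errors"
	"fmt"
)

const (
	STOP  = 0x00
	ADD   = 0x01
	PUSH1 = 0x60
)

// operation pairs a gas cost with a handler, like an entry in geth's
// jump table (simplified: no stack-depth or memory checks).
type operation struct {
	gas     uint64
	execute func(pc *int, code []byte, stack *[]uint64) error
}

var jumpTable = map[byte]operation{
	PUSH1: {gas: 3, execute: func(pc *int, code []byte, stack *[]uint64) error {
		*pc++ // the next byte is immediate data, not an opcode
		*stack = append(*stack, uint64(code[*pc]))
		return nil
	}},
	ADD: {gas: 3, execute: func(pc *int, code []byte, stack *[]uint64) error {
		n := len(*stack)
		(*stack)[n-2] += (*stack)[n-1]
		*stack = (*stack)[:n-1]
		return nil
	}},
	STOP: {gas: 0},
}

var errOutOfGas = errors.New("out of gas")

// run executes bytecode until STOP, an unknown opcode, or gas exhaustion.
func run(code []byte, gas uint64) ([]uint64, error) {
	var stack []uint64
	for pc := 0; pc < len(code); pc++ {
		op, ok := jumpTable[code[pc]]
		if !ok {
			return nil, fmt.Errorf("invalid opcode %#x", code[pc])
		}
		if gas < op.gas {
			return nil, errOutOfGas
		}
		gas -= op.gas
		if code[pc] == STOP {
			break
		}
		if err := op.execute(&pc, code, &stack); err != nil {
			return nil, err
		}
	}
	return stack, nil
}

func main() {
	// PUSH1 2, PUSH1 3, ADD, STOP  ->  stack holds [5]
	stack, err := run([]byte{PUSH1, 2, PUSH1, 3, ADD, STOP}, 100)
	fmt.Println(stack, err)
}
```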

State-mutating opcodes like SSTORE write to the StateDB (Chapter 04), which records changes in dirty maps on the corresponding stateObject. These changes are not yet committed to the trie — they exist only in memory, protected by the journal’s undo log so they can be reverted if the transaction fails.
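The journal's undo-log mechanism can be sketched with a flat map. This toy miniState captures the essential shape of Snapshot/RevertToSnapshot; the real journal records many more change types than storage writes:

```go
package main

import "fmt"

// storageChange records the previous value of a slot so it can be restored,
// mirroring a journal entry (simplified to one change type).
type storageChange struct {
	key, prev string
	existed   bool
}

// miniState is a toy StateDB: a flat map plus a journal of undo entries.
type miniState struct {
	storage map[string]string
	journal []storageChange
}

// Set records the previous value before mutating, so Revert can roll back.
func (s *miniState) Set(key, val string) {
	prev, ok := s.storage[key]
	s.journal = append(s.journal, storageChange{key, prev, ok})
	s.storage[key] = val
}

// Snapshot returns the current journal length; Revert undoes entries beyond
// it in reverse order, just as RevertToSnapshot replays the journal.
func (s *miniState) Snapshot() int { return len(s.journal) }

func (s *miniState) Revert(snap int) {
	for i := len(s.journal) - 1; i >= snap; i-- {
		ch := s.journal[i]
		if ch.existed {
			s.storage[ch.key] = ch.prev
		} else {
			delete(s.storage, ch.key)
		}
	}
	s.journal = s.journal[:snap]
}

func main() {
	s := &miniState{storage: map[string]string{"a": "1"}}
	snap := s.Snapshot()
	s.Set("a", "2")
	s.Set("b", "9")
	s.Revert(snap) // transaction failed: roll back both writes
	fmt.Println(s.storage)
}
```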

Stage 7: State and Trie#

After all transactions are executed, the miner calls the consensus engine’s FinalizeAndAssemble(), which internally invokes Finalize() to apply block-level state changes (beacon withdrawals, system-level operations), then computes the final state root by committing dirty state through the trie (Chapter 03):

  • Each modified account’s storage trie is updated and hashed.
  • The account trie is updated with new account states and hashed.
  • The resulting 32-byte Merkle root becomes the block header’s StateRoot.
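The key property of the state root, that any change to any account changes the 32-byte commitment, can be demonstrated with a deliberately simplified Merkle construction. Real geth hashes a Merkle Patricia Trie with Keccak-256; this sketch uses SHA-256 over sorted key/value leaves purely to show the idea:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// rootOf computes a Merkle-style commitment over account states. This is a
// stand-in, not the real MPT: leaves are hashed in sorted key order (the
// determinism the trie provides structurally), then pairwise-hashed upward.
func rootOf(accounts map[string]string) [32]byte {
	keys := make([]string, 0, len(accounts))
	for k := range accounts {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	leaves := make([][32]byte, 0, len(keys))
	for _, k := range keys {
		leaves = append(leaves, sha256.Sum256([]byte(k+"="+accounts[k])))
	}
	for len(leaves) > 1 { // hash pairs until a single root remains
		var next [][32]byte
		for i := 0; i < len(leaves); i += 2 {
			if i+1 == len(leaves) {
				next = append(next, leaves[i]) // odd leaf carried up
				break
			}
			next = append(next, sha256.Sum256(append(leaves[i][:], leaves[i+1][:]...)))
		}
		leaves = next
	}
	if len(leaves) == 0 {
		return sha256.Sum256(nil)
	}
	return leaves[0]
}

func main() {
	before := rootOf(map[string]string{"alice": "100", "bob": "50"})
	after := rootOf(map[string]string{"alice": "99", "bob": "50"})
	fmt.Println(before != after) // the root commits to every account
}
```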

The assembled block — header, transaction list, receipts, and withdrawals — is the payload returned to the consensus client via engine_getPayloadV*.

Stage 8: Block Insertion#

The consensus client validates the block on the beacon chain, then sends it back to geth via engine_newPayloadV*. Geth runs InsertBlockWithoutSetHead() (Chapter 10):

  1. Header verification — check timestamp, gas limit, extra data, and other consensus constraints.
  2. Body validation — verify transaction root, uncles hash, withdrawals hash against the header.
  3. State processing — re-execute all transactions (exactly as in Stages 5 and 6 above) and compare the resulting state root against the header’s StateRoot. If they don’t match, the block is rejected.
  4. Write to database — block header, body, receipts, and state trie nodes are persisted to the key-value store (LevelDB or Pebble, Chapter 05).

When the CL calls engine_forkchoiceUpdatedV* designating this block as the new head, geth updates its canonical chain pointers and emits a ChainHeadEvent.

Stage 9: Persistence and Propagation#

The committed state flows down the storage stack:

  • StateDB flushes dirty state objects to the trie (Chapter 03).
  • The trie commits new nodes to TrieDB — either the path-based or hash-based scheme (Chapter 05).
  • TrieDB eventually flushes to ethdb (LevelDB or Pebble) on disk.
  • Old blocks are migrated to the freezer (append-only ancient storage) to keep the active database lean.

Simultaneously, the handler broadcasts the new block to peers (Chapter 12) — full block to a few, block hash to the rest. Peers that are behind can use snap sync or full sync to catch up.

Stage 10: Finality#

The consensus client eventually marks the block (or an ancestor) as finalized — meaning it will never be reverted. Geth updates its currentFinalBlock pointer (Chapter 10). The transaction is now permanently part of the canonical chain.


Architecture Reference#

The table below maps every major subsystem to its chapter, primary source files, and the key structs or interfaces that define it.

Foundation Layer#

| Chapter | Title | Key Files | Key Types |
|---|---|---|---|
| 00 | Codebase Overview | cmd/geth/main.go, node/node.go, eth/backend.go | |
| 01 | Primitives, Configuration, and Encoding | common/common.go, params/config.go, rlp/encode.go, crypto/crypto.go | Address, Hash, ChainConfig |
| 02 | Core Data Types | core/types/block.go, core/types/transaction.go, core/types/receipt.go | Block, Header, Transaction, Receipt, Log |

State Architecture#

| Chapter | Title | Key Files | Key Types |
|---|---|---|---|
| 03 | Merkle Patricia Trie | trie/trie.go, trie/node.go, trie/hasher.go, trie/stacktrie.go | Trie, fullNode, shortNode, StackTrie |
| 04 | Account and State | core/state/statedb.go, core/state/state_object.go, core/state/journal.go | StateDB, stateObject, journal |
| 05 | The Storage Stack | ethdb/database.go, ethdb/leveldb/, ethdb/pebble/, core/rawdb/schema.go | KeyValueStore, Database, Freezer |

Execution#

| Chapter | Title | Key Files | Key Types |
|---|---|---|---|
| 06 | Transaction Execution | core/state_transition.go, core/state_processor.go, core/gaspool.go | stateTransition, GasPool, ExecutionResult |
| 07 | The EVM Deep Dive | core/vm/evm.go, core/vm/interpreter.go, core/vm/jump_table.go | EVM, Contract, JumpTable |

Chain Operations#

| Chapter | Title | Key Files | Key Types |
|---|---|---|---|
| 08 | The Transaction Pool | core/txpool/txpool.go, core/txpool/legacypool/, core/txpool/blobpool/ | TxPool, LegacyPool, BlobPool |
| 09 | Block Production and Consensus | miner/worker.go, miner/payload_building.go, consensus/consensus.go, eth/catalyst/api.go | Miner, Engine, Payload |
| 10 | The Blockchain | core/blockchain.go, core/headerchain.go, core/genesis.go | BlockChain, HeaderChain |

Networking#

| Chapter | Title | Key Files | Key Types |
|---|---|---|---|
| 11 | P2P Networking and Discovery | p2p/server.go, p2p/peer.go, p2p/rlpx/rlpx.go, p2p/discover/ | Server, Peer, UDPv4, UDPv5 |
| 12 | Sync and the Ethereum Wire Protocol | eth/handler.go, eth/downloader/, eth/fetcher/, eth/protocols/eth/ | handler, Downloader, TxFetcher |

Interface and Lifecycle#

| Chapter | Title | Key Files | Key Types |
|---|---|---|---|
| 13 | JSON-RPC and Accounts | rpc/server.go, rpc/handler.go, internal/ethapi/api.go, accounts/ | Server, Backend, EthAPIBackend, KeyStore |
| 14 | Node Lifecycle | cmd/geth/main.go, node/node.go, eth/backend.go, event/feed.go | Node, Lifecycle, Ethereum, Feed |

Cross-Cutting Patterns#

Several design patterns appear across all subsystems. Recognizing them makes the codebase easier to navigate.

The Lifecycle Pattern#

Many long-lived components follow a similar contract — setup methods that launch goroutines, and teardown methods that shut them down. The Node orchestrates the formal Lifecycle interface (Start()/Stop()) for registered services (Chapter 14), starting them in registration order and stopping them in reverse order. Components that implement this interface include:

  • Ethereum (eth/backend.go) — the core service
  • handler (eth/handler.go) — sync and broadcast
  • Server (p2p/server.go) — P2P networking

Other components use variant naming but follow the same pattern — LegacyPool and BlobPool use Init()/Close(), and BlockChain is initialized via NewBlockChain() and torn down via Stop().
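A minimal sketch of this contract, assuming the simplified interface described above (the real node.Node manages errors, restarts, and partial-start rollback as well):

```go
package main

import "fmt"

// Lifecycle matches the contract described above: Start launches the
// service's goroutines, Stop tears them down.
type Lifecycle interface {
	Start() error
	Stop() error
}

// node starts services in registration order and stops them in reverse,
// the same ordering the real Node enforces.
type node struct{ services []Lifecycle }

func (n *node) Register(s Lifecycle) { n.services = append(n.services, s) }

func (n *node) Start() error {
	for _, s := range n.services {
		if err := s.Start(); err != nil {
			return err
		}
	}
	return nil
}

func (n *node) Stop() error {
	for i := len(n.services) - 1; i >= 0; i-- {
		if err := n.services[i].Stop(); err != nil {
			return err
		}
	}
	return nil
}

// svc records start/stop events into a shared log for demonstration.
type svc struct {
	name string
	log  *[]string
}

func (s svc) Start() error { *s.log = append(*s.log, "start "+s.name); return nil }
func (s svc) Stop() error  { *s.log = append(*s.log, "stop "+s.name); return nil }

func main() {
	var log []string
	n := &node{}
	n.Register(svc{"p2p", &log})
	n.Register(svc{"eth", &log})
	n.Start()
	n.Stop()
	fmt.Println(log) // p2p starts first and stops last
}
```

Stopping in reverse order matters: a service may depend on one registered before it (eth on p2p, say), so teardown must release dependents before their dependencies.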

The Event Feed Pattern#

Decoupled communication between subsystems uses event.Feed (Chapter 14). A producer calls Send(), and all subscribers receive the value on their channels. Key feeds in geth:

| Feed | Producer | Consumers | Event Type |
|---|---|---|---|
| chainHeadFeed | BlockChain | miner, handler, filter system | ChainHeadEvent |
| (via SubscribeTransactions) | TxPool | handler (for broadcast) | NewTxsEvent |
| walletEvent | account backends | startNode() wallet listener | WalletEvent |
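The fan-out behavior can be sketched with plain channels. This is only the spirit of event.Feed: the real implementation is type-safe via reflection and handles slow subscribers without unbounded blocking, whereas this sketch assumes buffered channels:

```go
package main

import "fmt"

// feed is a minimal fan-out: Send delivers one value to every subscriber.
type feed struct{ subs []chan string }

// Subscribe registers a new buffered channel and returns its receive side.
func (f *feed) Subscribe() <-chan string {
	ch := make(chan string, 8) // buffered so Send does not block (simplification)
	f.subs = append(f.subs, ch)
	return ch
}

// Send delivers v to all subscribers, like event.Feed.Send.
func (f *feed) Send(v string) {
	for _, ch := range f.subs {
		ch <- v
	}
}

func main() {
	var head feed
	miner := head.Subscribe()
	handler := head.Subscribe()
	head.Send("ChainHeadEvent{block 42}")
	fmt.Println(<-miner, <-handler) // both subscribers receive the event
}
```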

The Backend Interface Pattern#

API layers are decoupled from implementation through interfaces. The RPC methods in internal/ethapi/ talk to a Backend interface (Chapter 13), which EthAPIBackend implements by delegating to BlockChain, TxPool, Miner, and other concrete types. This allows the same API code to work in full node, light client, and test contexts.
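A small sketch of the decoupling, with a deliberately tiny interface (geth's real Backend has dozens of methods; these two and the type names are illustrative):

```go
package main

import "fmt"

// Backend is the narrow interface the API layer depends on.
type Backend interface {
	CurrentBlockNumber() uint64
	SendTx(raw string) error
}

// fullNodeBackend stands in for EthAPIBackend, wiring the interface to
// concrete components (here, plain fields instead of BlockChain and TxPool).
type fullNodeBackend struct {
	head uint64
	pool []string
}

func (b *fullNodeBackend) CurrentBlockNumber() uint64 { return b.head }
func (b *fullNodeBackend) SendTx(raw string) error {
	b.pool = append(b.pool, raw)
	return nil
}

// blockNumberAPI is API code written purely against the interface, so the
// same function works with a full node, a light client, or a test stub.
func blockNumberAPI(b Backend) string {
	return fmt.Sprintf("0x%x", b.CurrentBlockNumber())
}

func main() {
	b := &fullNodeBackend{head: 255}
	fmt.Println(blockNumberAPI(b)) // hex-encoded head block number
}
```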

The Config Struct Pattern#

Every major subsystem has a Config struct that aggregates its tunable parameters:

| Config | Package | Controls |
|---|---|---|
| node.Config | node/config.go | Data directory, P2P settings, RPC endpoints |
| ethconfig.Config | eth/ethconfig/config.go | Sync mode, cache sizes, gas price, tx pool limits |
| p2p.Config | p2p/config.go | Max peers, listen address, NAT, bootnodes |
| ChainConfig | params/config.go | Fork activation timestamps, chain ID, consensus rules |
| TxPool.Config | core/txpool/legacypool/ | Price limits, slot limits, journal settings |

CLI flags map to these config structs during startup (Chapter 14): utils.SetNodeConfig() writes to node.Config, utils.SetEthConfig() writes to ethconfig.Config, and so on.

The Four-Layer Storage Model#

Data flows through four layers on its way to disk (Chapters 03, 04, 05):

StateDB — in-memory dirty maps, journal for rollback
   |
   v
Trie — Merkle Patricia Trie nodes (fullNode, shortNode)
   |
   v
TrieDB — caching layer (path-based or hash-based scheme)
   |
   v
ethdb — key-value store (LevelDB or Pebble) + freezer for ancient data

Reads traverse this stack top-down: StateDB checks its dirty cache, falls through to the trie, which falls through to TrieDB, which falls through to disk. Writes accumulate at the top and flush downward on commit.
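The top-down read fall-through can be sketched as a chain of caches. The layer type below is a hypothetical model, not geth code; each level answers from its own cache or defers to the level beneath:

```go
package main

import "fmt"

// layer is one level of the storage stack: a local cache plus a fallback to
// the layer below (nil at the bottom).
type layer struct {
	name  string
	cache map[string]string
	below *layer
}

// get checks this layer's cache and falls through to the layer below on a
// miss, reporting which layer answered.
func (l *layer) get(key string) (val, source string, ok bool) {
	if v, hit := l.cache[key]; hit {
		return v, l.name, true
	}
	if l.below != nil {
		return l.below.get(key)
	}
	return "", "", false
}

func main() {
	// Bottom-up wiring: ethdb <- triedb <- statedb.
	disk := &layer{name: "ethdb", cache: map[string]string{"acct": "on-disk"}}
	triedb := &layer{name: "triedb", cache: map[string]string{}, below: disk}
	statedb := &layer{name: "statedb", cache: map[string]string{"hot": "dirty"}, below: triedb}

	v, src, _ := statedb.get("hot") // answered by the dirty cache
	fmt.Println(v, src)
	v, src, _ = statedb.get("acct") // falls all the way through to disk
	fmt.Println(v, src)
}
```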


Reading Paths#

Depending on your interest, here are suggested paths through the chapters:

“I want to understand transaction execution” Ch 01 (primitives) -> Ch 02 (data types) -> Ch 06 (execution pipeline) -> Ch 07 (EVM) -> Ch 04 (state)

“I want to understand block production” Ch 02 (data types) -> Ch 08 (tx pool) -> Ch 09 (block production) -> Ch 06 (execution) -> Ch 10 (blockchain)

“I want to understand state storage” Ch 01 (primitives) -> Ch 03 (trie) -> Ch 04 (state) -> Ch 05 (storage stack)

“I want to understand networking and sync” Ch 11 (P2P) -> Ch 12 (sync protocol) -> Ch 10 (blockchain) -> Ch 08 (tx pool)

“I want to understand the RPC interface” Ch 13 (JSON-RPC) -> Ch 04 (state, for eth_call) -> Ch 06 (execution, for eth_call internals)

“I want to understand how geth starts up” Ch 14 (node lifecycle) -> Ch 00 (overview) -> then any subsystem chapter


The Central Answer#

The guide’s central question was: What happens when a transaction enters geth and becomes part of the permanent chain?

The answer, compressed to one paragraph:

A signed transaction arrives via JSON-RPC (Ch 13), is validated and stored in the transaction pool (Ch 08), broadcast to peers over the eth/68 wire protocol (Ch 12), pulled by the miner when the consensus client requests a payload (Ch 09), executed through the state transition pipeline (Ch 06) and the EVM (Ch 07), mutating the world state held in StateDB (Ch 04) and committed through the Merkle Patricia Trie (Ch 03). The resulting block is inserted into the blockchain (Ch 10), persisted through the storage stack to LevelDB or Pebble (Ch 05), propagated to peers (Ch 11, Ch 12), and eventually marked as finalized — all within a process orchestrated by the Node lifecycle (Ch 14) using the primitives and configuration established at the foundation (Ch 01, Ch 02).

https://kehaozheng.vercel.app/posts/chainethgeth/15_tying_it_all_together/
Author: Kehao Zheng
Published: 2026-04-24
License: CC BY-NC-SA 4.0
