Releases: ethereum-optimism/optimism
Release op-contracts v1.4.0-rc.2 - Fault Proofs V1
Overview
This release candidate enables fault proofs in the withdrawal path of the bridge on L1. It also modifies the SystemConfig to remove the legacy L2OutputOracle contract in favor of the DisputeGameFactory.
Specification here.
The full set of L1 contracts included in this release is:
- AddressManager: Latest (this has no version) (No change from prior version)
- AnchorStateRegistry: 1.0.0 (New Contract)
- DelayedWETH: 1.0.0 (New Contract)
- DisputeGameFactory: 1.0.0 (New Contract)
- L1CrossDomainMessenger: 2.3.0 (No change from prior version)
- L1ERC721Bridge: 2.1.0 (No change from prior version)
- L1StandardBridge: 2.1.0 (No change from prior version)
- OptimismMintableERC20Factory: 1.9.0 (No change from prior version)
- OptimismPortal: 3.8.0 (Modified from prior version, with breaking changes)
- SystemConfig: 2.0.0 (Modified from prior version, with breaking changes)
- SuperchainConfig: 1.1.0 (No change from prior version)
- ProtocolVersions: 1.0.0 (No change from prior version)
The L2OutputOracle is no longer used for chains running this version of the L1 contracts.
Contracts Changed
L2OutputOracle
- The L2OutputOracle has been removed from the deployed protocol.
OptimismPortal
- The OptimismPortal has been modified to allow users to prove their withdrawals against outputs that were proposed as dispute games, created via a trusted DisputeGameFactory contract. spec.
SystemConfig
- The SystemConfig has been changed to remove the L2_OUTPUT_ORACLE storage slot as well as the getter for the contract. To replace it, a new getter for the DisputeGameFactory proxy has been added.
New Contracts
DisputeGameFactory
- The DisputeGameFactory is the new inbox for L2 output proposals on L1, creating dispute games.
- Output proposals are now permissionless by default.
- Challenging output proposals is now permissionless by default.
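Since proposals are now made by creating dispute games, a permissionless proposal is a single factory call. A hypothetical sketch using Foundry's cast (the factory address, game type, root claim, and extra data are all placeholders, and the exact `create` signature and any required bond should be checked against the deployed ABI):

```shell
# Hypothetical: propose an L2 output root by creating a dispute game.
# DGF_ADDR, ROOT_CLAIM (bytes32), and EXTRA_DATA are placeholders; the
# game type 0 and any bond (sent via --value) are assumptions to verify
# against the deployed factory.
cast send "$DGF_ADDR" \
  "create(uint32,bytes32,bytes)" \
  0 "$ROOT_CLAIM" "$EXTRA_DATA" \
  --rpc-url "$ETH_RPC_URL" \
  --private-key "$PK"
```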
FaultDisputeGame
- The FaultDisputeGame facilitates trustless disputes over L2 output roots proposed on L1. spec.
PermissionedDisputeGame
- A child of the FaultDisputeGame contract that permissions proposing and challenging. Deployed as a safety mechanism to temporarily restore liveness in the event of the FaultDisputeGame's failure.
MIPS
- The MIPS VM is a minimal kernel emulating the MIPS32 ISA with a subset of available Linux syscalls. This contract allows for executing single steps of a fault proof program at the base case of disputes in the FaultDisputeGame. spec.
PreimageOracle
- The PreimageOracle contract is responsible for serving verified data to the program running on top of the MIPS VM during single-step execution. When data enters the PreimageOracle, it is verified to be correctly formatted and honest. spec.
AnchorStateRegistry
- The AnchorStateRegistry contract is responsible for tracking the latest finalized root claims from various dispute game types.
DelayedWETH
- DelayedWETH is an extension of WETH9 that delays unwrapping operations. Bonds that are placed in dispute games are held within this contract, and the owner may intervene in withdrawals to redistribute funds to submitters in case of dispute game resolution failure.
Full Changelog
op-node v1.7.5
Partial changelog (op-node)
- chore(op-service): reduce allocations by @hoank101 in #10331
- op-service/eth: Optimize ssz decoding by @sebastianst in #10362
New Contributors (all monorepo)
- @Ethnical made their first contribution in #10246
- @AaronChen0 made their first contribution in #10284
- @threewebcode made their first contribution in #10229
- @SanShi2023 made their first contribution in #10329
- @hoank101 made their first contribution in #10331
Full Changelog (all monorepo): v1.7.4...op-node/v1.7.5
🚢 Docker image: https://us-docker.pkg.dev/oplabs-tools-artifacts/images/op-node:v1.7.5
op-stack v1.7.4
⚠️ Strongly recommended maintenance release
🐞 op-node blob reorg bug fix (#10210)
If an L1 block got reorg'd out during blob retrieval, an op-node might get stuck in a loop retrieving a blob that will never exist, requiring a restart. This was fixed by internally signaling the right error types, forcing a derivation pipeline reset in such cases.
✨ op-batcher & op-proposer node sync start (#10116 #10193 #10262 #10273)
op-batcher and op-proposer can now wait for the sequencer to sync to the current L1 head before starting their work.
This fixes an issue where restarting op-batcher/op-proposer and op-node at the same time could cause duplicate batches to be resent from the last finalized L2 block, because a freshly restarted op-node resyncs from the finalized head, potentially signaling a too-early safe head in its sync status.
🏳️ This feature is off by default, so we recommend testing it by using the new batcher and proposer flag --wait-node-sync (or its corresponding env vars).
Enabling this will cause op-batcher and op-proposer to wait for the sequencer's verifier confirmation depth, typically 4 L1 blocks (~1 min), at startup.
🏳️ To speed up this process in case no recent batcher transactions have happened, there's another optional new batcher flag --check-recent-txs-depth that lets the batcher check for recent batcher transactions to determine a potentially earlier sync target. This feature is off by default (0) and should be set to the sequencer's verifier confirmation depth to be enabled.
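The two startup flags above can be combined. A minimal sketch (the RPC endpoints are placeholders, and the confirmation depth of 4 is an illustrative assumption taken from the typical value mentioned above, not a recommendation):

```shell
# Hypothetical op-batcher invocation sketching the new node-sync startup
# flags. All endpoint values are placeholders; check `op-batcher --help`
# for your version's exact flag set.
# --wait-node-sync: wait for the sequencer to sync to the current L1 head
# --check-recent-txs-depth: assumed verifier confirmation depth; 0 (default) disables
op-batcher \
  --l2-eth-rpc=http://localhost:9545 \
  --rollup-rpc=http://localhost:7545 \
  --wait-node-sync=true \
  --check-recent-txs-depth=4
```

The same settings should be reachable via the corresponding env vars (e.g. OP_BATCHER_WAIT_NODE_SYNC).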
Partial changelog
op-node
- fix: set version during build process for op-node,batcher,proposer by @bitwiseguy in #10087
- fix: Fix the error judgment when obtaining the finalized/safe block o… by @anonymousGiga in #10127
- chore: Update dependency on superchain package by @geoknee in #10204
- op-service: Return ethereum.NotFound on 404 by @trianglesphere in #10210
- op-node: remove dependency on bindings by @tynes in #10213
- op-node: prevent spamming of reqs for blocks triggered by checkForGapInUnsafeQueue by @bitwiseguy in #10063
op-batcher & op-proposer
- op-service/dial: Add WaitRollupSync by @sebastianst in #10116
- op-proposer: remove dep on op-bindings by @tynes in #10218
- op-batcher: wait for node sync & check recent L1 txs at startup to avoid duplicate txs by @bitwiseguy in #10193
- op-proposer: Add option to wait for rollup sync during startup by @bitwiseguy in #10262
- op-batcher: Always use recent block from startup tx check by @sebastianst in #10273
New Contributors (all monorepo)
- @sellskin made their first contribution in #9963
- @bitwiseguy made their first contribution in #10087
- @0xyjk made their first contribution in #10101
- @anonymousGiga made their first contribution in #10127
- @iczc made their first contribution in #10131
- @brucexc made their first contribution in #9874
- @dome made their first contribution in #10165
- @testwill made their first contribution in #10203
- @dajuguan made their first contribution in #9986
Full Changelog: v1.7.3...v1.7.4
🚢 Docker Images:
op-contracts/v1.4.0-rc.1
This contracts release adds an optional DA Challenge contract for use with OP Plasma. If usePlasma is set to true in the deploy config, then the OP Plasma feature will be enabled.
The DA challenge contract is used to ensure that data posted as part of OP Plasma is made available. There are four deploy config parameters that must be set when using this feature: daChallengeWindow, daResolveWindow, daBondSize, and daResolverRefundPercentage.
Release op-node, op-batcher, op-proposer v1.7.3
⬆️ This is a recommended release for Optimism Mainnet, particularly for op-batcher operators.
This release contains general fixes & improvements to op-node, op-batcher, & op-proposer. It also updates the monorepo op-geth dependency to https://github.com/ethereum-optimism/op-geth/releases/tag/v1.101311.0
The most important change to be aware of is that the op-batcher is now significantly more performant in handling span batches that contain a large number of L2 blocks.
Partial Changelog
- Rename derive.CompressorFullErr to conventional ErrCompressorFull by @sebastianst in #9936
- chore(op-proposer): Update Proposer Description by @refcell in #9916
- op-node: fetch l1 block with retry by @jsvisa in #9869
- op-challenger: Unhide subcommands by @ajsutton in #9989
- Tests: Batching Benchmarks by @axelKingsley in #9927
- feat(op-service):Persist RethDB instance in the go fetcher struct. by @Nickqiaoo in #9904
- op-batcher: stateful span batches & blind compressor by @axelKingsley in #9954
- simplify bigMSB by @zhiqiangxu in #9998
- all: use the built-in slices library by @carehabit in #10005
- op-node: p2p ping test CI flake fix by @protolambda in #10010
- fix(op-node): handle async disconnects to avoid test flakiness by @felipe-op in #10019
- Update op-geth dependency to v1.101309.0-rc.2 by @roberto-bayardo in #9935
- txmgr: fix racy access to nonces slice in TestQueue_Send with mutex by @sebastianst in #10016
- CI: Less verbose output by @trianglesphere in #10059
- handle Read more correctly by @zhiqiangxu in #10034
- update geth dependency to version w/ v1.13.11 upstream commits by @roberto-bayardo in #10041
- op-batcher: Embed Zlib Compressor into Span Channel Out ; Compression Avoidance Strategy by @axelKingsley in #10002
Full Changelog: v1.7.2...v1.7.3
🚢 Docker Images:
op-node, op-batcher, op-proposer v1.7.2 - Batcher Improvements
⬆️ This is a strongly recommended release of op-batcher for all chain operators.
op-batcher changes
Multi-blob support in op-batcher
See the release notes for v1.7.2-rc.3 for details on how to configure a multi-blob batcher.
Improved channel duration tracking
The batcher now tracks channel durations relative to the last L1 origin in a previous channel. The last channel's L1 origin is restored at startup and during reorgs.
This ensures that the desired channel duration survives restarts of the batcher, which is particularly important for low-throughput chains that use channel durations of a few hours.
There's a known quirk in the new tracking design that leads to a slightly lower effective channel duration (~1 min lower), related to how a channel timeout is determined relative to the current L1 head rather than the current channel's newest L1 origin. This will be improved in a future release.
Breaking compressor configuration change
The channel and compressor configuration has been simplified by removing the target-frame-size flag. The only remaining parameters for configuring the channel size are:
- max-l1-tx-size - default of 120k for calldata; for blobs, this is overwritten to the max blob size
- target-num-frames - default of 1 for calldata; for multi-blob txs, set this to the desired number of blobs per blob tx (e.g. 6)
The default compressor is the shadow compressor, which is recommended in production.
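Putting the remaining channel-size parameters together, a minimal sketch of a 6-blob channel configuration (the blob target of 6 matches the example above; the explicit shadow compressor setting is an assumption, since it is the default anyway):

```shell
# Sketch of a multi-blob channel configuration after the removal of
# target-frame-size. Values are illustrative, not recommendations.
op-batcher \
  --data-availability-type=blobs \
  --target-num-frames=6 \
  --compressor=shadow
```

With blobs selected, max-l1-tx-size does not need to be set, as it is overwritten to the max blob size.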
Overflow frames bug fix
The batcher now correctly estimates a channel's output size, fixing a rare but regularly occurring bug that produced overflow frames, which could lead, for example, to a 7th blob being sent in a second batcher transaction.
op-node changes
- Improved peering behavior
- Per-chain hardfork activation times via superchain-registry
Partial Changelog
- feat(op-node): clean peer state when disconnectPeer is called and log intercept blocks by @felipe-op in #9706
- make txmgr aware of the txpool.ErrAlreadyReserved condition by @roberto-bayardo in #9683
- op-node/rollup/derive: also mark IsLast as true when closed && maxDataSize==readyBytes by @zhiqiangxu in #9696
- op-node: Record genesis as being safe from L1 genesis by @ajsutton in #9684
- TXManager: add IsClosed to TxMgr and use check in BatchSubmitter by @axelKingsley in #9470
- Remove hardfork activation time overrides by @geoknee in #9642
- feat(op-node): gater unblock by @felipe-op in #9763
- op-node: Restore previous unsafe chain when invalid span batch by @pcw109550 in #8925
- export ChannelBuilder so we can use it in external analysis scripts by @roberto-bayardo in #9784
- op-node: Unhide the safedb.path option by @ajsutton in #9789
- More bootnodes by @trianglesphere in #9801
- simplify channel state publishing flow by separating tx sending from result processing by @roberto-bayardo in #9757
- op-batcher: Multi-blob Support by @sebastianst in #9779
- feat: add tx data version byte by @tchardin in #9845
- op-batcher: more accurate max channel duration tracking by @danyalprout in #9769
- remove an impossible condition in NextBatch by @zhiqiangxu in #9885
- feat(op-node): p2p rpc input validation by @felipe-op in #9897
- op-batcher: rework channel & compressor config, fix overhead bug by @sebastianst in #9887
- op-batcher: fix "handle receipt" log message to properly log id by @sebastianst in #9918
New Contributors
- @alecananian made their first contribution in #9805
- @friendwu made their first contribution in #9862
Full Changelog: v1.7.0...v1.7.2
🚢 Docker Images
op-batcher v1.7.2-rc.3 - Multi-Blob Batcher
🔴✨ Multi-Blob Batcher Pre-Release
The op-batcher in this release candidate is capable of sending multiple blobs per single blob transaction. This is accomplished by the use of multi-frame channels; see the specs for more technical details on channels and frames.
A minimal batcher configuration (with env vars) to enable 6-blob batcher transactions is:
- OP_BATCHER_BATCH_TYPE=1 # span batches, optional
- OP_BATCHER_DATA_AVAILABILITY_TYPE=blobs
- OP_BATCHER_TARGET_NUM_FRAMES=6 # 6 blobs per tx
- OP_BATCHER_TXMGR_MIN_BASEFEE=2.0 # 2 gwei, might need to tweak, depending on gas market
- OP_BATCHER_TXMGR_MIN_TIP_CAP=2.0 # 2 gwei, might need to tweak, depending on gas market
- OP_BATCHER_RESUBMISSION_TIMEOUT=240s # wait 4 min before bumping fees
This enables blob transactions and sets the target number of frames to 6, which translates to 6 blobs per transaction. The min. tip cap and base fee are also lifted to 2 gwei because it is uncertain how easy it will be to get 6-blob transactions included and slightly higher priority fees should help. The resubmission timeout is increased to a few minutes to give more time for inclusion before bumping the fees, because current txpool implementations require a doubling of fees for blob transaction replacements.
Multi-blob transactions are particularly interesting for medium- to high-throughput chains, where enough transaction volume exists to fill up 6 blobs in a reasonable amount of time. You can use this calculator for your chain to determine what number of blobs is right for you, and what gas scalar configuration to use. Please also refer to our documentation on Blobs for chain operators.
🚢 Docker image: https://us-docker.pkg.dev/oplabs-tools-artifacts/images/op-batcher:v1.7.2-rc.3
A full v1.7.2 release of the op-stack follows soon.
Release op-node v1.7.1
⬆️ This is a recommended release for node operators using Snap Sync on Optimism Mainnet & Sepolia. For other users, this is a minor release. Node operators should be on at least v1.7.0.
Changes
- This release contains a fix to snap sync to ensure that all blocks are inserted to the execution engine when snap sync completes. Previously, once snap sync completed, if blocks were received out of order, the op-node could end up with internally inconsistent state, and the unsafe head could stall for a period of time.
- This release also contains a safeDB feature, which tracks the L1 block that each L2 block is derived from.
Partial Changelog
- op-node: Add option to enable safe head history database by @ajsutton in #9575
- op-node: Add flag category and improve testing by @ajsutton in #9636
- op-node: fix finalize log by @will-2012 in #9643
- op-node: p2p pinging background service by @protolambda in #9620
- op-node: Cleanup unsafe payload handling by @trianglesphere in #9661
New Contributors
- @will-2012 made their first contribution in #9643
Full Changelog: op-node/v1.7.0...op-node/v1.7.1
🚢 Docker Images
op-node, op-batcher, op-proposer v1.7.0 - Optimistic Ecotone Mainnet Release
✨🔴 Optimistic Ecotone Mainnet Release
❗ Mainnet operators are required to update to this release to follow the chain post-Ecotone. This release contains an optimistic Ecotone Mainnet activation time of Mar 14, 00:00:01 UTC.
v1.6.1 contained a different Ecotone Mainnet activation date, so it is particularly important for Mainnet operators to upgrade to this release.
Optimism Governance Voting Cycle 19
The Ecotone activation contained in this release is still subject to approval during the currently ongoing Optimism Governance voting cycle 19; see the Governance Proposal of the Ecotone Protocol Upgrade. The voting period ends on Mar 6, while the veto period ends on Mar 13, 19:00 UTC.
We will soon publish a Veto Release in advance with the Ecotone OP Mainnet activation removed so node operators can prepare for the unlikely event of a negative vote or a veto. We will also soon provide documentation on how to override the Ecotone activation included in this or future releases via command line flags or env vars. This leaves an emergency window of 5h to change the node configuration, or update to the Veto Release, in the unlikely event that the veto period ends in a veto.
New Beacon Endpoint
Node operators need to configure a Beacon endpoint for op-node, because soon after the Ecotone activation, batch transactions will be sent as 4844 blobs, and blobs can only be retrieved from Beacon nodes. If you're using Lighthouse, make sure to use at least version v5.0.0, which contains the Dencun upgrade for Mainnet.
The op-node provides a new command line flag & env var for configuring the Beacon endpoint: --l1.beacon and $OP_NODE_L1_BEACON. If you need to configure an HTTP header for authentication with the Beacon endpoint, you can use the flag --l1.beacon-header or $OP_NODE_L1_BEACON_HEADER.
❗ We encourage all node operators to already configure their Beacon endpoint to avoid interruptions after the Ecotone activation.
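A minimal sketch of wiring up the Beacon endpoint with the flags named above (the endpoint URLs and auth token are placeholders, and the --l1 flag is only included for context):

```shell
# Sketch: configuring the new Beacon endpoint for op-node.
# URLs and the bearer token are placeholders.
op-node \
  --l1=http://localhost:8545 \
  --l1.beacon=http://localhost:5052 \
  --l1.beacon-header="Authorization: Bearer <token>"

# Equivalent env-var form:
export OP_NODE_L1_BEACON=http://localhost:5052
export OP_NODE_L1_BEACON_HEADER="Authorization: Bearer <token>"
```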
Experimental Snap Sync (execution-layer sync)
op-node v1.7.0 and op-geth v1.101308.2 now support Snap Sync. To enable snap sync, set the --syncmode=execution-layer flag on op-node. op-geth should also be set to --syncmode=snap and must have discovery enabled and be peered to the network for snap sync to work.
This feature is ready to be tested, but still may contain some bugs as it is rolled out.
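The flag pairing described above can be sketched as follows (a minimal, hypothetical invocation; the RPC endpoints are placeholders, and both processes need their usual additional configuration):

```shell
# Sketch: experimental snap sync requires matching flags on both sides.
# op-node delegates sync to the execution layer:
op-node \
  --syncmode=execution-layer \
  --l2=http://localhost:8551

# op-geth performs the snap sync; discovery must be enabled so it can
# find peers to sync from:
op-geth \
  --syncmode=snap
```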
Partial Changelog (affecting op-node)
- op-node: Expose method to load rollup config without a CLI context by @ajsutton in #9554
- op-service/client: Add http header option to BasicHTTPClient by @sebastianst in #9601
- op-node: Add optional Beacon header flag by @sebastianst in #9604
- op-node: Unhide syncmode flag by @trianglesphere in #9611
- op-node: Fix bootnodes port by @trianglesphere in #9621
- Import SystemConfigProxy address from new location in superchain (registry) by @geoknee in #9585
- Update Ecotone mainnet activation to Mar 14 00:00:01 UTC by @sebastianst in #9625
- op-node: Add flag categories by @trianglesphere in #9629
Full Changelog (monorepo): v1.6.1...v1.7.0
🚢 Docker Images
https://us-docker.pkg.dev/oplabs-tools-artifacts/images/op-node:v1.7.0
https://us-docker.pkg.dev/oplabs-tools-artifacts/images/op-batcher:v1.7.0
https://us-docker.pkg.dev/oplabs-tools-artifacts/images/op-proposer:v1.7.0
op-node, op-batcher, op-proposer v1.6.1 - OUTDATED Ecotone Mainnet Release
❗ OUTDATED Ecotone Mainnet Release
❌ The Optimistic Ecotone Mainnet activation has been moved forward to Mar 14, 00:00:01 UTC! You MUST NOT use this release on Mainnet. Use v1.7.0 instead.
✅ You can safely use this release on all other testnets and devnets.
Old Optimistic Release Background Info
This release contained an optimistic Ecotone Mainnet activation time of Mar 18, 17:00:01 UTC. The purpose of this release was to have a reference for the Governance Proposal of the Ecotone Protocol Upgrade. The Ecotone Mainnet activation still needs to be approved during the currently ongoing Optimism Governance voting cycle 19, whose review and voting periods run from Feb 15 to Mar 6. The veto period ends on Mar 13.
We will soon publish a Veto Release in advance with the Ecotone OP Mainnet activation removed so node operators can prepare for the unlikely event of a negative vote or a veto. We will also provide documentation on how to override the Ecotone activation included in this or the v1.7.0 release.
New Beacon Endpoint
Node operators who wish to upgrade to this release need to configure a Beacon endpoint for op-node, because soon after the Ecotone activation, batch transactions will be sent as 4844 blobs, and blobs can only be retrieved from Beacon nodes. If you're using Lighthouse, make sure to use at least the finalized version v4.6.0, because the latest rc contains a bug in its blob_sidecars HTTP endpoint.
The op-node provides a new configuration flag & env var for configuring the Beacon endpoint: --l1.beacon and $OP_NODE_L1_BEACON.
We encourage all node operators to already configure their Beacon endpoint to avoid interruptions after the Ecotone activation.
Ecotone Sepolia
This release is ready to be used by Sepolia node operators. Ecotone activated on Sepolia at Wed Feb 21 17:00:00 UTC 2024. The activation has been part of v1.5.1 as well.
🐞 Bug Fixes
- op-node contained a bug that affected block gossiping for sequencers. This release fixes that bug (#9560). Because of this bug, we advise against usage of op-node v1.6.0, and that version has therefore not been published.
- log: DynamicLogHandler to also capture derived handlers by @sebastianst in #9479
Partial Changelog - op-node
- refactor(op-service, op-node): add missing stop ticker by @hoanguyenkh in #9474
- op-node,op-service: Add Fallback Beacon Client by @trianglesphere in #9458
- go,rollup: Prepare optimistic Ecotone Mainnet release by @sebastianst in #9528
- op-node: Still EL sync if the transition block is finalized by @trianglesphere in #9501
- add additional check by @axelKingsley in #9560
- add additional check before emitting log warning by @axelKingsley in #9564
Partial Changelog - op-batcher & op-proposer
- change gas re-estimation difference logging to debug by @roberto-bayardo in #9420
- log blob fee cap in txmgr by @roberto-bayardo in #9435
- txmgr: Set default min tip cap and basefee to 1 GWei by @sebastianst in #9502
New Contributors
- @tchardin made their first contribution in #9269
- @d-roak made their first contribution in #9462
- @hoanguyenkh made their first contribution in #9474
- @Frierened made their first contribution in #9540
Full Changelog (monorepo): v1.5.1...v1.6.1
🚢 Docker Images
https://us-docker.pkg.dev/oplabs-tools-artifacts/images/op-node:v1.6.1
https://us-docker.pkg.dev/oplabs-tools-artifacts/images/op-batcher:v1.6.1
https://us-docker.pkg.dev/oplabs-tools-artifacts/images/op-proposer:v1.6.1