# Giga SS Store Migration Guide
Giga SS Store is the next step in Sei’s storage evolution on top of SeiDB. It splits the hot EVM state into its own dedicated state-store (SS) database so the node can scale toward the ~150k TPS target throughput, and so non-EVM modules stop paying write amplification for EVM state. After migration the SS layer is repartitioned into two cooperating stores:

| Layer | Cosmos backend | EVM backend |
|---|---|---|
| SC (State Commit, app hash) | memiavl | FlatKV |
| SS (State Store, historical queries) | single MVCC DB (PebbleDB or RocksDB) | dedicated EVM SS MVCC DB(s) under `data/evm_ss/` |
`memiavl` remains the authoritative source for the app hash, so this change is invisible to the network.
This guide tracks the canonical procedure in `docs/migration/giga_store_migration.md` inside sei-chain. Open an issue there if anything here drifts.

## Prerequisites

- A `seid` build with the `evm-ss-split` flag wired in (Sei v6.5 or later). Older releases used per-key `evm-ss-write-mode`/`evm-ss-read-mode` toggles; if your `app.toml` still has those keys, upgrade `seid` before continuing.
- `sc-enable = true` and `ss-enable = true` in `app.toml`. Both must stay enabled.
- A trusted RPC endpoint to state-sync from (chain ID and trust-height source).
- Disk headroom for two SS databases. The EVM split does not duplicate data, but during migration both the old and the new layouts may briefly coexist on disk.
## Benefits

- EVM reads are served exclusively from a dedicated EVM SS database.
- Non-EVM modules no longer pay write amplification for EVM state.
- A backend change (PebbleDB ↔ RocksDB) can be combined with the same state sync, since `ss-backend` drives both the Cosmos SS MVCC DB and every EVM SS sub-DB.
## What’s different about EVM SS

EVM SS is point-query only by design (`Get`/`Has`). Iteration is explicitly disabled on the EVM backend for performance: the hot EVM read path is tuned for direct key lookups, and cross-bucket scans would defeat the per-type sub-DB layout. Any EVM read that needs iteration must stay on the Cosmos SS side.
## Migration Steps

### Step 1: Update app.toml

Apply the following settings in `~/.sei/config/app.toml`:
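The settings themselves can be sketched as follows; the section names below are assumptions based on SeiDB’s usual state-commit/state-store split, so match them to the layout already in your `app.toml`:

```toml
# Sketch only: section names are assumptions, keep your file's existing layout.
[state-commit]
sc-enable = true          # must stay enabled (memiavl, app hash)

[state-store]
ss-enable = true          # must stay enabled
ss-backend = "pebbledb"   # or "rocksdb" (requires a rocksdb build of seid)
evm-ss-split = true       # route EVM state into the dedicated EVM SS DB
```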
- PebbleDB → RocksDB: set `ss-backend = "rocksdb"`, build `seid` with `-tags rocksdbBackend`, and install RocksDB per the RocksDB Backend Guide. `ss-backend` drives both the Cosmos SS MVCC DB and every EVM SS sub-DB, so a single setting flips both.
- No data migration tool is needed across backends — the state sync populates the new layout.
### Step 2: State sync into the new layout

Giga SS Store is fully compatible with the existing state-snapshot format. On import, the composite state store routes each snapshot node based on the importing node’s `evm-ss-split`:

- With `evm-ss-split = true`, EVM snapshot nodes go only into EVM SS and non-EVM nodes go only into Cosmos SS.
- The import path normalizes legacy `evm_flatkv` snapshot nodes to `evm`, so snapshots produced by either the old or new FlatKV module are accepted.
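The state-sync settings themselves follow the standard CometBFT shape; a sketch of the `[statesync]` section in `~/.sei/config/config.toml`, with placeholder values (the servers, height, and hash below are not real and must come from your trusted RPC endpoint):

```toml
# Placeholder values: fill in real servers, height, and hash from the
# trusted RPC endpoint listed in the prerequisites.
[statesync]
enable = true
rpc_servers = "https://rpc-1.example:443,https://rpc-2.example:443"
trust_height = 123456789                   # a recent height from the trusted RPC
trust_hash = "<block hash at trust_height>"
trust_period = "336h"
```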
### Step 3: Verify the new layout

Once the state sync completes and the node starts producing blocks, confirm Giga SS Store is active in two places:

1. Startup logs. All three Giga SS Store startup lines should appear.
2. An EVM trace. `debug_traceBlockByNumber` is the cleanest end-to-end check — it forces the node to read EVM state out of the new EVM SS backend. A healthy response contains a "result" field rather than an RPC error.
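A sketch of that trace call, assuming the node’s EVM RPC listens on the default local port (adjust `RPC_URL` to your configuration; the `callTracer` choice is illustrative):

```shell
# Assumed local EVM RPC endpoint; adjust host/port to your node's config.
RPC_URL="${RPC_URL:-http://localhost:8545}"

# JSON-RPC request tracing the latest block end-to-end through EVM SS.
REQ='{"jsonrpc":"2.0","id":1,"method":"debug_traceBlockByNumber","params":["latest",{"tracer":"callTracer"}]}'
echo "request: $REQ"

# Requires a running node; a healthy response contains "result", not "error".
curl -s -X POST -H 'Content-Type: application/json' --data "$REQ" "$RPC_URL" || echo "node unreachable at $RPC_URL"
```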
## Safety checks

`seid` runs three DB-state checks at startup and refuses to launch if the EVM SS and Cosmos SS DBs are inconsistent. They specifically catch the footgun of flipping `evm-ss-split` from `false` to `true` without state syncing:
1. EVM SS directory missing or empty (before the EVM SS is opened). When `evm-ss-split = true`, the composite state store refuses to proceed if Cosmos SS already has committed history but the EVM SS directory (`data/evm_ss/` by default) does not exist or is empty. Failing before the sub-DBs are opened means a rejected config does not leave a confusing empty `data/evm_ss/` behind.
2. EVM SS DB empty post-open, pre-recovery. Belt-and-suspenders for (1) when the directory exists but its DBs are empty. The WAL only covers the last `KeepRecent` blocks, so replay cannot rebuild a fresh EVM SS from scratch.
3. Mismatched earliest versions, post-recovery. If the two DBs were populated from different snapshots (or pruned independently), historical reads would be inconsistent. A non-zero earliest-version divergence aborts startup.

If any check trips, either state sync again or set `evm-ss-split = false` and restart. If `data/evm_ss/` is stale from a failed attempt, remove it before state syncing.
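Clearing a stale directory before re-syncing can be done directly; this sketch assumes the default `~/.sei` home (`SEI_HOME` here is an illustrative variable, not a flag `seid` reads):

```shell
# Assumed default node home; override SEI_HOME if you run with a custom --home.
SEI_HOME="${SEI_HOME:-$HOME/.sei}"

# Remove a stale EVM SS directory left over from a failed split attempt.
rm -rf "$SEI_HOME/data/evm_ss"
echo "removed $SEI_HOME/data/evm_ss"
```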
## Rollback

To roll back:

1. Set `evm-ss-split = false` in `app.toml`.
2. Restart the node. The EVM SS DB under `data/evm_ss/` is no longer opened but stays on disk until manually removed.

It is safe to delete `data/evm_ss/` after reverting the setting.
## FAQ

### Where do the data files live after migrating?

- Cosmos SS data lives under the same directory as before, typically `data/pebbledb/` for the default `pebbledb` backend.
- EVM SS data lives under `data/evm_ss/`.
- SC data (`memiavl` + FlatKV) is untouched by this migration.
### Does Giga SS Store change the app hash or consensus?

No. The SC layer is unchanged, so `memiavl` remains the authoritative source for the app hash. Giga SS Store is a per-node SS change that is invisible to the network.
### Can I migrate a validator node with this guide?

Not yet. This migration guide is for RPC nodes only.

### Can I migrate an archive node with this guide?

Not yet. Archive-node migration is out of scope for this guide.

### Can I toggle back to `evm-ss-split = false` after enabling it?

Yes, but cleanly rolling back requires another state sync — see the Rollback section above.
### Why can’t I just flip `evm-ss-split = true` on a running node?

Because `evm-ss-split = true` requires the EVM SS DB to already contain the full history that Cosmos SS has. A live flip would leave the EVM SS DB empty while the composite store refuses to fall back to Cosmos SS, which would translate into missing EVM state at query time. The safety checks above block this scenario at startup.