Commit Graph

349 Commits

Author SHA1 Message Date
Michael Sproul
2985ede062 Merge remote-tracking branch 'origin/unstable' into gloas-envelope-processing 2026-01-19 12:05:38 +11:00
Mac L
3903e1c67f More consensus/types re-export cleanup (#8665)
Remove more of the temporary re-exports from `consensus/types`


Co-Authored-By: Mac L <mjladson@pm.me>
2026-01-16 04:43:05 +00:00
Mac L
605ef8e8e6 Remove state dependency from core module in consensus/types (#8653)
#8652


- This removes instances of `BeaconStateError` from `eth_spec.rs` and replaces them with `ArithError`, which can be trivially converted back to `BeaconStateError` at the call site.
- Also moves the state-related methods on `ChainSpec` to be methods on `BeaconState` instead. I think this might be a more natural place for them to exist anyway.
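
A minimal sketch of the conversion pattern this enables (illustrative enum shapes, not Lighthouse's exact definitions):

```rust
// Illustrative only: spec-level code in `eth_spec.rs` returns `ArithError`,
// and state-level call sites convert it with `?` via a `From` impl.
#[derive(Debug)]
pub enum ArithError {
    DivisionByZero,
}

#[derive(Debug)]
pub enum BeaconStateError {
    ArithError(ArithError),
}

impl From<ArithError> for BeaconStateError {
    fn from(e: ArithError) -> Self {
        BeaconStateError::ArithError(e)
    }
}

// Spec-level helper: no dependency on `BeaconState` or its error type.
fn committees_per_slot(active_validators: u64, target: u64) -> Result<u64, ArithError> {
    active_validators.checked_div(target).ok_or(ArithError::DivisionByZero)
}

// State-level caller: the trivial conversion happens at the `?`.
fn caller(active_validators: u64) -> Result<u64, BeaconStateError> {
    Ok(committees_per_slot(active_validators, 128)?)
}
```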


Co-Authored-By: Mac L <mjladson@pm.me>
2026-01-15 02:16:40 +00:00
Jimmy Chen
dbe474e132 Delete attester cache (#8469)
Fixes attester cache write lock contention. Alternative to #8463.


Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
2026-01-06 03:08:02 +00:00
Mark Mackey
a8c313dc5c Merge branch 'gloas-containers' into gloas-envelope-processing-merge 2025-12-16 15:56:08 -06:00
ethDreamer
a39e991557 Gloas(EIP-7732): Containers / Constants (#7923)
* #7850

This is the first round of the conga line! 🎉

Just spec constants and container changes so far.


Co-Authored-By: shane-moore <skm1790@gmail.com>

Co-Authored-By: Mark Mackey <mark@sigmaprime.io>

Co-Authored-By: Shane K Moore <41407272+shane-moore@users.noreply.github.com>

Co-Authored-By: Eitan Seri- Levi <eserilev@gmail.com>

Co-Authored-By: ethDreamer <37123614+ethDreamer@users.noreply.github.com>

Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>

Co-Authored-By: Jimmy Chen <jimmy@sigmaprime.io>

Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
2025-12-16 06:45:45 +00:00
Michael Sproul
257d8d7c11 Merge remote-tracking branch 'origin/unstable' into gloas-containers 2025-12-16 14:04:18 +11:00
Andrés David Ramírez Chiquillo
49e1112da2 Add regression test for unaligned checkpoint sync with payload pruning (#8458)
Closes #8426


  Added a new regression test: `reproduction_unaligned_checkpoint_sync_pruned_payload`.

This test reproduces the bug where unaligned checkpoint syncs (skipped slots at epoch boundaries) fail to import the anchor block's execution payload when `prune_payloads` is enabled.

The test simulates the failure mode by:
- Skipping if execution payloads are not applicable.
- Creating a harness with an unaligned checkpoint (gap of 3 slots).
- Configuring the client with `prune_payloads = true`.


It asserts that the Beacon Chain builds successfully (previously it panicked with `MissingFullBlockExecutionPayloadPruned`), confirming the fix logic in `try_get_full_block`.


Co-Authored-By: Andrurachi <andruvrch@gmail.com>

Co-Authored-By: Michael Sproul <michaelsproul@users.noreply.github.com>
2025-12-15 02:33:29 +00:00
Eitan Seri-Levi
556e917092 Rust 1.92 lints (#8567)
Co-Authored-By: Eitan Seri-Levi <eserilev@ucsc.edu>
2025-12-12 08:45:38 +00:00
Mac L
f3fd1f210b Remove consensus/types re-exports (#8540)
There are certain crates which we re-export within `types`, creating a fragmented DevEx where there are various ways to import the same types.

```rust
// consensus/types/src/lib.rs
pub use bls::{
AggregatePublicKey, AggregateSignature, Error as BlsError, Keypair, PUBLIC_KEY_BYTES_LEN,
PublicKey, PublicKeyBytes, SIGNATURE_BYTES_LEN, SecretKey, Signature, SignatureBytes,
get_withdrawal_credentials,
};
pub use context_deserialize::{ContextDeserialize, context_deserialize};
pub use fixed_bytes::FixedBytesExtended;
pub use milhouse::{self, List, Vector};
pub use ssz_types::{BitList, BitVector, FixedVector, VariableList, typenum, typenum::Unsigned};
pub use superstruct::superstruct;
```

This PR removes these re-exports and makes it explicit that these types are imported from a non-`consensus/types` crate.
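
After this change, call sites name the defining crate explicitly, e.g. (a sketch; exact import lists vary per call site):

```rust
// Before: all of these came via the `types` re-exports.
// use types::{PublicKeyBytes, List, BitList};

// After: each type is imported from the crate that defines it.
use bls::PublicKeyBytes;
use milhouse::List;
use ssz_types::BitList;
```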


Co-Authored-By: Mac L <mjladson@pm.me>
2025-12-09 07:13:41 +00:00
Mark Mackey
8480f93a42 Merge branch 'gloas-containers' into gloas-envelope-processing 2025-11-28 14:40:55 -06:00
Mark Mackey
252700b5db Merge remote-tracking branch 'upstream/unstable' into gloas-containers 2025-11-28 14:27:54 -06:00
Michael Sproul
e21a433748 Allow manual checkpoint sync without blobs (#8470)
Since the following PR was merged, we no longer need `--checkpoint-blobs`, even prior to Fulu:

- https://github.com/sigp/lighthouse/pull/8417

This PR removes the mandatory check for blobs prior to Fulu, enabling simpler manual checkpoint sync.


Co-Authored-By: Michael Sproul <michael@sigmaprime.io>

Co-Authored-By: Jimmy Chen <jimmy@sigmaprime.io>
2025-11-26 23:00:21 +00:00
Mark Mackey
77e1c67723 Added Flag to Bypass New Payload Cache 2025-11-24 12:24:57 -06:00
Michael Sproul
261322c3e3 Merge remote-tracking branch 'origin/stable' into unstable 2025-11-20 13:04:32 +11:00
Jimmy Chen
af1d9b9991 Fix custody context initialization race condition that caused panic (#8391)
Take 2 of #8390.

Fixes the race condition properly instead of propagating the error. I think this is a better alternative, and the result doesn't look too bad.


* Lift node id loading or generation from `NetworkService` startup to the `ClientBuilder`, so that it can be used to compute custody columns for the beacon chain without waiting for network bootstrap.

I've considered and implemented a few alternatives:
1. Passing `node_id` to the beacon chain builder and computing columns when creating `CustodyContext`. This approach isn't good for separation of concerns and isn't great for testability.
2. Passing `ordered_custody_groups` to the beacon chain. `CustodyContext` only uses this to compute ordered custody columns, so we might as well lift this logic out and avoid error handling in `CustodyContext` construction. Fewer tests to update.


Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
2025-11-17 05:23:12 +00:00
Mark Mackey
dae2f1dcea fix beacon chain / http_api tests 2025-11-10 16:01:01 -06:00
ethDreamer
d3ea16cb1e fix beacon chain tests (#8392) 2025-11-10 13:28:59 -06:00
Eitan Seri- Levi
0feede4ae5 Merge branch 'unstable' of https://github.com/sigp/lighthouse into gloas-containers 2025-11-06 22:53:01 -08:00
Michael Sproul
a7e89a8761 Optimise state_root_at_slot for finalized slot (#8353)
This is an optimisation targeted at Fulu networks in non-finality.

While debugging on Holesky, we found that `state_root_at_slot` was being called from `prepare_beacon_proposer` a lot, for the finalized state:

2c9b670f5d/beacon_node/http_api/src/lib.rs (L3860-L3861)

This was causing `prepare_beacon_proposer` calls to take upwards of 5 seconds, sometimes 10 seconds, because it would trigger _multiple_ beacon state loads in order to iterate back to the finalized slot. Ideally, loading the finalized state should be quick because we keep it cached in the state cache (technically we keep the split state, but they usually coincide). Instead, we were computing the finalized state root separately (slow), and then loading the state from the cache (fast).

Although it would be possible to make the API faster by removing the `state_root_at_slot` call, I believe it's simpler to change `state_root_at_slot` itself and remove the footgun. Devs rightly expect operations involving the finalized state to be fast.
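
A rough sketch of the fast path (hypothetical signature; the real method lives on `BeaconChain` and handles many more cases):

```rust
// Hypothetical sketch: serve requests at the finalized slot from the cached
// finalized state root rather than iterating state roots from the head.
fn state_root_at_slot(
    slot: u64,
    finalized_slot: u64,
    finalized_state_root: [u8; 32],
) -> Option<[u8; 32]> {
    if slot == finalized_slot {
        // Fast path: the finalized (split) state is already in the state cache.
        return Some(finalized_state_root);
    }
    // Slow path (elided): iterate state roots backwards from the head,
    // potentially loading multiple beacon states.
    None
}
```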


Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
2025-11-05 02:08:46 +00:00
Eitan Seri- Levi
3d75238a6d Resolve merge conflicts 2025-11-04 10:36:08 -08:00
Michael Sproul
0507eca7b4 Merge remote-tracking branch 'origin/stable' into unstable-merge-v8 2025-11-04 16:08:34 +11:00
Jimmy Chen
bc86dc09e5 Reduce number of blobs used in tests to speed up CI (#8194)
`beacon-chain-tests` is now regularly taking 1h+ on CI since the Fulu fork was added.

This PR attempts to reduce the test time by bringing down the number of blobs generated in tests - instead of generating 0..max_blobs, the generator now generates 0..1 blobs by default, and this can be modified by setting `harness.execution_block_generator.set_min_blob_count(n)`.
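
For example, a test that specifically needs blob-heavy blocks can opt back in (sketch using the harness API named above; harness setup elided):

```rust
// Inside a test with an existing `harness`: raise the minimum number of
// blobs per generated block from the new default of 0.
harness.execution_block_generator.set_min_blob_count(3);
```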

Note: The blobs are pre-generated and don't require much CPU to generate; however, processing a larger number of them on the beacon chain does take a lot of time.

This PR also includes a few other small improvements:
- Our slowest test (`chain_segment_varying_chunk_size`) runs 3x faster in Fulu just by reusing chain segments
- Avoid re-running fork specific tests on all forks
- Fix a bunch of tests that depend on the harness's existing random blob generation, which is fragile


Beacon chain test time on the test machine is **~2x** faster:

### `unstable`

```
Summary [ 751.586s] 291 tests run: 291 passed (13 slow), 0 skipped
```

### this branch

```
Summary [ 373.792s] 291 tests run: 291 passed (2 slow), 0 skipped
```

The next tests to optimise are the ones that use [`get_chain_segment`](77a9af96de/beacon_node/beacon_chain/tests/block_verification.rs (L45)), as it builds 320 blocks with a supernode by default - an easy optimisation would be to build these blocks with cgc = 8 for tests that only require fullnodes.


Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>

Co-Authored-By: Jimmy Chen <jimmy@sigmaprime.io>
2025-11-04 02:40:44 +00:00
Michael Sproul
4908687e7d Proposer duties backwards compat (#8335)
The beacon API spec wasn't updated to use the Fulu definition of `dependent_root` for the proposer duties endpoint. No other client updated their logic, so to retain backwards compatibility the decision has been made to continue using the block root at the end of epoch `N - 1`, and introduce a new v2 endpoint down the track to use the correct dependent root.

Eth R&D discussion: https://discord.com/channels/595666850260713488/598292067260825641/1433036715848765562


  Change the behaviour of the v1 endpoint back to using the last slot of `N - 1` rather than the last slot of `N - 2`. This introduces the possibility of dependent root false positives (the root can change without changing the shuffling), but causes the least compatibility issues with other clients.
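
In slot arithmetic, the two definitions look roughly like this (illustrative helpers, not the endpoint code):

```rust
// v1 (retained): the dependent root is the block root at the last slot of
// epoch N - 1, i.e. the slot just before epoch N begins.
fn v1_dependent_root_slot(epoch: u64, slots_per_epoch: u64) -> u64 {
    (epoch * slots_per_epoch).saturating_sub(1)
}

// Fulu-correct (deferred to a future v2 endpoint): with one epoch of
// proposer lookahead, the shuffling is decided at the end of epoch N - 2.
fn v2_dependent_root_slot(epoch: u64, slots_per_epoch: u64) -> u64 {
    (epoch.saturating_sub(1) * slots_per_epoch).saturating_sub(1)
}
```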


Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
2025-11-03 08:06:03 +00:00
Eitan Seri-Levi
b57d046c4a Fix CGC backfill race condition (#8267)
During custody backfill sync there could be an edge case where we update the CGC at the same time as we are importing a batch of columns, which may cause us to incorrectly overwrite values when calling `backfill_validator_custody_requirements`. To prevent this race condition, the expected cgc is now passed into this function and is used to check whether the expected cgc == the current validator cgc. If the values aren't equal, this probably indicates that a very recent CGC change occurred, so we do not prune/update values in the `epoch_validator_custody_requirements` map.
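
A minimal sketch of the guard, under assumed names and structure:

```rust
use std::collections::BTreeMap;

/// Illustrative shape: tracks the CGC required of validators per epoch.
struct ValidatorRegistrations {
    current_validator_cgc: u64,
    epoch_validator_custody_requirements: BTreeMap<u64, u64>,
}

impl ValidatorRegistrations {
    /// `expected_cgc` is the value the backfill observed when it requested
    /// the batch that is now being imported.
    fn backfill_validator_custody_requirements(&mut self, epoch: u64, expected_cgc: u64) {
        if self.current_validator_cgc != expected_cgc {
            // A CGC change raced with this import; skip the prune/update so
            // we don't overwrite the newer values.
            return;
        }
        // Safe to prune the satisfied entry (real logic elided).
        self.epoch_validator_custody_requirements.remove(&epoch);
    }
}
```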


Co-Authored-By: Eitan Seri-Levi <eserilev@ucsc.edu>
2025-11-03 00:51:42 +00:00
Mac L
f5809aff87 Bump ssz_types to v0.12.2 (#8032)
https://github.com/sigp/lighthouse/issues/8012


Replace all instances of `VariableList::from` and `FixedVector::from` with their `try_from` variants.

While I tried to use proper error handling in most cases, in certain situations where `try_from` can trivially never fail, adding an `expect` avoided a lot of extra complexity.
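
The migration pattern, sketched against the fallible constructors (the exact error type is elided via inference):

```rust
use ssz_types::{VariableList, typenum::U8};

fn example(items: Vec<u64>) {
    // Proper error handling: propagate or match on the length error.
    let _list = VariableList::<u64, U8>::try_from(items);

    // Where the bound trivially cannot be exceeded, an `expect` keeps the
    // call site simple instead of threading an impossible error upwards.
    let _small: VariableList<u64, U8> =
        VariableList::try_from(vec![1, 2, 3]).expect("3 <= 8, cannot fail");
}
```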


Co-Authored-By: Mac L <mjladson@pm.me>

Co-Authored-By: Michael Sproul <michaelsproul@users.noreply.github.com>

Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
2025-10-28 04:01:09 +00:00
Shane K Moore
acac0503ed latest gloas-container fixes (#8273) 2025-10-23 08:56:57 -05:00
Mark Mackey
9da4528e59 Merge remote-tracking branch 'upstream/unstable' into gloas-containers 2025-10-22 16:05:06 -05:00
Jimmy Chen
43c5e924d7 Add --semi-supernode support (#8254)
Addresses #8218

A simplified version of #8241 for the initial release.

I've tried to minimise the logic change in this PR; introducing the `NodeCustodyType` enum still results in quite a bit of diff, but the actual logic change in `CustodyContext` is quite small.

The main changes are in the `CustodyContext` struct:
* ~~combining `validator_custody_count` and `current_is_supernode` fields into a single `custody_group_count_at_head` field. We persist the cgc of the initial cli values into the `custody_group_count_at_head` field and only allow for increase (same behaviour as before).~~
* I noticed the above approach caused a backward compatibility issue, so I've [made a fix](15569bc085) and changed the approach slightly (which was actually what I had originally in mind):
* when initialising, only override the `validator_custody_count` value if either flag `--supernode` or `--semi-supernode` is used; otherwise leave it as the existing default `0`. Most other logic remains unchanged (see the sketch below).
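
A sketch of that initialisation rule (hypothetical shapes; the actual constants and wiring differ):

```rust
// Hypothetical sketch of how the CLI flags map to an initial custody count.
enum NodeCustodyType {
    Fullnode,
    SemiSupernode,
    Supernode,
}

fn initial_validator_custody_count(node_type: &NodeCustodyType) -> u64 {
    match node_type {
        // Explicit flags override the value...
        NodeCustodyType::Supernode => 128,    // custody all groups
        NodeCustodyType::SemiSupernode => 64, // custody half the groups
        // ...otherwise keep the existing default of 0.
        NodeCustodyType::Fullnode => 0,
    }
}
```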

All existing validator custody unit tests still pass, and I've added additional tests to cover semi-supernode and restoring `CustodyContext` from disk.

Note: I've added a `WARN` if the user attempts to switch to `--semi-supernode` or `--supernode` - this currently has no effect, but once @eserilev's column backfill is merged, we should be able to support this quite easily.

Things to test
- [x] cgc in metadata / enr
- [x] cgc in metrics
- [x] subscribed subnets
- [x] getBlobs endpoint


Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
2025-10-22 05:23:17 +00:00
Eitan Seri-Levi
33e21634cb Custody backfill sync (#7907)
#7603


  #### Custody backfill sync service
Similar in many ways to the current backfill service. There may be ways to unify the two services. The difficulty there is that the current backfill service tightly couples blocks and their associated blobs/data columns. Any attempts to unify the two services should be left to a separate PR in my opinion.

#### `SyncNetworkContext`
`SyncNetworkContext` manages custody sync data-columns-by-range requests separately from other sync RPC requests. I think this is a nice separation considering that custody backfill is its own service.

#### Data column import logic
The import logic verifies KZG commitments and that the data column's block root matches the block root in the node's store before importing columns.

#### New channel to send messages to `SyncManager`
Now external services can communicate with the `SyncManager`. In this PR this channel is used to trigger a custody sync. Alternatively we may be able to use the existing `mpsc` channel that the `SyncNetworkContext` uses to communicate with the `SyncManager`. I will spend some time reviewing this.


Co-Authored-By: Eitan Seri-Levi <eserilev@ucsc.edu>

Co-Authored-By: Eitan Seri- Levi <eserilev@gmail.com>

Co-Authored-By: dapplion <35266934+dapplion@users.noreply.github.com>
2025-10-22 03:51:34 +00:00
Eitan Seri-Levi
46dde9afee Fix data column rpc request (#8247)
Fixes an issue mentioned in this comment regarding data column rpc requests:
https://github.com/sigp/lighthouse/issues/6572#issuecomment-3400076236


Co-Authored-By: Eitan Seri-Levi <eserilev@ucsc.edu>

Co-Authored-By: Michael Sproul <micsproul@gmail.com>
2025-10-21 23:54:35 +00:00
Michael Sproul
21bab0899a Improve block header signature handling (#8253)
Closes:

- https://github.com/sigp/lighthouse/issues/7650


  Reject blob and data column sidecars from RPC with invalid signatures.


Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
2025-10-21 13:58:12 +00:00
Michael Sproul
2f8587301d More proposer shuffling cleanup (#8130)
Addressing more review comments from:

- https://github.com/sigp/lighthouse/pull/8101

I've also tweaked a few more things to fix what I think are minor bugs.


  - Instrument `ensure_state_can_determine_proposers_for_epoch`
- Fix `block_root` usage in `compute_proposer_duties_from_head`. This was a regression introduced in #8101 😬.
- Update the `state_advance_timer` to prime the next-epoch proposer cache post-Fulu.


Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
2025-10-20 03:14:14 +00:00
SunnysidedJ
d1e06dc40d #6853 Adding store tests for data column pruning (#7228)
#6853 Update store tests to cover data column pruning


Created a helper function `check_data_column_existence`, which is a copy of `check_blob_existence` but checks data columns instead.
The helper function is then used to check whether data columns are also pruned when blobs are pruned, if PeerDAS is enabled.


Co-Authored-By: SunnysidedJ <j@testinprod.io>

Co-Authored-By: Eitan Seri-Levi <eserilev@ucsc.edu>

Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
2025-10-16 15:20:26 +00:00
Mark Mackey
45d58cb4a6 Merge remote-tracking branch 'upstream/unstable' into gloas-containers 2025-10-15 10:12:48 -05:00
Pawan Dhananjay
2c328e32a6 Persist only custody columns in db (#8188)
* Only persist custody columns

* Get claude to write tests

* lint

* Address review comments and fix tests.

* Use supernode only when building chain segments

* Clean up

* Rewrite tests.

* Fix tests

* Clippy

---------

Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2025-10-13 20:32:13 +11:00
Michael Sproul
38fdaf791c Fix proposer shuffling decision slot at boundary (#8128)
Follow-up to the bug fixed in:

- https://github.com/sigp/lighthouse/pull/8121

This fixes the root cause of that bug, which was introduced by me in:

- https://github.com/sigp/lighthouse/pull/8101

Lion identified the issue here:

- https://github.com/sigp/lighthouse/pull/8101#discussion_r2382710356


  In the methods that compute the proposer shuffling decision root, ensure we don't use lookahead for the Fulu fork epoch itself. This is accomplished by checking if Fulu is enabled at `epoch - 1`, i.e. if `epoch > fulu_fork_epoch`.
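
As a standalone sketch (hypothetical helper name):

```rust
// Lookahead-based decision roots apply only if Fulu was already enabled in
// the previous epoch, i.e. strictly after the fork epoch itself.
fn use_fulu_lookahead(epoch: u64, fulu_fork_epoch: Option<u64>) -> bool {
    match fulu_fork_epoch {
        Some(fork_epoch) => epoch > fork_epoch,
        None => false,
    }
}
```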

I haven't updated the methods that _compute_ shufflings to use these new corrected bounds (e.g. `BeaconState::compute_proposer_indices`), although we could make this change in future. The `get_beacon_proposer_indices` method already gracefully handles the Fulu boundary case by using the `proposer_lookahead` field (if initialised).


Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
2025-09-29 01:13:33 +00:00
Michael Sproul
c754234b2c Fix bugs in proposer calculation post-Fulu (#8101)
As identified by a researcher during the Fusaka security competition, we were computing the proposer index incorrectly in some places, by computing it without lookahead.


  - [x] Add "low level" checks to computation functions in `consensus/types` to ensure they error cleanly
- [x] Re-work the determination of proposer shuffling decision roots, which are now fork aware.
- [x] Re-work and simplify the beacon proposer cache to be fork-aware.
- [x] Optimise `with_proposer_cache` to use `OnceCell`.
- [x] All tests passing.
- [x] Resolve all remaining `FIXME(sproul)`s.
- [x] Unit tests for `ProtoBlock::proposer_shuffling_root_for_child_block`.
- [x] End-to-end regression test.
- [x] Test on pre-Fulu network.
- [x] Test on post-Fulu network.


Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
2025-09-26 14:44:50 +00:00
Michael Sproul
f04d5ecddd Another check to prevent duplicate block imports (#8050)
Attempt to address performance issues caused by importing the same block multiple times.


  - Check fork choice "after" obtaining the fork choice write lock in `BeaconChain::import_block`. We actually use an upgradable read lock, but this is semantically equivalent (the upgradable read has the advantage of not excluding regular reads).

The hope is that this change has several benefits:

1. By preventing duplicate block imports we save time repeating work inside `import_block` that is unnecessary, e.g. writing the state to disk. Although the store itself now takes some measures to avoid re-writing diffs, it is even better if we avoid a disk write entirely.
2. By returning `DuplicateFullyImported`, we reduce some duplicated work downstream. E.g. if multiple threads importing columns trigger `import_block`, now only _one_ of them will get a notification of the block import completing successfully, and only this one will run `recompute_head`. This should help avoid a situation where multiple beacon processor workers are consumed by threads blocking on the `recompute_head_lock`. However, a similar block-fest is still possible with the upgradable fork choice lock (a large number of threads can be blocked waiting for the first thread to complete block import).
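
A self-contained sketch of the double-check with `parking_lot`'s upgradable read lock (the real code consults fork choice, not a bare set):

```rust
use parking_lot::{RwLock, RwLockUpgradableReadGuard};
use std::collections::HashSet;

type BlockRoot = [u8; 32];

/// Returns `false` (the duplicate case) if the block was already imported.
fn import_block(known_blocks: &RwLock<HashSet<BlockRoot>>, root: BlockRoot) -> bool {
    // Upgradable read: excludes writers and other upgradable readers,
    // but allows plain readers to proceed concurrently.
    let guard = known_blocks.upgradable_read();
    if guard.contains(&root) {
        // Another thread won the race; skip the expensive import work.
        return false;
    }
    // ... expensive import work (state transition, disk writes) here ...
    let mut write_guard = RwLockUpgradableReadGuard::upgrade(guard);
    write_guard.insert(root);
    true
}
```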


Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
2025-09-16 04:10:42 +00:00
Shane K Moore
2f1aa10d4d Gloas test fixes (#7932)
* use builder_pending_payments_limit in upgrade gloas

* check_all_blocks_from_altair_to_fulu test to not cover gloas for now

* store_tests fixes

* remove gloas fork from CI network testing for now

* remove gloas fork from CI network testing for now
2025-08-29 11:45:25 -05:00
Mac L
e438691683 Add Gloas boilerplate (#7728)
Adds the required boilerplate code for the Gloas (Glamsterdam) hard fork. This allows PRs testing Gloas-candidate features to test fork transition.

This also includes de-duplication of post-Bellatrix readiness notifiers from #6797 (credit to @dapplion)
2025-08-26 02:49:48 +00:00
Jimmy Chen
b4704eab4a Fulu update to spec v1.6.0-alpha.4 (#7890)
Fulu update to spec [v1.6.0-alpha.4](https://github.com/ethereum/consensus-specs/releases/tag/v1.6.0-alpha.4).
- Make `number_of_columns` a preset
- Optimise `get_custody_groups` to avoid computing if cgc = 128
- Add support for additional typenum values in type_dispatch macro
2025-08-20 02:05:04 +00:00
chonghe
522bd9e9c6 Update Rust Edition to 2024 (#7766)
* #7749

Thanks @dknopik and @michaelsproul for your help!
2025-08-13 03:04:31 +00:00
Jimmy Chen
8bc6693dac Fix wrong columns getting processed on a CGC change (#7792)
This PR fixes a bug where wrong columns could get processed immediately after a CGC increase.

Scenario:
- The node's CGC increased due to additional validators attached to it (let's say from 10 to 11).
- The new CGC is advertised and new subnets are subscribed immediately, however the change won't be effective in the data availability check until the next epoch (see [this](ab0e8870b4/beacon_node/beacon_chain/src/validator_custody.rs (L93-L99))). The data availability checker still only requires 10 columns for the current epoch.
- During this time, data columns for the additional custody column (let's say column 11) may arrive via gossip as we're already subscribed to the topic, and may be incorrectly used to satisfy the existing data availability requirement (10 columns). This results in the additional column (instead of a required one) getting persisted, leaving the database inconsistent.
2025-08-07 00:45:04 +00:00
Eitan Seri-Levi
db8b6be9df Data column custody info (#7648)
#7647


Introduces a new record in the blobs db: `DataColumnCustodyInfo`.

When `DataColumnCustodyInfo` exists in the db, this indicates that a recent cgc change has occurred and/or that a custody backfill sync is currently in progress (custody backfill will be added as a separate PR). When a cgc change has occurred, `earliest_available_slot` will be equal to the slot at which the cgc change occurred. During custody backfill sync, `earliest_available_slot` should be updated incrementally as the sync progresses.

~~Note that if `advertise_false_custody_group_count` is enabled we do not add a `DataColumnCustodyInfo` record in the db as that would affect the status v2 response.~~
(See comment https://github.com/sigp/lighthouse/pull/7648#discussion_r2212403389)

~~If `DataColumnCustodyInfo` doesn't exist in the db this indicates that we have fulfilled our custody requirements up to the DA window.~~
(It now always exists, and the slot will be set to `None` once backfill is complete)

StatusV2 now uses `DataColumnCustodyInfo` to calculate the `earliest_available_slot` if a `DataColumnCustodyInfo` record exists in the db; if its slot is `None`, then we return the `oldest_block_slot`.
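
Roughly, the StatusV2 selection logic (illustrative types):

```rust
/// Illustrative shape: `slot` is `Some` while a cgc change/backfill is
/// outstanding, and `None` once custody requirements are fulfilled.
struct DataColumnCustodyInfo {
    slot: Option<u64>,
}

fn status_v2_earliest_available_slot(
    custody_info: Option<&DataColumnCustodyInfo>,
    oldest_block_slot: u64,
) -> u64 {
    match custody_info.and_then(|info| info.slot) {
        Some(slot) => slot,
        // No record, or backfill complete: fall back to oldest_block_slot.
        None => oldest_block_slot,
    }
}
```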
2025-07-22 13:30:30 +00:00
Michael Sproul
c7bb3b00e4 Fix lookups of the block at oldest_block_slot (#7693)
Closes:

- https://github.com/sigp/lighthouse/issues/7690


  Another checkpoint sync related fix! See issue for a description of the bug.

We fix it by just loading the block root of the `oldest_block_slot`, rather than trying to load the slot prior, which will always fail.
2025-07-02 23:40:04 +00:00
Michael Sproul
a459a9af98 Fix and test checkpoint sync from genesis (#7689)
Fix a bug involving checkpoint sync from genesis reported by Sunnyside labs.


  Ensure that the store's `anchor` is initialised prior to storing the genesis state. In the case of checkpoint sync from genesis, the genesis state will be in the _hot DB_, so we need the hot DB metadata to be initialised in order to store it.

I've extended the existing checkpoint sync tests to cover this case as well. There are some subtleties around what the `state_upper_limit` should be set to in this case. I've opted to just enable state reconstruction from the start in the test so it gets set to 0, which results in an end state more consistent with the other test cases (full state reconstruction). This is required because we can't meaningfully do any state reconstruction when the split slot is 0 (there is no range of frozen slots to reconstruct).
2025-07-02 04:50:33 +00:00
Pawan Dhananjay
e305cb1b92 Custody persist fix (#7661)
N/A


Persist the epoch -> cgc values. This ensures that `ValidatorRegistrations::latest_validator_custody_requirement` always returns a `Some` value post-restart, assuming the `epoch_validator_custody_requirements` map has been updated in previous runs.
2025-07-01 06:06:37 +00:00
Michael Sproul
c1f94d9b7b Test database schema stability (#7669)
This PR implements some heuristics to check for breaking database changes. The goal is to prevent accidental changes to the database schema occurring without a version bump.
2025-06-30 10:55:47 +00:00
Pawan Dhananjay
9b1f3ed9d1 Add gossip check (#7652)
N/A


  Add an additional gossip condition.
2025-06-27 00:26:38 +00:00