Commit Graph

7239 Commits

Author SHA1 Message Date
Michael Sproul
4908687e7d Proposer duties backwards compat (#8335)
The beacon API spec wasn't updated to use the Fulu definition of `dependent_root` for the proposer duties endpoint. No other client updated their logic, so to retain backwards compatibility the decision has been made to continue using the block root at the end of epoch `N - 1`, and introduce a new v2 endpoint down the track to use the correct dependent root.

Eth R&D discussion: https://discord.com/channels/595666850260713488/598292067260825641/1433036715848765562


  Change the behaviour of the v1 endpoint back to using the last slot of `N - 1` rather than the last slot of `N - 2`. This introduces the possibility of dependent root false positives (the root can change without changing the shuffling), but causes the least compatibility issues with other clients.
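The v1 rule described above can be sketched as simple slot arithmetic. This is a hypothetical illustration (names and the 32-slots-per-epoch constant are assumptions, not Lighthouse's actual code): the dependent root is the block root at the last slot of epoch `N - 1`.

```rust
// Hypothetical sketch of the v1 dependent-root rule: the dependent root is
// the block root at the last slot of epoch N - 1 (not N - 2). Assumes
// mainnet's 32 slots per epoch and epoch > 0.
const SLOTS_PER_EPOCH: u64 = 32;

/// Slot whose block root serves as the v1 dependent root for duties in `epoch`.
fn v1_dependent_root_slot(epoch: u64) -> u64 {
    // The first slot of epoch N, minus one, is the last slot of epoch N - 1.
    epoch * SLOTS_PER_EPOCH - 1
}
```

Because the root at that slot can change without the shuffling changing, this is where the false positives mentioned above come from.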


Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
2025-11-03 08:06:03 +00:00
Eitan Seri-Levi
25832e5862 Add mainnet configs (#8344)
#8135

mainnet config PR: https://github.com/eth-clients/mainnet/pull/11


  


Co-Authored-By: Eitan Seri-Levi <eserilev@ucsc.edu>

Co-Authored-By: Michael Sproul <michael@sigmaprime.io>

Co-Authored-By: Tan Chee Keong <tanck@sigmaprime.io>
2025-11-03 06:53:13 +00:00
Mac L
2c9b670f5d Rework lighthouse_version to reduce spurious recompilation (#8336)
#8311


  Removes the `git_version` crate from `lighthouse_version` and implements git `HEAD` tracking manually.
This removes the (mostly) broken dirty tracking but prevents spurious recompilation of the `lighthouse_version` crate.

This also reworks the way crate versions are handled by utilizing workspace version inheritance and Cargo environment variables.
This means the _only_ place where Lighthouse's version is defined is in the top level `Cargo.toml` for the workspace. All relevant binaries then inherit this version. This largely makes the `change_version.sh` script useless so I've removed it, although we could keep a version which just alters the workspace version (if we need to maintain compatibility with certain build/release tooling).

### When is a Rebuild Triggered?

1. When the build.rs file is changed.
2. When the HEAD commit changes (added, removed, rebased, etc)
3. When the branch changes (this includes changing to the current branch, and creating a detached HEAD)

Note that working/staged changes will not trigger a recompile of `lighthouse_version`.
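The rebuild triggers above can be sketched with a `build.rs` along these lines. This is an assumption-laden sketch, not Lighthouse's exact implementation; the paths and helper name are hypothetical:

```rust
// Hypothetical build.rs sketch of manual git HEAD tracking.
use std::fs;
use std::path::{Path, PathBuf};

/// If HEAD is a symbolic ref ("ref: refs/heads/main"), return the path of the
/// branch file it points at, so new commits on that branch trigger rebuilds.
/// A detached HEAD contains a raw SHA and yields `None`.
fn branch_ref_path(git_dir: &Path, head_contents: &str) -> Option<PathBuf> {
    head_contents
        .strip_prefix("ref: ")
        .map(|r| git_dir.join(r.trim()))
}

fn main() {
    let git_dir = Path::new("../.git");
    // Rebuild when HEAD itself changes (checkout, rebase, detached HEAD, ...).
    println!("cargo:rerun-if-changed={}", git_dir.join("HEAD").display());
    if let Ok(head) = fs::read_to_string(git_dir.join("HEAD")) {
        if let Some(branch) = branch_ref_path(git_dir, &head) {
            println!("cargo:rerun-if-changed={}", branch.display());
        }
    }
}
```

Since only `HEAD` and the current branch ref file are registered with `cargo:rerun-if-changed`, working/staged changes don't invalidate the build.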


Co-Authored-By: Mac L <mjladson@pm.me>

Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
2025-11-03 02:46:31 +00:00
Eitan Seri-Levi
b57d046c4a Fix CGC backfill race condition (#8267)
During custody backfill sync there is an edge case where we update the CGC at the same time as we are importing a batch of columns, which may cause us to incorrectly overwrite values when calling `backfill_validator_custody_requirements`. To prevent this race condition, the expected CGC is now passed into this function and checked against the current validator CGC. If the values aren't equal, this probably indicates that a very recent CGC change occurred, so we do not prune/update values in the `epoch_validator_custody_requirements` map.
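The guard described above can be sketched as a compare-before-write. The names approximate those in the commit message, but the data layout is an assumption:

```rust
// Minimal sketch of the expected-CGC guard (hypothetical data layout).
use std::collections::HashMap;

struct CustodyContext {
    current_validator_cgc: u64,
    epoch_validator_custody_requirements: HashMap<u64, u64>,
}

impl CustodyContext {
    /// Prune/update the per-epoch map only if the caller's expected CGC still
    /// matches the current one; a mismatch means a CGC change raced us.
    fn backfill_validator_custody_requirements(
        &mut self,
        expected_cgc: u64,
        completed_epoch: u64,
    ) -> bool {
        if expected_cgc != self.current_validator_cgc {
            // A very recent CGC change probably occurred: don't overwrite.
            return false;
        }
        self.epoch_validator_custody_requirements
            .remove(&completed_epoch);
        true
    }
}
```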


  


Co-Authored-By: Eitan Seri-Levi <eserilev@ucsc.edu>
2025-11-03 00:51:42 +00:00
Michael Sproul
c46cb0b5b0 Merge remote-tracking branch 'origin/release-v8.0' into unstable 2025-11-03 09:28:48 +11:00
Eitan Seri-Levi
55588f7789 Rust 1.91 lints (#8340)
Co-Authored-By: Eitan Seri- Levi <eserilev@gmail.com>
2025-10-31 08:08:37 +00:00
chonghe
af9cae4d3e Add version to the response of beacon API client side (#8326)
Co-Authored-By: Tan Chee Keong <tanck@sigmaprime.io>
2025-10-30 16:47:27 +00:00
Jimmy Chen
5978b4a677 Bump gas limit to 60M (#8331)
Bump gas limit to 60M as part of Fusaka mainnet release.


  


Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>

Co-Authored-By: Jimmy Chen <jimmy@sigmaprime.io>
2025-10-30 04:48:30 +00:00
Jimmy Chen
30094f0c08 Remove redundant subscribe_all_data_column_subnets field from network (#8259)
Addresses this comment: https://github.com/sigp/lighthouse/pull/8254#discussion_r2447998786

We're currently using `subscribe_all_data_column_subnets` here to subscribe to all subnets
522bd9e9c6/beacon_node/lighthouse_network/src/types/topics.rs (L82-L92)

But it's unnecessary because the else path also works for supernodes (it uses `sampling_subnets` instead)

The big diffs will disappear once #8254 is merged.


  


Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
2025-10-30 03:42:36 +00:00
Odinson
1cee814a95 Fix: custody backfill sync display incorrect time estimation (#8291)
Fixes #8268


  Switch `est_time` from the time until the DA boundary slot to the time to finish the total custody work, from the original earliest data-column slot down to the DA boundary


Co-Authored-By: PoulavBhowmick03 <bpoulav@gmail.com>
2025-10-30 03:16:07 +00:00
Michael Sproul
f70c650d81 Update spec tests to v1.6.0-beta.1 (#8263)
Update the EF spec tests to v1.6.0-beta.1

There are a few new light client tests (which we pass), and some for progressive containers, which we haven't implemented (we ignore them).


Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
2025-10-29 08:21:23 +00:00
Pawan Dhananjay
b69c2f5ba1 Run CI tests only recent forks (#8271)
Partially addresses #8248


  Run the beacon chain, http and network tests only for recent forks instead of everything from phase 0.
Also added gloas to the recent forks list. I thought that would be a good way to know if changes in the current fork affect future forks.

Not completely sure if we should run for future forks, but added it so that we can discuss here.


Co-Authored-By: Pawan Dhananjay <pawandhananjay@gmail.com>

Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
2025-10-29 07:00:25 +00:00
Michael Sproul
3bfdfa5a1a Merge remote-tracking branch 'origin/release-v8.0' into unstable 2025-10-29 16:20:42 +11:00
hopinheimer
6f0d0dec75 Fix failing CI for compile-with-beta-compiler (#8317)
Co-Authored-By: hopinheimer <knmanas6@gmail.com>
2025-10-29 05:12:57 +00:00
hopinheimer
341eeeabe3 Extracting the Error impl from the monolith eth2 (#7878)
Currently the `eth2` crate lib file is a large monolith of almost 3000 lines of code. As part of the bosun migration we are trying to increase code readability and modularity in the Lighthouse crates first, which can then be transferred to bosun.


Co-Authored-By: hopinheimer <knmanas6@gmail.com>

Co-Authored-By: hopinheimer <48147533+hopinheimer@users.noreply.github.com>
2025-10-28 07:02:02 +00:00
Mac L
f4b1bb46b5 Remove compare_fields and import from crates.io (#8189)
Use the recently published `compare_fields` and remove it from Lighthouse
https://crates.io/crates/compare_fields


Co-Authored-By: Mac L <mjladson@pm.me>
2025-10-28 05:49:47 +00:00
Mac L
f5809aff87 Bump ssz_types to v0.12.2 (#8032)
https://github.com/sigp/lighthouse/issues/8012


  Replace all instances of `VariableList::from` and `FixedVector::from` to their `try_from` variants.

While I tried to use proper error handling in most cases, there were certain situations where `try_from` can trivially never fail, and adding an `expect` there avoided a lot of extra complexity.
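The `from` → `try_from` migration pattern described above can be sketched with a hypothetical bounded list (the real `ssz_types` `VariableList`/`FixedVector` types are not reproduced here):

```rust
// Sketch of a length-bounded list whose construction is fallible, standing in
// for ssz_types' VariableList. The bound N is a const generic parameter.
#[derive(Debug)]
struct BoundedList<const N: usize>(Vec<u64>);

impl<const N: usize> TryFrom<Vec<u64>> for BoundedList<N> {
    type Error = String;

    fn try_from(v: Vec<u64>) -> Result<Self, Self::Error> {
        if v.len() > N {
            // The failure case that an infallible `from` would have to hide.
            Err(format!("length {} exceeds bound {}", v.len(), N))
        } else {
            Ok(BoundedList(v))
        }
    }
}
```

At call sites where the length trivially cannot exceed the bound, `.expect("length within bound")` keeps the code simple without silently ignoring the error.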


Co-Authored-By: Mac L <mjladson@pm.me>

Co-Authored-By: Michael Sproul <michaelsproul@users.noreply.github.com>

Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
2025-10-28 04:01:09 +00:00
chonghe
5840004c36 Add /lighthouse/custody/info to Lighthouse book (#8305)
Co-Authored-By: Tan Chee Keong <tanck@sigmaprime.io>
2025-10-28 03:41:08 +00:00
kevaundray
6e71fd7c19 chore: fix typo (#8292)
Co-Authored-By: kevaundray <kevtheappdev@gmail.com>
2025-10-28 01:20:43 +00:00
Lion - dapplion
5db1dff8a6 Downgrade gossip logs set to INFO level (#8288)
While testing non-finality checkpoint sync, these logs showed up in my INFO grep and were noisy. INFO should only include the notifier and exceptional events. I don't see why the user would care about this info.


  Downgrade to debug


Co-Authored-By: dapplion <35266934+dapplion@users.noreply.github.com>
2025-10-27 23:33:58 +00:00
kevaundray
613ce3c011 chore!: remove pub visibility on OVERFLOW_LRU_CAPACITY and STATE_LRU_CAPACITY_NON_ZERO (#8234)
- Renames `OVERFLOW_LRU_CAPACITY` to `OVERFLOW_LRU_CAPACITY_NON_ZERO` to follow naming convention of `STATE_LRU_CAPACITY_NON_ZERO`
- Makes  `OVERFLOW_LRU_CAPACITY_NON_ZERO` and `STATE_LRU_CAPACITY_NON_ZERO` private since they are only used in this module
- Moves `STATE_LRU_CAPACITY` into test module since it is only used for tests


  


Co-Authored-By: Kevaundray Wedderburn <kevtheappdev@gmail.com>
2025-10-27 11:23:45 +00:00
chonghe
9baef8b849 Update Lighthouse book (#8284)
Co-Authored-By: Tan Chee Keong <tanck@sigmaprime.io>

Co-Authored-By: chonghe <44791194+chong-he@users.noreply.github.com>
2025-10-27 09:09:54 +00:00
Michael Sproul
d67ae92112 Implement /lighthouse/custody/info API (#8276)
Closes:

- https://github.com/sigp/lighthouse/issues/8249


  New `/lighthouse/custody` API including:

- [x] Earliest custodied data column slot
- [x] Node CGC
- [x] Custodied columns


Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
2025-10-27 08:48:12 +00:00
chonghe
ba706ce3bf Revise logging in BlobsByRoot requests (#8296)
#7756 introduces a logging issue, where the relevant log:
da5b231720/beacon_node/network/src/network_beacon_processor/rpc_methods.rs (L380-L385)

obtains the `block_root` from `slots_by_block_root.keys()`. If the `block_root` is empty (block not found in the data availability checker), then the log will not show any block root:

`DEBUG BlobsByRoot outgoing response processed       peer_id: 16Uiu2HAmCBxs1ZFfsbAfhSA98rUUL8Q1egLPb6WpGdKZxX6HqQYX, block_root: [], returned: 4`

This PR revises the log to show the block roots from the request as a vector of block roots


  


Co-Authored-By: Tan Chee Keong <tanck@sigmaprime.io>
2025-10-27 08:48:10 +00:00
Lion - dapplion
da5b231720 Prevent dropping large binary data to logs (#8290)
While testing non-finalized checkpoint sync, I noticed this log dumping blob data in Debug format to the logs.


  Log only block root and commitment of each blob


Co-Authored-By: dapplion <35266934+dapplion@users.noreply.github.com>
2025-10-26 23:47:25 +00:00
0x19dG87
4b522d760b Remove deprecated flag --disable-deposit-contract-sync from doc (#8124)
Co-Authored-By: 0x19dG87 <dmytro.ico@gmail.com>
2025-10-24 01:13:39 +00:00
Jimmy Chen
b59feb042c Release v8.0.0 rc.2 (#8255)
Open PRs to include for the release
- #7907
- #8247
- #8251
- #8253
- #8254
- #8265
- #8269
- #8266


  


Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>

Co-Authored-By: Jimmy Chen <jimmy@sigmaprime.io>
v8.0.0-rc.2
2025-10-23 07:05:49 +00:00
Michael Sproul
2e55a0a9c8 New design for blob/column pruning (#8266)
We are seeing some crazy IO utilisation on Holesky now that data columns have started to expire. Our previous approach of _iterating the entire blobs DB_ doesn't seem to be scaling.


  New blob pruning algorithm that uses a backwards block iterator from the epoch we want to prune, stopping early if an already-pruned slot is encountered.
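The early-exit idea above can be sketched as follows. A real implementation walks a backwards block iterator rather than raw slots; this is a deliberately simplified sketch over a slot-keyed map, with hypothetical names:

```rust
// Sketch of backwards pruning with early exit, over a slot-keyed map.
use std::collections::BTreeMap;

/// Delete sidecar data walking backwards from `prune_up_to`, stopping at the
/// first slot with nothing stored (assumed already pruned by a previous run).
fn prune_backwards(db: &mut BTreeMap<u64, Vec<u8>>, prune_up_to: u64) -> u64 {
    let mut pruned = 0;
    for slot in (0..=prune_up_to).rev() {
        if db.remove(&slot).is_none() {
            // Early exit: avoids iterating the entire blobs DB every run.
            break;
        }
        pruned += 1;
    }
    pruned
}
```

The second run over the same range does no work, which is the property that keeps IO bounded as data expires.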


Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
2025-10-23 05:54:24 +00:00
Pawan Dhananjay
c668cb7d9a Only publish reconstructed columns that we need to sample (#8269)
N/A


  When reconstructing, we were publishing all columns that we didn't already have in the da cache. This is unnecessary outbound bandwidth for a node that is supposed to sample fewer columns.
This PR changes the behaviour to publish only columns that we are supposed to sample in the topics that we are subscribed to.
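The behaviour change above amounts to an extra filter on the reconstructed set. A minimal sketch, with hypothetical names rather than Lighthouse's actual API:

```rust
// Publish only columns this node samples and hasn't already received.
use std::collections::HashSet;

fn columns_to_publish(
    reconstructed: &[u64],
    sampling_columns: &HashSet<u64>,
    already_received: &HashSet<u64>,
) -> Vec<u64> {
    reconstructed
        .iter()
        .copied()
        .filter(|idx| sampling_columns.contains(idx) && !already_received.contains(idx))
        .collect()
}
```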


Co-Authored-By: Pawan Dhananjay <pawandhananjay@gmail.com>
2025-10-23 05:05:08 +00:00
Jimmy Chen
d8c6c57029 Trigger backfill on startup if user switches to a supernode or semi-supernode (#8265)
This PR adds backfill functionality to nodes switching to become a supernode or semi-supernode. Please note that we currently only support a CGC increase, i.e. if the node's already custodying 67 columns, switching to semi-supernode (64) will have no effect.


  From @eserilev
> if a node's cgc increases on start up, we just need two things for custody backfill to do its thing
>
> - data column custody info needs to be updated to reflect the cgc change
> - `CustodyContext::validator_registrations::epoch_validator_custody_requirements` needs to be updated to reflect the cgc change

- [x] Add tests
- [x] Test on devnet-3
- [x] switch to supernode
- [x] switch to semisupernode
- [x] Test on live testnets
- [x] Update docs (functions)


Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
2025-10-23 02:56:09 +00:00
Jimmy Chen
43c5e924d7 Add --semi-supernode support (#8254)
Addresses #8218

A simplified version of #8241 for the initial release.

I've tried to minimise the logic change in this PR; although introducing the `NodeCustodyType` enum still results in quite a bit of diff, the actual logic change in `CustodyContext` is quite small.

The main changes are in the `CustodyContext` struct
* ~~combining `validator_custody_count` and `current_is_supernode` fields into a single `custody_group_count_at_head` field. We persist the cgc of the initial cli values into the `custody_group_count_at_head` field and only allow for increase (same behaviour as before).~~
* I noticed the above approach caused a backward compatibility issue, I've [made a fix](15569bc085) and changed the approach slightly (which was actually what I had originally in mind):
* when initialising, only override the `validator_custody_count` value if either the `--supernode` or `--semi-supernode` flag is used; otherwise leave it at the existing default of `0`. Most other logic remains unchanged.

All existing validator custody unit tests are still passing, and I've added additional tests to cover semi-supernode and restoring `CustodyContext` from disk.

Note: I've added a `WARN` if the user attempts to switch to `--semi-supernode` or `--supernode`; this currently has no effect, but once @eserilev's column backfill is merged, we should be able to support this quite easily.

Things to test
- [x] cgc in metadata / enr
- [x] cgc in metrics
- [x] subscribed subnets
- [x] getBlobs endpoint


  


Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
2025-10-22 05:23:17 +00:00
Eitan Seri-Levi
33e21634cb Custody backfill sync (#7907)
#7603


  #### Custody backfill sync service
Similar in many ways to the current backfill service. There may be ways to unify the two services. The difficulty there is that the current backfill service tightly couples blocks and their associated blobs/data columns. Any attempts to unify the two services should be left to a separate PR in my opinion.

#### `SyncNetworkContext`
`SyncNetworkContext` manages custody sync data-columns-by-range requests separately from other sync RPC requests. I think this is a nice separation considering that custody backfill is its own service.

#### Data column import logic
The import logic verifies KZG commitments and that each data column's block root matches the block root in the node's store before importing columns
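The block-root half of the import check above can be sketched as a simple set-membership filter. The names and data shapes here are hypothetical, and the KZG verification step is elided:

```rust
// Keep only columns whose block root is already known to the node's store.
use std::collections::HashSet;

type Root = [u8; 32];

fn filter_importable(
    columns: Vec<(u64, Root)>,
    known_roots: &HashSet<Root>,
) -> Vec<(u64, Root)> {
    columns
        .into_iter()
        .filter(|(_, root)| known_roots.contains(root))
        .collect()
}
```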

#### New channel to send messages to `SyncManager`
Now external services can communicate with the `SyncManager`. In this PR this channel is used to trigger a custody sync. Alternatively we may be able to use the existing `mpsc` channel that the `SyncNetworkContext` uses to communicate with the `SyncManager`. I will spend some time reviewing this.


Co-Authored-By: Eitan Seri-Levi <eserilev@ucsc.edu>

Co-Authored-By: Eitan Seri- Levi <eserilev@gmail.com>

Co-Authored-By: dapplion <35266934+dapplion@users.noreply.github.com>
2025-10-22 03:51:34 +00:00
Eitan Seri-Levi
46dde9afee Fix data column rpc request (#8247)
Fixes an issue mentioned in this comment regarding data column rpc requests:
https://github.com/sigp/lighthouse/issues/6572#issuecomment-3400076236


  


Co-Authored-By: Eitan Seri-Levi <eserilev@ucsc.edu>

Co-Authored-By: Michael Sproul <micsproul@gmail.com>
2025-10-21 23:54:35 +00:00
Michael Sproul
21bab0899a Improve block header signature handling (#8253)
Closes:

- https://github.com/sigp/lighthouse/issues/7650


  Reject blob and data column sidecars from RPC with invalid signatures.


Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
2025-10-21 13:58:12 +00:00
chonghe
040d992132 Add version to the response of beacon API getPendingConsolidations (#8251)
* #7440


  


Co-Authored-By: Tan Chee Keong <tanck@sigmaprime.io>
2025-10-21 13:58:10 +00:00
Jimmy Chen
66f88f6bb4 Use millis_from_slot_start when comparing against reconstruction deadline (#8246)
This recent PR below changes the max reconstruction delay to be a function of slot time. However it uses `seconds_from_slot_start` when comparing (dropping the sub-second part), so it might delay reconstruction on networks where the slot time isn't a multiple of 4, e.g. on gnosis this only happens at 2s instead of 1.25s:
- https://github.com/sigp/lighthouse/pull/8067#discussion_r2443875068


  Use `millis_from_slot_start` when comparing against reconstruction deadline

Also added some tests for reconstruction delay.
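The precision issue can be sketched as follows. Truncating the elapsed time to whole seconds makes a 1.25s deadline unreachable until 2s, while a millisecond comparison fires on time. The function names are hypothetical stand-ins, not Lighthouse's actual helpers:

```rust
// Deadline comparison at millisecond vs. second precision.
use std::time::Duration;

fn past_deadline_millis(from_slot_start: Duration, deadline: Duration) -> bool {
    from_slot_start.as_millis() >= deadline.as_millis()
}

fn past_deadline_secs(from_slot_start: Duration, deadline: Duration) -> bool {
    // Dropping sub-second precision, as the pre-fix code effectively did:
    // 1.3s truncates to 1s, which never reaches a 1.25s deadline.
    Duration::from_secs(from_slot_start.as_secs()) >= deadline
}
```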


Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
2025-10-21 02:24:43 +00:00
Pawan Dhananjay
092aaae961 Sync cleanups (#8230)
N/A


  1. In the batch retry logic, we were failing to set the batch state to `AwaitingDownload` before attempting a retry. This PR sets it to `AwaitingDownload` before the retry and sets it back to `Downloading` if the retry succeeded in sending out a request.
2. Remove all peer scoring logic from retrying and rely on just de-prioritising the failed peer. I finally concede the point to @dapplion 😄
3. Changes `block_components_by_range_request` to accept `block_peers` and `column_peers`. This is to ensure that we use the full synced peerset for requesting columns, in order to avoid splitting the column peers among multiple head chains. During forward sync, we want the block peers to be the peers from the syncing chain and the column peers to be all synced peers from the peerdb.

Also fixes a typo and calls `attempt_send_awaiting_download_batches` from more places
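Fix (1) above can be sketched as a small state-machine discipline: reset to `AwaitingDownload` before the retry, and only mark `Downloading` once a request actually went out. The enum and function are hypothetical stand-ins for sync's batch state:

```rust
// Batch state reset before retry (hypothetical, simplified).
#[derive(Debug, PartialEq)]
enum BatchState {
    AwaitingDownload,
    Downloading,
}

fn retry_batch(state: &mut BatchState, send_request: impl FnOnce() -> bool) {
    // Without this reset, a failed send leaves the batch stuck in
    // `Downloading` with no request in flight.
    *state = BatchState::AwaitingDownload;
    if send_request() {
        *state = BatchState::Downloading;
    }
}
```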


Co-Authored-By: Pawan Dhananjay <pawandhananjay@gmail.com>
2025-10-20 11:50:00 +00:00
Jimmy Chen
c012f46cb9 Fix get_header JSON deserialization. (#8228)
#8224




Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
2025-10-20 07:10:40 +00:00
chonghe
2b30c96f16 Avoid attempting to serve blobs after Fulu fork (#7756)
* #7122


  


Co-Authored-By: Tan Chee Keong <tanck@sigmaprime.io>

Co-Authored-By: chonghe <44791194+chong-he@users.noreply.github.com>
2025-10-20 06:29:21 +00:00
Jimmy Chen
da93b89e90 Feature gate test CLI flags (#8231)
Closes #6980


  I think these flags may be useful in future peerdas / das testing, and would be useful to keep. Hence I've gated them behind a `testing` feature flag.


Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
2025-10-20 03:14:16 +00:00
Michael Sproul
2f8587301d More proposer shuffling cleanup (#8130)
Addressing more review comments from:

- https://github.com/sigp/lighthouse/pull/8101

I've also tweaked a few more things that I think are minor bugs.


  - Instrument `ensure_state_can_determine_proposers_for_epoch`
- Fix `block_root` usage in `compute_proposer_duties_from_head`. This was a regression introduced in 8101 😬 .
- Update the `state_advance_timer` to prime the next-epoch proposer cache post-Fulu.


Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
2025-10-20 03:14:14 +00:00
Odinson
79716f6ec1 Max reconstruction delay as a function of slot time (#8067)
Fixes #8054


  


Co-Authored-By: PoulavBhowmick03 <bpoulav@gmail.com>
2025-10-17 08:49:13 +00:00
Jimmy Chen
76a37a0aef Revert incorrect fix made in #8179 (#8215)
This PR reverts #8179.

It turns out that the fix was invalid because an unknown root is never considered a finalized descendant:

522bd9e9c6/consensus/proto_array/src/proto_array.rs (L976-L979)

so for any data columns with unknown parents, it will always penalise the gossip peer and disconnect it pretty quickly. On a small network, the node may lose all of its peers.

The impact is pretty obvious when the peer count is small and sync speed is slow, and is therefore easily reproducible by running a fresh supernode on devnet-3.

This isn't as obvious on a live testnet like holesky / sepolia; we haven't noticed it there, probably due to the high peer count and sync speed: the nodes might be able to reach head quickly before losing too many peers.


  The previous behaviour isn't ideal but is safe: trigger an unknown parent lookup and penalise the bad peer if it happens to be malicious or faulty. So for now it's safer to revert the change and plan a proper fix after the v8 release.


Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
2025-10-16 23:25:30 +00:00
Mac L
f13d0615fd Add eip_3076 crate (#8206)
#7894


  Moves the `Interchange` format from `slashing_protection` and thus removes the dependency on `slashing_protection` from `eth2` which can now just depend on the slimmer `eip_3076` crate.


Co-Authored-By: Mac L <mjladson@pm.me>
2025-10-16 16:10:42 +00:00
SunnysidedJ
d1e06dc40d #6853 Adding store tests for data column pruning (#7228)
#6853 Update store tests to cover data column pruning


  Created a helper function `check_data_column_existence` which is a copy of `check_blob_existence` but checking data columns instead.
The helper function is then used to check whether data columns are also pruned when blobs are pruned if PeerDAS is enabled.


Co-Authored-By: SunnysidedJ <j@testinprod.io>

Co-Authored-By: Eitan Seri-Levi <eserilev@ucsc.edu>

Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
2025-10-16 15:20:26 +00:00
Pawan Dhananjay
73e75e3e69 Ignore extra columns in da cache (#8201)
N/A


  Found this issue in sepolia. Note: the custody requirement for this node is 100.
```
Oct 14 11:25:40.053 DEBUG Reconstructed columns                         count: 28, block_root: 0x4d7946dec0ab59f2afd46610d7c54af555cb4c2851d9eea7d83dd17cf6e96aae, slot: 8725628
Oct 14 11:25:45.568 WARN  Internal availability check failure           block_root: 0x4d7946dec0ab59f2afd46610d7c54af555cb4c2851d9eea7d83dd17cf6e96aae, error: Unexpected("too many columns got 128 expected 100")
```

So if any of the block components arrives late, then we reconstruct all 128 columns, try to add them to the da cache, and end up with more columns than needed for availability in the cache.

There are 2 ways I can think of fixing this:
1. pass only the required columns to the da cache after reconstruction here 60df5f4ab6/beacon_node/beacon_chain/src/data_availability_checker.rs (L647-L648)
2. Ensure that we add only columns that we need to sample in the da cache. I think this is safer since we can add columns to the cache from multiple code paths and this fixes it at the source.

~~This PR implements (2).~~ Having thought more about it, I think (1) is cleaner since we also filter gossip and rpc columns before calling `put_kzg_verified_data_columns`.


Co-Authored-By: Pawan Dhananjay <pawandhananjay@gmail.com>
2025-10-16 09:25:44 +00:00
Mac L
345faf52cb Remove safe_arith and import from crates.io (#8191)
Use the recently published `safe_arith` and remove it from Lighthouse
https://crates.io/crates/safe_arith


Co-Authored-By: Mac L <mjladson@pm.me>
2025-10-15 06:03:46 +00:00
Jimmy Chen
5886a48d96 Add max_blobs_per_block check to data column gossip validation (#8198)
Addresses this spec change
https://github.com/ethereum/consensus-specs/pull/4650

Add a `max_blobs_per_block` check to gossip data column validation so we reject large columns before processing (we currently do this check during processing).
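The bound check can be sketched as follows. A data column carries one cell per blob in the block, so more cells than `max_blobs_per_block` means the column is invalid and can be rejected at gossip time. Names here are hypothetical:

```rust
// Reject oversized data columns before any further processing.
fn validate_column_cell_count(
    cells: usize,
    max_blobs_per_block: usize,
) -> Result<(), String> {
    if cells > max_blobs_per_block {
        Err(format!(
            "column has {cells} cells but max_blobs_per_block is {max_blobs_per_block}"
        ))
    } else {
        Ok(())
    }
}
```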


  


Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
2025-10-15 01:52:35 +00:00
Eitan Seri-Levi
60df5f4ab6 Downgrade light client error logs (#8196)
Temporary stop gap for #7002


  Downgrade light client errors to debug

We should eventually fix our light client objects so they can contain data across forks.


Co-Authored-By: Eitan Seri- Levi <eserilev@gmail.com>
2025-10-14 03:18:50 +00:00
Jimmy Chen
1fb94ce432 Release v8.0.0-rc.1 (#8185) v8.0.0-rc.1 2025-10-13 20:32:43 +11:00