Fixes #8268
Switch `est_time` from the time until the DA boundary slot to the time to finish the total custody work, from the original earliest data-column slot down to the DA boundary.
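A minimal sketch of the revised estimate with hypothetical names (`est_time`'s real signature differs): derive a per-slot rate from the slots already back-filled and extrapolate over the remaining span down to the DA boundary, rather than measuring wall-clock time to the boundary slot.

```rust
use std::time::{Duration, Instant};

/// Estimate time to finish custody backfill. Backfill walks backwards from
/// `earliest_data_column_slot` towards `da_boundary_slot`.
fn est_time(
    earliest_data_column_slot: u64, // slot custody backfill started from
    current_slot: u64,              // slot the backfill has reached so far
    da_boundary_slot: u64,          // slot the backfill will stop at
    started_at: Instant,            // when custody backfill began
) -> Option<Duration> {
    let done = earliest_data_column_slot.checked_sub(current_slot)?;
    let remaining = current_slot.checked_sub(da_boundary_slot)?;
    if done == 0 {
        return None; // no throughput sample yet
    }
    let secs_per_slot = started_at.elapsed().as_secs_f64() / done as f64;
    Some(Duration::from_secs_f64(remaining as f64 * secs_per_slot))
}
```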
Co-Authored-By: PoulavBhowmick03 <bpoulav@gmail.com>
While testing non-finality checkpoint sync, these logs showed up in my INFO grep and were noisy. INFO should only include the notifier and exceptional events; I don't see why the user would care about this info.
Downgrade to debug
Co-Authored-By: dapplion <35266934+dapplion@users.noreply.github.com>
- Renames `OVERFLOW_LRU_CAPACITY` to `OVERFLOW_LRU_CAPACITY_NON_ZERO` to follow naming convention of `STATE_LRU_CAPACITY_NON_ZERO`
- Makes `OVERFLOW_LRU_CAPACITY_NON_ZERO` and `STATE_LRU_CAPACITY_NON_ZERO` private since they are only used in this module
- Moves `STATE_LRU_CAPACITY` into test module since it is only used for tests
Co-Authored-By: Kevaundray Wedderburn <kevtheappdev@gmail.com>
#7756 introduces a logging issue, where the relevant log:
da5b231720/beacon_node/network/src/network_beacon_processor/rpc_methods.rs (L380-L385)
obtains the `block_root` from `slots_by_block_root.keys()`. If `slots_by_block_root` is empty (block not found in the data availability checker), then the log will not show any block root:
`DEBUG BlobsByRoot outgoing response processed peer_id: 16Uiu2HAmCBxs1ZFfsbAfhSA98rUUL8Q1egLPb6WpGdKZxX6HqQYX, block_root: [], returned: 4`
This PR revises the log to report the block roots from the request itself, as a vector of block roots.
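A minimal sketch with illustrative types: use the roots carried by the request rather than the keys of the availability-checker lookup, which are empty whenever the block isn't found.

```rust
use std::collections::HashMap;

struct BlobsByRootRequest {
    block_roots: Vec<[u8; 32]>,
}

fn response_log_roots(
    request: &BlobsByRootRequest,
    slots_by_block_root: &HashMap<[u8; 32], u64>,
) -> Vec<[u8; 32]> {
    // Before: `slots_by_block_root.keys()` - empty if the block was missing.
    let _ = slots_by_block_root.keys();
    // After: always reflects what the peer actually asked for.
    request.block_roots.clone()
}
```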
Co-Authored-By: Tan Chee Keong <tanck@sigmaprime.io>
While testing non-finalized checkpoint sync, I noticed a log that dumps blob data in Debug format to the logs.
Log only the block root and commitment of each blob.
Co-Authored-By: dapplion <35266934+dapplion@users.noreply.github.com>
We are seeing some crazy IO utilisation on Holesky now that data columns have started to expire. Our previous approach of _iterating the entire blobs DB_ doesn't seem to be scaling.
New blob pruning algorithm that uses a backwards block iterator from the epoch we want to prune, stopping early if an already-pruned slot is encountered.
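A minimal sketch of the new walk, with a hypothetical `Store` trait standing in for the hot/cold DB: the point is the backwards iteration with an early exit at an already-pruned slot, replacing the full scan of the blobs column.

```rust
trait Store {
    /// (slot, block_root) pairs walking backwards from `from_slot`.
    fn block_roots_iter_backwards(
        &self,
        from_slot: u64,
    ) -> Box<dyn Iterator<Item = (u64, [u8; 32])> + '_>;
    fn blobs_pruned_up_to(&self) -> u64;
    fn delete_blobs(&mut self, block_root: &[u8; 32]);
    fn set_blobs_pruned_up_to(&mut self, slot: u64);
}

fn prune_blobs(store: &mut impl Store, prune_until_slot: u64) {
    let already_pruned = store.blobs_pruned_up_to();
    let roots: Vec<[u8; 32]> = store
        .block_roots_iter_backwards(prune_until_slot)
        // Stop as soon as we reach slots a previous prune run handled.
        .take_while(|(slot, _)| *slot > already_pruned)
        .map(|(_, root)| root)
        .collect();
    for root in roots {
        store.delete_blobs(&root);
    }
    store.set_blobs_pruned_up_to(prune_until_slot);
}
```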
Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
N/A
When reconstructing, we were publishing all columns that we didn't already have in the da cache. This is unnecessary outbound bandwidth for a node that is supposed to sample fewer columns.
This PR changes the behaviour to publish only columns that we are supposed to sample in the topics that we are subscribed to.
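A minimal sketch of the new filter (names are illustrative): after reconstruction, publish only columns that are new to us *and* within our sampling set, i.e. on topics we're actually subscribed to.

```rust
use std::collections::HashSet;

fn columns_to_publish(
    reconstructed: Vec<u64>,           // indices of all reconstructed columns
    already_have: &HashSet<u64>,       // columns already seen via gossip/rpc
    sampling_columns: &HashSet<u64>,   // columns this node custodies/samples
) -> Vec<u64> {
    reconstructed
        .into_iter()
        .filter(|idx| !already_have.contains(idx))    // don't re-publish
        .filter(|idx| sampling_columns.contains(idx)) // subscribed topics only
        .collect()
}
```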
Co-Authored-By: Pawan Dhananjay <pawandhananjay@gmail.com>
This PR adds backfill functionality to nodes switching to become a supernode or semi-supernode. Please note that we currently only support a CGC increase, i.e. if the node's already custodying 67 columns, switching to semi-supernode (64) will have no effect.
From @eserilev
> if a node's cgc increases on start up, we just need two things for custody backfill to do its thing
>
> - data column custody info needs to be updated to reflect the cgc change
> - `CustodyContext::validator_registrations::epoch_validator_custody_requirements` needs to be updated to reflect the cgc change
- [x] Add tests
- [x] Test on devnet-3
- [x] switch to supernode
- [x] switch to semisupernode
- [x] Test on live testnets
- [x] Update docs (functions)
Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
Addresses #8218
A simplified version of #8241 for the initial release.
I've tried to minimise the logic change in this PR; although introducing the `NodeCustodyType` enum still results in quite a bit of diff, the actual logic change in `CustodyContext` is quite small.
The main changes are in the `CustodyContext` struct:
* ~~combining `validator_custody_count` and `current_is_supernode` fields into a single `custody_group_count_at_head` field. We persist the cgc of the initial cli values into the `custody_group_count_at_head` field and only allow for increase (same behaviour as before).~~
* I noticed the above approach caused a backward compatibility issue, I've [made a fix](15569bc085) and changed the approach slightly (which was actually what I had originally in mind):
* when initialising, only override the `validator_custody_count` value if either the `--supernode` or `--semi-supernode` flag is used; otherwise leave it as the existing default `0` (see the sketch below). Most other logic remains unchanged.
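A minimal sketch of that initialisation rule (names and group counts are illustrative): only the explicit CLI flags override the value; a regular full node keeps the default of `0`, so existing behaviour is unchanged.

```rust
enum NodeCustodyType {
    Fullnode,
    SemiSupernode,
    Supernode,
}

fn initial_validator_custody_count(
    node_type: &NodeCustodyType,
    number_of_custody_groups: u64, // from the chain spec, e.g. 128
) -> u64 {
    match node_type {
        NodeCustodyType::Supernode => number_of_custody_groups,         // all groups
        NodeCustodyType::SemiSupernode => number_of_custody_groups / 2, // e.g. 64
        NodeCustodyType::Fullnode => 0, // existing default, left untouched
    }
}
```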
All existing validator custody unit tests still pass, and I've added additional tests to cover semi-supernode and restoring `CustodyContext` from disk.
Note: I've added a `WARN` if the user attempts to switch to `--semi-supernode` or `--supernode` - this currently has no effect, but once @eserilev's column backfill is merged, we should be able to support this quite easily.
Things to test
- [x] cgc in metadata / enr
- [x] cgc in metrics
- [x] subscribed subnets
- [x] getBlobs endpoint
Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
#7603
#### Custody backfill sync service
Similar in many ways to the current backfill service. There may be ways to unify the two services. The difficulty there is that the current backfill service tightly couples blocks and their associated blobs/data columns. Any attempts to unify the two services should be left to a separate PR in my opinion.
#### `SyncNetworkContext`
`SyncNetworkContext` manages custody sync data-columns-by-range requests separately from other sync RPC requests. I think this is a nice separation, considering that custody backfill is its own service.
#### Data column import logic
The import logic verifies KZG commitments and that each data column's block root matches the block root in the node's store before importing columns.
#### New channel to send messages to `SyncManager`
Now external services can communicate with the `SyncManager`. In this PR this channel is used to trigger a custody sync. Alternatively we may be able to use the existing `mpsc` channel that the `SyncNetworkContext` uses to communicate with the `SyncManager`. I will spend some time reviewing this.
Co-Authored-By: Eitan Seri-Levi <eserilev@ucsc.edu>
Co-Authored-By: Eitan Seri- Levi <eserilev@gmail.com>
Co-Authored-By: dapplion <35266934+dapplion@users.noreply.github.com>
The recent PR below changes the max reconstruction delay to be a function of slot time. However, it uses `seconds_from_slot_start` when comparing (dropping the `nanos`), so it might delay reconstruction on networks where the slot time isn't a multiple of 4; e.g. on Gnosis this only happens at 2s instead of 1.25s:
- https://github.com/sigp/lighthouse/pull/8067#discussion_r2443875068
Use `millis_from_slot_start` when comparing against the reconstruction deadline.
Also added some tests for reconstruction delay.
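A minimal sketch of the comparison, assuming (per the figures above) a deadline of a quarter slot: at millisecond granularity the Gnosis deadline is 5000 / 4 = 1250ms, whereas an integer-seconds comparison effectively rounds it up and only trips at 2s.

```rust
use std::time::Duration;

fn should_reconstruct(time_into_slot: Duration, slot_duration: Duration) -> bool {
    let deadline_millis = slot_duration.as_millis() / 4;
    // Before: `time_into_slot.as_secs() >= deadline_secs`, which truncates
    // sub-second precision and delays the trigger on 5s slots.
    time_into_slot.as_millis() >= deadline_millis
}

fn main() {
    let gnosis_slot = Duration::from_secs(5);
    assert!(!should_reconstruct(Duration::from_millis(1200), gnosis_slot));
    assert!(should_reconstruct(Duration::from_millis(1250), gnosis_slot));
}
```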
Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
N/A
1. In the batch retry logic, we were failing to set the batch state to `AwaitingDownload` before attempting a retry. This PR sets it to `AwaitingDownload` before the retry and sets it back to `Downloading` if the retry succeeded in sending out a request (see the sketch after this list).
2. Remove all peer scoring logic from retrying and rely on just de-prioritizing the failed peer. I finally concede the point to @dapplion 😄
3. Changes `block_components_by_range_request` to accept `block_peers` and `column_peers`. This is to ensure that we use the full synced peerset for requesting columns in order to avoid splitting the column peers among multiple head chains. During forward sync, we want the block peers to be the peers from the syncing chain and column peers to be all synced peers from the peerdb.
Also, fixes a typo and calls `attempt_send_awaiting_download_batches` from more places
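A minimal sketch of fix (1), with illustrative types: the batch passes through `AwaitingDownload` before the retry, and only moves back to `Downloading` once the retry has actually sent a request.

```rust
enum BatchState {
    AwaitingDownload,
    Downloading { peer: String },
}

fn retry_batch(state: &mut BatchState, try_send: impl Fn() -> Option<String>) {
    // Previously this reset was skipped, leaving the batch in a stale state.
    *state = BatchState::AwaitingDownload;
    if let Some(peer) = try_send() {
        *state = BatchState::Downloading { peer };
    }
    // On failure, stay in AwaitingDownload until a peer becomes available.
}
```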
Co-Authored-By: Pawan Dhananjay <pawandhananjay@gmail.com>
Closes #6980
I think these flags may be useful in future peerdas / das testing, and would be useful to keep. Hence I've gated them behind a `testing` feature flag.
Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
Addressing more review comments from:
- https://github.com/sigp/lighthouse/pull/8101
I've also tweaked a few more things that I think are minor bugs.
- Instrument `ensure_state_can_determine_proposers_for_epoch`
- Fix `block_root` usage in `compute_proposer_duties_from_head`. This was a regression introduced in #8101 😬 .
- Update the `state_advance_timer` to prime the next-epoch proposer cache post-Fulu.
Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
This PR reverts #8179.
It turns out that the fix was invalid, because an unknown root is never considered a finalized descendant:
522bd9e9c6/consensus/proto_array/src/proto_array.rs (L976-L979)
so for any data column with an unknown parent, we always penalise the gossip peer and disconnect it pretty quickly. On a small network, the node may lose all of its peers.
The impact is pretty obvious when the peer count is small and sync speed is slow, and is therefore easily reproducible by running a fresh supernode on devnet-3.
This isn't as obvious on live testnets like Holesky/Sepolia - we haven't noticed it there, probably due to their high peer counts and sync speeds: the nodes may reach head quickly before losing too many peers.
The previous behaviour isn't ideal, but it is safe: trigger an unknown-parent lookup, and penalise the bad peer if it happens to be malicious or faulty. So for now it's safer to revert the change and plan for a proper fix after the v8 release.
Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
#6853 Update store tests to cover data column pruning
Created a helper function `check_data_column_existence` which is a copy of `check_blob_existence` but checking data columns instead.
The helper function is then used to check whether data columns are also pruned when blobs are pruned if PeerDAS is enabled.
Co-Authored-By: SunnysidedJ <j@testinprod.io>
Co-Authored-By: Eitan Seri-Levi <eserilev@ucsc.edu>
Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
N/A
Found this issue in sepolia. Note: the custody requirement for this node is 100.
```
Oct 14 11:25:40.053 DEBUG Reconstructed columns count: 28, block_root: 0x4d7946dec0ab59f2afd46610d7c54af555cb4c2851d9eea7d83dd17cf6e96aae, slot: 8725628
Oct 14 11:25:45.568 WARN Internal availability check failure block_root: 0x4d7946dec0ab59f2afd46610d7c54af555cb4c2851d9eea7d83dd17cf6e96aae, error: Unexpected("too many columns got 128 expected 100")
```
So if any of the block components arrive late, we reconstruct all 128 columns, try to add them to the da cache, and end up with more columns than needed for availability in the cache.
There are 2 ways I can think of fixing this:
1. pass only the required columns to the da cache after reconstruction here 60df5f4ab6/beacon_node/beacon_chain/src/data_availability_checker.rs (L647-L648)
2. Ensure that we add only columns that we need to sample in the da cache. I think this is safer since we can add columns to the cache from multiple code paths and this fixes it at the source.
~~This PR implements (2).~~ Having thought about it more, I think (1) is cleaner, since we also filter gossip and rpc columns before calling `put_kzg_verified_data_columns`.
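A minimal sketch of option (1), with an illustrative column type: filter the 128 reconstructed columns down to the node's sampling set (100 in the log above) before they reach the da cache, matching the filtering already applied on the gossip/rpc paths.

```rust
use std::collections::HashSet;

struct KzgVerifiedDataColumn {
    index: u64, // cells, proofs, etc. elided
}

/// Keep only the columns this node samples before they enter the da cache,
/// so the availability check's expected count is never exceeded.
fn filter_reconstructed(
    columns: Vec<KzgVerifiedDataColumn>,
    sampling_columns: &HashSet<u64>,
) -> Vec<KzgVerifiedDataColumn> {
    columns
        .into_iter()
        .filter(|column| sampling_columns.contains(&column.index))
        .collect()
}
```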
Co-Authored-By: Pawan Dhananjay <pawandhananjay@gmail.com>
Temporary stop gap for #7002
Downgrade light client errors to debug
We should eventually fix our light client objects so they can consist of data across forks.
Co-Authored-By: Eitan Seri- Levi <eserilev@gmail.com>
* Only persist custody columns
* Get claude to write tests
* lint
* Address review comments and fix tests.
* Use supernode only when building chain segments
* Clean up
* Rewrite tests.
* Fix tests
* Clippy
---------
Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
This issue was identified during the fusaka audit competition.
The [`verify_parent_block_and_finalized_descendant`](62d9302e0f/beacon_node/beacon_chain/src/data_column_verification.rs (L606-L627)) function in data column gossip verification currently loads the parent before checking whether the column descends from the finalized root.
However, the `fork_choice.get_block(&block_parent_root)` function also makes the same check internally:
8a4f6cf0d5/consensus/fork_choice/src/fork_choice.rs (L1242-L1249)
Therefore, if the column does not descend from the finalized root, we return an `UnknownParent` error before hitting the `is_finalized_checkpoint_or_descendant` check just below.
This means we `IGNORE` the gossip message instead of `REJECT`ing it, and the gossip peer is not _immediately_ penalised. This deviates from the spec.
However, it's worth noting that Lighthouse will currently attempt to request the parent from this peer, and if the peer is not able to serve the parent, it gets penalised with a `LowToleranceError` and will get banned after ~5 occurrences.
ffa7b2b2b9/beacon_node/network/src/sync/network_context.rs (L1530-L1532)
This PR will penalise the bad peer immediately instead of performing block lookups before penalising it.
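One way to express the reordering, as a minimal sketch with stub types (the real fork choice handle differs): do the finalized-descendant check first, so a column on a pruned fork is `REJECT`ed rather than misclassified as an unknown parent.

```rust
type Hash256 = [u8; 32];

enum GossipVerifyError {
    NotFinalizedDescendant, // REJECT: peer penalised immediately
    UnknownParent,          // IGNORE: triggers a parent lookup instead
}

struct ForkChoiceStub;
impl ForkChoiceStub {
    fn is_finalized_checkpoint_or_descendant(&self, _root: &Hash256) -> bool {
        true // stub
    }
    fn get_block(&self, _root: &Hash256) -> Option<()> {
        Some(()) // stub; internally also checks finalized descent
    }
}

fn verify_parent(fc: &ForkChoiceStub, parent_root: &Hash256) -> Result<(), GossipVerifyError> {
    // Check finalized descent *before* the block lookup, so the lookup's
    // internal descent check can no longer shadow this error.
    if !fc.is_finalized_checkpoint_or_descendant(parent_root) {
        return Err(GossipVerifyError::NotFinalizedDescendant);
    }
    fc.get_block(parent_root)
        .ok_or(GossipVerifyError::UnknownParent)
}
```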
Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
Closes:
- https://github.com/sigp/lighthouse/issues/4412
This should reduce Lighthouse's block proposal times on Holesky and prevent us getting reorged.
- [x] Allow the head state to be advanced further than 1 slot. This lets us avoid epoch processing on hot paths including block production, by having new epoch boundaries pre-computed and available in the state cache.
- [x] Use the finalized state to prune the op pool. We were previously using the head state and trying to infer slashing/exit relevance based on `exit_epoch`. However some exit epochs are far in the future, despite occurring recently.
Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
N/A
Post-Fulu, we should be calling the v2 API on the relays, which doesn't return the blobs/data columns.
However, we decided to start hitting the v2 API as soon as Fulu is scheduled, to avoid unexpected surprises at the fork.
On the ACDT call, it seems like most clients are calling v2 only after the Fulu fork.
This PR aims to be the best of both worlds: we fall back to hitting the v1 API if v2 fails. This way, we know beforehand if relays don't support it and can potentially alert them.
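A minimal sketch of the fallback with stubbed relay methods (the real builder client is async and typed differently): try v2, and on failure log and retry with v1, so non-supporting relays keep working and can be flagged early.

```rust
struct Relay;

#[derive(Debug)]
struct RelayError(String);

impl Relay {
    fn submit_blinded_block_v2(&self, _block: &[u8]) -> Result<(), RelayError> {
        Err(RelayError("v2 unsupported".into())) // stub
    }
    fn submit_blinded_block_v1(&self, _block: &[u8]) -> Result<(), RelayError> {
        Ok(()) // stub
    }
}

fn submit_with_fallback(relay: &Relay, block: &[u8]) -> Result<(), RelayError> {
    relay.submit_blinded_block_v2(block).or_else(|e| {
        // Surfacing this lets us alert relays that lack v2 support pre-fork.
        eprintln!("relay rejected v2 call, falling back to v1: {e:?}");
        relay.submit_blinded_block_v1(block)
    })
}
```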
Co-Authored-By: Pawan Dhananjay <pawandhananjay@gmail.com>
@michaelsproul noticed this warning on a devnet-3 node
```
Oct 01 16:37:29.896 WARN Error when importing rpc custody columns error: ParentUnknown { parent_root: 0xe4cc85a2137b76eb083d7076255094a90f10caaec0afc8fd36807db742f6ff13 }, block_hash: 0x43ce63b2344990f5f4d8911b8f14e3d3b6b006edc35bbc833360e667df0edef7
```
We're also seeing similar `WARN` logs for blobs on our live nodes.
It's normal to get an unknown parent in lookups, and it's handled here:
a134d43446/beacon_node/network/src/sync/block_lookups/mod.rs (L611-L619)
These shouldn't be a `WARN`, and we also log the same error in block lookups at `DEBUG` level here:
a134d43446/beacon_node/network/src/sync/block_lookups/mod.rs (L643-L648)
So I've removed these extra WARN logs.
I've also lowered the level of an `ERROR` log emitted when we're unable to serve data column root requests - it's unexpected, but unlikely to impact the node's performance, so I think we can downgrade it.
Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
Follow-up to the bug fixed in:
- https://github.com/sigp/lighthouse/pull/8121
This fixes the root cause of that bug, which was introduced by me in:
- https://github.com/sigp/lighthouse/pull/8101
Lion identified the issue here:
- https://github.com/sigp/lighthouse/pull/8101#discussion_r2382710356
In the methods that compute the proposer shuffling decision root, ensure we don't use lookahead for the Fulu fork epoch itself. This is accomplished by checking if Fulu is enabled at `epoch - 1`, i.e. if `epoch > fulu_fork_epoch`.
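A minimal sketch of that boundary rule: lookahead-based decision roots apply only when Fulu was already enabled in the *previous* epoch, i.e. strictly after the fork epoch itself.

```rust
fn uses_proposer_lookahead(epoch: u64, fulu_fork_epoch: Option<u64>) -> bool {
    // Checking Fulu at `epoch - 1` means the Fulu fork epoch itself still
    // uses the pre-Fulu decision root, since its shuffling was decided
    // before the fork.
    match fulu_fork_epoch {
        Some(fork_epoch) => epoch > fork_epoch,
        None => false, // Fulu not scheduled
    }
}
```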
I haven't updated the methods that _compute_ shufflings to use these new corrected bounds (e.g. `BeaconState::compute_proposer_indices`), although we could make this change in future. The `get_beacon_proposer_indices` method already gracefully handles the Fulu boundary case by using the `proposer_lookahead` field (if initialised).
Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
N/A
In #8101, when we modified the logic to get the proposer index post-Fulu, we seem to have missed advancing the state at the fork boundaries to get the right `Fork` for signature verification.
This led to Lighthouse failing all gossip verification right after transitioning to Fulu, as observed on the Holesky shadow fork:
```
Sep 26 14:24:00.088 DEBUG Rejected gossip block error: "InvalidSignature(ProposerSignature)", graffiti: "grandine-geth-super-1", slot: 640
Sep 26 14:24:00.099 WARN Could not verify block for gossip. Rejecting the block error: InvalidSignature(ProposerSignature)
```
I'm not completely sure this is the correct fix, but it resolves the `InvalidProposerSignature` issue on the Holesky shadow fork.
Thanks to @eserilev for helping debug this
Co-Authored-By: Pawan Dhananjay <pawandhananjay@gmail.com>
As identified by a researcher during the Fusaka security competition, we were computing the proposer index incorrectly in some places by computing it without lookahead.
- [x] Add "low level" checks to computation functions in `consensus/types` to ensure they error cleanly
- [x] Re-work the determination of proposer shuffling decision roots, which are now fork aware.
- [x] Re-work and simplify the beacon proposer cache to be fork-aware.
- [x] Optimise `with_proposer_cache` to use `OnceCell`.
- [x] All tests passing.
- [x] Resolve all remaining `FIXME(sproul)`s.
- [x] Unit tests for `ProtoBlock::proposer_shuffling_root_for_child_block`.
- [x] End-to-end regression test.
- [x] Test on pre-Fulu network.
- [x] Test on post-Fulu network.
Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
- PR https://github.com/sigp/lighthouse/pull/8045 introduced a regression in how lookup sync interacts with the da_checker.
Now in `unstable`, block import from the HTTP API also inserts the block into the da_checker while the block is being execution verified. If lookup sync finds the block in the da_checker in the `NotValidated` state, it expects a `GossipBlockProcessResult` message sometime later. That message is only sent after block import via gossip.
I confirmed in our node's logs that 4/4 cases of stuck lookups are caused by this sequence of events:
- Receive the block through the API; insert it into the da_checker in `process_block` via `put_pre_execution_block`
- Create a lookup and leave it in the `AwaitingDownload` (block in processing cache) state
- Block from HTTP API finishes importing
- Lookup is left stuck
Closes https://github.com/sigp/lighthouse/issues/8104
- https://github.com/sigp/lighthouse/pull/8110 was my initial solution attempt but we can't send the `GossipBlockProcessResult` event from the `http_api` crate without adding new channels, which seems messy.
For a given node it's rare that a lookup is created at the same time that a block is being published. This PR solves https://github.com/sigp/lighthouse/issues/8104 by allowing lookup sync to import the block twice in that case.
Co-Authored-By: dapplion <35266934+dapplion@users.noreply.github.com>
#8105 (to be confirmed)
I noticed a large number of failed discovery requests after deploying the latest `unstable` to some of our testnet and mainnet nodes. This is because of a recent PeerDAS change that attempts to maintain sufficient peers across data column subnets - this shouldn't be enabled on networks without PeerDAS scheduled, otherwise we keep retrying discovery on these subnets and never succeed.
Also removed some unused files.
Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
Co-Authored-By: Jimmy Chen <jimmy@sigmaprime.io>
Ensure that we don't log a warning for HTTP 202s, which are expected on the blinded block endpoints after Fulu.
Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
This PR consolidates the `reqresp_pre_import_cache` into the `data_availability_checker` for the following reasons:
- the `reqresp_pre_import_cache` suffers from the same TOCTOU bug we had with the `data_availability_checker` earlier, and leads to an unbounded memory leak, which we have observed over the last 6 months on some nodes.
- the `reqresp_pre_import_cache` is no longer necessary, because we now hold blocks in the `data_availability_checker` for longer (since #7961), and recent blocks can be served from the DA checker.
This PR also maintains the following functionality:
- Serving pre-executed blocks over RPC - they're now served from the `data_availability_checker` instead.
- Using the cache for de-duplicating lookup requests.
Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
Co-Authored-By: Jimmy Chen <jimmy@sigmaprime.io>
Part of #7866
- Continuation of #7921
In the above PR, we enabled rayon for batch KZG verification in chain segment processing. However, using the global rayon thread pool for backfill is likely to create resource contention with higher-priority beacon processor work.
This PR introduces a dedicated low-priority rayon thread pool `LOW_PRIORITY_RAYON_POOL` and uses it for processing backfill chain segments.
This prevents backfill KZG verification from using the global rayon thread pool and competing with high-priority beacon processor tasks for CPU resources.
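A minimal sketch of such a dedicated pool; `LOW_PRIORITY_RAYON_POOL` in the PR would be a static along these lines, with the thread count here an assumption for illustration.

```rust
use std::sync::LazyLock;

static LOW_PRIORITY_RAYON_POOL: LazyLock<rayon::ThreadPool> = LazyLock::new(|| {
    rayon::ThreadPoolBuilder::new()
        .num_threads(2) // keep backfill well below the core count (assumed value)
        .thread_name(|i| format!("low_priority_rayon_{i}"))
        .build()
        .expect("failed to build low-priority rayon pool")
});

fn verify_backfill_batches(batches: Vec<Vec<u8>>) {
    // Work submitted via `install` runs on this pool's threads, not the
    // global rayon pool, so it can't starve higher-priority beacon
    // processor tasks.
    LOW_PRIORITY_RAYON_POOL.install(|| {
        use rayon::prelude::*;
        batches.par_iter().for_each(|_batch| {
            // batch KZG verification would happen here
        });
    });
}
```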
However, this PR by itself doesn't prevent CPU oversubscription, because other tasks could still fill up the global rayon thread pool, and having an extra thread pool could make things worse. To address this we need the beacon processor to coordinate total CPU allocation across all tasks, which is covered in:
- #7789
Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
Co-Authored-By: Eitan Seri- Levi <eserilev@gmail.com>
Co-Authored-By: Eitan Seri-Levi <eserilev@ucsc.edu>
Address contention on the store's `block_cache` by allowing it to be disabled when `--block-cache-size 0` is provided, and also making this the default.
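A minimal sketch of the pattern using the `lru` crate, with illustrative field names: a size of 0 leaves the cache as `None`, so the hot path skips the cache (and its lock) entirely.

```rust
use lru::LruCache;
use std::num::NonZeroUsize;

struct BlockCache {
    inner: Option<LruCache<[u8; 32], Vec<u8>>>,
}

impl BlockCache {
    fn new(size: usize) -> Self {
        // Size 0 (the new default) means no cache at all.
        let inner = NonZeroUsize::new(size).map(LruCache::new);
        Self { inner }
    }

    fn get(&mut self, root: &[u8; 32]) -> Option<&Vec<u8>> {
        self.inner.as_mut().and_then(|cache| cache.get(root))
    }
}
```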
Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
A different (and complementary) approach for:
- https://github.com/sigp/lighthouse/issues/5391
This PR adds a flag to set the DA boundary to the Deneb fork. The effect of this change is that Lighthouse will try to backfill _all_ blobs.
Most peers do not have this data, but I'm thinking that combined with `trusted-peers` this could be quite effective.
Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
Fix a memory leak in the reprocess queue.
If the vec of attestation IDs for a block is never evicted from the reprocess queue by a `BlockImported` event, then it stays in the map forever, consuming memory. The fix is to remove the entry when its last attestation times out. We do the same for light client updates.
In practice this will only occur if there is a race between adding an attestation to the queue and processing the `BlockImported` event, or if there are attestations for block roots that we never import (e.g. random block roots, block roots of invalid blocks).
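A minimal sketch of the eviction with illustrative types: when a queued attestation times out, remove its ID from the per-block-root entry, and drop the entry once it's empty, so roots that never see a `BlockImported` event can't accumulate forever.

```rust
use std::collections::HashMap;

type BlockRoot = [u8; 32];
type AttestationId = u64;

fn on_attestation_timeout(
    awaiting: &mut HashMap<BlockRoot, Vec<AttestationId>>,
    root: BlockRoot,
    id: AttestationId,
) {
    if let Some(ids) = awaiting.get_mut(&root) {
        ids.retain(|queued| *queued != id);
        if ids.is_empty() {
            // Previously leaked: the empty vec stayed in the map forever.
            awaiting.remove(&root);
        }
    }
}
```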
Co-Authored-By: Michael Sproul <michael@sigmaprime.io>