Which issue # does this PR address?
None
All of these perform a check before adding a batch, creating a new lookup, a new query, etc.
Hence all of these limits would be off by one.
Example:
```rust
// BACKFILL_BATCH_BUFFER_SIZE = 5
if self.batches.iter().filter(...).count() >= BACKFILL_BATCH_BUFFER_SIZE {
return None; // ← REJECT
}
// ... later adds batch via Entry::Vacant(entry).insert(...)
```
Without the `>` being changed to a `>=`, we would allow 6 batches instead of 5. The same idea applies to all the changes proposed.
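A minimal standalone illustration of the off-by-one (the constant value is taken from the comment above; the surrounding code is simplified):
```rust
const BACKFILL_BATCH_BUFFER_SIZE: usize = 5;

fn main() {
    let current_batches = 5; // the buffer is already at its limit
    // With `>` the full buffer slips through the check, so a 6th batch would be added.
    assert!(!(current_batches > BACKFILL_BATCH_BUFFER_SIZE));
    // With `>=` the add is correctly rejected once the limit is reached.
    assert!(current_batches >= BACKFILL_BATCH_BUFFER_SIZE);
}
```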
Co-Authored-By: Antoine James <antoine@ethereum.org>
Co-Authored-By: Jimmy Chen <jimmy@sigmaprime.io>
Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
N/A
Another find by @gitToki. Sort the `preferred_ids` in descending order, as originally intended per the comment in the function.
Co-Authored-By: Pawan Dhananjay <pawandhananjay@gmail.com>
- Step 0 of the tree-sync roadmap https://github.com/sigp/lighthouse/issues/7678
Current lookup sync tests are written in an explicit way that assumes how the internals of lookup sync work. For example, a test would do:
- Emit unknown block parent message
- Expect block request for X
- Respond with successful block request
- Expect block processing request for X
- Response with successful processing request
- etc.
This is unnecessarily verbose, and it requires a complete re-write whenever something changes in the internals of lookup sync (which has happened a few times, mostly for Deneb and Fulu).
What we really want to assert is:
- WHEN: we receive an unknown block parent message
- THEN: Lookup sync can sync that block
- ASSERT: Without penalizing peers, without unnecessary retries
Keep all existing tests and add new cases, but written in the new style described above. The logic to serve and respond to requests is in the function `fn simulate`: 2288a3aeb1/beacon_node/network/src/sync/tests/lookups.rs (L301)
- It controls peer behavior based on a `CompleteStrategy` where you can set for example "respond to BlocksByRoot requests with empty"
- It actually runs beacon processor messages by running their closures. Sync tests now actually import blocks, increasing test coverage to the interaction of sync and the da_checker.
- To achieve the above, the tests create real blocks with the test harness. To keep the tests as fast as before, I disabled crypto with `TestConfig`.
Along the way I found a couple of bugs, which I have documented in the diff.
Co-Authored-By: dapplion <35266934+dapplion@users.noreply.github.com>
Add error rate metrics on unstable to benchmark against tree-sync. In my branch there are frequent errors, but they are mostly connection errors while the node is still finding its set of stable peers.
These metrics are very useful, and unstable can benefit from them ahead of tree-sync.
Add three new metrics:
- `sync_rpc_requests_success_total`: Total count of sync RPC request successes
- `sync_rpc_requests_error_total`: Total count of sync RPC request errors
- `sync_rpc_request_duration_sec`: Time to complete a successful sync RPC request
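A hedged sketch of how these three metrics could be registered with the `prometheus` crate; Lighthouse's actual metrics helpers and label sets may differ:
```rust
use lazy_static::lazy_static;
use prometheus::{register_histogram, register_int_counter_vec, Histogram, IntCounterVec};

lazy_static! {
    static ref SYNC_RPC_REQUESTS_SUCCESS_TOTAL: IntCounterVec = register_int_counter_vec!(
        "sync_rpc_requests_success_total",
        "Total count of sync RPC request successes",
        &["method"] // label set is an assumption for the sketch
    )
    .unwrap();
    static ref SYNC_RPC_REQUESTS_ERROR_TOTAL: IntCounterVec = register_int_counter_vec!(
        "sync_rpc_requests_error_total",
        "Total count of sync RPC request errors",
        &["method", "error"]
    )
    .unwrap();
    static ref SYNC_RPC_REQUEST_DURATION_SEC: Histogram = register_histogram!(
        "sync_rpc_request_duration_sec",
        "Time to complete a successful sync RPC request"
    )
    .unwrap();
}
```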
Co-Authored-By: dapplion <35266934+dapplion@users.noreply.github.com>
N/A
Fixes the issue where we were setting block observed timings for blocks that were potentially gossip invalid.
Thanks @gitToki for the find
Co-Authored-By: Pawan Dhananjay <pawandhananjay@gmail.com>
Co-Authored-By: Michael Sproul <michaelsproul@users.noreply.github.com>
All the required boilerplate for Gloas gossip. We'll include the gossip message processing logic in a separate PR.
Co-Authored-By: Eitan Seri- Levi <eserilev@gmail.com>
Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
Removes the remaining facade re-exports from `consensus/types`.
I have left `graffiti`, as I think it has some utility, so I am leaning towards keeping it in the final API design.
Co-Authored-By: Mac L <mjladson@pm.me>
Just visual clean-up, making logging statements look uniform. There's no reason to use `tracing::debug` instead of `debug`. If we ever need to migrate our logging lib in the future it would make things easier too.
Co-Authored-By: dapplion <35266934+dapplion@users.noreply.github.com>
Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
Co-Authored-By: Michael Sproul <michaelsproul@users.noreply.github.com>
Which issue # does this PR address?
#8586
Please list or describe the changes introduced by this PR.
Remove `service_name` from `TaskExecutor`
Co-Authored-By: Abhivansh <31abhivanshj@gmail.com>
Which issue # does this PR address?
None
The `visualize_batch_state` function uses the loop `for mut batch_index in 0..BATCH_BUFFER_SIZE`, so `batch_index` runs from `0` to `BATCH_BUFFER_SIZE - 1`.
Hence the following condition is always true, and a trailing comma is always appended:
```rust
if batch_index != BATCH_BUFFER_SIZE {
visualization_string.push(',');
}
```
Replacing `!=` with `<` and comparing against `BATCH_BUFFER_SIZE - 1` allows `[A,B,C,D,E,]` to become `[A,B,C,D,E]`.
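A minimal self-contained sketch of the corrected separator logic, assuming the loop shape described above (the batch state character here is just a placeholder):
```rust
const BATCH_BUFFER_SIZE: usize = 5;

fn visualize_batches() -> String {
    let mut visualization_string = String::from("[");
    for batch_index in 0..BATCH_BUFFER_SIZE {
        visualization_string.push('A'); // placeholder for the batch's state character
        // Only separate entries; skip the comma after the last one.
        if batch_index < BATCH_BUFFER_SIZE - 1 {
            visualization_string.push(',');
        }
    }
    visualization_string.push(']');
    visualization_string // "[A,A,A,A,A]" rather than "[A,A,A,A,A,]"
}
```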
Co-Authored-By: Antoine James <antoine@ethereum.org>
There are certain crates which we re-export within `types`, which creates a fragmented DevEx where there are various ways to import the same types.
```rust
// consensus/types/src/lib.rs
pub use bls::{
AggregatePublicKey, AggregateSignature, Error as BlsError, Keypair, PUBLIC_KEY_BYTES_LEN,
PublicKey, PublicKeyBytes, SIGNATURE_BYTES_LEN, SecretKey, Signature, SignatureBytes,
get_withdrawal_credentials,
};
pub use context_deserialize::{ContextDeserialize, context_deserialize};
pub use fixed_bytes::FixedBytesExtended;
pub use milhouse::{self, List, Vector};
pub use ssz_types::{BitList, BitVector, FixedVector, VariableList, typenum, typenum::Unsigned};
pub use superstruct::superstruct;
```
This PR removes these re-exports and makes it explicit that these types are imported from a non-`consensus/types` crate.
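For illustration, after this change a consumer would import these types directly from the owning crates (an assumed example, not a specific call site from the diff):
```rust
// Previously these could be pulled in via `types::*`; now the owning crate is explicit.
use bls::{PublicKeyBytes, Signature};
use milhouse::{List, Vector};
use ssz_types::{typenum::Unsigned, BitList, VariableList};
```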
Co-Authored-By: Mac L <mjladson@pm.me>
Organize and categorize `consensus/types` into modules based on their relation to key consensus structures/concepts.
This is a precursor to a sensible public interface.
While this refactor is very opinionated, I am open to suggestions on module names or type groupings if my current ones are inappropriate.
Co-Authored-By: Mac L <mjladson@pm.me>
We don't want to require checkpoint sync starts to include the required custody data columns; instead, we want to fetch them from P2P.
Closes https://github.com/sigp/lighthouse/issues/6837
The checkpoint sync slot can:
1. Be the first slot in the epoch, such that the epoch of the block == the start checkpoint epoch
2. Be in an epoch prior to the start checkpoint epoch
In both cases backfill sync already fetches that epoch's worth of blocks with the current code. This PR modifies the backfill import filter function to allow re-importing the oldest block slot in the DB.
I feel this solution is sufficient unless I'm missing something. ~~I have not tested this yet!~~ Michael has tested this and it works.
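A hedged sketch of the relaxed import filter (slots are simplified to integers; the real function name, signature, and comparison direction may differ):
```rust
/// Returns true if a backfilled block at `block_slot` should be (re-)imported.
fn should_import_backfill_block(block_slot: u64, oldest_block_slot_in_db: u64) -> bool {
    // Previously only strictly older blocks passed the filter; allowing equality lets the
    // oldest stored block (the checkpoint block) be re-imported along with its columns.
    block_slot <= oldest_block_slot_in_db
}
```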
Co-Authored-By: dapplion <35266934+dapplion@users.noreply.github.com>
Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
N/A
The difference should be computed as expected minus received; we were doing the inverse.
Thanks to Yassine for finding the issue.
Co-Authored-By: Pawan Dhananjay <pawandhananjay@gmail.com>
Take 2 of #8390.
Fixes the race condition properly instead of propagating the error. I think this is a better alternative, and doesn't seem to look that bad.
* Lift node ID loading or generation from `NetworkService` startup to the `ClientBuilder`, so that it can be used to compute custody columns for the beacon chain without waiting for Network bootstrap.
I've considered and implemented a few alternatives:
1. Passing `node_id` to the beacon chain builder and computing columns when creating `CustodyContext`. This approach isn't good for separation of concerns and isn't great for testability.
2. Passing `ordered_custody_groups` to the beacon chain. `CustodyContext` only uses this to compute ordered custody columns, so we might as well lift this logic out and avoid error handling in `CustodyContext` construction. Fewer tests to update.
Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
Fix flaky tests that depend on timing. Previously the tests processed all 128 columns and expected reconstruction to happen after all columns were processed. There is a race here, and reconstruction could be triggered before all columns are processed.
I've updated the tests to process 64 columns, just enough for reconstruction, and then wait 50ms for reconstruction to be triggered.
This PR requires the change made in https://github.com/sigp/lighthouse/pull/8194 for the test to pass consistently (blob count set to 1 for all blocks instead of random blob count between 0..max)
Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
Co-Authored-By: Jimmy Chen <jimmy@sigmaprime.io>
### Downgrade a non error to `Debug`
I noticed this error on one of our hoodi nodes:
```
Nov 04 05:13:38.892 ERROR Error during data column reconstruction block_root: 0x4271b9efae7deccec3989bd2418e998b83ce8144210c2b17200abb62b7951190, error: DuplicateFullyImported(0x4271b9efae7deccec3989bd2418e998b83ce8144210c2b17200abb62b7951190)
```
This shouldn't be logged as an error: it's due to a normal race condition and it doesn't impact the node negatively.
### Remove spammy logs
This log is filling up the log files quite quickly, and it is also something we'd expect during normal operation - getting columns via the EL before gossip. We haven't found this debug log to be useful, so I propose we remove it to avoid spamming debug logs.
```
Received already available column sidecar. Ignoring the column sidecar
```
In the process of removing this, I noticed we aren't propagating the validation result, which I think we should so I've added this. The impact should be quite minimal - the message will stay in the gossip memcache for a bit longer but should be evicted in the next heartbeat.
Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
`beacon-chain-tests` is now regularly taking 1h+ on CI since the Fulu fork was added.
This PR attempts to reduce the test time by bringing down the number of blobs generated in tests - instead of generating 0..max_blobs, the generator now generates 0..1 blobs by default, and this can be modified by setting `harness.execution_block_generator.set_min_blob_count(n)`.
Note: The blobs are pre-generated and don't require too much CPU to generate; however, processing a larger number of them on the beacon chain does take a lot of time.
This PR also includes a few other small improvements:
- Our slowest test (`chain_segment_varying_chunk_size`) runs 3x faster in Fulu just by reusing chain segments
- Avoid re-running fork-specific tests on all forks
- Fix a bunch of tests that depend on the harness's existing random blob generation, which is fragile
Beacon chain test time on the test machine is **~2x** faster:
### `unstable`
```
Summary [ 751.586s] 291 tests run: 291 passed (13 slow), 0 skipped
```
### this branch
```
Summary [ 373.792s] 291 tests run: 291 passed (2 slow), 0 skipped
```
The next set of tests to optimise is the ones that use [`get_chain_segment`](77a9af96de/beacon_node/beacon_chain/tests/block_verification.rs (L45)), as it builds 320 blocks with a supernode by default - an easy optimisation would be to build these blocks with cgc = 8 for tests that only require fullnodes.
Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
Co-Authored-By: Jimmy Chen <jimmy@sigmaprime.io>
Custody backfill sync has a bug when we request columns from more than one peer per batch. The fix here ensures we wait for all requests to be completed before performing verification and importing the responses.
I've also added an endpoint `lighthouse/custody/backfill` that resets a node's earliest available data column to the current epoch so that custody backfill can be triggered. This endpoint is needed to rescue any nodes that may have missing columns due to the custody backfill sync bug, without requiring a full re-sync.
Co-Authored-By: Eitan Seri- Levi <eserilev@gmail.com>
Co-Authored-By: Eitan Seri-Levi <eserilev@ucsc.edu>
Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
Co-Authored-By: Michael Sproul <michaelsproul@users.noreply.github.com>
During custody backfill sync there is an edge case where we update the CGC at the same time as we are importing a batch of columns, which may cause us to incorrectly overwrite values when calling `backfill_validator_custody_requirements`. To prevent this race condition, the expected CGC is now passed into this function and is used to check whether the expected CGC == the current validator CGC. If the values aren't equal, this probably indicates that a very recent CGC update occurred, so we do not prune/update values in the `epoch_validator_custody_requirements` map.
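An illustrative, self-contained sketch of the guard described above; the struct, field, and map key types are assumptions, not Lighthouse's real `CustodyContext`:
```rust
use std::collections::HashMap;

struct CustodyContextSketch {
    validator_cgc: u64,
    epoch_validator_custody_requirements: HashMap<u64, u64>, // epoch -> required cgc
}

impl CustodyContextSketch {
    fn backfill_validator_custody_requirements(&mut self, expected_cgc: u64, epoch: u64) {
        // If the CGC changed since this batch was requested, a concurrent CGC update
        // probably happened; skip pruning so we don't overwrite the newer values.
        if expected_cgc != self.validator_cgc {
            return;
        }
        self.epoch_validator_custody_requirements.remove(&epoch);
    }
}
```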
Co-Authored-By: Eitan Seri-Levi <eserilev@ucsc.edu>
https://github.com/sigp/lighthouse/issues/8012
Replace all instances of `VariableList::from` and `FixedVector::from` with their `try_from` variants.
While I tried to use proper error handling in most cases, in situations where `try_from` can trivially never fail, adding an `expect` avoided a lot of extra complexity.
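A hedged sketch of the two patterns, assuming `VariableList` implements `TryFrom<Vec<T>>` (the type parameters here are illustrative):
```rust
use ssz_types::{typenum::U16, VariableList};

/// Propagate the error when the input length is not statically guaranteed.
fn build_list(values: Vec<u64>) -> Result<VariableList<u64, U16>, String> {
    VariableList::try_from(values).map_err(|e| format!("list too long: {e:?}"))
}

/// When the length trivially fits, an `expect` keeps the call site simple.
fn small_list() -> VariableList<u64, U16> {
    VariableList::try_from(vec![1, 2, 3]).expect("3 elements fit within a max length of 16")
}
```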
Co-Authored-By: Mac L <mjladson@pm.me>
Co-Authored-By: Michael Sproul <michaelsproul@users.noreply.github.com>
Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
While testing a non-finality checkpoint sync, these logs showed up in my INFO grep and were noisy. INFO should only include the notifier and exceptional events. I don't see why the user would care about this info.
Downgrade to debug.
Co-Authored-By: dapplion <35266934+dapplion@users.noreply.github.com>
#7756 introduces a logging issue, where the relevant log:
da5b231720/beacon_node/network/src/network_beacon_processor/rpc_methods.rs (L380-L385)
obtains the `block_root` from `slots_by_block_root.keys()`. If `slots_by_block_root` is empty (the block was not found in the data availability checker), then the log will not show any block root:
`DEBUG BlobsByRoot outgoing response processed peer_id: 16Uiu2HAmCBxs1ZFfsbAfhSA98rUUL8Q1egLPb6WpGdKZxX6HqQYX, block_root: [], returned: 4`
This PR revises it to use the block roots from the request, as a vector of block roots.
Co-Authored-By: Tan Chee Keong <tanck@sigmaprime.io>
While testing non-finalized checkpoint sync, I noticed this log dumps blob data in Debug format to the logs.
Log only the block root and commitment of each blob.
Co-Authored-By: dapplion <35266934+dapplion@users.noreply.github.com>
Addresses #8218
A simplified version of #8241 for the initial release.
I've tried to minimise the logic change in this PR; introducing the `NodeCustodyType` enum still results in quite a bit of diff, but the actual logic change in `CustodyContext` is quite small.
The main changes are in the `CustodyContext` struct:
* ~~combining `validator_custody_count` and `current_is_supernode` fields into a single `custody_group_count_at_head` field. We persist the cgc of the initial cli values into the `custody_group_count_at_head` field and only allow for increase (same behaviour as before).~~
* I noticed the above approach caused a backward compatibility issue, I've [made a fix](15569bc085) and changed the approach slightly (which was actually what I had originally in mind):
* when initialising, only override the `validator_custody_count` value if either flag `--supernode` or `--semi-supernode` is used; otherwise leave it as the existing default `0`. Most other logic remains unchanged.
All existing validator custody unit tests are still passing, and I've added additional tests to cover semi-supernode and restoring `CustodyContext` from disk.
Note: I've added a `WARN` if the user attempts to switch to `--semi-supernode` or `--supernode` - this currently has no effect, but once @eserilev's column backfill is merged, we should be able to support this quite easily.
Things to test
- [x] cgc in metadata / enr
- [x] cgc in metrics
- [x] subscribed subnets
- [x] getBlobs endpoint
Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
#7603
#### Custody backfill sync service
Similar in many ways to the current backfill service. There may be ways to unify the two services. The difficulty there is that the current backfill service tightly couples blocks and their associated blobs/data columns. Any attempts to unify the two services should be left to a separate PR in my opinion.
#### `SyncNetworkContext`
`SyncNetworkContext` manages custody sync data columns by range requests separately from other sync RPC requests. I think this is a nice separation considering that custody backfill is its own service.
#### Data column import logic
The import logic verifies KZG commitments and that the data columns' block root matches the block root in the node's store before importing columns.
#### New channel to send messages to `SyncManager`
Now external services can communicate with the `SyncManager`. In this PR this channel is used to trigger a custody sync. Alternatively we may be able to use the existing `mpsc` channel that the `SyncNetworkContext` uses to communicate with the `SyncManager`. I will spend some time reviewing this.
Co-Authored-By: Eitan Seri-Levi <eserilev@ucsc.edu>
Co-Authored-By: Eitan Seri- Levi <eserilev@gmail.com>
Co-Authored-By: dapplion <35266934+dapplion@users.noreply.github.com>
N/A
1. In the batch retry logic, we were failing to set the batch state to `AwaitingDownload` before attempting a retry. This PR sets it to `AwaitingDownload` before the retry and sets it back to `Downloading` if the retry succeeds in sending out a request (see the sketch below).
2. Remove all peer scoring logic from retrying and rely on just de-prioritizing the failed peer. I finally concede the point to @dapplion 😄
3. Changes `block_components_by_range_request` to accept `block_peers` and `column_peers`. This is to ensure that we use the full synced peerset for requesting columns in order to avoid splitting the column peers among multiple head chains. During forward sync, we want the block peers to be the peers from the syncing chain and column peers to be all synced peers from the peerdb.
Also fixes a typo and calls `attempt_send_awaiting_download_batches` from more places.
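An illustrative, self-contained sketch of the retry state transition described in point 1; the enum and function names are assumptions, not the exact batch API:
```rust
enum BatchStateSketch {
    AwaitingDownload,
    Downloading,
}

fn retry_batch(state: &mut BatchStateSketch, send_request: impl FnOnce() -> bool) {
    // Reset to AwaitingDownload before attempting the retry...
    *state = BatchStateSketch::AwaitingDownload;
    // ...and only move back to Downloading once a request was actually sent out.
    if send_request() {
        *state = BatchStateSketch::Downloading;
    }
}
```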
Co-Authored-By: Pawan Dhananjay <pawandhananjay@gmail.com>
@michaelsproul noticed this warning on a devnet-3 node
```
Oct 01 16:37:29.896 WARN Error when importing rpc custody columns error: ParentUnknown { parent_root: 0xe4cc85a2137b76eb083d7076255094a90f10caaec0afc8fd36807db742f6ff13 }, block_hash: 0x43ce63b2344990f5f4d8911b8f14e3d3b6b006edc35bbc833360e667df0edef7
```
We're also seeing similar `WARN` logs for blobs on our live nodes.
It's normal to get parent unknown in lookups, and it's handled here:
a134d43446/beacon_node/network/src/sync/block_lookups/mod.rs (L611-L619)
These shouldn't be a `WARN`, and we also log the same error in block lookups at `DEBUG` level here:
a134d43446/beacon_node/network/src/sync/block_lookups/mod.rs (L643-L648)
So I've removed these extra `WARN` logs.
I've also lowered the level of an `ERROR` log when unable to serve data column root requests - it's unexpected, but unlikely to impact the node's performance, so I think we can downgrade this.
Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>