Which issue # does this PR address?
None
All of these perform a check before adding a batch, creating a new lookup, starting a new query, etc.
Hence all of these limits would be off by one.
Example:
```rust
// BACKFILL_BATCH_BUFFER_SIZE = 5
if self.batches.iter().filter(...).count() >= BACKFILL_BATCH_BUFFER_SIZE {
    return None; // ← REJECT
}
// ... later adds batch via Entry::Vacant(entry).insert(...)
```
Without the `>` being changed to a `>=`, we would allow 6. The same idea applies to all of the proposed changes.
Co-Authored-By: Antoine James <antoine@ethereum.org>
Co-Authored-By: Jimmy Chen <jimmy@sigmaprime.io>
Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
N/A
Another find by @gitToki. Sort the `preferred_ids` in descending order, as originally intended per the comment in the function.
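A minimal, self-contained sketch of the fix (the `u64` IDs here are illustrative, not the actual type):
```rust
fn main() {
    let mut preferred_ids = vec![3u64, 1, 4, 1, 5];
    // `Reverse` flips the ordering, giving a descending sort.
    preferred_ids.sort_unstable_by_key(|id| std::cmp::Reverse(*id));
    assert_eq!(preferred_ids, vec![5, 4, 3, 1, 1]);
}
```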
Co-Authored-By: Pawan Dhananjay <pawandhananjay@gmail.com>
- Step 0 of the tree-sync roadmap https://github.com/sigp/lighthouse/issues/7678
Current lookup sync tests are written in an explicit way that assumes how the internals of lookup sync work. For example, a test would do:
- Emit unknown block parent message
- Expect block request for X
- Respond with successful block request
- Expect block processing request for X
- Respond with successful processing request
- etc.
This is unnecessarily verbose, and it requires a complete re-write whenever something changes in the internals of lookup sync (this has happened a few times, mostly for Deneb and Fulu).
What we really want to assert is:
- WHEN: we receive an unknown block parent message
- THEN: Lookup sync can sync that block
- ASSERT: Without penalizing peers, without unnecessary retries
Keep all existing tests and add new cases, written in the new style described above. The logic to serve and respond to requests is in the function `fn simulate`: 2288a3aeb1/beacon_node/network/src/sync/tests/lookups.rs (L301)
- It controls peer behavior based on a `CompleteStrategy`, where you can set, for example, "respond to BlocksByRoot requests with empty"
- It actually runs beacon processor messages by running their closures. Sync tests now actually import blocks, extending test coverage to the interaction of sync and the `da_checker`.
- To achieve the above, the tests create real blocks with the test harness. To keep the tests as fast as before, I disabled crypto with `TestConfig`. A sketch of the resulting test style is below.
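An illustrative sketch of a test in the new style: `simulate` and `CompleteStrategy` come from this PR, while `TestRig` and the other method names are hypothetical stand-ins for the real helpers:
```rust
#[test]
fn unknown_parent_block_is_synced() {
    let mut rig = TestRig::test_setup();
    // WHEN: we receive an unknown block parent message
    let block = rig.rand_block_with_unknown_parent();
    rig.trigger_unknown_parent_block(&block);
    // THEN: drive sync to completion; `CompleteStrategy` controls peer behavior
    rig.simulate(CompleteStrategy::default());
    // ASSERT: the block is imported without peer penalties or unnecessary retries
    rig.expect_block_imported(&block);
    rig.expect_no_penalties();
}
```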
Along the way I found a couple of bugs, which I documented in the diff.
Co-Authored-By: dapplion <35266934+dapplion@users.noreply.github.com>
Add error-rate metrics on `unstable` to benchmark against tree-sync. In my branch there are frequent errors, but they are mostly connection errors, as the node is still finding its set of stable peers.
These metrics are very useful and `unstable` can benefit from them ahead of tree-sync.
Add three new metrics:
- sync_rpc_requests_success_total: Total count of sync RPC request successes
- sync_rpc_requests_error_total: Total count of sync RPC request errors
- sync_rpc_request_duration_sec: Time to complete a successful sync RPC request
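A hedged sketch of how such metrics could be registered, using the `prometheus` crate directly (Lighthouse has its own metrics wrappers; only the metric names and help strings come from this PR, the rest is illustrative):
```rust
use prometheus::{register_histogram, register_int_counter_vec, Histogram, IntCounterVec};
use std::sync::LazyLock;

pub static SYNC_RPC_REQUESTS_SUCCESS_TOTAL: LazyLock<IntCounterVec> = LazyLock::new(|| {
    register_int_counter_vec!(
        "sync_rpc_requests_success_total",
        "Total count of sync RPC request successes",
        &["method"] // label is an assumption
    )
    .unwrap()
});

pub static SYNC_RPC_REQUESTS_ERROR_TOTAL: LazyLock<IntCounterVec> = LazyLock::new(|| {
    register_int_counter_vec!(
        "sync_rpc_requests_error_total",
        "Total count of sync RPC request errors",
        &["method"]
    )
    .unwrap()
});

pub static SYNC_RPC_REQUEST_DURATION_SEC: LazyLock<Histogram> = LazyLock::new(|| {
    register_histogram!(
        "sync_rpc_request_duration_sec",
        "Time to complete a successful sync RPC request"
    )
    .unwrap()
});
```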
Co-Authored-By: dapplion <35266934+dapplion@users.noreply.github.com>
N/A
Fixes the issue where we were setting block observed timings for blocks that were potentially gossip invalid.
Thanks @gitToki for the find
Co-Authored-By: Pawan Dhananjay <pawandhananjay@gmail.com>
Co-Authored-By: Michael Sproul <michaelsproul@users.noreply.github.com>
All the required boilerplate for gloas gossip. We'll include the gossip message processing logic in a separate PR.
Co-Authored-By: Eitan Seri-Levi <eserilev@gmail.com>
Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
Removes the remaining facade re-exports from `consensus/types`.
I have left `graffiti`, as I think it has some utility, so I am leaning towards keeping it in the final API design.
Co-Authored-By: Mac L <mjladson@pm.me>
Just visual clean-up, making logging statements look uniform. There's no reason to use `tracing::debug` instead of `debug`. If we ever need to migrate our logging lib in the future, it would make things easier too.
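The convention in a nutshell (a trivial sketch; the function and field names are illustrative):
```rust
use tracing::debug;

fn on_gossip_block(peer_id: &str) {
    // Instead of `tracing::debug!(...)`:
    debug!(peer_id, "Received gossip block");
}
```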
Co-Authored-By: dapplion <35266934+dapplion@users.noreply.github.com>
Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
Co-Authored-By: Michael Sproul <michaelsproul@users.noreply.github.com>
Which issue # does this PR address?
#8586
Please list or describe the changes introduced by this PR.
Remove `service_name` from `TaskExecutor`
Co-Authored-By: Abhivansh <31abhivanshj@gmail.com>
Which issue # does this PR address?
None
The `visualize_batch_state` function uses the loop `for mut batch_index in 0..BATCH_BUFFER_SIZE`, so `batch_index` ranges from `0` to `BATCH_BUFFER_SIZE - 1`.
Hence the following condition is always true, and a trailing comma is always appended:
```rust
if batch_index != BATCH_BUFFER_SIZE {
    visualization_string.push(',');
}
```
Replacing `!=` with `<` and `BATCH_BUFFER_SIZE` with `BATCH_BUFFER_SIZE - 1` changes the output from `[A,B,C,D,E,]` to `[A,B,C,D,E]`.
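A self-contained sketch of the fixed loop (`BATCH_BUFFER_SIZE` and the batch letters are illustrative):
```rust
fn main() {
    const BATCH_BUFFER_SIZE: usize = 5;
    let batches = ["A", "B", "C", "D", "E"];
    let mut visualization_string = String::from("[");
    for batch_index in 0..BATCH_BUFFER_SIZE {
        visualization_string.push_str(batches[batch_index]);
        // `<` against `BATCH_BUFFER_SIZE - 1` skips the comma after the last element.
        if batch_index < BATCH_BUFFER_SIZE - 1 {
            visualization_string.push(',');
        }
    }
    visualization_string.push(']');
    assert_eq!(visualization_string, "[A,B,C,D,E]");
}
```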
Co-Authored-By: Antoine James <antoine@ethereum.org>
There are certain crates that we re-export within `types`, which creates a fragmented DevEx where there are multiple ways to import the same types.
```rust
// consensus/types/src/lib.rs
pub use bls::{
    AggregatePublicKey, AggregateSignature, Error as BlsError, Keypair, PUBLIC_KEY_BYTES_LEN,
    PublicKey, PublicKeyBytes, SIGNATURE_BYTES_LEN, SecretKey, Signature, SignatureBytes,
    get_withdrawal_credentials,
};
pub use context_deserialize::{ContextDeserialize, context_deserialize};
pub use fixed_bytes::FixedBytesExtended;
pub use milhouse::{self, List, Vector};
pub use ssz_types::{BitList, BitVector, FixedVector, VariableList, typenum, typenum::Unsigned};
pub use superstruct::superstruct;
```
This PR removes these re-exports and makes it explicit that these types are imported from a non-`consensus/types` crate.
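For example, the same imports before and after (illustrative, using crates from the re-export block quoted above):
```rust
// Before: everything routed through the `types` facade.
// use types::{ContextDeserialize, List, PublicKey};

// After: each item is imported from the crate that defines it.
use bls::PublicKey;
use context_deserialize::ContextDeserialize;
use milhouse::List;
```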
Co-Authored-By: Mac L <mjladson@pm.me>
Organize and categorize `consensus/types` into modules based on their relation to key consensus structures/concepts.
This is a precursor to a sensible public interface.
While this refactor is very opinionated, I am open to suggestions on module names or type groupings if my current ones are inappropriate.
Co-Authored-By: Mac L <mjladson@pm.me>
We don't want to require checkpoint sync starts to include the required custody data columns; instead, we fetch them from p2p.
Closes https://github.com/sigp/lighthouse/issues/6837
The checkpoint sync slot can:
1. Be the first slot in the epoch, such that the epoch of the block == the start checkpoint epoch
2. Be in an epoch prior to the start checkpoint epoch
In both cases, backfill sync already fetches that epoch's worth of blocks with the current code. This PR modifies the backfill import filter function to allow re-importing the block at the oldest slot in the DB.
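A hedged sketch of the boundary change (the function and names are illustrative, not the actual Lighthouse filter):
```rust
/// Backfill imports blocks older than what is already stored; the fix also accepts
/// the block *at* the oldest stored slot so its custody columns can be re-imported.
fn should_import_backfill_block(block_slot: u64, oldest_stored_slot: u64) -> bool {
    // Before (hypothetical): block_slot < oldest_stored_slot
    block_slot <= oldest_stored_slot
}
```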
I feel this solution is sufficient unless I'm missing something. ~~I have not tested this yet!~~ Michael has tested this and it works.
Co-Authored-By: dapplion <35266934+dapplion@users.noreply.github.com>
Co-Authored-By: Michael Sproul <michael@sigmaprime.io>
N/A
The difference should be computed by subtracting received from expected. We were doing the inverse.
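A toy illustration only (not the actual code): with unsigned counts, inverting the operands silently reports zero instead of the real shortfall:
```rust
fn main() {
    let expected: u64 = 8;
    let received: u64 = 5;
    assert_eq!(expected.saturating_sub(received), 3); // correct: 3 still missing
    assert_eq!(received.saturating_sub(expected), 0); // the inverted (buggy) form
}
```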
Thanks to Yassine for finding the issue.
Co-Authored-By: Pawan Dhananjay <pawandhananjay@gmail.com>
Take 2 of #8390.
Fixes the race condition properly instead of propagating the error. I think this is a better alternative, and the result doesn't look that bad.
* Lift node id loading or generation from `NetworkService` startup to the `ClientBuilder`, so that it can be used to compute custody columns for the beacon chain without waiting for Network bootstrap.
I've considered and implemented a few alternatives:
1. Passing `node_id` to the beacon chain builder and computing columns when creating `CustodyContext`. This approach isn't good for separation of concerns and isn't great for testability.
2. Passing `ordered_custody_groups` to the beacon chain. `CustodyContext` only uses this to compute ordered custody columns, so we might as well lift this logic out and avoid error handling in `CustodyContext` construction. Fewer tests to update.
Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
Fix flaky tests that depend on timing. Previously the test processed all 128 columns and expected reconstruction to happen after all columns were processed. There is a race here: reconstruction could be triggered before all columns are processed.
I've updated the tests to process 64 columns, just enough for reconstruction, and to wait 50ms for reconstruction to be triggered.
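A hypothetical sketch of the updated shape (the harness names are illustrative, not the actual test API):
```rust
// Process exactly the 64 columns needed, then give reconstruction time to trigger.
for column in columns.into_iter().take(64) {
    rig.process_gossip_data_column(column);
}
tokio::time::sleep(std::time::Duration::from_millis(50)).await;
assert!(rig.reconstruction_triggered());
```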
This PR requires the change made in https://github.com/sigp/lighthouse/pull/8194 for the test to pass consistently (blob count set to 1 for all blocks instead of random blob count between 0..max)
Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
Co-Authored-By: Jimmy Chen <jimmy@sigmaprime.io>