mirror of
https://github.com/sigp/lighthouse.git
synced 2026-03-11 04:31:51 +00:00
* 1D PeerDAS prototype: Data format and Distribution (#5050)
* Build and publish column sidecars. Add stubs for gossip.
* Add blob column subnets
* Add `BlobColumnSubnetId` and initial compute subnet logic.
* Subscribe to blob column subnets.
* Introduce `BLOB_COLUMN_SUBNET_COUNT` based on DAS configuration parameter changes.
* Fix column sidecar type to use `VariableList` for data.
* Fix lint errors.
* Update types and naming to latest consensus-spec #3574.
* Fix test and some cleanups.
* Merge branch 'unstable' into das
* Merge branch 'unstable' into das
* Merge branch 'unstable' into das
  # Conflicts:
  # consensus/types/src/chain_spec.rs
* Add `DataColumnSidecarsByRoot` req/resp protocol (#5196)
* Add stub for `DataColumnsByRoot`
* Add basic implementation of serving RPC data column from DA checker.
* Store data columns in early attester cache and blobs db.
* Apply suggestions from code review
  Co-authored-by: Eitan Seri-Levi <eserilev@gmail.com>
  Co-authored-by: Jacob Kaufmann <jacobkaufmann18@gmail.com>
* Fix build.
* Store `DataColumnInfo` in database and various cleanups.
* Update `DataColumnSidecar` ssz max size and remove panic code.
  ---------
  Co-authored-by: Eitan Seri-Levi <eserilev@gmail.com>
  Co-authored-by: Jacob Kaufmann <jacobkaufmann18@gmail.com>
* feat: add DAS KZG in data col construction (#5210)
* feat: add DAS KZG in data col construction
* refactor data col sidecar construction
* refactor: add data cols to GossipVerifiedBlockContents
* Disable windows tests for `das` branch. (c-kzg doesn't build on windows)
* Formatting and lint changes only.
* refactor: remove iters in construction of data cols
* Update vec capacity and error handling.
* Add `data_column_sidecar_computation_seconds` metric.
  ---------
  Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
* Merge branch 'unstable' into das
  # Conflicts:
  # .github/workflows/test-suite.yml
  # beacon_node/lighthouse_network/src/types/topics.rs
* fix: update data col subnet count from 64 to 32 (#5413)
* feat: add peerdas custody field to ENR (#5409)
* feat: add peerdas custody field to ENR
* add hash prefix step in subnet computation
* refactor test and fix possible u64 overflow
* default to min custody value if not present in ENR
* Merge branch 'unstable' into das
* Merge branch 'unstable' into das-unstable-merge-0415
  # Conflicts:
  # Cargo.lock
  # beacon_node/beacon_chain/src/data_availability_checker.rs
  # beacon_node/beacon_chain/src/data_availability_checker/availability_view.rs
  # beacon_node/beacon_chain/src/data_availability_checker/overflow_lru_cache.rs
  # beacon_node/beacon_chain/src/data_availability_checker/processing_cache.rs
  # beacon_node/lighthouse_network/src/rpc/methods.rs
  # beacon_node/network/src/network_beacon_processor/mod.rs
  # beacon_node/network/src/sync/block_lookups/tests.rs
  # crypto/kzg/Cargo.toml
* Merge remote-tracking branch 'sigp/unstable' into das
* Merge remote-tracking branch 'sigp/unstable' into das
* Fix merge conflicts.
* Send custody data column to `DataAvailabilityChecker` for determining block importability (#5570)
* Only import custody data columns after publishing a block.
* Add `subscribe-all-data-column-subnets` and pass custody column count to `availability_cache`.
* Add custody requirement checks to `availability_cache`.
* Fix config not being passed to DAChecker and add more logging.
* Introduce `peer_das_epoch` and make blobs and columns mutually exclusive.
* Add DA filter for PeerDAS.
* Fix data availability check and use test_logger in tests.
* Fix subscribe to all data column subnets not working correctly.
* Fix tests.
* Only publish column sidecars if PeerDAS is activated. Add `PEER_DAS_EPOCH` chain spec serialization.
* Remove unused data column index in `OverflowKey`.
* Fix column sidecars incorrectly produced when there are no blobs.
* Re-instate index to `OverflowKey::DataColumn` and downgrade noisy debug log to `trace`.
* DAS sampling on sync (#5616)
* Data availability sampling on sync
* Address @jimmygchen review
* Trigger sampling
* Address some review comments and only send `SamplingBlock` sync message after PEER_DAS_EPOCH.
  ---------
  Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
* Merge branch 'unstable' into das
  # Conflicts:
  # Cargo.lock
  # Cargo.toml
  # beacon_node/beacon_chain/src/block_verification.rs
  # beacon_node/http_api/src/publish_blocks.rs
  # beacon_node/lighthouse_network/src/rpc/codec/ssz_snappy.rs
  # beacon_node/lighthouse_network/src/rpc/protocol.rs
  # beacon_node/lighthouse_network/src/types/pubsub.rs
  # beacon_node/network/src/sync/block_lookups/single_block_lookup.rs
  # beacon_node/store/src/hot_cold_store.rs
  # consensus/types/src/beacon_state.rs
  # consensus/types/src/chain_spec.rs
  # consensus/types/src/eth_spec.rs
* Merge branch 'unstable' into das
* Re-process early sampling requests (#5569)
* Re-process early sampling requests
  # Conflicts:
  # beacon_node/beacon_processor/src/work_reprocessing_queue.rs
  # beacon_node/lighthouse_network/src/rpc/methods.rs
  # beacon_node/network/src/network_beacon_processor/rpc_methods.rs
* Update beacon_node/beacon_processor/src/work_reprocessing_queue.rs
  Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
* Add missing var
* Beta compiler fixes and small typo fixes.
* Remove duplicate method.
  ---------
  Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
* Merge remote-tracking branch 'sigp/unstable' into das
* Fix merge conflict.
* Add data columns by root to currently supported protocol list (#5678)
* Add data columns by root to currently supported protocol list.
* Add missing data column by roots handling.
* Merge branch 'unstable' into das
  # Conflicts:
  # Cargo.lock
  # Cargo.toml
  # beacon_node/network/src/sync/block_lookups/tests.rs
  # beacon_node/network/src/sync/manager.rs
* Fix simulator tests on `das` branch (#5731)
* Bump genesis delay in sim tests as KZG setup takes longer for DAS.
* Fix incorrect YAML spacing.
* DataColumnByRange boilerplate (#5353)
* add boilerplate
* fmt
* PeerDAS custody lookup sync (#5684)
* Implement custody sync
* Lint
* Fix tests
* Fix rebase issue
* Add data column kzg verification and update `c-kzg`. (#5701)
* Add data column kzg verification and update `c-kzg`.
* Fix incorrect `Cell` size.
* Add kzg verification on rpc blocks.
* Add kzg verification on rpc data columns.
* Rename `PEER_DAS_EPOCH` to `EIP7594_FORK_EPOCH` for client interop. (#5750)
* Fetch custody columns in range sync (#5747)
* Fetch custody columns in range sync
* Clean up todos
* Remove `BlobSidecar` construction and publish after PeerDAS activated (#5759)
* Avoid building and publishing blob sidecars after PeerDAS.
* Ignore gossip blobs with a slot greater than peer das activation epoch.
* Only attempt to verify blob count and import blobs before PeerDAS.
* #5684 review comments (#5748)
* #5684 review comments.
* Doc and message update only.
* Fix incorrect condition when constructing `RpcBlock` with `DataColumn`s
* Make sampling tests deterministic (#5775)
* PeerDAS spec tests (#5772)
* Add get_custody_columns spec tests.
* Add kzg merkle proof spec tests.
* Add SSZ spec tests.
* Add remaining KZG tests
* Load KZG only once per process, exclude electra tests and add missing SSZ tests.
* Fix lint and missing changes.
* Ignore macOS generated file.
* Merge remote branch 'sigp/unstable' into das
* Merge remote tracking branch 'origin/unstable' into das
* Implement unconditional reconstruction for supernodes (#5781)
* Implement unconditional reconstruction for supernodes
* Move code into KzgVerifiedCustodyDataColumn
* Remove expect
* Add test
* Thanks justin
* Add withhold attack mode for interop (#5788)
* Add withhold attack mode
* Update readme
* Drop added readmes
* Undo styling changes
* Add column gossip verification and handle unknown parent block (#5783)
* Add column gossip verification and handle missing parent for columns.
* Review PR
* Fix rebase issue
* more lint issues :)
  ---------
  Co-authored-by: dapplion <35266934+dapplion@users.noreply.github.com>
* Trigger sampling on sync events (#5776)
* Trigger sampling on sync events
* Update beacon_chain.rs
* Fix tests
* Fix tests
* PeerDAS parameter changes for devnet-0 (#5779)
* Update PeerDAS parameters to latest values.
* Lint fix
* Fix lint.
* Update hardcoded subnet count to 64 (#5791)
* Fix incorrect columns per subnet and config cleanup (#5792)
* Tidy up PeerDAS preset and config values.
* Fix broken config
* Fix DAS branch CI (#5793)
* Fix invalid syntax.
* Update cli doc. Ignore get_custody_columns test temporarily.
* Fix failing test and add verify inclusion test.
* Undo accidentally removed code.
* Only attempt reconstruct columns once. (#5794)
* Re-enable precompute table for peerdas kzg (#5795)
* Merge branch 'unstable' into das
* Update subscription filter. (#5797)
* Remove penalty for duplicate columns (expected due to reconstruction) (#5798)
* Revert DAS config for interop testing. Optimise get_custody_columns function. (#5799)
* Don't perform reconstruction for proposer node as it already has all the columns. (#5806)
* Multithread compute_cells_and_proofs (#5805)
* Multi-thread reconstruct data columns
* Multi-thread path for block production
* Merge branch 'unstable' into das
  # Conflicts:
  # .github/workflows/test-suite.yml
  # beacon_node/network/src/sync/block_lookups/mod.rs
  # beacon_node/network/src/sync/block_lookups/single_block_lookup.rs
  # beacon_node/network/src/sync/network_context.rs
* Fix CI errors.
* Move PeerDAS type-level config to configurable `ChainSpec` (#5828)
* Move PeerDAS type level config to `ChainSpec`.
* Fix tests
* Misc custody lookup improvements (#5821)
* Improve custody requests
* Type DataColumnsByRootRequestId
* Prioritize peers and load balance
* Update tests
* Address PR review
* Merge branch 'unstable' into das
* Rename deploy_block in network config (`das` branch) (#5852)
* Rename deploy_block.txt to deposit_contract_block.txt
* fmt
  ---------
  Co-authored-by: Pawan Dhananjay <pawandhananjay@gmail.com>
* Merge branch 'unstable' into das
* Fix CI and merge issues.
* Merge branch 'unstable' into das
  # Conflicts:
  # beacon_node/beacon_chain/src/data_availability_checker/overflow_lru_cache.rs
  # lcli/src/main.rs
* Store data columns individually in store and caches (#5890)
* Store data columns individually in store and caches
* Implement data column pruning
* Merge branch 'unstable' into das
  # Conflicts:
  # Cargo.lock
* Update reconstruction benches to newer criterion version. (#5949)
* Merge branch 'unstable' into das
  # Conflicts:
  # .github/workflows/test-suite.yml
* chore: add `recover_cells_and_compute_proofs` method (#5938)
* chore: add recover_cells_and_compute_proofs method
* Introduce type alias `CellsAndKzgProofs` to address type complexity.
  ---------
  Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
* Update `csc` format in ENR and spec tests for devnet-1 (#5966)
* Update `csc` format in ENR.
* Add spec tests for `recover_cells_and_kzg_proofs`.
* Add tests for ENR.
* Fix failing tests.
* Add protection against invalid csc value in ENR.
* Fix lint
* Fix csc encoding and decoding (#5997)
* Fix data column rpc request not being sent due to incorrect limits set. (#6000)
* Fix incorrect inbound request count causing rate limiting. (#6025)
* Merge branch 'stable' into das
  # Conflicts:
  # beacon_node/network/src/sync/block_lookups/tests.rs
  # beacon_node/network/src/sync/block_sidecar_coupling.rs
  # beacon_node/network/src/sync/manager.rs
  # beacon_node/network/src/sync/network_context.rs
  # beacon_node/network/src/sync/network_context/requests.rs
* Merge remote-tracking branch 'unstable' into das
* Add kurtosis config for DAS testing (#5968)
* Add kurtosis config for DAS testing.
* Fix invalid yaml file
* Update network parameter files.
* chore: add rust PeerdasKZG crypto library for peerdas functionality and rollback c-kzg dependency to 4844 version (#5941)
* chore: add recover_cells_and_compute_proofs method
* chore: add rust peerdas crypto library
* chore: integrate peerdaskzg rust library into kzg crate
* chore(multi):
  - update `ssz_cell_to_crypto_cell`
  - update conversion from the crypto cell type to a Vec<u8>. Since the Rust library defines them as references to an array, the conversion is simply `to_vec`
* chore(multi):
  - update rest of code to handle the new crypto `Cell` type
  - update test case code to no longer use the Box type
* chore: cleanup of superfluous conversions
* chore: revert c-kzg dependency back to v1
* chore: move dependency into correct order
* chore: update rust dependency
  - This version includes a new method `PeerDasContext::with_num_threads`
* chore: remove Default initialization of PeerDasContext and explicitly set the parameters in `new_from_trusted_setup`
* chore: cleanup exports
* chore: commit updated cargo.lock
* Update Cargo.toml
  Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
* chore: rename dependency
* chore: update peerdas lib
  - sets the blst version to 0.3 so that it matches whatever lighthouse is using. Although 0.3.12 is latest, lighthouse is pinned to 0.3.3
* chore: fix clippy lifetime
  - Rust doesn't allow you to elide the lifetime on type aliases
* chore: cargo clippy fix
* chore: cargo fmt
* chore: update lib to add redundant checks (these will be removed in consensus-specs PR 3819)
* chore: update dependency to ignore proofs
* chore: update peerdas lib to latest
* update lib
* chore: remove empty proof parameter
  ---------
  Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
* Update PeerDAS interop testnet config (#6069)
* Update interop testnet config.
* Fix typo and remove target peers
* Avoid retrying same sampling peer that previously failed. (#6084)
* Various fixes to custody range sync (#6004)
* Only start requesting batches when there are good peers across all custody columns to avoid spamming block requests.
* Add custody peer check before mutating `BatchInfo` to avoid inconsistent state.
* Add check to cover a case where batch is not processed while waiting for custody peers to become available.
* Fix lint and logic error
* Fix `good_peers_on_subnet` always returning false for `DataColumnSubnet`.
* Add test for `get_custody_peers_for_column`
* Revert epoch parameter refactor.
* Fall back to default custody requirement if peer ENR is not present.
* Add metrics and update code comment.
* Add more debug logs.
* Use subscribed peers on subnet before MetaDataV3 is implemented. Remove peer_id matching when injecting error because multiple peers are used for range requests. Use randomized custodial peer to avoid repeatedly sending requests to failing peers. Batch by range request where possible.
* Remove unused code and update docs.
* Add comment
* chore: update peerdas-kzg library (#6118)
* chore: update peerDAS lib
* chore: update library
* chore: update library to version that include "init context" benchmarks and optional validation checks
* chore: (can remove) -- Add benchmarks for init context
* Prevent continuous searchers for low-peer networks (#6162)
* Merge branch 'unstable' into das
* Fix merge conflicts
* Add cli flag to enable sampling and disable by default. (#6209)
* chore: Use reference to an array representing a blob instead of an owned KzgBlob (#6179)
* add KzgBlobRef type
* modify code to use KzgBlobRef
* clippy
* Remove Deneb blob related changes to maintain compatibility with `c-kzg-4844`.
  ---------
  Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
* Store computed custody subnets in PeerDB and fix custody lookup test (#6218)
* Fix failing custody lookup tests.
* Store custody subnets in PeerDB, fix custody lookup test and refactor some methods.
* Merge branch 'unstable' into das
  # Conflicts:
  # beacon_node/beacon_chain/src/beacon_chain.rs
  # beacon_node/beacon_chain/src/block_verification_types.rs
  # beacon_node/beacon_chain/src/builder.rs
  # beacon_node/beacon_chain/src/data_availability_checker.rs
  # beacon_node/beacon_chain/src/data_availability_checker/overflow_lru_cache.rs
  # beacon_node/beacon_chain/src/data_column_verification.rs
  # beacon_node/beacon_chain/src/early_attester_cache.rs
  # beacon_node/beacon_chain/src/historical_blocks.rs
  # beacon_node/beacon_chain/tests/store_tests.rs
  # beacon_node/lighthouse_network/src/discovery/enr.rs
  # beacon_node/network/src/service.rs
  # beacon_node/src/cli.rs
  # beacon_node/store/src/hot_cold_store.rs
  # beacon_node/store/src/lib.rs
  # lcli/src/generate_bootnode_enr.rs
* Fix CI failures after merge.
* Batch sampling requests by peer (#6256)
* Batch sampling requests by peer
* Fix clippy errors
* Fix tests
* Add column_index to error message for ease of tracing
* Remove outdated comment
* Fix range sync never evaluating request as finished, causing it to get stuck. (#6276)
* Merge branch 'unstable' into das-0821-merge
  # Conflicts:
  # Cargo.lock
  # Cargo.toml
  # beacon_node/beacon_chain/src/beacon_chain.rs
  # beacon_node/beacon_chain/src/data_availability_checker.rs
  # beacon_node/beacon_chain/src/data_availability_checker/overflow_lru_cache.rs
  # beacon_node/beacon_chain/src/data_column_verification.rs
  # beacon_node/beacon_chain/src/kzg_utils.rs
  # beacon_node/beacon_chain/src/metrics.rs
  # beacon_node/beacon_processor/src/lib.rs
  # beacon_node/lighthouse_network/src/rpc/codec/ssz_snappy.rs
  # beacon_node/lighthouse_network/src/rpc/config.rs
  # beacon_node/lighthouse_network/src/rpc/methods.rs
  # beacon_node/lighthouse_network/src/rpc/outbound.rs
  # beacon_node/lighthouse_network/src/rpc/rate_limiter.rs
  # beacon_node/lighthouse_network/src/service/api_types.rs
  # beacon_node/lighthouse_network/src/types/globals.rs
  # beacon_node/network/src/network_beacon_processor/mod.rs
  # beacon_node/network/src/network_beacon_processor/rpc_methods.rs
  # beacon_node/network/src/network_beacon_processor/sync_methods.rs
  # beacon_node/network/src/sync/block_lookups/common.rs
  # beacon_node/network/src/sync/block_lookups/mod.rs
  # beacon_node/network/src/sync/block_lookups/single_block_lookup.rs
  # beacon_node/network/src/sync/block_lookups/tests.rs
  # beacon_node/network/src/sync/manager.rs
  # beacon_node/network/src/sync/network_context.rs
  # consensus/types/src/data_column_sidecar.rs
  # crypto/kzg/Cargo.toml
  # crypto/kzg/benches/benchmark.rs
  # crypto/kzg/src/lib.rs
* Fix custody tests and load PeerDAS KZG instead.
* Fix ef tests and bench compilation.
* Fix failing sampling test.
* Merge pull request #6287 from jimmygchen/das-0821-merge
  Merge `unstable` into `das` 20240821
* Remove get_block_import_status
* Merge branch 'unstable' into das
* Re-enable Windows release tests.
* Address some review comments.
* Address more review comments and cleanups.
* Comment out peer DAS KZG EF tests for now
* Address more review comments and fix build.
* Merge branch 'das' of github.com:sigp/lighthouse into das
* Unignore Electra tests
* Fix metric name
* Address some of Pawan's review comments
* Merge remote-tracking branch 'origin/unstable' into das
* Update PeerDAS network parameters for peerdas-devnet-2 (#6290)
* update subnet count & custody req
* das network params
* update ef tests
  ---------
  Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
629 lines
25 KiB
Rust
use self::request::ActiveColumnSampleRequest;
use super::network_context::{
    DataColumnsByRootSingleBlockRequest, RpcResponseError, SyncNetworkContext,
};
use crate::metrics;
use beacon_chain::BeaconChainTypes;
use fnv::FnvHashMap;
use lighthouse_network::service::api_types::{
    DataColumnsByRootRequester, SamplingId, SamplingRequestId, SamplingRequester,
};
use lighthouse_network::{PeerAction, PeerId};
use rand::{seq::SliceRandom, thread_rng};
use slog::{debug, error, warn};
use std::{
    collections::hash_map::Entry, collections::HashMap, marker::PhantomData, sync::Arc,
    time::Duration,
};
use types::{data_column_sidecar::ColumnIndex, ChainSpec, DataColumnSidecar, Hash256};

pub type SamplingResult = Result<(), SamplingError>;

type DataColumnSidecarList<E> = Vec<Arc<DataColumnSidecar<E>>>;

pub struct Sampling<T: BeaconChainTypes> {
    // TODO(das): stalled sampling requests are never cleaned up
    requests: HashMap<SamplingRequester, ActiveSamplingRequest<T>>,
    sampling_config: SamplingConfig,
    log: slog::Logger,
}

impl<T: BeaconChainTypes> Sampling<T> {
    pub fn new(sampling_config: SamplingConfig, log: slog::Logger) -> Self {
        Self {
            requests: <_>::default(),
            sampling_config,
            log,
        }
    }

    #[cfg(test)]
    pub fn active_sampling_requests(&self) -> Vec<Hash256> {
        self.requests.values().map(|r| r.block_root).collect()
    }

    /// Create a new sampling request for a known block.
    ///
    /// ### Returns
    ///
    /// - `Some`: Request completed, won't make more progress. Expect requester to act on the result.
    /// - `None`: Request still active, requester should take no action
    pub fn on_new_sample_request(
        &mut self,
        block_root: Hash256,
        cx: &mut SyncNetworkContext<T>,
    ) -> Option<(SamplingRequester, SamplingResult)> {
        let id = SamplingRequester::ImportedBlock(block_root);

        let request = match self.requests.entry(id) {
            Entry::Vacant(e) => e.insert(ActiveSamplingRequest::new(
                block_root,
                id,
                &self.sampling_config,
                self.log.clone(),
                &cx.chain.spec,
            )),
            Entry::Occupied(_) => {
                // Sampling is triggered from multiple sources, so duplicate sampling requests are
                // likely (gossip block + gossip data column).
                // TODO(das): Should we track failed sampling requests for some time? Otherwise
                // there's a risk of a loop with multiple triggers creating the request, then
                // failing, and repeating.
                debug!(self.log, "Ignoring duplicate sampling request"; "id" => ?id);
                return None;
            }
        };

        debug!(self.log, "Created new sample request"; "id" => ?id);

        // TODO(das): If a node has very few peers, continue_sampling() will attempt to find enough
        // to sample here, immediately failing the sampling request. There should be some grace
        // period to allow the peer manager to find custody peers.
        let result = request.continue_sampling(cx);
        self.handle_sampling_result(result, &id)
    }

    /// Insert a downloaded column into an active sampling request. Then make progress on the
    /// entire request.
    ///
    /// ### Returns
    ///
    /// - `Some`: Request completed, won't make more progress. Expect requester to act on the result.
    /// - `None`: Request still active, requester should take no action
    pub fn on_sample_downloaded(
        &mut self,
        id: SamplingId,
        peer_id: PeerId,
        resp: Result<(DataColumnSidecarList<T::EthSpec>, Duration), RpcResponseError>,
        cx: &mut SyncNetworkContext<T>,
    ) -> Option<(SamplingRequester, SamplingResult)> {
        let Some(request) = self.requests.get_mut(&id.id) else {
            // TODO(das): This log can happen if the request errored early and was dropped
            debug!(self.log, "Sample downloaded event for unknown request"; "id" => ?id);
            return None;
        };

        let result = request.on_sample_downloaded(peer_id, id.sampling_request_id, resp, cx);
        self.handle_sampling_result(result, &id.id)
    }

    /// Insert a column verification result into an active sampling request. Then make progress
    /// on the entire request.
    ///
    /// ### Returns
    ///
    /// - `Some`: Request completed, won't make more progress. Expect requester to act on the result.
    /// - `None`: Request still active, requester should take no action
    pub fn on_sample_verified(
        &mut self,
        id: SamplingId,
        result: Result<(), String>,
        cx: &mut SyncNetworkContext<T>,
    ) -> Option<(SamplingRequester, SamplingResult)> {
        let Some(request) = self.requests.get_mut(&id.id) else {
            // TODO(das): This log can happen if the request errored early and was dropped
            debug!(self.log, "Sample verified event for unknown request"; "id" => ?id);
            return None;
        };

        let result = request.on_sample_verified(id.sampling_request_id, result, cx);
        self.handle_sampling_result(result, &id.id)
    }

    /// Converts a result from the internal format of `ActiveSamplingRequest` (error-first, so `?`
    /// can be used conveniently) to an option-first format, enabling an
    /// `if let Some() { act on result }` pattern in the sync manager.
    fn handle_sampling_result(
        &mut self,
        result: Result<Option<()>, SamplingError>,
        id: &SamplingRequester,
    ) -> Option<(SamplingRequester, SamplingResult)> {
        let result = result.transpose();
        if let Some(result) = result {
            debug!(self.log, "Sampling request completed, removing"; "id" => ?id, "result" => ?result);
            metrics::inc_counter_vec(
                &metrics::SAMPLING_REQUEST_RESULT,
                &[metrics::from_result(&result)],
            );
            self.requests.remove(id);
            Some((*id, result))
        } else {
            None
        }
    }
}
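The `handle_sampling_result` helper hinges on `Result::transpose`, which flips `Result<Option<()>, SamplingError>` into `Option<Result<(), SamplingError>>` so that only completed requests (success or failure) yield a value to act on. A minimal standalone sketch of that pattern, with `&str` standing in for the error type:

```rust
// Standalone sketch of the `transpose` pattern used by `handle_sampling_result`.
// `Ok(None)` means "request still active"; after transposing it becomes `None`,
// so the caller only acts when the request has actually finished.
fn main() {
    let still_active: Result<Option<()>, &str> = Ok(None);
    let completed_ok: Result<Option<()>, &str> = Ok(Some(()));
    let failed: Result<Option<()>, &str> = Err("too many failures");

    assert_eq!(still_active.transpose(), None);
    assert_eq!(completed_ok.transpose(), Some(Ok(())));
    assert_eq!(failed.transpose(), Some(Err("too many failures")));
}
```

This is why the sync manager can use a single `if let Some((requester, result))` branch for both success and failure outcomes.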
pub struct ActiveSamplingRequest<T: BeaconChainTypes> {
    block_root: Hash256,
    requester_id: SamplingRequester,
    column_requests: FnvHashMap<ColumnIndex, ActiveColumnSampleRequest>,
    /// Mapping of column indexes for a sampling request.
    column_indexes_by_sampling_request: FnvHashMap<SamplingRequestId, Vec<ColumnIndex>>,
    /// Sequential ID for sampling requests.
    current_sampling_request_id: SamplingRequestId,
    column_shuffle: Vec<ColumnIndex>,
    required_successes: Vec<usize>,
    /// Logger for the `SyncNetworkContext`.
    pub log: slog::Logger,
    _phantom: PhantomData<T>,
}
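The `column_shuffle` field holds the full list of candidate column indexes, shuffled once up front so that samples can be consumed in order as more are needed. The real constructor uses `rand::thread_rng` with `SliceRandom::shuffle`; the sketch below substitutes a tiny deterministic Fisher-Yates shuffle (with an illustrative LCG step and an illustrative column count) to stay dependency-free:

```rust
// Sketch of the up-front column selection: shuffle all column indexes once,
// then sample them in order as more samples are needed.
// The `shuffle` helper is a stand-in for rand's `SliceRandom::shuffle`.
fn shuffle(columns: &mut [u64], mut seed: u64) {
    // Fisher-Yates with a simple LCG step (illustrative, not cryptographic).
    for i in (1..columns.len()).rev() {
        seed = seed
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        let j = (seed % (i as u64 + 1)) as usize;
        columns.swap(i, j);
    }
}

fn main() {
    let number_of_columns: u64 = 128; // illustrative; the real value comes from `ChainSpec`
    let mut column_shuffle: Vec<u64> = (0..number_of_columns).collect();
    shuffle(&mut column_shuffle, 42);

    // The shuffle is a permutation: same elements, possibly different order.
    let mut sorted = column_shuffle.clone();
    sorted.sort();
    assert_eq!(sorted, (0..number_of_columns).collect::<Vec<u64>>());
}
```

Shuffling once ahead of time means retries and expanded searches simply walk further down the same random permutation, never re-sampling a column already chosen.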
#[derive(Debug)]
pub enum SamplingError {
    SendFailed(#[allow(dead_code)] &'static str),
    ProcessorUnavailable,
    TooManyFailures,
    BadState(#[allow(dead_code)] String),
    ColumnIndexOutOfBounds,
}

/// Required success index by current failures, with p_target=5.00E-06
/// Ref: https://colab.research.google.com/drive/18uUgT2i-m3CbzQ5TyP9XFKqTn1DImUJD#scrollTo=E82ITcgB5ATh
const REQUIRED_SUCCESSES: [usize; 11] = [16, 20, 23, 26, 29, 32, 34, 37, 39, 42, 44];
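The table above is indexed by the current number of failed column samples: with no failures, 16 successful samples suffice; each additional failure raises the required total. A hedged sketch of how such a table can be consulted (the `required_successes_for` helper is illustrative, not part of this module):

```rust
// Illustrative helper (hypothetical, not in the source): look up how many
// successful samples are needed given the number of failures so far.
// Values copied from the module's REQUIRED_SUCCESSES table (p_target = 5e-6).
const REQUIRED_SUCCESSES: [usize; 11] = [16, 20, 23, 26, 29, 32, 34, 37, 39, 42, 44];

fn required_successes_for(failures: usize) -> Option<usize> {
    // Past the end of the table, sampling is treated as failed
    // (cf. SamplingError::TooManyFailures).
    REQUIRED_SUCCESSES.get(failures).copied()
}

fn main() {
    assert_eq!(required_successes_for(0), Some(16)); // no failures yet
    assert_eq!(required_successes_for(10), Some(44)); // last tolerated failure count
    assert_eq!(required_successes_for(11), None); // too many failures
}
```

The `SamplingConfig::Custom` variant below lets tests substitute a shorter table.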
#[derive(Debug, Clone)]
pub enum SamplingConfig {
    Default,
    #[allow(dead_code)]
    Custom {
        required_successes: Vec<usize>,
    },
}

impl<T: BeaconChainTypes> ActiveSamplingRequest<T> {
    fn new(
        block_root: Hash256,
        requester_id: SamplingRequester,
        sampling_config: &SamplingConfig,
        log: slog::Logger,
        spec: &ChainSpec,
    ) -> Self {
        // Select ahead of time the full list of to-sample columns
        let mut column_shuffle =
            (0..spec.number_of_columns as ColumnIndex).collect::<Vec<ColumnIndex>>();
        let mut rng = thread_rng();
        column_shuffle.shuffle(&mut rng);

        Self {
            block_root,
            requester_id,
            column_requests: <_>::default(),
            column_indexes_by_sampling_request: <_>::default(),
            current_sampling_request_id: SamplingRequestId(0),
            column_shuffle,
            required_successes: match sampling_config {
                SamplingConfig::Default => REQUIRED_SUCCESSES.to_vec(),
                SamplingConfig::Custom { required_successes } => required_successes.clone(),
            },
            log,
            _phantom: PhantomData,
        }
    }

    /// Insert a downloaded column into an active sampling request. Then make progress on the
    /// entire request.
    ///
    /// ### Returns
    ///
    /// - `Err`: Sampling request has failed and will be dropped
    /// - `Ok(Some)`: Sampling request has successfully completed and will be dropped
    /// - `Ok(None)`: Sampling request still active
    pub(crate) fn on_sample_downloaded(
        &mut self,
        _peer_id: PeerId,
        sampling_request_id: SamplingRequestId,
        resp: Result<(DataColumnSidecarList<T::EthSpec>, Duration), RpcResponseError>,
        cx: &mut SyncNetworkContext<T>,
    ) -> Result<Option<()>, SamplingError> {
        // Select columns to sample
        // Create an individual request per column
        // Progress requests
        // If a request fails, retry or expand the search
        // If all good, return
        let Some(column_indexes) = self
            .column_indexes_by_sampling_request
            .get(&sampling_request_id)
        else {
            error!(self.log, "Column indexes for the sampling request ID not found"; "sampling_request_id" => ?sampling_request_id);
            return Ok(None);
        };

        match resp {
            Ok((mut resp_data_columns, seen_timestamp)) => {
                debug!(self.log, "Sample download success"; "block_root" => %self.block_root, "column_indexes" => ?column_indexes, "count" => resp_data_columns.len());
                metrics::inc_counter_vec(&metrics::SAMPLE_DOWNLOAD_RESULT, &[metrics::SUCCESS]);

                // Filter the data received in the response using the requested column indexes.
                let mut data_columns = vec![];
                for column_index in column_indexes {
                    let Some(request) = self.column_requests.get_mut(column_index) else {
                        warn!(
                            self.log,
                            "Active column sample request not found"; "block_root" => %self.block_root, "column_index" => column_index
                        );
                        continue;
                    };

                    let Some(data_pos) = resp_data_columns
                        .iter()
                        .position(|data| &data.index == column_index)
                    else {
                        // Peer does not have the requested data.
                        // TODO(das): what to do?
                        debug!(self.log, "Sampling peer claims to not have the data"; "block_root" => %self.block_root, "column_index" => column_index);
                        request.on_sampling_error()?;
                        continue;
                    };

                    data_columns.push(resp_data_columns.swap_remove(data_pos));
                }

                if !resp_data_columns.is_empty() {
                    let resp_column_indexes = resp_data_columns
                        .iter()
                        .map(|d| d.index)
                        .collect::<Vec<_>>();
                    debug!(
                        self.log,
                        "Received data that was not requested"; "block_root" => %self.block_root, "column_indexes" => ?resp_column_indexes
                    );
                }

                // Handle the downloaded data columns.
                if data_columns.is_empty() {
                    debug!(self.log, "Received empty response"; "block_root" => %self.block_root);
                    self.column_indexes_by_sampling_request
                        .remove(&sampling_request_id);
                } else {
                    // Overwrite `column_indexes` with the column indexes received in the response.
                    let column_indexes = data_columns.iter().map(|d| d.index).collect::<Vec<_>>();
                    self.column_indexes_by_sampling_request
                        .insert(sampling_request_id, column_indexes.clone());
                    // Peer has the data column, send it off for verification
                    let Some(beacon_processor) = cx.beacon_processor_if_enabled() else {
                        // If the processor is not available, error the entire sampling
                        debug!(self.log, "Dropping sampling"; "block" => %self.block_root, "reason" => "beacon processor unavailable");
                        return Err(SamplingError::ProcessorUnavailable);
                    };
                    debug!(self.log, "Sending data_column for verification"; "block" => ?self.block_root, "column_indexes" => ?column_indexes);
                    if let Err(e) = beacon_processor.send_rpc_validate_data_columns(
                        self.block_root,
                        data_columns,
                        seen_timestamp,
                        SamplingId {
                            id: self.requester_id,
                            sampling_request_id,
                        },
                    ) {
                        // TODO(das): Beacon processor is overloaded, what should we do?
                        error!(self.log, "Dropping sampling"; "block" => %self.block_root, "reason" => e.to_string());
                        return Err(SamplingError::SendFailed("beacon processor send failure"));
                    }
                }
            }
            Err(err) => {
                debug!(self.log, "Sample download error"; "block_root" => %self.block_root, "column_indexes" => ?column_indexes, "error" => ?err);
                metrics::inc_counter_vec(&metrics::SAMPLE_DOWNLOAD_RESULT, &[metrics::FAILURE]);

                // Error downloading, maybe penalize the peer and retry.
                // TODO(das): retry with the same peer or a different peer?
                for column_index in column_indexes {
                    let Some(request) = self.column_requests.get_mut(column_index) else {
                        warn!(
                            self.log,
                            "Active column sample request not found"; "block_root" => %self.block_root, "column_index" => column_index
                        );
                        continue;
                    };
                    request.on_sampling_error()?;
                }
            }
        };

        self.continue_sampling(cx)
    }

    /// Insert a column verification result into an active sampling request. Then make progress
    /// on the entire request.
    ///
    /// ### Returns
    ///
    /// - `Err`: Sampling request has failed and will be dropped
    /// - `Ok(Some)`: Sampling request has successfully completed and will be dropped
    /// - `Ok(None)`: Sampling request still active
    pub(crate) fn on_sample_verified(
        &mut self,
        sampling_request_id: SamplingRequestId,
        result: Result<(), String>,
        cx: &mut SyncNetworkContext<T>,
    ) -> Result<Option<()>, SamplingError> {
        let Some(column_indexes) = self
            .column_indexes_by_sampling_request
            .get(&sampling_request_id)
        else {
            error!(self.log, "Column indexes for the sampling request ID not found"; "sampling_request_id" => ?sampling_request_id);
            return Ok(None);
        };

        match result {
            Ok(_) => {
                debug!(self.log, "Sample verification success"; "block_root" => %self.block_root, "column_indexes" => ?column_indexes);
                metrics::inc_counter_vec(&metrics::SAMPLE_VERIFY_RESULT, &[metrics::SUCCESS]);

                // Valid; continue_sampling may consider the sampling a success
                for column_index in column_indexes {
                    let Some(request) = self.column_requests.get_mut(column_index) else {
                        warn!(
                            self.log,
                            "Active column sample request not found"; "block_root" => %self.block_root, "column_index" => column_index
                        );
                        continue;
                    };
                    request.on_sampling_success()?;
                }
            }
            Err(err) => {
                debug!(self.log, "Sample verification failure"; "block_root" => %self.block_root, "column_indexes" => ?column_indexes, "reason" => ?err);
                metrics::inc_counter_vec(&metrics::SAMPLE_VERIFY_RESULT, &[metrics::FAILURE]);

                // TODO(das): Peer sent invalid data, penalize and try again from a different peer
                // TODO(das): Count individual failures
                for column_index in column_indexes {
                    let Some(request) = self.column_requests.get_mut(column_index) else {
                        warn!(
                            self.log,
                            "Active column sample request not found"; "block_root" => %self.block_root, "column_index" => column_index
                        );
|
continue;
|
|
};
|
|
let peer_id = request.on_sampling_error()?;
|
|
cx.report_peer(
|
|
peer_id,
|
|
PeerAction::LowToleranceError,
|
|
"invalid data column",
|
|
);
|
|
}
|
|
}
|
|
}
|
|
|
|
self.continue_sampling(cx)
|
|
}
|
|
|
|
pub(crate) fn continue_sampling(
|
|
&mut self,
|
|
cx: &mut SyncNetworkContext<T>,
|
|
) -> Result<Option<()>, SamplingError> {
|
|
// First check if sampling is completed, by computing `required_successes`
|
|
let mut successes = 0;
|
|
let mut failures = 0;
|
|
let mut ongoings = 0;
|
|
|
|
for request in self.column_requests.values() {
|
|
if request.is_completed() {
|
|
successes += 1;
|
|
}
|
|
if request.is_failed() {
|
|
failures += 1;
|
|
}
|
|
if request.is_ongoing() {
|
|
ongoings += 1;
|
|
}
|
|
}
|
|
|
|
// If there are too many failures, consider the sampling failed
|
|
let Some(required_successes) = self.required_successes.get(failures) else {
|
|
return Err(SamplingError::TooManyFailures);
|
|
};
|
|
|
|
// If there are enough successes, consider the sampling complete
|
|
if successes >= *required_successes {
|
|
return Ok(Some(()));
|
|
}
|
|
|
|
// First, attempt to progress sampling by requesting more columns, so that request failures
|
|
// are accounted for below.
|
|
|
|
// Group the requested column indexes by the destination peer to batch sampling requests.
|
|
let mut column_indexes_to_request = FnvHashMap::default();
|
|
for idx in 0..*required_successes {
|
|
// Re-request columns. Note: out of bounds error should never happen, inputs are hardcoded
|
|
let column_index = *self
|
|
.column_shuffle
|
|
.get(idx)
|
|
.ok_or(SamplingError::ColumnIndexOutOfBounds)?;
|
|
let request = self
|
|
.column_requests
|
|
.entry(column_index)
|
|
.or_insert(ActiveColumnSampleRequest::new(column_index));
|
|
|
|
if request.is_ready_to_request() {
|
|
if let Some(peer_id) = request.choose_peer(cx) {
|
|
let indexes = column_indexes_to_request.entry(peer_id).or_insert(vec![]);
|
|
indexes.push(column_index);
|
|
}
|
|
}
|
|
}
|
|
|
|
// Send requests.
|
|
let mut sent_request = false;
|
|
for (peer_id, column_indexes) in column_indexes_to_request {
|
|
cx.data_column_lookup_request(
|
|
DataColumnsByRootRequester::Sampling(SamplingId {
|
|
id: self.requester_id,
|
|
sampling_request_id: self.current_sampling_request_id,
|
|
}),
|
|
peer_id,
|
|
DataColumnsByRootSingleBlockRequest {
|
|
block_root: self.block_root,
|
|
indices: column_indexes.clone(),
|
|
},
|
|
)
|
|
.map_err(SamplingError::SendFailed)?;
|
|
self.column_indexes_by_sampling_request
|
|
.insert(self.current_sampling_request_id, column_indexes.clone());
|
|
self.current_sampling_request_id.0 += 1;
|
|
sent_request = true;
|
|
|
|
// Update request status.
|
|
for column_index in column_indexes {
|
|
let Some(request) = self.column_requests.get_mut(&column_index) else {
|
|
continue;
|
|
};
|
|
request.on_start_sampling(peer_id)?;
|
|
}
|
|
}
|
|
|
|
// Make sure that sampling doesn't stall, by ensuring that this sampling request will
|
|
// receive a new event of some type. If there are no ongoing requests, and no new
|
|
// request was sent, loop to increase the required_successes until the sampling fails if
|
|
// there are no peers.
|
|
if ongoings == 0 && !sent_request {
|
|
debug!(self.log, "Sampling request stalled"; "block_root" => %self.block_root);
|
|
}
|
|
|
|
Ok(None)
|
|
}
|
|
}
|
|
|
|
mod request {
|
|
use super::SamplingError;
|
|
use crate::sync::network_context::SyncNetworkContext;
|
|
use beacon_chain::BeaconChainTypes;
|
|
use lighthouse_network::PeerId;
|
|
use rand::seq::SliceRandom;
|
|
use rand::thread_rng;
|
|
use std::collections::HashSet;
|
|
use types::data_column_sidecar::ColumnIndex;
|
|
|
|
pub(crate) struct ActiveColumnSampleRequest {
|
|
column_index: ColumnIndex,
|
|
status: Status,
|
|
// TODO(das): Should downscore peers that claim to not have the sample?
|
|
peers_dont_have: HashSet<PeerId>,
|
|
}
|
|
|
|
#[derive(Debug, Clone)]
|
|
enum Status {
|
|
NoPeers,
|
|
NotStarted,
|
|
Sampling(PeerId),
|
|
Verified,
|
|
}
|
|
|
|
impl ActiveColumnSampleRequest {
|
|
pub(crate) fn new(column_index: ColumnIndex) -> Self {
|
|
Self {
|
|
column_index,
|
|
status: Status::NotStarted,
|
|
peers_dont_have: <_>::default(),
|
|
}
|
|
}
|
|
|
|
pub(crate) fn is_completed(&self) -> bool {
|
|
match self.status {
|
|
Status::NoPeers | Status::NotStarted | Status::Sampling(_) => false,
|
|
Status::Verified => true,
|
|
}
|
|
}
|
|
|
|
pub(crate) fn is_failed(&self) -> bool {
|
|
match self.status {
|
|
Status::NotStarted | Status::Sampling(_) | Status::Verified => false,
|
|
Status::NoPeers => true,
|
|
}
|
|
}
|
|
|
|
pub(crate) fn is_ongoing(&self) -> bool {
|
|
match self.status {
|
|
Status::NotStarted | Status::NoPeers | Status::Verified => false,
|
|
Status::Sampling(_) => true,
|
|
}
|
|
}
|
|
|
|
pub(crate) fn is_ready_to_request(&self) -> bool {
|
|
match self.status {
|
|
Status::NoPeers | Status::NotStarted => true,
|
|
Status::Sampling(_) | Status::Verified => false,
|
|
}
|
|
}
|
|
|
|
pub(crate) fn choose_peer<T: BeaconChainTypes>(
|
|
&mut self,
|
|
cx: &SyncNetworkContext<T>,
|
|
) -> Option<PeerId> {
|
|
// TODO: When is a fork and only a subset of your peers know about a block, sampling should only
|
|
// be queried on the peers on that fork. Should this case be handled? How to handle it?
|
|
let mut peer_ids = cx.get_custodial_peers(self.column_index);
|
|
|
|
peer_ids.retain(|peer_id| !self.peers_dont_have.contains(peer_id));
|
|
|
|
if let Some(peer_id) = peer_ids.choose(&mut thread_rng()) {
|
|
Some(*peer_id)
|
|
} else {
|
|
self.status = Status::NoPeers;
|
|
None
|
|
}
|
|
}
|
|
|
|
pub(crate) fn on_start_sampling(&mut self, peer_id: PeerId) -> Result<(), SamplingError> {
|
|
match self.status.clone() {
|
|
Status::NoPeers | Status::NotStarted => {
|
|
self.status = Status::Sampling(peer_id);
|
|
Ok(())
|
|
}
|
|
other => Err(SamplingError::BadState(format!(
|
|
"bad state on_start_sampling expected NoPeers|NotStarted got {other:?}. column_index:{}",
|
|
self.column_index
|
|
))),
|
|
}
|
|
}
|
|
|
|
pub(crate) fn on_sampling_error(&mut self) -> Result<PeerId, SamplingError> {
|
|
match self.status.clone() {
|
|
Status::Sampling(peer_id) => {
|
|
self.peers_dont_have.insert(peer_id);
|
|
self.status = Status::NotStarted;
|
|
Ok(peer_id)
|
|
}
|
|
other => Err(SamplingError::BadState(format!(
|
|
"bad state on_sampling_error expected Sampling got {other:?}. column_index:{}",
|
|
self.column_index
|
|
))),
|
|
}
|
|
}
|
|
|
|
pub(crate) fn on_sampling_success(&mut self) -> Result<(), SamplingError> {
|
|
match &self.status {
|
|
Status::Sampling(_) => {
|
|
self.status = Status::Verified;
|
|
Ok(())
|
|
}
|
|
other => Err(SamplingError::BadState(format!(
|
|
"bad state on_sampling_success expected Sampling got {other:?}. column_index:{}",
|
|
self.column_index
|
|
))),
|
|
}
|
|
}
|
|
}
|
|
}
|