Mirror of https://github.com/sigp/lighthouse.git, synced 2026-03-10 04:01:51 +00:00
Initial work towards v0.2.0 (#924)
* Remove ping protocol
* Initial renaming of network services
* Correct rebasing relative to latest master
* Start updating types
* Adds HashMapDelay struct to utils
* Initial network restructure
* Network restructure. Adds new types for v0.2.0
* Removes build artefacts
* Shift validation to beacon chain
* Temporarily remove gossip validation
This is to be updated to match current optimisation efforts.
* Adds AggregateAndProof
* Begin rebuilding pubsub encoding/decoding
* Signature hacking
* Shift gossipsub decoding into eth2_libp2p
* Existing EF tests passing with fake_crypto
* Shifts block encoding/decoding into RPC
* Delete outdated API spec
* All release tests passing bar genesis state parsing
* Update and test YamlConfig
* Update to spec v0.10 compatible BLS
* Updates to BLS EF tests
* Add EF test for AggregateVerify
And delete unused hash2curve tests for uncompressed points
* Update EF tests to v0.10.1
* Use optional block root correctly in block proc
* Use genesis fork in deposit domain. All tests pass
* Fast aggregate verify test
* Update REST API docs
* Fix unused import
* Bump spec tags to v0.10.1
* Add `seconds_per_eth1_block` to chainspec
* Update to timestamp based eth1 voting scheme
* Return None from `get_votes_to_consider` if block cache is empty
* Handle overflows in `is_candidate_block`
* Revert to failing tests
* Fix eth1 data sets test
* Choose default vote according to spec
* Fix collect_valid_votes tests
* Fix `get_votes_to_consider` to choose all eligible blocks
* Uncomment winning_vote tests
* Add comments; remove unused code
* Reduce seconds_per_eth1_block for simulation
* Addressed review comments
* Add test for default vote case
* Fix logs
* Remove unused functions
* Meter default eth1 votes
* Fix comments
* Progress on attestation service
* Address review comments; remove unused dependency
* Initial work on removing libp2p lock
* Add LRU caches to store (rollup)
* Update attestation validation for DB changes (WIP)
* Initial version of should_forward_block
* Scaffold
* Progress on attestation validation
Also, consolidate prod+testing slot clocks so that they share much
of the same implementation and can both handle sub-slot time changes.
* Removes lock from libp2p service
* Completed network lock removal
* Finish(?) attestation processing
* Correct network termination future
* Add slot check to block check
* Correct fmt issues
* Remove Drop implementation for network service
* Add first attempt at attestation proc. re-write
* Add version 2 of attestation processing
* Minor fixes
* Add validator pubkey cache
* Make get_indexed_attestation take a committee
* Link signature processing into new attn verification
* First working version
* Ensure pubkey cache is updated
* Add more metrics, slight optimizations
* Clone committee cache during attestation processing
* Update shuffling cache during block processing
* Remove old commented-out code
* Fix shuffling cache insert bug
* Use indexed attestation in fork choice
* Restructure attn processing, add metrics
* Add more detailed metrics
* Tidy, fix failing tests
* Fix failing tests, tidy
* Address reviewers suggestions
* Disable/delete two outdated tests
* Modification of validator for subscriptions
* Add slot signing to validator client
* Further progress on validation subscription
* Adds necessary validator subscription functionality
* Add new Pubkeys struct to signature_sets
* Refactor with functional approach
* Update beacon chain
* Clean up validator <-> beacon node http types
* Add aggregator status to ValidatorDuty
* Impl Clone for manual slot clock
* Fix minor errors
* Further progress validator client subscription
* Initial subscription and aggregation handling
* Remove decompressed member from pubkey bytes
* Progress to modifying val client for attestation aggregation
* First draft of validator client upgrade for aggregate attestations
* Add hashmap for indices lookup
* Add state cache, remove store cache
* Only build the head committee cache
* Removes lock on a network channel
* Partially implement beacon node subscription http api
* Correct compilation issues
* Change `get_attesting_indices` to use Vec
* Fix failing test
* Partial implementation of timer
* Adds timer, removes exit_future, http api to op pool
* Partial multiple aggregate attestation handling
* Permits bulk messages across gossipsub network channel
* Correct compile issues
* Improve gossipsub messaging and correct REST API helpers
* Added global gossipsub subscriptions
* Update validator subscriptions data structs
* Tidy
* Re-structure validator subscriptions
* Initial handling of subscriptions
* Re-structure network service
* Add pubkey cache persistence file
* Add more comments
* Integrate persistence file into builder
* Add pubkey cache tests
* Add HashSetDelay and introduce into attestation service
* Handles validator subscriptions
* Add data_dir to beacon chain builder
* Remove Option in pubkey cache persistence file
* Ensure consistency between datadir/data_dir
* Fix failing network test
* Peer subnet discovery gets queued for future subscriptions
* Reorganise attestation service functions
* Initial wiring of attestation service
* First draft of attestation service timing logic
* Correct minor typos
* Tidy
* Fix todos
* Improve tests
* Add PeerInfo to connected peers mapping
* Fix compile error
* Fix compile error from merge
* Split up block processing metrics
* Tidy
* Refactor get_pubkey_from_state
* Remove commented-out code
* Rename state_cache -> checkpoint_cache
* Rename Checkpoint -> Snapshot
* Tidy, add comments
* Tidy up find_head function
* Change some checkpoint -> snapshot
* Add tests
* Expose max_len
* Remove dead code
* Tidy
* Fix bug
* Add sync-speed metric
* Add first attempt at VerifiableBlock
* Start integrating into beacon chain
* Integrate VerifiableBlock
* Rename VerifiableBlock -> PartialBlockVerification
* Add start of typed methods
* Add progress
* Add further progress
* Rename structs
* Add full block verification to block_processing.rs
* Further beacon chain integration
* Update checks for gossip
* Add todo
* Start adding segment verification
* Add passing chain segment test
* Initial integration with batch sync
* Minor changes
* Tidy, add more error checking
* Start adding chain_segment tests
* Finish invalid signature tests
* Include single and gossip verified blocks in tests
* Add gossip verification tests
* Start adding docs
* Finish adding comments to block_processing.rs
* Rename block_processing.rs -> block_verification
* Start removing old block processing code
* Fixes beacon_chain compilation
* Fix project-wide compile errors
* Remove old code
* Correct code to pass all tests
* Fix bug with beacon proposer index
* Fix shim for BlockProcessingError
* Only process one epoch at a time
* Fix loop in chain segment processing
* Correct tests from master merge
* Add caching for state.eth1_data_votes
* Add BeaconChain::validator_pubkey
* Revert "Add caching for state.eth1_data_votes"
This reverts commit cd73dcd643.
Co-authored-by: Grant Wuerker <gwuerker@gmail.com>
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
Co-authored-by: pawan <pawandhananjay@gmail.com>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
This commit is contained in:

beacon_node/network/src/router/processor.rs (new file, 701 lines)

@@ -0,0 +1,701 @@
use crate::service::NetworkMessage;
use crate::sync::SyncMessage;
use beacon_chain::{
    AttestationProcessingOutcome, BeaconChain, BeaconChainTypes, BlockProcessingOutcome,
};
use eth2_libp2p::rpc::methods::*;
use eth2_libp2p::rpc::{RPCEvent, RPCRequest, RPCResponse, RequestId};
use eth2_libp2p::PeerId;
use slog::{debug, error, o, trace, warn};
use ssz::Encode;
use std::sync::Arc;
use store::Store;
use tokio::sync::{mpsc, oneshot};
use types::{Attestation, Epoch, EthSpec, Hash256, SignedBeaconBlock, Slot};

//TODO: Rate limit requests

/// If a block is more than `FUTURE_SLOT_TOLERANCE` slots ahead of our slot clock, we drop it.
/// Otherwise we queue it.
pub(crate) const FUTURE_SLOT_TOLERANCE: u64 = 1;

/// Keeps track of syncing information for known connected peers.
#[derive(Clone, Copy, Debug)]
pub struct PeerSyncInfo {
    fork_version: [u8; 4],
    pub finalized_root: Hash256,
    pub finalized_epoch: Epoch,
    pub head_root: Hash256,
    pub head_slot: Slot,
}

impl From<StatusMessage> for PeerSyncInfo {
    fn from(status: StatusMessage) -> PeerSyncInfo {
        PeerSyncInfo {
            fork_version: status.fork_version,
            finalized_root: status.finalized_root,
            finalized_epoch: status.finalized_epoch,
            head_root: status.head_root,
            head_slot: status.head_slot,
        }
    }
}

impl PeerSyncInfo {
    pub fn from_chain<T: BeaconChainTypes>(chain: &Arc<BeaconChain<T>>) -> Option<PeerSyncInfo> {
        Some(Self::from(status_message(chain)?))
    }
}

/// Processes validated messages from the network. It relays necessary data to the syncing thread
/// and processes blocks from the pubsub network.
pub struct Processor<T: BeaconChainTypes> {
    /// A reference to the underlying beacon chain.
    chain: Arc<BeaconChain<T>>,
    /// A channel to the syncing thread.
    sync_send: mpsc::UnboundedSender<SyncMessage<T::EthSpec>>,
    /// A oneshot channel for destroying the sync thread.
    _sync_exit: oneshot::Sender<()>,
    /// A network context to return and handle RPC requests.
    network: HandlerNetworkContext<T::EthSpec>,
    /// The `RPCHandler` logger.
    log: slog::Logger,
}

impl<T: BeaconChainTypes> Processor<T> {
    /// Instantiate a `Processor` instance
    pub fn new(
        executor: &tokio::runtime::TaskExecutor,
        beacon_chain: Arc<BeaconChain<T>>,
        network_send: mpsc::UnboundedSender<NetworkMessage<T::EthSpec>>,
        log: &slog::Logger,
    ) -> Self {
        let sync_logger = log.new(o!("service" => "sync"));

        // spawn the sync thread
        let (sync_send, _sync_exit) = crate::sync::manager::spawn(
            executor,
            Arc::downgrade(&beacon_chain),
            network_send.clone(),
            sync_logger,
        );

        Processor {
            chain: beacon_chain,
            sync_send,
            _sync_exit,
            network: HandlerNetworkContext::new(network_send, log.clone()),
            log: log.clone(),
        }
    }

    fn send_to_sync(&mut self, message: SyncMessage<T::EthSpec>) {
        self.sync_send.try_send(message).unwrap_or_else(|_| {
            warn!(
                self.log,
                "Could not send message to the sync service";
            )
        });
    }

    /// Handle a peer disconnect.
    ///
    /// Removes the peer from the manager.
    pub fn on_disconnect(&mut self, peer_id: PeerId) {
        self.send_to_sync(SyncMessage::Disconnect(peer_id));
    }

    /// An error occurred during an RPC request. The state is maintained by the sync manager, so
    /// this function notifies the sync manager of the error.
    pub fn on_rpc_error(&mut self, peer_id: PeerId, request_id: RequestId) {
        self.send_to_sync(SyncMessage::RPCError(peer_id, request_id));
    }

    /// Handle the connection of a new peer.
    ///
    /// Sends a `Status` message to the peer.
    pub fn on_connect(&mut self, peer_id: PeerId) {
        if let Some(status_message) = status_message(&self.chain) {
            debug!(
                self.log,
                "Sending Status Request";
                "peer" => format!("{:?}", peer_id),
                "fork_version" => format!("{:?}", status_message.fork_version),
                "finalized_root" => format!("{:?}", status_message.finalized_root),
                "finalized_epoch" => format!("{:?}", status_message.finalized_epoch),
                "head_root" => format!("{}", status_message.head_root),
                "head_slot" => format!("{}", status_message.head_slot),
            );
            self.network
                .send_rpc_request(peer_id, RPCRequest::Status(status_message));
        }
    }

    /// Handle a `Status` request.
    ///
    /// Processes the `Status` from the remote peer and sends back our `Status`.
    pub fn on_status_request(
        &mut self,
        peer_id: PeerId,
        request_id: RequestId,
        status: StatusMessage,
    ) {
        debug!(
            self.log,
            "Received Status Request";
            "peer" => format!("{:?}", peer_id),
            "fork_version" => format!("{:?}", status.fork_version),
            "finalized_root" => format!("{:?}", status.finalized_root),
            "finalized_epoch" => format!("{:?}", status.finalized_epoch),
            "head_root" => format!("{}", status.head_root),
            "head_slot" => format!("{}", status.head_slot),
        );

        // ignore status responses if we are shutting down
        if let Some(status_message) = status_message(&self.chain) {
            // Say status back.
            self.network.send_rpc_response(
                peer_id.clone(),
                request_id,
                RPCResponse::Status(status_message),
            );
        }

        self.process_status(peer_id, status);
    }

    /// Process a `Status` response from a peer.
    pub fn on_status_response(&mut self, peer_id: PeerId, status: StatusMessage) {
        trace!(self.log, "StatusResponse"; "peer" => format!("{:?}", peer_id));

        // Process the status message, without sending back another status.
        self.process_status(peer_id, status);
    }

    /// Process a `Status` message, requesting new blocks if appropriate.
    ///
    /// Disconnects the peer if required.
    fn process_status(&mut self, peer_id: PeerId, status: StatusMessage) {
        let remote = PeerSyncInfo::from(status);
        let local = match PeerSyncInfo::from_chain(&self.chain) {
            Some(local) => local,
            None => {
                return error!(
                    self.log,
                    "Failed to get peer sync info";
                    "msg" => "likely due to head lock contention"
                )
            }
        };

        let start_slot = |epoch: Epoch| epoch.start_slot(T::EthSpec::slots_per_epoch());

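        // Compare the remote's status to our own and decide whether to disconnect from,
        // ignore, or sync with this peer.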
        if local.fork_version != remote.fork_version {
            // The node is on a different network/fork, disconnect them.
            debug!(
                self.log, "Handshake Failure";
                "peer" => format!("{:?}", peer_id),
                "reason" => "network_id"
            );

            self.network
                .disconnect(peer_id, GoodbyeReason::IrrelevantNetwork);
        } else if remote.head_slot
            > self.chain.slot().unwrap_or_else(|_| Slot::from(0u64)) + FUTURE_SLOT_TOLERANCE
        {
            // Note: If the slot_clock cannot be read, this will not error. Other system
            // components will deal with an invalid slot clock error.

            // The remote's head is on a slot that is significantly ahead of ours. This could be
            // because they are using a different genesis time, or that their system clock or
            // ours is incorrect.
            debug!(
                self.log, "Handshake Failure";
                "peer" => format!("{:?}", peer_id),
                "reason" => "different system clocks or genesis time"
            );
            self.network
                .disconnect(peer_id, GoodbyeReason::IrrelevantNetwork);
        } else if remote.finalized_epoch <= local.finalized_epoch
            && remote.finalized_root != Hash256::zero()
            && local.finalized_root != Hash256::zero()
            && self
                .chain
                .root_at_slot(start_slot(remote.finalized_epoch))
                .map(|root_opt| root_opt != Some(remote.finalized_root))
                .unwrap_or_else(|_| false)
        {
            // The remote's finalized epoch is less than or equal to ours, but the block root is
            // different to the one in our chain.
            //
            // Therefore, the node is on a different chain and we should not communicate with them.
            debug!(
                self.log, "Handshake Failure";
                "peer" => format!("{:?}", peer_id),
                "reason" => "different finalized chain"
            );
            self.network
                .disconnect(peer_id, GoodbyeReason::IrrelevantNetwork);
        } else if remote.finalized_epoch < local.finalized_epoch {
            // The node has a lower finalized epoch, their chain is not useful to us. There are two
            // cases where a node can have a lower finalized epoch:
            //
            // ## The node is on the same chain
            //
            // If a node is on the same chain but has a lower finalized epoch, their head must be
            // lower than ours. Therefore, we have nothing to request from them.
            //
            // ## The node is on a fork
            //
            // If a node is on a fork that has a lower finalized epoch, switching to that fork would
            // cause us to revert a finalized block. This is not permitted, therefore we have no
            // interest in their blocks.
            debug!(
                self.log,
                "NaivePeer";
                "peer" => format!("{:?}", peer_id),
                "reason" => "lower finalized epoch"
            );
        } else if self
            .chain
            .store
            .exists::<SignedBeaconBlock<T::EthSpec>>(&remote.head_root)
            .unwrap_or_else(|_| false)
        {
            trace!(
                self.log, "Peer with known chain found";
                "peer" => format!("{:?}", peer_id),
                "remote_head_slot" => remote.head_slot,
                "remote_latest_finalized_epoch" => remote.finalized_epoch,
            );

            // If the node's best block is already known to us and they are close to our current
            // head, treat them as a fully synced peer.
            self.send_to_sync(SyncMessage::AddPeer(peer_id, remote));
        } else {
            // The remote node has an equal or greater finalized epoch and we don't know its head.
            //
            // Therefore, there are some blocks between the local finalized epoch and the remote
            // head that are worth downloading.
            debug!(
                self.log, "UsefulPeer";
                "peer" => format!("{:?}", peer_id),
                "local_finalized_epoch" => local.finalized_epoch,
                "remote_latest_finalized_epoch" => remote.finalized_epoch,
            );
            self.send_to_sync(SyncMessage::AddPeer(peer_id, remote));
        }
    }

    /// Handle a `BlocksByRoot` request from the peer.
    pub fn on_blocks_by_root_request(
        &mut self,
        peer_id: PeerId,
        request_id: RequestId,
        request: BlocksByRootRequest,
    ) {
        let mut send_block_count = 0;
        for root in request.block_roots.iter() {
            if let Ok(Some(block)) = self.chain.store.get_block(root) {
                self.network.send_rpc_response(
                    peer_id.clone(),
                    request_id,
                    RPCResponse::BlocksByRoot(Box::new(block)),
                );
                send_block_count += 1;
            } else {
                debug!(
                    self.log,
                    "Peer requested unknown block";
                    "peer" => format!("{:?}", peer_id),
                    "request_root" => format!("{:}", root),
                );
            }
        }
        debug!(
            self.log,
            "Received BlocksByRoot Request";
            "peer" => format!("{:?}", peer_id),
            "requested" => request.block_roots.len(),
            "returned" => send_block_count,
        );

        // send stream termination
        self.network.send_rpc_error_response(
            peer_id,
            request_id,
            RPCErrorResponse::StreamTermination(ResponseTermination::BlocksByRoot),
        );
    }

    /// Handle a `BlocksByRange` request from the peer.
    pub fn on_blocks_by_range_request(
        &mut self,
        peer_id: PeerId,
        request_id: RequestId,
        req: BlocksByRangeRequest,
    ) {
        debug!(
            self.log,
            "Received BlocksByRange Request";
            "peer" => format!("{:?}", peer_id),
            "count" => req.count,
            "start_slot" => req.start_slot,
            "step" => req.step,
        );

        if req.step == 0 {
            warn!(self.log,
                "Peer sent invalid range request";
                "error" => "Step sent was 0");
            self.network.disconnect(peer_id, GoodbyeReason::Fault);
            return;
        }

        let forwards_block_root_iter = match self
            .chain
            .forwards_iter_block_roots(Slot::from(req.start_slot))
        {
            Ok(iter) => iter,
            Err(e) => {
                return error!(
                    self.log,
                    "Unable to obtain root iter";
                    "error" => format!("{:?}", e)
                )
            }
        };

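        // The requested range covers slots `[start_slot, start_slot + count * step)`; walk the
        // forwards iterator over that range, keeping every `step`-th block root.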
        let mut block_roots = forwards_block_root_iter
            .take_while(|(_root, slot)| slot.as_u64() < req.start_slot + req.count * req.step)
            .step_by(req.step as usize)
            .map(|(root, _slot)| root)
            .collect::<Vec<_>>();

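        // Skip slots cause the forwards iterator to yield the same block root for consecutive
        // slots; drop the consecutive duplicates so each block is sent at most once.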
        block_roots.dedup();

        let mut blocks_sent = 0;
        for root in block_roots {
            if let Ok(Some(block)) = self.chain.store.get_block(&root) {
                // Due to skip slots, blocks could be out of the range; ensure they are in the
                // range before sending
                if block.slot() >= req.start_slot
                    && block.slot() < req.start_slot + req.count * req.step
                {
                    blocks_sent += 1;
                    self.network.send_rpc_response(
                        peer_id.clone(),
                        request_id,
                        RPCResponse::BlocksByRange(Box::new(block)),
                    );
                }
            } else {
                error!(
                    self.log,
                    "Block in the chain is not in the store";
                    "request_root" => format!("{:}", root),
                );
            }
        }

        if blocks_sent < (req.count as usize) {
            debug!(
                self.log,
                "BlocksByRange Response Sent";
                "peer" => format!("{:?}", peer_id),
                "msg" => "Failed to return all requested blocks",
                "start_slot" => req.start_slot,
                "current_slot" => self.chain.slot().unwrap_or_else(|_| Slot::from(0_u64)).as_u64(),
                "requested" => req.count,
                "returned" => blocks_sent);
        } else {
            debug!(
                self.log,
                "Sending BlocksByRange Response";
                "peer" => format!("{:?}", peer_id),
                "start_slot" => req.start_slot,
                "current_slot" => self.chain.slot().unwrap_or_else(|_| Slot::from(0_u64)).as_u64(),
                "requested" => req.count,
                "returned" => blocks_sent);
        }

        // send the stream terminator
        self.network.send_rpc_error_response(
            peer_id,
            request_id,
            RPCErrorResponse::StreamTermination(ResponseTermination::BlocksByRange),
        );
    }

    /// Handle a `BlocksByRange` response from the peer.
    /// A `beacon_block` behaves as a stream which is terminated on a `None` response.
    pub fn on_blocks_by_range_response(
        &mut self,
        peer_id: PeerId,
        request_id: RequestId,
        beacon_block: Option<Box<SignedBeaconBlock<T::EthSpec>>>,
    ) {
        trace!(
            self.log,
            "Received BlocksByRange Response";
            "peer" => format!("{:?}", peer_id),
        );

        self.send_to_sync(SyncMessage::BlocksByRangeResponse {
            peer_id,
            request_id,
            beacon_block,
        });
    }

    /// Handle a `BlocksByRoot` response from the peer.
    pub fn on_blocks_by_root_response(
        &mut self,
        peer_id: PeerId,
        request_id: RequestId,
        beacon_block: Option<Box<SignedBeaconBlock<T::EthSpec>>>,
    ) {
        trace!(
            self.log,
            "Received BlocksByRoot Response";
            "peer" => format!("{:?}", peer_id),
        );

        self.send_to_sync(SyncMessage::BlocksByRootResponse {
            peer_id,
            request_id,
            beacon_block,
        });
    }

    /// Template function to be called on a block to determine if the block should be propagated
    /// across the network.
    pub fn should_forward_block(&mut self, _block: &Box<SignedBeaconBlock<T::EthSpec>>) -> bool {
        // TODO: Propagate error once complete
        // self.chain.should_forward_block(block).is_ok()
        true
    }

    /// Template function to be called on an attestation to determine if the attestation should be
    /// propagated across the network.
    pub fn _should_forward_attestation(&mut self, _attestation: &Attestation<T::EthSpec>) -> bool {
        // TODO: Propagate error once complete
        //self.chain.should_forward_attestation(attestation).is_ok()
        true
    }

    /// Process a gossip message declaring a new block.
    ///
    /// Attempts to apply the block to the beacon chain. May queue the block for later processing.
    ///
    /// Returns a `bool` which, if `true`, indicates we should forward the block to our peers.
    pub fn on_block_gossip(
        &mut self,
        peer_id: PeerId,
        block: Box<SignedBeaconBlock<T::EthSpec>>,
    ) -> bool {
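        // `process_block` takes the block by value, so clone it here; the original `block` is
        // still needed below for logging and for forwarding to sync when the parent is unknown.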
        match BlockProcessingOutcome::shim(self.chain.process_block(*block.clone())) {
            Ok(outcome) => match outcome {
                BlockProcessingOutcome::Processed { .. } => {
                    trace!(self.log, "Gossipsub block processed";
                        "peer_id" => format!("{:?}", peer_id));

                    // TODO: It would be better if we can run this _after_ we publish the block to
                    // reduce block propagation latency.
                    //
                    // The `MessageHandler` would be the place to put this, however it doesn't seem
                    // to have a reference to the `BeaconChain`. I will leave this for future
                    // works.
                    match self.chain.fork_choice() {
                        Ok(()) => trace!(
                            self.log,
                            "Fork choice success";
                            "location" => "block gossip"
                        ),
                        Err(e) => error!(
                            self.log,
                            "Fork choice failed";
                            "error" => format!("{:?}", e),
                            "location" => "block gossip"
                        ),
                    }
                }
                BlockProcessingOutcome::ParentUnknown { .. } => {
                    // Inform the sync manager to find parents for this block
                    trace!(self.log, "Block with unknown parent received";
                        "peer_id" => format!("{:?}", peer_id));
                    self.send_to_sync(SyncMessage::UnknownBlock(peer_id, block));
                }
                other => {
                    warn!(
                        self.log,
                        "Invalid gossip beacon block";
                        "outcome" => format!("{:?}", other),
                        "block root" => format!("{}", block.canonical_root()),
                        "block slot" => block.slot()
                    );
                    trace!(
                        self.log,
                        "Invalid gossip beacon block ssz";
                        "ssz" => format!("0x{}", hex::encode(block.as_ssz_bytes())),
                    );
                }
            },
            Err(_) => {
                // The error is logged during processing, so no error is logged here.
                trace!(
                    self.log,
                    "Erroneous gossip beacon block ssz";
                    "ssz" => format!("0x{}", hex::encode(block.as_ssz_bytes())),
                );
            }
        }
        // TODO: Update with correct block gossip checking
        true
    }

    /// Process a gossip message declaring a new attestation.
    ///
    /// Not currently implemented.
    pub fn on_attestation_gossip(&mut self, _peer_id: PeerId, _msg: Attestation<T::EthSpec>) {
        // TODO: Handle subnet gossip
        /*
        match self.chain.process_attestation(msg.clone()) {
            Ok(outcome) => match outcome {
                AttestationProcessingOutcome::Processed => {
                    debug!(
                        self.log,
                        "Processed attestation";
                        "source" => "gossip",
                        "peer" => format!("{:?}", peer_id),
                        "block_root" => format!("{}", msg.data.beacon_block_root),
                        "slot" => format!("{}", msg.data.slot),
                    );
                }
                AttestationProcessingOutcome::UnknownHeadBlock { beacon_block_root } => {
                    // TODO: Maintain this attestation and re-process once sync completes
                    trace!(
                        self.log,
                        "Attestation for unknown block";
                        "peer_id" => format!("{:?}", peer_id),
                        "block" => format!("{}", beacon_block_root)
                    );
                    // we don't know the block, get the sync manager to handle the block lookup
                    self.send_to_sync(SyncMessage::UnknownBlockHash(peer_id, beacon_block_root));
                }
                AttestationProcessingOutcome::FutureEpoch { .. }
                | AttestationProcessingOutcome::PastEpoch { .. }
                | AttestationProcessingOutcome::UnknownTargetRoot { .. }
                | AttestationProcessingOutcome::FinalizedSlot { .. } => {} // ignore the attestation
                AttestationProcessingOutcome::Invalid { .. }
                | AttestationProcessingOutcome::EmptyAggregationBitfield { .. }
                | AttestationProcessingOutcome::AttestsToFutureBlock { .. }
                | AttestationProcessingOutcome::InvalidSignature
                | AttestationProcessingOutcome::NoCommitteeForSlotAndIndex { .. }
                | AttestationProcessingOutcome::BadTargetEpoch { .. } => {
                    // the peer has sent a bad attestation. Remove them.
                    self.network.disconnect(peer_id, GoodbyeReason::Fault);
                }
            },
            Err(_) => {
                // The error is logged during processing, so no error is logged here.
                trace!(
                    self.log,
                    "Erroneous gossip attestation ssz";
                    "ssz" => format!("0x{}", hex::encode(msg.as_ssz_bytes())),
                );
            }
        };
        */
    }
}

/// Build a `StatusMessage` representing the state of the given `beacon_chain`.
pub(crate) fn status_message<T: BeaconChainTypes>(
    beacon_chain: &BeaconChain<T>,
) -> Option<StatusMessage> {
    let head_info = beacon_chain.head_info().ok()?;

    Some(StatusMessage {
        fork_version: head_info.fork.current_version,
        finalized_root: head_info.finalized_checkpoint.root,
        finalized_epoch: head_info.finalized_checkpoint.epoch,
        head_root: head_info.block_root,
        head_slot: head_info.slot,
    })
}

/// Wraps a network channel to employ various RPC-related network functionality for the message
/// handler. The handler doesn't manage its own request IDs and can therefore only send
/// responses, or requests with request ID 0.
pub struct HandlerNetworkContext<T: EthSpec> {
    /// The network channel to relay messages to the Network service.
    network_send: mpsc::UnboundedSender<NetworkMessage<T>>,
    /// Logger for the `NetworkContext`.
    log: slog::Logger,
}

impl<T: EthSpec> HandlerNetworkContext<T> {
    pub fn new(network_send: mpsc::UnboundedSender<NetworkMessage<T>>, log: slog::Logger) -> Self {
        Self { network_send, log }
    }

    pub fn disconnect(&mut self, peer_id: PeerId, reason: GoodbyeReason) {
        warn!(
            &self.log,
            "Disconnecting peer (RPC)";
            "reason" => format!("{:?}", reason),
            "peer_id" => format!("{:?}", peer_id),
        );
        self.send_rpc_request(peer_id.clone(), RPCRequest::Goodbye(reason));
        self.network_send
            .try_send(NetworkMessage::Disconnect { peer_id })
            .unwrap_or_else(|_| {
                warn!(
                    self.log,
                    "Could not send a Disconnect to the network service"
                )
            });
    }

    pub fn send_rpc_request(&mut self, peer_id: PeerId, rpc_request: RPCRequest<T>) {
        // the message handler cannot send requests with IDs; request IDs are managed by the sync
        // manager
        let request_id = 0;
        self.send_rpc_event(peer_id, RPCEvent::Request(request_id, rpc_request));
    }

    /// Convenience function to wrap successful RPC Responses.
    pub fn send_rpc_response(
        &mut self,
        peer_id: PeerId,
        request_id: RequestId,
        rpc_response: RPCResponse<T>,
    ) {
        self.send_rpc_event(
            peer_id,
            RPCEvent::Response(request_id, RPCErrorResponse::Success(rpc_response)),
        );
    }

    /// Send an `RPCErrorResponse`. This handles errors and stream terminations.
    pub fn send_rpc_error_response(
        &mut self,
        peer_id: PeerId,
        request_id: RequestId,
        rpc_error_response: RPCErrorResponse<T>,
    ) {
        self.send_rpc_event(peer_id, RPCEvent::Response(request_id, rpc_error_response));
    }

    fn send_rpc_event(&mut self, peer_id: PeerId, rpc_event: RPCEvent<T>) {
        self.network_send
            .try_send(NetworkMessage::RPC(peer_id, rpc_event))
            .unwrap_or_else(|_| {
                warn!(
                    self.log,
                    "Could not send RPC message to the network service"
                )
            });
    }
}