mirror of
https://github.com/sigp/lighthouse.git
synced 2026-03-15 10:52:43 +00:00
* Remove ping protocol
* Initial renaming of network services
* Correct rebasing relative to latest master
* Start updating types
* Adds HashMapDelay struct to utils
* Initial network restructure
* Network restructure. Adds new types for v0.2.0
* Removes build artefacts
* Shift validation to beacon chain
* Temporarily remove gossip validation
This is to be updated to match current optimisation efforts.
* Adds AggregateAndProof
* Begin rebuilding pubsub encoding/decoding
* Signature hacking
* Shift gossipsub decoding into eth2_libp2p
* Existing EF tests passing with fake_crypto
* Shifts block encoding/decoding into RPC
* Delete outdated API spec
* All release tests passing bar genesis state parsing
* Update and test YamlConfig
* Update to spec v0.10 compatible BLS
* Updates to BLS EF tests
* Add EF test for AggregateVerify
And delete unused hash2curve tests for uncompressed points
* Update EF tests to v0.10.1
* Use optional block root correctly in block proc
* Use genesis fork in deposit domain. All tests pass
* Fast aggregate verify test
* Update REST API docs
* Fix unused import
* Bump spec tags to v0.10.1
* Add `seconds_per_eth1_block` to chainspec
* Update to timestamp based eth1 voting scheme
* Return None from `get_votes_to_consider` if block cache is empty
* Handle overflows in `is_candidate_block`
* Revert to failing tests
* Fix eth1 data sets test
* Choose default vote according to spec
* Fix collect_valid_votes tests
* Fix `get_votes_to_consider` to choose all eligible blocks
* Uncomment winning_vote tests
* Add comments; remove unused code
* Reduce seconds_per_eth1_block for simulation
* Addressed review comments
* Add test for default vote case
* Fix logs
* Remove unused functions
* Meter default eth1 votes
* Fix comments
* Progress on attestation service
* Address review comments; remove unused dependency
* Initial work on removing libp2p lock
* Add LRU caches to store (rollup)
* Update attestation validation for DB changes (WIP)
* Initial version of should_forward_block
* Scaffold
* Progress on attestation validation
Also, consolidate prod+testing slot clocks so that they share much
of the same implementation and can both handle sub-slot time changes.
* Removes lock from libp2p service
* Completed network lock removal
* Finish(?) attestation processing
* Correct network termination future
* Add slot check to block check
* Correct fmt issues
* Remove Drop implementation for network service
* Add first attempt at attestation proc. re-write
* Add version 2 of attestation processing
* Minor fixes
* Add validator pubkey cache
* Make get_indexed_attestation take a committee
* Link signature processing into new attn verification
* First working version
* Ensure pubkey cache is updated
* Add more metrics, slight optimizations
* Clone committee cache during attestation processing
* Update shuffling cache during block processing
* Remove old commented-out code
* Fix shuffling cache insert bug
* Used indexed attestation in fork choice
* Restructure attn processing, add metrics
* Add more detailed metrics
* Tidy, fix failing tests
* Fix failing tests, tidy
* Address reviewers suggestions
* Disable/delete two outdated tests
* Modification of validator for subscriptions
* Add slot signing to validator client
* Further progress on validation subscription
* Adds necessary validator subscription functionality
* Add new Pubkeys struct to signature_sets
* Refactor with functional approach
* Update beacon chain
* Clean up validator <-> beacon node http types
* Add aggregator status to ValidatorDuty
* Impl Clone for manual slot clock
* Fix minor errors
* Further progress validator client subscription
* Initial subscription and aggregation handling
* Remove decompressed member from pubkey bytes
* Progress to modifying val client for attestation aggregation
* First draft of validator client upgrade for aggregate attestations
* Add hashmap for indices lookup
* Add state cache, remove store cache
* Only build the head committee cache
* Removes lock on a network channel
* Partially implement beacon node subscription http api
* Correct compilation issues
* Change `get_attesting_indices` to use Vec
* Fix failing test
* Partial implementation of timer
* Adds timer, removes exit_future, http api to op pool
* Partial multiple aggregate attestation handling
* Permits bulk messages across gossipsub network channel
* Correct compile issues
* Improve gossipsub messaging and correct rest api helpers
* Added global gossipsub subscriptions
* Update validator subscriptions data structs
* Tidy
* Re-structure validator subscriptions
* Initial handling of subscriptions
* Re-structure network service
* Add pubkey cache persistence file
* Add more comments
* Integrate persistence file into builder
* Add pubkey cache tests
* Add HashSetDelay and introduce into attestation service
* Handles validator subscriptions
* Add data_dir to beacon chain builder
* Remove Option in pubkey cache persistence file
* Ensure consistency between datadir/data_dir
* Fix failing network test
* Peer subnet discovery gets queued for future subscriptions
* Reorganise attestation service functions
* Initial wiring of attestation service
* First draft of attestation service timing logic
* Correct minor typos
* Tidy
* Fix todos
* Improve tests
* Add PeerInfo to connected peers mapping
* Fix compile error
* Fix compile error from merge
* Split up block processing metrics
* Tidy
* Refactor get_pubkey_from_state
* Remove commented-out code
* Rename state_cache -> checkpoint_cache
* Rename Checkpoint -> Snapshot
* Tidy, add comments
* Tidy up find_head function
* Change some checkpoint -> snapshot
* Add tests
* Expose max_len
* Remove dead code
* Tidy
* Fix bug
* Add sync-speed metric
* Add first attempt at VerifiableBlock
* Start integrating into beacon chain
* Integrate VerifiableBlock
* Rename VerifiableBlock -> PartialBlockVerification
* Add start of typed methods
* Add progress
* Add further progress
* Rename structs
* Add full block verification to block_processing.rs
* Further beacon chain integration
* Update checks for gossip
* Add todo
* Start adding segment verification
* Add passing chain segment test
* Initial integration with batch sync
* Minor changes
* Tidy, add more error checking
* Start adding chain_segment tests
* Finish invalid signature tests
* Include single and gossip verified blocks in tests
* Add gossip verification tests
* Start adding docs
* Finish adding comments to block_processing.rs
* Rename block_processing.rs -> block_verification
* Start removing old block processing code
* Fixes beacon_chain compilation
* Fix project-wide compile errors
* Remove old code
* Correct code to pass all tests
* Fix bug with beacon proposer index
* Fix shim for BlockProcessingError
* Only process one epoch at a time
* Fix loop in chain segment processing
* Correct tests from master merge
* Add caching for state.eth1_data_votes
* Add BeaconChain::validator_pubkey
* Revert "Add caching for state.eth1_data_votes"
This reverts commit cd73dcd643.
Co-authored-by: Grant Wuerker <gwuerker@gmail.com>
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
Co-authored-by: pawan <pawandhananjay@gmail.com>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
218 lines
7.5 KiB
Rust
use crate::BeaconSnapshot;
use std::cmp;
use types::{Epoch, EthSpec, Hash256};

/// The default size of the cache.
pub const DEFAULT_SNAPSHOT_CACHE_SIZE: usize = 4;

/// Provides a cache of `BeaconSnapshot` that is intended primarily for block processing.
///
/// ## Cache Queuing
///
/// The cache has a non-standard queue mechanism (specifically, it is not LRU).
///
/// The cache has a max number of elements (`max_len`). Until `max_len` is achieved, all snapshots
/// are simply added to the queue. Once `max_len` is achieved, adding a new snapshot will cause an
/// existing snapshot to be ejected. The ejected snapshot will:
///
/// - Never be the `head_block_root`.
/// - Be the snapshot with the lowest `state.slot` (ties broken arbitrarily).
pub struct SnapshotCache<T: EthSpec> {
    max_len: usize,
    head_block_root: Hash256,
    snapshots: Vec<BeaconSnapshot<T>>,
}

impl<T: EthSpec> SnapshotCache<T> {
    /// Instantiate a new cache which contains the `head` snapshot.
    ///
    /// Setting `max_len = 0` is equivalent to setting `max_len = 1`.
    pub fn new(max_len: usize, head: BeaconSnapshot<T>) -> Self {
        Self {
            max_len: cmp::max(max_len, 1),
            head_block_root: head.beacon_block_root,
            snapshots: vec![head],
        }
    }

    /// Insert a snapshot, potentially removing an existing snapshot if `self` is at capacity (see
    /// struct-level documentation for more info).
    pub fn insert(&mut self, snapshot: BeaconSnapshot<T>) {
        if self.snapshots.len() < self.max_len {
            self.snapshots.push(snapshot);
        } else {
            let insert_at = self
                .snapshots
                .iter()
                .enumerate()
                .filter_map(|(i, snapshot)| {
                    if snapshot.beacon_block_root != self.head_block_root {
                        Some((i, snapshot.beacon_state.slot))
                    } else {
                        None
                    }
                })
                .min_by_key(|(_i, slot)| *slot)
                .map(|(i, _slot)| i);

            if let Some(i) = insert_at {
                self.snapshots[i] = snapshot;
            }
        }
    }

    /// If there is a snapshot with `block_root`, remove and return it.
    pub fn try_remove(&mut self, block_root: Hash256) -> Option<BeaconSnapshot<T>> {
        self.snapshots
            .iter()
            .position(|snapshot| snapshot.beacon_block_root == block_root)
            .map(|i| self.snapshots.remove(i))
    }

    /// If there is a snapshot with `block_root`, clone it (with only the committee caches) and
    /// return the clone.
    pub fn get_cloned(&self, block_root: Hash256) -> Option<BeaconSnapshot<T>> {
        self.snapshots
            .iter()
            .find(|snapshot| snapshot.beacon_block_root == block_root)
            .map(|snapshot| snapshot.clone_with_only_committee_caches())
    }

    /// Removes all snapshots from the queue that are less than or equal to the finalized epoch.
    pub fn prune(&mut self, finalized_epoch: Epoch) {
        self.snapshots.retain(|snapshot| {
            snapshot.beacon_state.slot > finalized_epoch.start_slot(T::slots_per_epoch())
        })
    }

    /// Inform the cache that the head of the beacon chain has changed.
    ///
    /// The snapshot that matches this `head_block_root` will never be ejected from the cache
    /// during `Self::insert`.
    pub fn update_head(&mut self, head_block_root: Hash256) {
        self.head_block_root = head_block_root
    }
}

#[cfg(test)]
mod test {
    use super::*;
    use types::{
        test_utils::{generate_deterministic_keypair, TestingBeaconStateBuilder},
        BeaconBlock, Epoch, MainnetEthSpec, Signature, SignedBeaconBlock, Slot,
    };

    const CACHE_SIZE: usize = 4;

    fn get_snapshot(i: u64) -> BeaconSnapshot<MainnetEthSpec> {
        let spec = MainnetEthSpec::default_spec();

        let state_builder = TestingBeaconStateBuilder::from_deterministic_keypairs(1, &spec);
        let (beacon_state, _keypairs) = state_builder.build();

        BeaconSnapshot {
            beacon_state,
            beacon_state_root: Hash256::from_low_u64_be(i),
            beacon_block: SignedBeaconBlock {
                message: BeaconBlock::empty(&spec),
                signature: Signature::new(&[42], &generate_deterministic_keypair(0).sk),
            },
            beacon_block_root: Hash256::from_low_u64_be(i),
        }
    }

    #[test]
    fn insert_get_prune_update() {
        let mut cache = SnapshotCache::new(CACHE_SIZE, get_snapshot(0));

        // Insert a bunch of entries in the cache. It should look like this:
        //
        // Index    Root
        // 0        0      <-- head
        // 1        1
        // 2        2
        // 3        3
        for i in 1..CACHE_SIZE as u64 {
            let mut snapshot = get_snapshot(i);

            // Each snapshot should be one slot into an epoch, with each snapshot one epoch apart.
            snapshot.beacon_state.slot = Slot::from(i * MainnetEthSpec::slots_per_epoch() + 1);

            cache.insert(snapshot);

            assert_eq!(
                cache.snapshots.len(),
                i as usize + 1,
                "cache length should be as expected"
            );
            assert_eq!(cache.head_block_root, Hash256::from_low_u64_be(0));
        }

        // Insert a new value in the cache. Afterwards it should look like:
        //
        // Index    Root
        // 0        0      <-- head
        // 1        42
        // 2        2
        // 3        3
        assert_eq!(cache.snapshots.len(), CACHE_SIZE);
        cache.insert(get_snapshot(42));
        assert_eq!(cache.snapshots.len(), CACHE_SIZE);

        assert!(
            cache.try_remove(Hash256::from_low_u64_be(1)).is_none(),
            "the snapshot with the lowest slot should have been removed during the insert function"
        );
        assert!(cache.get_cloned(Hash256::from_low_u64_be(1)).is_none());

        assert!(
            cache
                .get_cloned(Hash256::from_low_u64_be(0))
                .expect("the head should still be in the cache")
                .beacon_block_root
                == Hash256::from_low_u64_be(0),
            "get_cloned should get the correct snapshot"
        );
        assert!(
            cache
                .try_remove(Hash256::from_low_u64_be(0))
                .expect("the head should still be in the cache")
                .beacon_block_root
                == Hash256::from_low_u64_be(0),
            "try_remove should get the correct snapshot"
        );

        assert_eq!(
            cache.snapshots.len(),
            CACHE_SIZE - 1,
            "try_remove should shorten the cache"
        );

        // Prune the cache. Afterwards it should look like:
        //
        // Index    Root
        // 0        2
        // 1        3
        cache.prune(Epoch::new(2));

        assert_eq!(cache.snapshots.len(), 2);

        cache.update_head(Hash256::from_low_u64_be(2));

        // Over-fill the cache so it needs to eject some old values on insert.
        for i in 0..CACHE_SIZE as u64 {
            cache.insert(get_snapshot(u64::max_value() - i));
        }

        // Ensure that the new head value was not removed from the cache.
        assert!(
            cache
                .try_remove(Hash256::from_low_u64_be(2))
                .expect("the new head should still be in the cache")
                .beacon_block_root
                == Hash256::from_low_u64_be(2),
            "try_remove should get the correct snapshot"
        );
    }
}
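The eviction rule documented on `SnapshotCache` (never eject the head; otherwise eject the lowest slot) can be sketched in isolation. This is a hypothetical, simplified stand-in — `MiniCache` and its `(u64, u64)` pairs replace the real `BeaconSnapshot`, `Hash256`, and `Slot` types purely for illustration:

```rust
use std::cmp;

// Hypothetical miniature of SnapshotCache: (block_root, slot) pairs
// stand in for full beacon snapshots.
struct MiniCache {
    max_len: usize,
    head_root: u64,
    snapshots: Vec<(u64, u64)>,
}

impl MiniCache {
    fn new(max_len: usize, head_root: u64, head_slot: u64) -> Self {
        Self {
            // As in SnapshotCache::new, max_len = 0 behaves as max_len = 1.
            max_len: cmp::max(max_len, 1),
            head_root,
            snapshots: vec![(head_root, head_slot)],
        }
    }

    fn insert(&mut self, root: u64, slot: u64) {
        if self.snapshots.len() < self.max_len {
            // Below capacity: just queue the snapshot.
            self.snapshots.push((root, slot));
        } else {
            // At capacity: overwrite the non-head entry with the lowest slot.
            let evict = self
                .snapshots
                .iter()
                .enumerate()
                .filter(|(_, s)| s.0 != self.head_root) // never the head
                .min_by_key(|(_, s)| s.1) // lowest slot loses
                .map(|(i, _)| i);
            if let Some(i) = evict {
                self.snapshots[i] = (root, slot);
            }
        }
    }
}

fn main() {
    let mut cache = MiniCache::new(2, 0, 10); // head: root 0, slot 10
    cache.insert(1, 5); // fills the cache
    cache.insert(2, 99); // full: evicts root 1, the lowest non-head slot
    assert!(cache.snapshots.iter().any(|&(r, _)| r == 0), "head survives");
    assert!(cache.snapshots.iter().all(|&(r, _)| r != 1), "lowest slot evicted");
    println!("ok");
}
```

Note the asymmetry this buys block processing: the head snapshot is pinned regardless of age, so the state most likely to be needed next is never the one sacrificed when the cache overflows.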