Update to frozen spec ❄️ (v0.8.1) (#444)

* types: first updates for v0.8

* state_processing: epoch processing v0.8.0

* state_processing: block processing v0.8.0

* tree_hash_derive: support generics in SignedRoot

* types v0.8: update to use ssz_types

* state_processing v0.8: use ssz_types

* ssz_types: add bitwise methods and from_elem

* types: fix v0.8 FIXMEs

* ssz_types: add bitfield shift_up

* ssz_types: iterators and DerefMut for VariableList

* types,state_processing: use VariableList

* ssz_types: fix BitVector Decode impl

Fixed a typo in the implementation of `ssz::Decode` for `BitVector`, which caused it
to be treated as variable-length!
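
A minimal sketch of why that matters (a simplified stand-in trait, not the real `ssz` crate API): the decoder decides how to lay out a field based on whether its type reports itself as fixed-length. If a fixed-size type like `BitVector` answers wrongly, it gets placed behind a length offset and the layout of every containing struct changes.

```rust
// Simplified stand-in for the ssz `Decode` machinery (illustrative only).
trait Decode {
    /// `true` means the type occupies a known number of bytes in-line;
    /// `false` means it is stored behind a variable-length offset.
    fn is_ssz_fixed_len() -> bool;
    fn ssz_fixed_len() -> usize;
}

/// A bit-vector whose length is fixed at compile time (here: 8 bits = 1 byte).
struct BitVector8;

impl Decode for BitVector8 {
    fn is_ssz_fixed_len() -> bool {
        // The typo fixed above amounted to this answer being wrong,
        // making `BitVector` look variable-length to its containers.
        true
    }
    fn ssz_fixed_len() -> usize {
        1
    }
}

fn main() {
    assert!(BitVector8::is_ssz_fixed_len());
    assert_eq!(BitVector8::ssz_fixed_len(), 1);
}
```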

* types: fix test modules for v0.8 update

* types: remove slow type-level arithmetic

* state_processing: fix tests for v0.8

* op_pool: update for v0.8

* ssz_types: Bitfield difference length-independent

Allow computing the difference of two bitfields of different lengths.
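
A hedged sketch of the length-independent difference (using `Vec<bool>` as a stand-in for the crate's packed `Bitfield`): clear every bit of `self` that is set in `other`, and simply ignore positions beyond the shorter bitfield's length instead of erroring.

```rust
// Length-independent set difference, in place: bits of `this` beyond
// `other`'s length are left untouched. Illustrative, not the crate's code.
fn difference_inplace(this: &mut Vec<bool>, other: &[bool]) {
    for (i, bit) in this.iter_mut().enumerate() {
        // `get` yields `None` past the end of `other`, treated as a 0 bit.
        if other.get(i).copied().unwrap_or(false) {
            *bit = false;
        }
    }
}

fn main() {
    let mut a = vec![true, true, false, true, true];
    let b = vec![false, true, true]; // shorter than `a`
    difference_inplace(&mut a, &b);
    assert_eq!(a, vec![true, false, false, true, true]);
}
```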

* Implement compact committee support

* epoch_processing: committee & active index roots

* state_processing: genesis state builder v0.8

* state_processing: implement v0.8.1

* Further improve tree_hash

* Strip examples, tests from cached_tree_hash

* Update TreeHash, un-impl CachedTreeHash

* Update bitfield TreeHash, un-impl CachedTreeHash

* Update FixedLenVec TreeHash, un-impl CachedTreeHash

* Update tree_hash_derive for new TreeHash

* Fix TreeHash, un-impl CachedTreeHash for ssz_types

* Remove fixed_len_vec, ssz benches

The SSZ benches relied upon `fixed_len_vec`, so it is easier to delete
them now and rebuild them later when necessary.

* Remove boolean_bitfield crate

* Fix fake_crypto BLS compile errors

* Update ef_tests for new v0.8 type params

* Update ef_tests submodule to v0.8.1 tag

* Make fixes to support parsing ssz ef_tests

* `compact_committee...` to `compact_committees...`

* Derive more traits for `CompactCommittee`

* Flip bitfield byte-endianness

* Fix tree_hash for bitfields

* Modify CLI output for ef_tests

* Bump ssz crate version

* Update ssz_types doc comment

* Del cached tree hash tests from ssz_static tests

* Tidy SSZ dependencies

* Rename ssz_types crate to eth2_ssz_types

* validator_client: update for v0.8

* ssz_types: update union/difference for bit order swap

* beacon_node: update for v0.8, EthSpec

* types: disable cached tree hash, update min spec

* state_processing: fix slot bug in committee update

* tests: temporarily disable fork choice harness test

See #447

* committee cache: prevent out-of-bounds access

In the case where we tried to access the committee of a shard that didn't have a committee in the
current epoch, we were accessing elements beyond the end of the shuffling vector and panicking! This
commit adds a check to make the failure safe and explicit.
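
The safe-indexing pattern described above can be sketched as follows (names and types are illustrative, not the actual committee-cache code): use `slice::get` with a range so a shard without a committee in the epoch yields `None` instead of panicking.

```rust
// Bounds-checked committee lookup: `get` returns `None` where direct
// indexing (`&shuffling[offset..offset + len]`) would panic.
fn committee_for_shard(shuffling: &[u64], offset: usize, len: usize) -> Option<&[u64]> {
    shuffling.get(offset..offset + len)
}

fn main() {
    let shuffling = vec![10u64, 11, 12, 13];
    // In range: the committee's slice of the shuffling.
    assert_eq!(committee_for_shard(&shuffling, 2, 2), Some(&[12u64, 13][..]));
    // Out of range: an explicit, safe failure.
    assert_eq!(committee_for_shard(&shuffling, 3, 2), None);
}
```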

* fix bug in get_indexed_attestation and simplify

There was a bug in our implementation of get_indexed_attestation whereby
incorrect "committee indices" were used to index into the custody bitfield. The
bug was only observable in the case where some bits of the custody bitfield were
set to 1. The implementation has been simplified to remove the bug, and a test
added.
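
An illustrative sketch of the simplified logic (types reduced to plain slices): the custody bitfield must be indexed by *position within the committee*, the same index used for the aggregation bits, rather than by a separately derived "committee index".

```rust
// Split attesting validators by custody bit, indexing both bitfields by the
// validator's position in the committee. Illustrative, not the actual code.
fn indexed_sets(
    committee: &[u64],
    aggregation_bits: &[bool],
    custody_bits: &[bool],
) -> (Vec<u64>, Vec<u64>) {
    let mut bit_0 = vec![]; // attesters whose custody bit is unset
    let mut bit_1 = vec![]; // attesters whose custody bit is set
    for (i, &validator) in committee.iter().enumerate() {
        if aggregation_bits[i] {
            // Same position `i` for both bitfields -- this is the invariant
            // the buggy version violated.
            if custody_bits[i] {
                bit_1.push(validator);
            } else {
                bit_0.push(validator);
            }
        }
    }
    (bit_0, bit_1)
}

fn main() {
    let committee = [7u64, 3, 9];
    let (b0, b1) = indexed_sets(&committee, &[true, true, false], &[false, true, false]);
    assert_eq!(b0, vec![7]);
    assert_eq!(b1, vec![3]);
}
```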

* state_proc: workaround for compact committees bug

https://github.com/ethereum/eth2.0-specs/issues/1315

* v0.8: updates to make the EF tests pass

* Remove redundant max operation checks.
* Always supply both messages when checking attestation signatures -- allowing
  verification of an attestation with no signatures.
* Swap the order of the fork and domain constant in `get_domain`, to match
  the spec.
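
The ordering fix in the last bullet can be sketched as follows (a hedged reading of the v0.8 spec's domain computation; names are illustrative): the 4-byte domain constant occupies the low bytes, followed by the 4-byte fork version.

```rust
// Assemble an 8-byte signature domain: domain constant first, then the
// fork version -- the order the swap above restored. Illustrative sketch.
fn bls_domain(domain_type: u32, fork_version: [u8; 4]) -> u64 {
    let mut bytes = [0u8; 8];
    bytes[..4].copy_from_slice(&domain_type.to_le_bytes());
    bytes[4..].copy_from_slice(&fork_version);
    u64::from_le_bytes(bytes)
}

fn main() {
    // With a zero fork version, the domain is just the constant itself.
    assert_eq!(bls_domain(4, [0; 4]), 4);
    // A non-zero fork version lands in the high bytes.
    assert_eq!(bls_domain(1, [1, 0, 0, 0]), 1 + (1 << 32));
}
```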

* rustfmt

* ef_tests: add new epoch processing tests

* Integrate v0.8 into master (compiles)

* Remove unused crates, fix clippy lints

* Replace v0.6.3 tags w/ v0.8.1

* Remove old comment

* Ensure lmd ghost tests only run in release

* Update readme
Michael Sproul
2019-07-30 12:44:51 +10:00
committed by Paul Hauner
parent 177df12149
commit a236003a7b
184 changed files with 3332 additions and 4542 deletions


@@ -1,16 +1,18 @@
use crate::max_cover::MaxCover;
use boolean_bitfield::BooleanBitfield;
use types::{Attestation, BeaconState, EthSpec};
use types::{Attestation, BeaconState, BitList, EthSpec};
pub struct AttMaxCover<'a> {
pub struct AttMaxCover<'a, T: EthSpec> {
/// Underlying attestation.
att: &'a Attestation,
att: &'a Attestation<T>,
/// Bitfield of validators that are covered by this attestation.
fresh_validators: BooleanBitfield,
fresh_validators: BitList<T::MaxValidatorsPerCommittee>,
}
impl<'a> AttMaxCover<'a> {
pub fn new(att: &'a Attestation, fresh_validators: BooleanBitfield) -> Self {
impl<'a, T: EthSpec> AttMaxCover<'a, T> {
pub fn new(
att: &'a Attestation<T>,
fresh_validators: BitList<T::MaxValidatorsPerCommittee>,
) -> Self {
Self {
att,
fresh_validators,
@@ -18,15 +20,15 @@ impl<'a> AttMaxCover<'a> {
}
}
impl<'a> MaxCover for AttMaxCover<'a> {
type Object = Attestation;
type Set = BooleanBitfield;
impl<'a, T: EthSpec> MaxCover for AttMaxCover<'a, T> {
type Object = Attestation<T>;
type Set = BitList<T::MaxValidatorsPerCommittee>;
fn object(&self) -> Attestation {
fn object(&self) -> Attestation<T> {
self.att.clone()
}
fn covering_set(&self) -> &BooleanBitfield {
fn covering_set(&self) -> &BitList<T::MaxValidatorsPerCommittee> {
&self.fresh_validators
}
@@ -37,11 +39,11 @@ impl<'a> MaxCover for AttMaxCover<'a> {
/// that a shard and epoch uniquely identify a committee.
fn update_covering_set(
&mut self,
best_att: &Attestation,
covered_validators: &BooleanBitfield,
best_att: &Attestation<T>,
covered_validators: &BitList<T::MaxValidatorsPerCommittee>,
) {
if self.att.data.shard == best_att.data.shard
&& self.att.data.target_epoch == best_att.data.target_epoch
if self.att.data.crosslink.shard == best_att.data.crosslink.shard
&& self.att.data.target.epoch == best_att.data.target.epoch
{
self.fresh_validators.difference_inplace(covered_validators);
}
@@ -58,22 +60,22 @@ impl<'a> MaxCover for AttMaxCover<'a> {
/// of validators for which the included attestation is their first in the epoch. The attestation
/// is judged against the state's `current_epoch_attestations` or `previous_epoch_attestations`
/// depending on when it was created, and all those validators who have already attested are
/// removed from the `aggregation_bitfield` before returning it.
/// removed from the `aggregation_bits` before returning it.
// TODO: This could be optimised with a map from validator index to whether that validator has
// attested in each of the current and previous epochs. Currently quadratic in number of validators.
pub fn earliest_attestation_validators<T: EthSpec>(
attestation: &Attestation,
attestation: &Attestation<T>,
state: &BeaconState<T>,
) -> BooleanBitfield {
) -> BitList<T::MaxValidatorsPerCommittee> {
// Bitfield of validators whose attestations are new/fresh.
let mut new_validators = attestation.aggregation_bitfield.clone();
let mut new_validators = attestation.aggregation_bits.clone();
let state_attestations = if attestation.data.target_epoch == state.current_epoch() {
let state_attestations = if attestation.data.target.epoch == state.current_epoch() {
&state.current_epoch_attestations
} else if attestation.data.target_epoch == state.previous_epoch() {
} else if attestation.data.target.epoch == state.previous_epoch() {
&state.previous_epoch_attestations
} else {
return BooleanBitfield::from_elem(attestation.aggregation_bitfield.len(), false);
return BitList::with_capacity(0).unwrap();
};
state_attestations
@@ -81,10 +83,12 @@ pub fn earliest_attestation_validators<T: EthSpec>(
// In a single epoch, an attester should only be attesting for one shard.
// TODO: we avoid including slashable attestations in the state here,
// but maybe we should do something else with them (like construct slashings).
.filter(|existing_attestation| existing_attestation.data.shard == attestation.data.shard)
.filter(|existing_attestation| {
existing_attestation.data.crosslink.shard == attestation.data.crosslink.shard
})
.for_each(|existing_attestation| {
// Remove the validators who have signed the existing attestation (they are not new)
new_validators.difference_inplace(&existing_attestation.aggregation_bitfield);
new_validators.difference_inplace(&existing_attestation.aggregation_bits);
});
new_validators


@@ -19,7 +19,7 @@ impl AttestationId {
spec: &ChainSpec,
) -> Self {
let mut bytes = ssz_encode(attestation);
let epoch = attestation.target_epoch;
let epoch = attestation.target.epoch;
bytes.extend_from_slice(&AttestationId::compute_domain_bytes(epoch, state, spec));
AttestationId { v: bytes }
}


@@ -15,22 +15,21 @@ use state_processing::per_block_processing::errors::{
ExitValidationError, ProposerSlashingValidationError, TransferValidationError,
};
use state_processing::per_block_processing::{
get_slashable_indices_modular, validate_attestation,
validate_attestation_time_independent_only, verify_attester_slashing, verify_exit,
verify_exit_time_independent_only, verify_proposer_slashing, verify_transfer,
verify_transfer_time_independent_only,
get_slashable_indices_modular, verify_attestation, verify_attestation_time_independent_only,
verify_attester_slashing, verify_exit, verify_exit_time_independent_only,
verify_proposer_slashing, verify_transfer, verify_transfer_time_independent_only,
};
use std::collections::{btree_map::Entry, hash_map, BTreeMap, HashMap, HashSet};
use std::marker::PhantomData;
use types::{
Attestation, AttesterSlashing, BeaconState, ChainSpec, Deposit, EthSpec, ProposerSlashing,
Transfer, Validator, VoluntaryExit,
typenum::Unsigned, Attestation, AttesterSlashing, BeaconState, ChainSpec, Deposit, EthSpec,
ProposerSlashing, Transfer, Validator, VoluntaryExit,
};
#[derive(Default, Debug)]
pub struct OperationPool<T: EthSpec + Default> {
/// Map from attestation ID (see below) to vectors of attestations.
attestations: RwLock<HashMap<AttestationId, Vec<Attestation>>>,
attestations: RwLock<HashMap<AttestationId, Vec<Attestation<T>>>>,
/// Map from deposit index to deposit data.
// NOTE: We assume that there is only one deposit per index
// because the Eth1 data is updated (at most) once per epoch,
@@ -38,7 +37,7 @@ pub struct OperationPool<T: EthSpec + Default> {
// longer than an epoch
deposits: RwLock<BTreeMap<u64, Deposit>>,
/// Map from two attestation IDs to a slashing for those IDs.
attester_slashings: RwLock<HashMap<(AttestationId, AttestationId), AttesterSlashing>>,
attester_slashings: RwLock<HashMap<(AttestationId, AttestationId), AttesterSlashing<T>>>,
/// Map from proposer index to slashing.
proposer_slashings: RwLock<HashMap<u64, ProposerSlashing>>,
/// Map from exiting validator to their exit data.
@@ -67,12 +66,12 @@ impl<T: EthSpec> OperationPool<T> {
/// Insert an attestation into the pool, aggregating it with existing attestations if possible.
pub fn insert_attestation(
&self,
attestation: Attestation,
attestation: Attestation<T>,
state: &BeaconState<T>,
spec: &ChainSpec,
) -> Result<(), AttestationValidationError> {
// Check that attestation signatures are valid.
validate_attestation_time_independent_only(state, &attestation, spec)?;
verify_attestation_time_independent_only(state, &attestation, spec)?;
let id = AttestationId::from_data(&attestation.data, state, spec);
@@ -110,7 +109,11 @@ impl<T: EthSpec> OperationPool<T> {
}
/// Get a list of attestations for inclusion in a block.
pub fn get_attestations(&self, state: &BeaconState<T>, spec: &ChainSpec) -> Vec<Attestation> {
pub fn get_attestations(
&self,
state: &BeaconState<T>,
spec: &ChainSpec,
) -> Vec<Attestation<T>> {
// Attestations for the current fork, which may be from the current or previous epoch.
let prev_epoch = state.previous_epoch();
let current_epoch = state.current_epoch();
@@ -125,10 +128,10 @@ impl<T: EthSpec> OperationPool<T> {
})
.flat_map(|(_, attestations)| attestations)
// That are valid...
.filter(|attestation| validate_attestation(state, attestation, spec).is_ok())
.filter(|attestation| verify_attestation(state, attestation, spec).is_ok())
.map(|att| AttMaxCover::new(att, earliest_attestation_validators(att, state)));
maximum_cover(valid_attestations, spec.max_attestations as usize)
maximum_cover(valid_attestations, T::MaxAttestations::to_usize())
}
/// Remove attestations which are too old to be included in a block.
@@ -141,7 +144,7 @@ impl<T: EthSpec> OperationPool<T> {
// All the attestations in this bucket have the same data, so we only need to
// check the first one.
attestations.first().map_or(false, |att| {
finalized_state.current_epoch() <= att.data.target_epoch + 1
finalized_state.current_epoch() <= att.data.target.epoch + 1
})
});
}
@@ -149,13 +152,15 @@ impl<T: EthSpec> OperationPool<T> {
/// Add a deposit to the pool.
///
/// No two distinct deposits should be added with the same index.
// TODO: we need to rethink this entirely
pub fn insert_deposit(
&self,
index: u64,
deposit: Deposit,
) -> Result<DepositInsertStatus, DepositValidationError> {
use DepositInsertStatus::*;
match self.deposits.write().entry(deposit.index) {
match self.deposits.write().entry(index) {
Entry::Vacant(entry) => {
entry.insert(deposit);
Ok(Fresh)
@@ -173,12 +178,12 @@ impl<T: EthSpec> OperationPool<T> {
/// Get an ordered list of deposits for inclusion in a block.
///
/// Take at most the maximum number of deposits, beginning from the current deposit index.
pub fn get_deposits(&self, state: &BeaconState<T>, spec: &ChainSpec) -> Vec<Deposit> {
pub fn get_deposits(&self, state: &BeaconState<T>) -> Vec<Deposit> {
// TODO: We need to update the Merkle proofs for existing deposits as more deposits
// are added. It probably makes sense to construct the proofs from scratch when forming
// a block, using fresh info from the ETH1 chain for the current deposit root.
let start_idx = state.deposit_index;
(start_idx..start_idx + spec.max_deposits)
let start_idx = state.eth1_deposit_index;
(start_idx..start_idx + T::MaxDeposits::to_u64())
.map(|idx| self.deposits.read().get(&idx).cloned())
.take_while(Option::is_some)
.flatten()
@@ -187,7 +192,7 @@ impl<T: EthSpec> OperationPool<T> {
/// Remove all deposits with index less than the deposit index of the latest finalised block.
pub fn prune_deposits(&self, state: &BeaconState<T>) -> BTreeMap<u64, Deposit> {
let deposits_keep = self.deposits.write().split_off(&state.deposit_index);
let deposits_keep = self.deposits.write().split_off(&state.eth1_deposit_index);
std::mem::replace(&mut self.deposits.write(), deposits_keep)
}
@@ -216,7 +221,7 @@ impl<T: EthSpec> OperationPool<T> {
///
/// Depends on the fork field of the state, but not on the state's epoch.
fn attester_slashing_id(
slashing: &AttesterSlashing,
slashing: &AttesterSlashing<T>,
state: &BeaconState<T>,
spec: &ChainSpec,
) -> (AttestationId, AttestationId) {
@@ -229,7 +234,7 @@ impl<T: EthSpec> OperationPool<T> {
/// Insert an attester slashing into the pool.
pub fn insert_attester_slashing(
&self,
slashing: AttesterSlashing,
slashing: AttesterSlashing<T>,
state: &BeaconState<T>,
spec: &ChainSpec,
) -> Result<(), AttesterSlashingValidationError> {
@@ -248,16 +253,16 @@ impl<T: EthSpec> OperationPool<T> {
&self,
state: &BeaconState<T>,
spec: &ChainSpec,
) -> (Vec<ProposerSlashing>, Vec<AttesterSlashing>) {
) -> (Vec<ProposerSlashing>, Vec<AttesterSlashing<T>>) {
let proposer_slashings = filter_limit_operations(
self.proposer_slashings.read().values(),
|slashing| {
state
.validator_registry
.validators
.get(slashing.proposer_index as usize)
.map_or(false, |validator| !validator.slashed)
},
spec.max_proposer_slashings,
T::MaxProposerSlashings::to_usize(),
);
// Set of validators to be slashed, so we don't attempt to construct invalid attester
@@ -291,7 +296,7 @@ impl<T: EthSpec> OperationPool<T> {
false
}
})
.take(spec.max_attester_slashings as usize)
.take(T::MaxAttesterSlashings::to_usize())
.map(|(_, slashing)| slashing.clone())
.collect();
@@ -347,7 +352,7 @@ impl<T: EthSpec> OperationPool<T> {
filter_limit_operations(
self.voluntary_exits.read().values(),
|exit| verify_exit(state, exit, spec).is_ok(),
spec.max_voluntary_exits,
T::MaxVoluntaryExits::to_usize(),
)
}
@@ -384,7 +389,7 @@ impl<T: EthSpec> OperationPool<T> {
.iter()
.filter(|transfer| verify_transfer(state, transfer, spec).is_ok())
.sorted_by_key(|transfer| std::cmp::Reverse(transfer.fee))
.take(spec.max_transfers as usize)
.take(T::MaxTransfers::to_usize())
.cloned()
.collect()
}
@@ -408,7 +413,7 @@ impl<T: EthSpec> OperationPool<T> {
}
/// Filter up to a maximum number of operations out of an iterator.
fn filter_limit_operations<'a, T: 'a, I, F>(operations: I, filter: F, limit: u64) -> Vec<T>
fn filter_limit_operations<'a, T: 'a, I, F>(operations: I, filter: F, limit: usize) -> Vec<T>
where
I: IntoIterator<Item = &'a T>,
F: Fn(&T) -> bool,
@@ -417,7 +422,7 @@ where
operations
.into_iter()
.filter(|x| filter(*x))
.take(limit as usize)
.take(limit)
.cloned()
.collect()
}
@@ -436,7 +441,7 @@ fn prune_validator_hash_map<T, F, E: EthSpec>(
{
map.retain(|&validator_index, _| {
finalized_state
.validator_registry
.validators
.get(validator_index as usize)
.map_or(true, |validator| !prune_if(validator))
});
@@ -458,6 +463,7 @@ impl<T: EthSpec + Default> PartialEq for OperationPool<T> {
mod tests {
use super::DepositInsertStatus::*;
use super::*;
use rand::Rng;
use types::test_utils::*;
use types::*;
@@ -466,13 +472,16 @@ mod tests {
let rng = &mut XorShiftRng::from_seed([42; 16]);
let op_pool = OperationPool::<MinimalEthSpec>::new();
let deposit1 = make_deposit(rng);
let mut deposit2 = make_deposit(rng);
deposit2.index = deposit1.index;
let deposit2 = make_deposit(rng);
let index = rng.gen();
assert_eq!(op_pool.insert_deposit(deposit1.clone()), Ok(Fresh));
assert_eq!(op_pool.insert_deposit(deposit1.clone()), Ok(Duplicate));
assert_eq!(op_pool.insert_deposit(index, deposit1.clone()), Ok(Fresh));
assert_eq!(
op_pool.insert_deposit(deposit2),
op_pool.insert_deposit(index, deposit1.clone()),
Ok(Duplicate)
);
assert_eq!(
op_pool.insert_deposit(index, deposit2),
Ok(Replaced(Box::new(deposit1)))
);
}
@@ -480,28 +489,29 @@ mod tests {
#[test]
fn get_deposits_max() {
let rng = &mut XorShiftRng::from_seed([42; 16]);
let (spec, mut state) = test_state(rng);
let (_, mut state) = test_state(rng);
let op_pool = OperationPool::new();
let start = 10000;
let max_deposits = spec.max_deposits;
let max_deposits = <MainnetEthSpec as EthSpec>::MaxDeposits::to_u64();
let extra = 5;
let offset = 1;
assert!(offset <= extra);
let deposits = dummy_deposits(rng, start, max_deposits + extra);
for deposit in &deposits {
assert_eq!(op_pool.insert_deposit(deposit.clone()), Ok(Fresh));
for (i, deposit) in &deposits {
assert_eq!(op_pool.insert_deposit(*i, deposit.clone()), Ok(Fresh));
}
state.deposit_index = start + offset;
let deposits_for_block = op_pool.get_deposits(&state, &spec);
state.eth1_deposit_index = start + offset;
let deposits_for_block = op_pool.get_deposits(&state);
assert_eq!(deposits_for_block.len() as u64, max_deposits);
assert_eq!(
deposits_for_block[..],
deposits[offset as usize..(offset + max_deposits) as usize]
);
let expected = deposits[offset as usize..(offset + max_deposits) as usize]
.iter()
.map(|(_, d)| d.clone())
.collect::<Vec<_>>();
assert_eq!(deposits_for_block[..], expected[..]);
}
#[test]
@@ -518,20 +528,20 @@ mod tests {
let deposits1 = dummy_deposits(rng, start1, count);
let deposits2 = dummy_deposits(rng, start2, count);
for d in deposits1.into_iter().chain(deposits2) {
assert!(op_pool.insert_deposit(d).is_ok());
for (i, d) in deposits1.into_iter().chain(deposits2) {
assert!(op_pool.insert_deposit(i, d).is_ok());
}
assert_eq!(op_pool.num_deposits(), 2 * count as usize);
let mut state = BeaconState::random_for_test(rng);
state.deposit_index = start1;
state.eth1_deposit_index = start1;
// Pruning the first bunch of deposits in batches of 5 should work.
let step = 5;
let mut pool_size = step + 2 * count as usize;
for i in (start1..=(start1 + count)).step_by(step) {
state.deposit_index = i;
state.eth1_deposit_index = i;
op_pool.prune_deposits(&state);
pool_size -= step;
assert_eq!(op_pool.num_deposits(), pool_size);
@@ -539,14 +549,14 @@ mod tests {
assert_eq!(pool_size, count as usize);
// Pruning in the gap should do nothing.
for i in (start1 + count..start2).step_by(step) {
state.deposit_index = i;
state.eth1_deposit_index = i;
op_pool.prune_deposits(&state);
assert_eq!(op_pool.num_deposits(), count as usize);
}
// Same again for the later deposits.
pool_size += step;
for i in (start2..=(start2 + count)).step_by(step) {
state.deposit_index = i;
state.eth1_deposit_index = i;
op_pool.prune_deposits(&state);
pool_size -= step;
assert_eq!(op_pool.num_deposits(), pool_size);
@@ -560,13 +570,13 @@ mod tests {
}
// Create `count` dummy deposits with sequential deposit IDs beginning from `start`.
fn dummy_deposits(rng: &mut XorShiftRng, start: u64, count: u64) -> Vec<Deposit> {
fn dummy_deposits(rng: &mut XorShiftRng, start: u64, count: u64) -> Vec<(u64, Deposit)> {
let proto_deposit = make_deposit(rng);
(start..start + count)
.map(|index| {
let mut deposit = proto_deposit.clone();
deposit.index = index;
deposit
deposit.data.amount = index * 1000;
(index, deposit)
})
.collect()
}
@@ -596,11 +606,11 @@ mod tests {
state: &BeaconState<E>,
spec: &ChainSpec,
extra_signer: Option<usize>,
) -> Attestation {
) -> Attestation<E> {
let mut builder = TestingAttestationBuilder::new(state, committee, slot, shard, spec);
let signers = &committee[signing_range];
let committee_keys = signers.iter().map(|&i| &keypairs[i].sk).collect::<Vec<_>>();
builder.sign(signers, &committee_keys, &state.fork, spec);
builder.sign(signers, &committee_keys, &state.fork, spec, false);
extra_signer.map(|c_idx| {
let validator_index = committee[c_idx];
builder.sign(
@@ -608,6 +618,7 @@ mod tests {
&[&keypairs[validator_index].sk],
&state.fork,
spec,
false,
)
});
builder.build()
@@ -668,15 +679,18 @@ mod tests {
);
assert_eq!(
att1.aggregation_bitfield.num_set_bits(),
att1.aggregation_bits.num_set_bits(),
earliest_attestation_validators(&att1, state).num_set_bits()
);
state.current_epoch_attestations.push(PendingAttestation {
aggregation_bitfield: att1.aggregation_bitfield.clone(),
data: att1.data.clone(),
inclusion_delay: 0,
proposer_index: 0,
});
state
.current_epoch_attestations
.push(PendingAttestation {
aggregation_bits: att1.aggregation_bits.clone(),
data: att1.data.clone(),
inclusion_delay: 0,
proposer_index: 0,
})
.unwrap();
assert_eq!(
cc.committee.len() - 2,
@@ -728,6 +742,7 @@ mod tests {
assert_eq!(op_pool.num_attestations(), committees.len());
// Before the min attestation inclusion delay, get_attestations shouldn't return anything.
state.slot -= 1;
assert_eq!(op_pool.get_attestations(state, spec).len(), 0);
// Then once the delay has elapsed, we should get a single aggregated attestation.
@@ -738,7 +753,7 @@ mod tests {
let agg_att = &block_attestations[0];
assert_eq!(
agg_att.aggregation_bitfield.num_set_bits(),
agg_att.aggregation_bits.num_set_bits(),
spec.target_committee_size as usize
);
@@ -854,7 +869,7 @@ mod tests {
.map(CrosslinkCommittee::into_owned)
.collect::<Vec<_>>();
let max_attestations = spec.max_attestations as usize;
let max_attestations = <MainnetEthSpec as EthSpec>::MaxAttestations::to_usize();
let target_committee_size = spec.target_committee_size as usize;
let insert_attestations = |cc: &OwnedCrosslinkCommittee, step_size| {
@@ -897,7 +912,7 @@ mod tests {
// All the best attestations should be signed by at least `big_step_size` (4) validators.
for att in &best_attestations {
assert!(att.aggregation_bitfield.num_set_bits() >= big_step_size);
assert!(att.aggregation_bits.num_set_bits() >= big_step_size);
}
}
}


@@ -42,7 +42,7 @@ impl<T> MaxCoverItem<T> {
///
/// * Time complexity: `O(limit * items_iter.len())`
/// * Space complexity: `O(item_iter.len())`
pub fn maximum_cover<'a, I, T>(items_iter: I, limit: usize) -> Vec<T::Object>
pub fn maximum_cover<I, T>(items_iter: I, limit: usize) -> Vec<T::Object>
where
I: IntoIterator<Item = T>,
T: MaxCover,


@@ -9,14 +9,14 @@ use types::*;
/// Operations are stored in arbitrary order, so it's not a good idea to compare instances
/// of this type (or its encoded form) for equality. Convert back to an `OperationPool` first.
#[derive(Encode, Decode)]
pub struct PersistedOperationPool {
pub struct PersistedOperationPool<T: EthSpec> {
/// Mapping from attestation ID to attestation mappings.
// We could save space by not storing the attestation ID, but it might
// be difficult to make that roundtrip due to eager aggregation.
attestations: Vec<(AttestationId, Vec<Attestation>)>,
deposits: Vec<Deposit>,
attestations: Vec<(AttestationId, Vec<Attestation<T>>)>,
deposits: Vec<(u64, Deposit)>,
/// Attester slashings.
attester_slashings: Vec<AttesterSlashing>,
attester_slashings: Vec<AttesterSlashing<T>>,
/// Proposer slashings.
proposer_slashings: Vec<ProposerSlashing>,
/// Voluntary exits.
@@ -25,9 +25,9 @@ pub struct PersistedOperationPool {
transfers: Vec<Transfer>,
}
impl PersistedOperationPool {
impl<T: EthSpec> PersistedOperationPool<T> {
/// Convert an `OperationPool` into serializable form.
pub fn from_operation_pool<T: EthSpec>(operation_pool: &OperationPool<T>) -> Self {
pub fn from_operation_pool(operation_pool: &OperationPool<T>) -> Self {
let attestations = operation_pool
.attestations
.read()
@@ -39,7 +39,7 @@ impl PersistedOperationPool {
.deposits
.read()
.iter()
.map(|(_, d)| d.clone())
.map(|(index, d)| (*index, d.clone()))
.collect();
let attester_slashings = operation_pool
@@ -76,13 +76,9 @@ impl PersistedOperationPool {
}
/// Reconstruct an `OperationPool`.
pub fn into_operation_pool<T: EthSpec>(
self,
state: &BeaconState<T>,
spec: &ChainSpec,
) -> OperationPool<T> {
pub fn into_operation_pool(self, state: &BeaconState<T>, spec: &ChainSpec) -> OperationPool<T> {
let attestations = RwLock::new(self.attestations.into_iter().collect());
let deposits = RwLock::new(self.deposits.into_iter().map(|d| (d.index, d)).collect());
let deposits = RwLock::new(self.deposits.into_iter().collect());
let attester_slashings = RwLock::new(
self.attester_slashings
.into_iter()