diff --git a/.github/ISSUE_TEMPLATE.md b/.github/ISSUE_TEMPLATE.md new file mode 100644 index 0000000000..7ecb9e5b0f --- /dev/null +++ b/.github/ISSUE_TEMPLATE.md @@ -0,0 +1,16 @@ +## Description + +Please provide a brief description of the issue. + +## Present Behaviour + +Describe the present behaviour of the application, with regard to this +issue. + +## Expected Behaviour + +How _should_ the application behave? + +## Steps to resolve + +Please describe the steps required to resolve this issue, if known. diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md new file mode 100644 index 0000000000..01ca90a794 --- /dev/null +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -0,0 +1,12 @@ +## Issue Addressed + +Which issue # does this PR address? + +## Proposed Changes + +Please list or describe the changes introduced by this PR. + +## Additional Info + +Please provide any additional information. For example, future considerations +or information useful for reviewers. diff --git a/.gitmodules b/.gitmodules deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md new file mode 100644 index 0000000000..e5b34f083f --- /dev/null +++ b/CONTRIBUTING.md @@ -0,0 +1,121 @@ +# Contributors Guide + +Lighthouse is an open-source Ethereum 2.0 client. We're community driven and +welcome all contributions. We aim to provide a constructive, respectful and fun +environment for collaboration. + +We are active contributors to the [Ethereum 2.0 specification](https://github.com/ethereum/eth2.0-specs) and attend all [Eth +2.0 implementers calls](https://github.com/ethereum/eth2.0-pm). + +This guide is geared towards beginners. If you're an open-source veteran feel +free to just skim this document and get straight into crushing issues. + +## Why Contribute + +There are many reasons you might contribute to Lighthouse. For example, you may +wish to: + +- contribute to the Ethereum ecosystem.
+- establish yourself as a layer-1 Ethereum developer. +- work in the amazing Rust programming language. +- learn how to participate in open-source projects. +- expand your software development skills. +- flex your skills in a public forum to expand your career + opportunities (or simply for the fun of it). +- grow your network by working with core Ethereum developers. + +## How to Contribute + +Regardless of the reason, the process to begin contributing is very much the +same. We operate like a typical open-source project on GitHub: the +repository [Issues](https://github.com/sigp/lighthouse/issues) is where we +track what needs to be done and [Pull +Requests](https://github.com/sigp/lighthouse/pulls) is where code gets +reviewed. We use [gitter](https://gitter.im/sigp/lighthouse) to chat +informally. + +### General Work-Flow + +We recommend the following work-flow for contributors: + +1. **Find an issue** to work on, either because it's interesting or suitable to + your skill-set. Use comments to communicate your intentions and ask +questions. +2. **Work in a feature branch** of your personal fork + (github.com/YOUR_NAME/lighthouse) of the main repository + (github.com/sigp/lighthouse). +3. Once you feel you have addressed the issue, **create a pull-request** to merge + your changes into the main repository. +4. Wait for the repository maintainers to **review your changes** to ensure the + issue is addressed satisfactorily. Optionally, mention your PR on +[gitter](https://gitter.im/sigp/lighthouse). +5. If the issue is addressed, the repository maintainers will **merge your + pull-request** and you'll be an official contributor! + +Generally, you find an issue you'd like to work on and announce your intentions +to start work in a comment on the issue. Then, do your work on a separate +branch (a "feature branch") in your own fork of the main repository.
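This work-flow can be sketched end-to-end with plain git. The snippet below is an illustration only, not project documentation: throw-away local repositories stand in for the real GitHub remotes (`/tmp/lh-demo-upstream.git` plays the role of github.com/sigp/lighthouse, `/tmp/lh-demo-fork` your personal fork), and the branch and commit names are invented.

```shell
# Illustration only: local repositories stand in for the GitHub remotes.
rm -rf /tmp/lh-demo-upstream.git /tmp/lh-demo-fork
git init --bare --quiet /tmp/lh-demo-upstream.git
git clone --quiet /tmp/lh-demo-upstream.git /tmp/lh-demo-fork
cd /tmp/lh-demo-fork
git config user.email you@example.com
git config user.name "Example Contributor"

# Seed the repository so there is a base branch to work from.
git commit --allow-empty --quiet -m "initial commit"
git push --quiet origin HEAD

# Do the work on a separate "feature branch"...
git checkout --quiet -b fix_serialization_bug
git commit --allow-empty --quiet -m "fix serialization bug"

# ...and push it to the fork, ready to open a pull request from.
git push --quiet origin fix_serialization_bug
git branch --list
```

On GitHub the final step would be opening the pull request through the web interface, as described in the set-up steps that follow.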
Once +you're happy and you think the issue has been addressed, create a pull request +into the main repository. + +### First-time Set-up + +First-time contributors can get their git environment up and running with these +steps: + +1. [Create a + fork](https://help.github.com/articles/fork-a-repo/#fork-an-example-repository) +and [clone +it](https://help.github.com/articles/fork-a-repo/#step-2-create-a-local-clone-of-your-fork) +to your local machine. +2. [Add an _"upstream"_ + remote](https://help.github.com/articles/fork-a-repo/#step-3-configure-git-to-sync-your-fork-with-the-original-spoon-knife-repository) +that tracks github.com/sigp/lighthouse using `$ git remote add upstream +https://github.com/sigp/lighthouse.git` (pro-tip: [use SSH](https://help.github.com/articles/connecting-to-github-with-ssh/) instead of HTTPS). +3. Create a new feature branch with `$ git checkout -b your_feature_name`. The + name of your branch isn't critical but it should be short and instructive. +E.g., if you're fixing a bug with serialization, you could name your branch +`fix_serialization_bug`. +4. Commit your changes and push them to your fork with `$ git push origin + your_feature_name`. +5. Go to your fork on github.com and use the web interface to create a pull + request into the sigp/lighthouse repo. + +From there, the repository maintainers will review the PR and either accept it +or provide some constructive feedback. + +There's a great +[guide](https://akrabat.com/the-beginners-guide-to-contributing-to-a-github-project/) +by Rob Allen that provides much more detail on each of these steps, if you're +having trouble. As always, jump on [gitter](https://gitter.im/sigp/lighthouse) +if you get stuck. + + +## FAQs + +### I don't think I have anything to add + +There's lots to be done, with all sorts of tasks. You can do anything +from correcting typos through to writing core consensus code. If you reach out, +we'll include you.
+ +### I'm not sure my Rust is good enough + +We're open to developers of all levels. If you create a PR and your code +doesn't meet our standards, we'll help you fix it and we'll share the reasoning +with you. Contributing to open-source is a great way to learn. + +### I'm not sure I know enough about Ethereum 2.0 + +No problem, there are plenty of tasks that don't require extensive Ethereum +knowledge. You can learn about Ethereum as you go. + +### I'm afraid of making a mistake and looking silly + +Don't be. We're all about personal development and constructive feedback. If you +make a mistake and learn from it, everyone wins. + +### I don't like the way you do things + +Please make an issue and explain why. We're open to constructive criticism and +will happily change our ways. diff --git a/Cargo.toml b/Cargo.toml index d2c23b2a77..da5d1b9050 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -33,9 +33,11 @@ name = "lighthouse" [workspace] members = [ "beacon_chain/types", + "beacon_chain/transition", "beacon_chain/utils/bls", "beacon_chain/utils/boolean-bitfield", "beacon_chain/utils/hashing", + "beacon_chain/utils/honey-badger-split", "beacon_chain/utils/shuffling", "beacon_chain/utils/ssz", "beacon_chain/utils/ssz_helpers", diff --git a/README.md b/README.md index 01ce24b590..7417005763 100644 --- a/README.md +++ b/README.md @@ -2,8 +2,8 @@ [![Build Status](https://travis-ci.org/sigp/lighthouse.svg?branch=master)](https://travis-ci.org/sigp/lighthouse) [![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/sigp/lighthouse?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge) -A work-in-progress, open-source implementation of the Ethereum 2.0 Beacon Chain, maintained -by Sigma Prime. +A work-in-progress, open-source implementation of the Ethereum 2.0 Beacon +Chain, maintained by Sigma Prime.
## Introduction @@ -14,180 +14,184 @@ This readme is split into two major sections: - [What is Ethereum 2.0](#what-is-ethereum-20): an introduction to Ethereum 2.0. If you'd like some background on Sigma Prime, please see the [Lighthouse Update -\#00](https://lighthouse.sigmaprime.io/update-00.html) blog post or our -[website](https://sigmaprime.io). +\#00](https://lighthouse.sigmaprime.io/update-00.html) blog post or the +[company website](https://sigmaprime.io). ## Lighthouse Client -Lighthouse is an open-source Ethereum 2.0 client, in development. Designed as -an Ethereum 2.0-only client, Lighthouse will not re-implement the existing -proof-of-work protocol. Maintaining a forward-focus on Ethereum 2.0 ensures -that Lighthouse will avoid reproducing the high-quality work already undertaken -by existing clients. For present-Ethereum functionality, Lighthouse will -connect to existing clients like +Lighthouse is an open-source Ethereum 2.0 client that is currently under +development. Designed as an Ethereum 2.0-only client, Lighthouse will not +re-implement the existing proof-of-work protocol. Maintaining a forward-focus +on Ethereum 2.0 ensures that Lighthouse avoids reproducing the high-quality +work already undertaken by existing projects. As such, Lighthouse will connect +to existing clients, such as [Geth](https://github.com/ethereum/go-ethereum) or -[Parity-Ethereum](https://github.com/paritytech/parity-ethereum) via RPC. +[Parity-Ethereum](https://github.com/paritytech/parity-ethereum), via RPC to enable +present-Ethereum functionality. ### Goals -We aim to contribute to the research and development of a secure, efficient and -decentralised Ethereum protocol through the development of an open-source -Ethereum 2.0 client. +The purpose of this project is to further research and development towards a +secure, efficient, and decentralized Ethereum protocol, facilitated by a new +open-source Ethereum 2.0 client. 
-In addition to building an implementation, we seek to help maintain and improve -the protocol wherever possible. +In addition to implementing a new client, the project seeks to maintain and +improve the Ethereum protocol wherever possible. ### Components The following list describes some of the components actively under development by the team: -- **BLS cryptography**: we presently use the [Apache +- **BLS cryptography**: Lighthouse presently uses the [Apache Milagro](https://milagro.apache.org/) cryptography library to create and -verify BLS aggregate signatures. BLS signatures are core to Eth 2.0 as they -allow the signatures of many validators to be compressed into a constant 96 -bytes and verified efficiently.. We're presently maintaining our own [BLS -aggregates library](https://github.com/sigp/signature-schemes), gratefully -forked from @lovesh. -- **DoS-resistant block pre-processing**: processing blocks in proof-of-stake is more resource intensive than proof-of-work. As such, clients need to -ensure that bad blocks can be rejected as efficiently as possible. We can -presently process a block with 10 million ETH staked in 0.006 seconds and -reject invalid blocks even quicker. See the -[issue](https://github.com/ethereum/beacon_chain/issues/103) on -[ethereum/beacon_chain](https://github.com/ethereum/beacon_chain) + verify BLS aggregate signatures. BLS signatures are core to Eth 2.0 as they + allow the signatures of many validators to be compressed into a constant 96 + bytes and efficiently verified. The Lighthouse project is presently + maintaining its own [BLS aggregates + library](https://github.com/sigp/signature-schemes), gratefully forked from + [@lovesh](https://github.com/lovesh). +- **DoS-resistant block pre-processing**: Processing blocks in proof-of-stake is more resource intensive than proof-of-work. As such, clients need to + ensure that bad blocks can be rejected as efficiently as possible.
At + present, blocks having 10 million ETH staked can be processed in 0.006 + seconds, and invalid blocks are rejected even more quickly. See [issue + #103](https://github.com/ethereum/beacon_chain/issues/103) on + [ethereum/beacon_chain](https://github.com/ethereum/beacon_chain). - **P2P networking**: Eth 2.0 will likely use the [libp2p framework](https://libp2p.io/). Lighthouse aims to work alongside -[Parity](https://www.parity.io/) to get -[libp2p-rust](https://github.com/libp2p/rust-libp2p) fit-for-purpose. -- **Validator duties** : the project involves the development of "validator" - services for users who wish to stake ETH. To fulfil their duties, validators -require a consistent view of the chain and the ability to vote upon both shard -and beacon chain blocks.. -- **New serialization formats**: lighthouse is working alongside EF researchers - to develop "simpleserialize" a purpose-built serialization format for sending -information across the network. Check out our [SSZ +[Parity](https://www.parity.io/) to ensure +[libp2p-rust](https://github.com/libp2p/rust-libp2p) is fit-for-purpose. +- **Validator duties**: The project involves development of "validator + services" for users who wish to stake ETH. To fulfill their duties, + validators require a consistent view of the chain and the ability to vote + upon blocks from both shard and beacon chains. +- **New serialization formats**: Lighthouse is working alongside researchers + from the Ethereum Foundation to develop *simpleserialize* (SSZ), a + purpose-built serialization format for sending information across a network. + Check out the [SSZ implementation](https://github.com/sigp/lighthouse/tree/master/beacon_chain/utils/ssz) -and our +and this [research](https://github.com/sigp/serialization_sandbox/blob/report/report/serialization_report.md) -on serialization formats. +on serialization formats for more information.
+- **Casper FFG fork-choice**: The [Casper FFG](https://arxiv.org/abs/1710.09437) fork-choice rules allow the chain to select a canonical chain in the case of a fork. -- **Efficient state transition logic**: "state transition" logic governs - updates to the validator set as validators log in/out, penalises/rewards +- **Efficient state transition logic**: State transition logic governs + updates to the validator set as validators log in/out, penalizes/rewards validators, rotates validators across shards, and implements other core tasks. -- **Fuzzing and testing environments**: we are preparing to implement lab -environments with CI work-flows to provide automated security analysis.. +- **Fuzzing and testing environments**: Implementation of lab environments with + continuous integration (CI) workflows, providing automated security analysis. -In addition to these components we're also working on database schemas, RPC +In addition to these components we are also working on database schemas, RPC frameworks, specification development, database optimizations (e.g., -bloom-filters) and tons of other interesting stuff (at least we think so). +bloom-filters), and tons of other interesting stuff (at least we think so). ### Contributing **Lighthouse welcomes contributors with open-arms.** -Layer-1 infrastructure is a critical component of the ecosystem and relies -heavily on community contribution. Building Ethereum 2.0 is a huge task and we -refuse to conduct an inappropriate ICO or charge licensing fees. Instead, we -fund development through grants and support from Sigma Prime. +Layer-1 infrastructure is a critical component for the ecosystem and relies +heavily on contributions from the community. Building Ethereum 2.0 is a huge +task and we refuse to conduct an inappropriate ICO or charge licensing fees. +Instead, we fund development through grants and support from Sigma Prime. 
If you would like to learn more about Ethereum 2.0 and/or -[Rust](https://www.rust-lang.org/), we would be more than happy to on-board you -and assign you to some tasks. We aim to be as accepting and understanding as +[Rust](https://www.rust-lang.org/), we are more than happy to on-board you +and assign you some tasks. We aim to be as accepting and understanding as possible; we are more than happy to up-skill contributors in exchange for their -help on the project. +assistance with the project. -Alternatively, if you an ETH/Rust veteran we'd love to have your input. We're -always looking for the best way to implement things and will consider any -respectful criticism. +Alternatively, if you are an ETH/Rust veteran, we'd love your input. We're +always looking for the best way to implement things and welcome all +respectful criticisms. If you'd like to contribute, try having a look through the [open issues](https://github.com/sigp/lighthouse/issues) (tip: look for the [good first issue](https://github.com/sigp/lighthouse/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) -tag) and ping us on the [gitter](https://gitter.im/sigp/lighthouse). We need +tag) and ping us on the [gitter](https://gitter.im/sigp/lighthouse) channel. We need your support! ### Running -**NOTE: the cryptography libraries used in this implementation are -experimental and as such all cryptography should be assumed to be insecure.** +**NOTE: The cryptography libraries used in this implementation are +experimental. As such all cryptography is assumed to be insecure.** -The code-base is still under-development and does not provide any user-facing -functionality. For developers and researchers, there are tests and benchmarks -which could be of interest. +This code-base is still very much under-development and does not provide any +user-facing functionality. For developers and researchers, there are several +tests and benchmarks which may be of interest. 
-To run tests, use +To run tests, use: ``` $ cargo test --all ``` -To run benchmarks, use +To run benchmarks, use: ``` $ cargo bench --all ``` -Lighthouse presently runs on Rust `stable`, however, benchmarks require the +Lighthouse presently runs on Rust `stable`, however, benchmarks currently require the `nightly` version. ### Engineering Ethos -Lighthouse aims to produce many small, easily-tested components, each separated +Lighthouse aims to produce many small, easily-tested components, each separated into individual crates wherever possible. Generally, tests can be kept in the same file, as is typical in Rust. -Integration tests should be placed in the `tests` directory in the crates root. -Particularity large (line-count) tests should be separated into another file. +Integration tests should be placed in the `tests` directory in the crate's +root. Particularly large (line-count) tests should be placed into a separate +file. -A function is not complete until it is tested. We produce tests to protect -against regression (accidentally breaking things) and to help those who read -our code to understand how the function should (or shouldn't) be used. +A function is not considered complete until a test exists for it. We produce +tests to protect against regression (accidentally breaking things) and to +provide examples that help readers of the code base understand how functions +should (or should not) be used. -Each PR is to be reviewed by at-least one "core developer" (i.e., someone with -write-access to the repository). This helps to detect bugs, improve consistency -and relieves any one individual of the responsibility of an error. +Each pull request is to be reviewed by at least one "core developer" (i.e., +someone with write-access to the repository). This helps to ensure bugs are +detected, consistency is maintained, and responsibility of errors is dispersed. -Discussion should be respectful and intellectual.
Have fun, make jokes but -respect other people's limits. +Discussion must be respectful and intellectual. Have fun and make jokes, but +always respect the limits of other people. ### Directory Structure Here we provide an overview of the directory structure: -- `\beacon_chain`: contains logic derived directly from the specification. +- `/beacon_chain`: contains logic derived directly from the specification. E.g., shuffling algorithms, state transition logic and structs, block validation, BLS crypto, etc. -- `\lighthouse`: contains logic specific to this client implementation. E.g., +- `/lighthouse`: contains logic specific to this client implementation. E.g., CLI parsing, RPC end-points, databases, etc. -- `\network-libp2p`: contains a proof-of-concept libp2p implementation. Will be - replaced once research around p2p has been finalized. ## Contact -The best place for discussion is the [sigp/lighthouse](https://gitter.im/sigp/lighthouse) gitter. +The best place for discussion is the [sigp/lighthouse gitter](https://gitter.im/sigp/lighthouse). Ping @paulhauner or @AgeManning to get the quickest response. # What is Ethereum 2.0 -Ethereum 2.0 refers to a new blockchain currently under development -by the Ethereum Foundation and the Ethereum community. The Ethereum 2.0 blockchain -consists of 1,025 proof-of-stake blockchains; the "beacon chain" and 1,024 -"shard chains". +Ethereum 2.0 refers to a new blockchain system currently under development by +the Ethereum Foundation and the Ethereum community. The Ethereum 2.0 blockchain +consists of 1,025 proof-of-stake blockchains. This includes the "beacon chain" +and 1,024 "shard chains". ## Beacon Chain -The Beacon Chain differs from existing blockchains such as Bitcoin and -Ethereum, in that it doesn't process "transactions", per say. Instead, it -maintains a set of bonded (staked) validators and co-ordinates these to provide -services to a static set of "sub-blockchains" (shards). 
These shards process -normal transactions, such as "5 ETH from A to B", in parallel whilst deferring -consensus to the Beacon Chain. +The concept of a beacon chain differs from existing blockchains, such as +Bitcoin and Ethereum, in that it doesn't process transactions per se. Instead, +it maintains a set of bonded (staked) validators and coordinates these to +provide services to a static set of *sub-blockchains* (i.e. shards). Each of +these shard blockchains processes normal transactions (e.g. "Transfer 5 ETH +from A to B") in parallel whilst deferring consensus mechanisms to the beacon +chain. Major services provided by the beacon chain to its shards include the following: @@ -195,53 +199,54 @@ Major services provided by the beacon chain to its shards include the following: scheme](https://ethresear.ch/t/minimal-vdf-randomness-beacon/3566). - Validator management, including: - Inducting and ejecting validators. - - Delegating randomly-shuffled subsets of validators to validate shards. - - Penalising and rewarding validators. + - Assigning randomly-shuffled subsets of validators to particular shards. + - Penalizing and rewarding validators. - Proof-of-stake consensus for shard chain blocks. ## Shard Chains -Shards can be thought of like CPU cores - they're a lane where transactions can +Shards are analogous to CPU cores - they're a resource where transactions can execute in series (one-after-another). Presently, Ethereum is single-core and -can only _fully_ process one transaction at a time. Sharding allows multiple -transactions to happen in parallel, greatly increasing the per-second +can only _fully_ process one transaction at a time. Sharding allows processing +of multiple transactions simultaneously, greatly increasing the per-second transaction capacity of Ethereum. -Each shard uses proof-of-stake and shares its validators (stakers) with the other -shards as the beacon chain rotates validators pseudo-randomly across shards. 
-Shards will likely be the basis of very interesting layer-2 transaction -processing schemes, however, we won't get into that here. +Each shard uses a proof-of-stake consensus mechanism and shares its validators +(stakers) with other shards. The beacon chain rotates validators +pseudo-randomly between different shards. Shards will likely be the basis of +layer-2 transaction processing schemes, however, that is not in scope of this +discussion. ## The Proof-of-Work Chain -The proof-of-work chain will hold a contract that allows accounts to deposit 32 -ETH, a BLS public key and some [other -parameters](https://github.com/ethereum/eth2.0-specs/blob/master/specs/casper_sharding_v2.1.md#pow-chain-changes) -to allow them to become Beacon Chain validators. Each Beacon Chain will -reference a PoW block hash allowing PoW clients to use the Beacon Chain as a +The present-Ethereum proof-of-work (PoW) chain will host a smart contract that +enables accounts to deposit 32 ETH, a BLS public key, and some [other +parameters](https://github.com/ethereum/eth2.0-specs/blob/master/specs/casper_sharding_v2.1.md#pow-chain-changes), +allowing them to become beacon chain validators. Each beacon chain will +reference a PoW block hash allowing PoW clients to use the beacon chain as a source of [Casper FFG finality](https://arxiv.org/abs/1710.09437), if desired. -It is a requirement that ETH can move freely between shard chains and between -Eth 2.0 and present-Ethereum. The exact mechanics of these transfers are still -a topic of research and their details are yet to be confirmed. +It is a requirement that ETH can move freely between shard chains, as well as between +Eth 2.0 and present-Ethereum blockchains. The exact mechanics of these transfers remain +an active topic of research and their details are yet to be confirmed. ## Ethereum 2.0 Progress -Ethereum 2.0 is not fully specified and there's no working implementation. 
Some -teams have demos available which indicate progress, but not a complete product. -We look forward to providing user functionality once we are ready to provide a -minimum-viable user experience. +Ethereum 2.0 is not fully specified and a working implementation does not yet +exist. Some teams have demos available which indicate progress, but do not +constitute a complete product. We look forward to providing user functionality +once we are ready to provide a minimum-viable user experience. -The work-in-progress specification lives +The work-in-progress Eth 2.0 specification lives [here](https://github.com/ethereum/eth2.0-specs/blob/master/specs/casper_sharding_v2.1.md) in the [ethereum/eth2.0-specs](https://github.com/ethereum/eth2.0-specs) repository. The spec is still in a draft phase, however there are several teams -already implementing it whilst the Ethereum Foundation research team fill in -the gaps. There is active discussion about the spec in the +basing their Eth 2.0 implementations upon it while the Ethereum Foundation research +team continue to fill in the gaps. There is active discussion about the specification in the [ethereum/sharding](https://gitter.im/ethereum/sharding) gitter channel. A proof-of-concept implementation in Python is available at [ethereum/beacon_chain](https://github.com/ethereum/beacon_chain). -Presently, the spec almost exclusively defines the Beacon Chain as it -is the focus of present development efforts. Progress on shard chain +Presently, the specification focuses almost exclusively on the beacon chain, +as it is the focus of current development efforts. Progress on shard chain specification will soon follow. 
diff --git a/beacon_chain/transition/Cargo.toml b/beacon_chain/transition/Cargo.toml new file mode 100644 index 0000000000..c17d6994fd --- /dev/null +++ b/beacon_chain/transition/Cargo.toml @@ -0,0 +1,9 @@ +[package] +name = "transition" +version = "0.1.0" +authors = ["Age Manning "] + +[dependencies] +honey-badger-split = { path = "../utils/honey-badger-split" } +types = { path = "../types" } +shuffling = { path = "../utils/shuffling" } diff --git a/beacon_chain/transition/src/delegation/mod.rs b/beacon_chain/transition/src/delegation/mod.rs new file mode 100644 index 0000000000..66f3304f3a --- /dev/null +++ b/beacon_chain/transition/src/delegation/mod.rs @@ -0,0 +1,6 @@ +use super::honey_badger_split; +use super::types; +use super::TransitionError; +use super::shuffling::shuffle; + +pub mod validator; diff --git a/beacon_chain/transition/src/delegation/validator.rs b/beacon_chain/transition/src/delegation/validator.rs new file mode 100644 index 0000000000..4c33d00817 --- /dev/null +++ b/beacon_chain/transition/src/delegation/validator.rs @@ -0,0 +1,263 @@ +use super::honey_badger_split::SplitExt; +use super::types::{ShardAndCommittee, ValidatorRecord, ChainConfig}; +use super::TransitionError; +use super::shuffle; +use std::cmp::min; + +type DelegatedCycle = Vec<Vec<ShardAndCommittee>>; + +/// Produce a vector of validator indices for validators whose start and end +/// dynasties bracket the supplied `dynasty`. +fn active_validator_indicies( + dynasty: u64, + validators: &[ValidatorRecord]) + -> Vec<usize> +{ + validators.iter() + .enumerate() + .filter_map(|(i, validator)| { + // Active when the validator has started (start_dynasty <= dynasty) + // and has not yet ended (end_dynasty > dynasty). + if (validator.start_dynasty <= dynasty) & + (validator.end_dynasty > dynasty) + { + Some(i) + } else { + None + } + }) + .collect() } + +/// Delegates active validators into slots for a given cycle, given a random seed. +/// Returns a vector of ShardAndCommittee vectors representing the shards and committees for +/// each slot.
+/// References get_new_shuffling (ethereum 2.1 specification) +pub fn delegate_validators( + seed: &[u8], + validators: &[ValidatorRecord], + dynasty: u64, + crosslinking_shard_start: u16, + config: &ChainConfig) + -> Result<DelegatedCycle, TransitionError> +{ + let shuffled_validator_indices = { + let validator_indices = active_validator_indicies(dynasty, validators); + match shuffle(seed, validator_indices) { + Ok(shuffled) => shuffled, + _ => return Err(TransitionError::InvalidInput( + String::from("Shuffle list length exceeded."))) + } + }; + let shard_indices: Vec<usize> = (0_usize..config.shard_count as usize).into_iter().collect(); + let crosslinking_shard_start = crosslinking_shard_start as usize; + let cycle_length = config.cycle_length as usize; + let min_committee_size = config.min_committee_size as usize; + generate_cycle( + &shuffled_validator_indices, + &shard_indices, + crosslinking_shard_start, + cycle_length, + min_committee_size) } + +/// Given the validator list, delegates the validators into slots and committees for a given cycle.
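Stepping outside the patch for a moment: the committee arithmetic that `generate_cycle` applies when the validator set is small can be sketched as a standalone snippet. Everything below is illustrative rather than part of this diff — `split_evenly` is a hypothetical stand-in for the `honey-badger-split` utility, and the toy parameters mirror the `test_generate_cycle` unit test further down.

```rust
// Illustrative sketch only; not part of the transition crate.

// Hypothetical stand-in for `honey_badger_split`: divide a slice into `n`
// contiguous chunks whose lengths differ by at most one.
fn split_evenly(items: &[usize], n: usize) -> Vec<Vec<usize>> {
    let len = items.len();
    (0..n)
        .map(|i| items[(len * i) / n..(len * (i + 1)) / n].to_vec())
        .collect()
}

fn main() {
    // Toy parameters: 100 validators, a 20-slot cycle, committees of >= 10.
    let validators: Vec<usize> = (0..100).collect();
    let cycle_length: usize = 20;
    let min_committee_size: usize = 10;

    // 100 < 20 * 10, so the small-validator-set branch runs: one committee
    // per slot, doubling `slots_per_committee` until the implied committee
    // size reaches the minimum (or every slot shares one committee).
    let mut slots_per_committee: usize = 1;
    while validators.len() * slots_per_committee < cycle_length * min_committee_size
        && slots_per_committee < cycle_length
    {
        slots_per_committee *= 2;
    }
    assert_eq!(slots_per_committee, 2);

    // Each slot then receives one even chunk of the (shuffled) validator list.
    let slots = split_evenly(&validators, cycle_length);
    assert_eq!(slots.len(), 20);
    assert!(slots.iter().all(|slot| slot.len() == 5));
}
```

With these numbers each committee of five validators serves two consecutive slots, which matches the two-slots-per-shard pattern that `test_generate_cycle` asserts.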
+fn generate_cycle( + validator_indices: &[usize], + shard_indices: &[usize], + crosslinking_shard_start: usize, + cycle_length: usize, + min_committee_size: usize) + -> Result<DelegatedCycle, TransitionError> +{ + let validator_count = validator_indices.len(); + let shard_count = shard_indices.len(); + + if shard_count / cycle_length == 0 { + return Err(TransitionError::InvalidInput(String::from( + "Number of shards needs to be greater than cycle length"))); + } + + let (committees_per_slot, slots_per_committee) = { + if validator_count >= cycle_length * min_committee_size { + let committees_per_slot = min(validator_count / cycle_length / + (min_committee_size * 2) + 1, shard_count / + cycle_length); + let slots_per_committee = 1; + (committees_per_slot, slots_per_committee) + } else { + let committees_per_slot = 1; + let mut slots_per_committee = 1; + while (validator_count * slots_per_committee < cycle_length * min_committee_size) & + (slots_per_committee < cycle_length) { + slots_per_committee *= 2; + } + (committees_per_slot, slots_per_committee) + } + }; + + let cycle = validator_indices.honey_badger_split(cycle_length) + .enumerate() + .map(|(i, slot_indices)| { + let shard_id_start = crosslinking_shard_start + i * committees_per_slot / slots_per_committee; + slot_indices.honey_badger_split(committees_per_slot) + .enumerate() + .map(|(j, shard_indices)| { + ShardAndCommittee{ + shard_id: ((shard_id_start + j) % shard_count) as u16, + committee: shard_indices.to_vec(), + } + }) + .collect() + }) + .collect(); + Ok(cycle) } + +#[cfg(test)] +mod tests { + use super::*; + + fn generate_cycle_helper( + validator_count: &usize, + shard_count: &usize, + crosslinking_shard_start: usize, + cycle_length: usize, + min_committee_size: usize) + -> (Vec<usize>, Vec<usize>, Result<DelegatedCycle, TransitionError>) + { + let validator_indices: Vec<usize> = (0_usize..*validator_count).into_iter().collect(); + let shard_indices: Vec<usize> = (0_usize..*shard_count).into_iter().collect(); + let result = generate_cycle( + &validator_indices, + &shard_indices, +
crosslinking_shard_start, + cycle_length, + min_committee_size); + (validator_indices, shard_indices, result) + } + + #[allow(dead_code)] + fn print_cycle(cycle: &DelegatedCycle) { + cycle.iter() + .enumerate() + .for_each(|(i, slot)| { + println!("slot {:?}", &i); + slot.iter() + .enumerate() + .for_each(|(i, sac)| { + println!("#{:?}\tshard_id={}\tcommittee.len()={}", + &i, &sac.shard_id, &sac.committee.len()) + }) + }); + } + + fn flatten_validators(cycle: &DelegatedCycle) + -> Vec<usize> + { + let mut flattened = vec![]; + for slot in cycle.iter() { + for sac in slot.iter() { + for validator in sac.committee.iter() { + flattened.push(*validator); + } + } + } + flattened + } + + fn flatten_and_dedup_shards(cycle: &DelegatedCycle) + -> Vec<usize> + { + let mut flattened = vec![]; + for slot in cycle.iter() { + for sac in slot.iter() { + flattened.push(sac.shard_id as usize); + } + } + flattened.dedup(); + flattened + } + + fn flatten_shards_in_slots(cycle: &DelegatedCycle) + -> Vec<Vec<usize>> + { + let mut shards_in_slots: Vec<Vec<usize>> = vec![]; + for slot in cycle.iter() { + let mut shards: Vec<usize> = vec![]; + for sac in slot.iter() { + shards.push(sac.shard_id as usize); + } + shards_in_slots.push(shards); + } + shards_in_slots + } + + // TODO: Improve these tests to check committee lengths + #[test] + fn test_generate_cycle() { + let validator_count: usize = 100; + let shard_count: usize = 20; + let crosslinking_shard_start: usize = 0; + let cycle_length: usize = 20; + let min_committee_size: usize = 10; + let (validators, shards, result) = generate_cycle_helper( + &validator_count, + &shard_count, + crosslinking_shard_start, + cycle_length, + min_committee_size); + let cycle = result.unwrap(); + + let assigned_validators = flatten_validators(&cycle); + let assigned_shards = flatten_and_dedup_shards(&cycle); + let shards_in_slots = flatten_shards_in_slots(&cycle); + let expected_shards = shards.get(0..10).unwrap(); + assert_eq!(assigned_validators, validators, "Validator assignment incorrect");
+        assert_eq!(assigned_shards, expected_shards, "Shard assignment incorrect");
+
+        let expected_shards_in_slots: Vec<Vec<usize>> = vec![
+            vec![0], vec![0], // Each line is 2 slots..
+            vec![1], vec![1],
+            vec![2], vec![2],
+            vec![3], vec![3],
+            vec![4], vec![4],
+            vec![5], vec![5],
+            vec![6], vec![6],
+            vec![7], vec![7],
+            vec![8], vec![8],
+            vec![9], vec![9],
+        ];
+        // assert!(compare_shards_in_slots(&cycle, &expected_shards_in_slots));
+        assert_eq!(expected_shards_in_slots, shards_in_slots, "Shard assignment incorrect.")
+    }
+
+    #[test]
+    // Check that the number of committees per slot is upper-bounded by the shard count
+    fn test_generate_cycle_committees_bounded() {
+        let validator_count: usize = 523;
+        let shard_count: usize = 31;
+        let crosslinking_shard_start: usize = 0;
+        let cycle_length: usize = 11;
+        let min_committee_size: usize = 5;
+        let (validators, shards, result) = generate_cycle_helper(
+            &validator_count,
+            &shard_count,
+            crosslinking_shard_start,
+            cycle_length,
+            min_committee_size);
+        let cycle = result.unwrap();
+        let assigned_validators = flatten_validators(&cycle);
+        let assigned_shards = flatten_and_dedup_shards(&cycle);
+        let shards_in_slots = flatten_shards_in_slots(&cycle);
+        let expected_shards = shards.get(0..22).unwrap();
+        let expected_shards_in_slots: Vec<Vec<usize>> =
+            (0_usize..11_usize).map(|x| vec![2 * x, 2 * x + 1]).collect();
+        assert_eq!(assigned_validators, validators, "Validator assignment incorrect");
+        assert_eq!(assigned_shards, expected_shards, "Shard assignment incorrect");
+        // assert!(compare_shards_in_slots(&cycle, &expected_shards_in_slots));
+        assert_eq!(expected_shards_in_slots, shards_in_slots, "Shard assignment incorrect.")
+    }
+}
diff --git a/beacon_chain/transition/src/lib.rs b/beacon_chain/transition/src/lib.rs
new file mode 100644
index 0000000000..ccac525291
--- /dev/null
+++ b/beacon_chain/transition/src/lib.rs
@@ -0,0 +1,10 @@
+extern crate honey_badger_split;
+extern crate types;
+extern crate shuffling;
+
+pub mod delegation;
+
+#[derive(Debug)]
+pub enum TransitionError {
+    InvalidInput(String),
+}
diff --git a/beacon_chain/types/src/chain_config.rs b/beacon_chain/types/src/chain_config.rs
index 750081aad7..4cdc91a6da 100644
--- a/beacon_chain/types/src/chain_config.rs
+++ b/beacon_chain/types/src/chain_config.rs
@@ -20,6 +20,20 @@ impl ChainConfig {
         }
     }
 
+    pub fn validate(&self) -> bool {
+        // criteria that ensure the config is valid
+
+        // shard_count / cycle_length > 0 otherwise validator delegation
+        // will fail.
+        if self.shard_count / self.cycle_length as u16 == 0 {
+            return false;
+        }
+
+        true
+    }
+
+
     #[cfg(test)]
     pub fn super_fast_tests() -> Self {
         Self {
diff --git a/beacon_chain/types/src/common/delegation/block_hash.rs b/beacon_chain/types/src/common/delegation/block_hash.rs
deleted file mode 100644
index 3d0939d29c..0000000000
--- a/beacon_chain/types/src/common/delegation/block_hash.rs
+++ /dev/null
@@ -1,62 +0,0 @@
-use super::utils::errors::ParameterError;
-use super::utils::types::Hash256;
-
-/*
- * Work-in-progress function: not ready for review.
- */
-
-pub fn get_block_hash(
-    active_state_recent_block_hashes: &[Hash256],
-    current_block_slot: u64,
-    slot: u64,
-    cycle_length: u64, // convert from standard u8
-) -> Result<Hash256, ParameterError> {
-    // active_state must have at 2*cycle_length hashes
-    assert_error!(
-        active_state_recent_block_hashes.len() as u64 == cycle_length * 2,
-        ParameterError::InvalidInput(String::from(
-            "active state has incorrect number of block hashes"
-        ))
-    );
-
-    let state_start_slot = (current_block_slot)
-        .checked_sub(cycle_length * 2)
-        .unwrap_or(0);
-
-    assert_error!(
-        (state_start_slot <= slot) && (slot < current_block_slot),
-        ParameterError::InvalidInput(String::from("incorrect slot number"))
-    );
-
-    let index = 2 * cycle_length + slot - current_block_slot; // should always be positive
-    Ok(active_state_recent_block_hashes[index as usize])
-}
-
-#[cfg(test)]
-mod tests {
-    use super::*;
-
-    #[test]
-    fn test_get_block_hash() {
-        let block_slot: u64 = 10;
-        let slot: u64 = 3;
-        let cycle_length: u64 = 8;
-
-        let mut block_hashes: Vec<Hash256> = Vec::new();
-        for _i in 0..2 * cycle_length {
-            block_hashes.push(Hash256::random());
-        }
-
-        let result = get_block_hash(
-            &block_hashes,
-            block_slot,
-            slot,
-            cycle_length)
-            .unwrap();
-
-        assert_eq!(
-            result,
-            block_hashes[(2 * cycle_length + slot - block_slot) as usize]
-        );
-    }
-}
diff --git a/beacon_chain/types/src/common/delegation/mod.rs b/beacon_chain/types/src/common/delegation/mod.rs
deleted file mode 100644
index da9746f635..0000000000
--- a/beacon_chain/types/src/common/delegation/mod.rs
+++ /dev/null
@@ -1,3 +0,0 @@
-mod block_hash;
-
-use super::utils;
diff --git a/beacon_chain/utils/honey-badger-split/Cargo.toml b/beacon_chain/utils/honey-badger-split/Cargo.toml
new file mode 100644
index 0000000000..e9721efd44
--- /dev/null
+++ b/beacon_chain/utils/honey-badger-split/Cargo.toml
@@ -0,0 +1,6 @@
+[package]
+name = "honey-badger-split"
+version = "0.1.0"
+authors = ["Paul Hauner "]
+
+[dependencies]
diff --git a/beacon_chain/utils/honey-badger-split/src/lib.rs b/beacon_chain/utils/honey-badger-split/src/lib.rs
new file mode 100644
index 0000000000..890391036c
--- /dev/null
+++ b/beacon_chain/utils/honey-badger-split/src/lib.rs
@@ -0,0 +1,85 @@
+/// A function for splitting a list into N pieces.
+///
+/// We have titled it the "honey badger split" because of its robustness. It don't care.
+
+
+/// Iterator for the honey_badger_split function
+pub struct Split<'a, T: 'a> {
+    n: usize,
+    current_pos: usize,
+    list: &'a [T],
+    list_length: usize
+}
+
+impl<'a, T> Iterator for Split<'a, T> {
+    type Item = &'a [T];
+
+    fn next(&mut self) -> Option<Self::Item> {
+        self.current_pos += 1;
+        if self.current_pos <= self.n {
+            match self.list.get(self.list_length * (self.current_pos - 1) / self.n..self.list_length * self.current_pos / self.n) {
+                Some(v) => Some(v),
+                None => unreachable!()
+            }
+        }
+        else {
+            None
+        }
+    }
+}
+
+/// Splits a slice into n pieces. All positive n values are applicable,
+/// hence the honey_badger prefix.
+///
+/// Returns an iterator over the original list.
+pub trait SplitExt<T> {
+    fn honey_badger_split(&self, n: usize) -> Split<T>;
+}
+
+impl<T> SplitExt<T> for [T] {
+
+    fn honey_badger_split(&self, n: usize) -> Split<T> {
+        Split {
+            n,
+            current_pos: 0,
+            list: &self,
+            list_length: self.len(),
+        }
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn test_honey_badger_split() {
+        /*
+         * These test cases are generated from the eth2.0 spec `split()`
+         * function at commit cbd254a.
+         */
+        let input: Vec<usize> = vec![0, 1, 2, 3];
+        let output: Vec<&[usize]> = input.honey_badger_split(2).collect();
+        assert_eq!(output, vec![&[0, 1], &[2, 3]]);
+
+        let input: Vec<usize> = vec![0, 1, 2, 3];
+        let output: Vec<&[usize]> = input.honey_badger_split(6).collect();
+        let expected: Vec<&[usize]> = vec![&[], &[0], &[1], &[], &[2], &[3]];
+        assert_eq!(output, expected);
+
+        let input: Vec<usize> = vec![0, 1, 2, 3];
+        let output: Vec<&[usize]> = input.honey_badger_split(10).collect();
+        let expected: Vec<&[usize]> = vec![&[], &[], &[0], &[], &[1], &[], &[], &[2], &[], &[3]];
+        assert_eq!(output, expected);
+
+        let input: Vec<usize> = vec![0];
+        let output: Vec<&[usize]> = input.honey_badger_split(5).collect();
+        let expected: Vec<&[usize]> = vec![&[], &[], &[], &[], &[0]];
+        assert_eq!(output, expected);
+
+        let input: Vec<usize> = vec![0, 1, 2];
+        let output: Vec<&[usize]> = input.honey_badger_split(2).collect();
+        let expected: Vec<&[usize]> = vec![&[0], &[1, 2]];
+        assert_eq!(output, expected);
+    }
+}
diff --git a/beacon_chain/validation/src/attestation_validation.rs b/beacon_chain/validation/src/attestation_validation.rs
index 94e3fcac92..53b45cad8e 100644
--- a/beacon_chain/validation/src/attestation_validation.rs
+++ b/beacon_chain/validation/src/attestation_validation.rs
@@ -29,6 +29,7 @@ use super::signature_verification::{
 #[derive(Debug,PartialEq)]
 pub enum AttestationValidationError {
     ParentSlotTooHigh,
+    ParentSlotTooLow,
     BlockSlotTooHigh,
     BlockSlotTooLow,
     JustifiedSlotIncorrect,
@@ -94,11 +95,11 @@ impl AttestationValidationContext
 
         /*
          * The slot of this attestation must not be more than cycle_length + 1 distance
-         * from the block that contained it.
+         * from the parent_slot of the block that contained it.
          */
-        if a.slot < self.block_slot
+        if a.slot < self.parent_block_slot
            .saturating_sub(u64::from(self.cycle_length).saturating_add(1)) {
-            return Err(AttestationValidationError::BlockSlotTooLow);
+            return Err(AttestationValidationError::ParentSlotTooLow);
         }
 
         /*
diff --git a/beacon_chain/validation/tests/attestation_validation/tests.rs b/beacon_chain/validation/tests/attestation_validation/tests.rs
index f09da33136..013750af3c 100644
--- a/beacon_chain/validation/tests/attestation_validation/tests.rs
+++ b/beacon_chain/validation/tests/attestation_validation/tests.rs
@@ -42,6 +42,15 @@ fn test_attestation_validation_invalid_parent_slot_too_high() {
     assert_eq!(result, Err(AttestationValidationError::ParentSlotTooHigh));
 }
 
+#[test]
+fn test_attestation_validation_invalid_parent_slot_too_low() {
+    let mut rig = generic_rig();
+
+    rig.attestation.slot = rig.context.parent_block_slot - u64::from(rig.context.cycle_length) - 2;
+    let result = rig.context.validate_attestation(&rig.attestation);
+    assert_eq!(result, Err(AttestationValidationError::ParentSlotTooLow));
+}
+
 #[test]
 fn test_attestation_validation_invalid_block_slot_too_high() {
     let mut rig = generic_rig();
@@ -56,7 +65,7 @@ fn test_attestation_validation_invalid_block_slot_too_high() {
 fn test_attestation_validation_invalid_block_slot_too_low() {
     let mut rig = generic_rig();
 
-    rig.attestation.slot = rig.context.block_slot - u64::from(rig.context.cycle_length) - 2;
+    rig.context.block_slot = rig.context.block_slot + u64::from(rig.context.cycle_length);
     let result = rig.context.validate_attestation(&rig.attestation);
     assert_eq!(result, Err(AttestationValidationError::BlockSlotTooLow));
 }
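For reference, the slicing rule that the `honey_badger_split` test cases above encode (piece `i` of `n` pieces covers `list[len*i/n .. len*(i+1)/n]`, matching the eth2.0 spec `split()` function) can be sketched as a tiny standalone program. The `split` function below is a hypothetical illustration of that rule only, not part of this diff; the crate itself exposes the behaviour as a lazy iterator instead of allocating vectors:

```rust
// Illustrative sketch (not from the diff): piece `i` of `n` is the
// sub-slice `list[len * i / n .. len * (i + 1) / n]`. Integer division
// makes early pieces empty when n exceeds the list length, which is
// exactly what the crate's test cases assert.
fn split(list: &[usize], n: usize) -> Vec<Vec<usize>> {
    let len = list.len();
    (0..n)
        .map(|i| list[len * i / n..len * (i + 1) / n].to_vec())
        .collect()
}

fn main() {
    // Mirrors the second test case in the diff: 4 items into 6 pieces.
    let expected: Vec<Vec<usize>> = vec![vec![], vec![0], vec![1], vec![], vec![2], vec![3]];
    assert_eq!(split(&[0, 1, 2, 3], 6), expected);
    println!("{:?}", split(&[0, 1, 2], 2)); // prints [[0], [1, 2]]
}
```

Note that no piece is ever out of bounds: `len * n / n == len`, so the final piece always ends exactly at the end of the slice, which is why the iterator's `None` arm in the diff is `unreachable!()`.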