## Overview
This rather extensive PR achieves two primary goals:
1. Uses the finalized/justified checkpoints of fork choice (FC), rather than that of the head state.
2. Refactors fork choice, block production and block processing to `async` functions.
Additionally, it achieves:
- Concurrent forkchoice updates to the EL and cache pruning after a new head is selected.
- Concurrent "block packing" (attestations, etc) and execution payload retrieval during block production.
- Concurrent per-block-processing and execution payload verification during block processing (see the sketch after this list).
- The `Arc`-ification of `SignedBeaconBlock` during block processing (it's never mutated, so why not?):
- I had to do this to deal with sending blocks into spawned tasks.
- Previously we were cloning the beacon block at least 2 times during each block processing; these clones are now either removed or turned into cheaper `Arc` clones.
- We were also `Box`-ing and un-`Box`-ing beacon blocks as they moved throughout the networking crate. This is not a big deal, but it's nice to avoid shifting things between the stack and heap.
- Avoids cloning *all the blocks* in *every chain segment* during sync.
- It also has the potential to clean up our code where we need to pass an *owned* block around so we can send it back in the case of an error (I didn't do much of this, my PR is already big enough 😅)
- The `BeaconChain::HeadSafetyStatus` struct was removed. It was an old relic from prior merge specs.
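To sketch how the concurrency and `Arc` changes fit together (everything below is a toy stand-in, not the actual Lighthouse code):
```
use std::sync::Arc;

// Stub standing in for the real (much larger) `SignedBeaconBlock`.
struct SignedBeaconBlock {
    slot: u64,
}

// Hypothetical stand-in for consensus-level block processing.
async fn per_block_processing(block: Arc<SignedBeaconBlock>) -> u64 {
    block.slot
}

// Hypothetical stand-in for execution payload verification via the EL.
async fn verify_execution_payload(block: Arc<SignedBeaconBlock>) -> bool {
    block.slot > 0
}

#[tokio::main]
async fn main() {
    let block = Arc::new(SignedBeaconBlock { slot: 42 });

    // Cloning the `Arc` only bumps a reference count, so the block can be
    // handed to both concurrent tasks without a deep copy.
    let (slot, payload_ok) = tokio::join!(
        per_block_processing(Arc::clone(&block)),
        verify_execution_payload(Arc::clone(&block)),
    );

    assert_eq!(slot, 42);
    assert!(payload_ok);
}
```
The same shape applies to block production (packing attestations while fetching the payload) and to running the post-head-selection EL update concurrently with cache pruning.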
For motivation for this change, see https://github.com/sigp/lighthouse/pull/3244#issuecomment-1160963273
## Changes to `canonical_head` and `fork_choice`
Previously, the `BeaconChain` had two separate fields:
```
canonical_head: RwLock<Snapshot>,
fork_choice: RwLock<BeaconForkChoice>
```
Now, we have grouped these values under a single struct:
```
canonical_head: CanonicalHead {
    cached_head: RwLock<Arc<Snapshot>>,
    fork_choice: RwLock<BeaconForkChoice>
}
```
Apart from ergonomics, the only *actual* change here is wrapping the canonical head snapshot in an `Arc`. This means that we no longer need to hold the `cached_head` (`canonical_head`, in old terms) lock when we want to pull some values from it. This was done to avoid deadlock risks by preventing functions from acquiring (and holding) the `cached_head` and `fork_choice` locks simultaneously.
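Here's a minimal sketch of that pattern, using `std::sync::RwLock` and a simplified `Snapshot` (the real code uses Lighthouse's own types):
```
use std::sync::{Arc, RwLock};

// Simplified stand-in for the real head snapshot.
struct Snapshot {
    head_slot: u64,
}

struct CanonicalHead {
    cached_head: RwLock<Arc<Snapshot>>,
    // fork_choice: RwLock<BeaconForkChoice> (elided)
}

impl CanonicalHead {
    /// Clone the `Arc` and release the lock immediately, so callers never
    /// hold the `cached_head` lock while also taking the `fork_choice` lock.
    fn cached_head(&self) -> Arc<Snapshot> {
        // The read guard lives only for this expression; the cheap `Arc`
        // clone is returned and the lock is released immediately.
        Arc::clone(&self.cached_head.read().unwrap())
    }
}

fn main() {
    let head = CanonicalHead {
        cached_head: RwLock::new(Arc::new(Snapshot { head_slot: 7 })),
    };

    // The read lock was already dropped inside `cached_head()`, so this
    // snapshot can be used for as long as needed without blocking writers.
    let snapshot = head.cached_head();
    assert_eq!(snapshot.head_slot, 7);
}
```
The key point is that the lock guard never escapes `cached_head()`, so callers can't accidentally hold it across a `fork_choice` lock acquisition.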
## Breaking Changes
### The `state` (root) field in the `finalized_checkpoint` SSE event
Consider the scenario where epoch `n` is just finalized, but `start_slot(n)` is skipped. There are two state roots we might include in the `finalized_checkpoint` SSE event:
1. The state root of the finalized block, which is `get_block(finalized_checkpoint.root).state_root`.
2. The state root at `start_slot(n)`, which would be the state from (1), but "skipped forward" through any skip slots.
Previously, Lighthouse would choose (2). However, we can see that when [Teku generates that event](de2b2801c8/data/beaconrestapi/src/main/java/tech/pegasys/teku/beaconrestapi/handlers/v1/events/EventSubscriptionManager.java (L171-L182)) it uses [`getStateRootFromBlockRoot`](de2b2801c8/data/provider/src/main/java/tech/pegasys/teku/api/ChainDataProvider.java (L336-L341)) which uses (1).
I have switched Lighthouse from (2) to (1). I think it's a somewhat arbitrary choice between the two, where (1) is easier to compute and is consistent with Teku.
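To sketch the difference (all types and helpers below are hypothetical stand-ins, not the real API):
```
// All types and helpers here are hypothetical stand-ins, not the real API.
type Hash256 = [u8; 32];
type Slot = u64;
type Epoch = u64;

const SLOTS_PER_EPOCH: u64 = 32;

struct Block {
    state_root: Hash256,
}

struct Checkpoint {
    epoch: Epoch,
    root: Hash256,
}

struct Chain;

impl Chain {
    fn get_block(&self, _root: &Hash256) -> Block {
        Block { state_root: [1; 32] }
    }
    // The state root after processing any skip slots up to `slot`.
    fn state_root_at_slot(&self, _slot: Slot) -> Hash256 {
        [2; 32]
    }
}

fn start_slot(epoch: Epoch) -> Slot {
    epoch * SLOTS_PER_EPOCH
}

// Option (1), now used by Lighthouse and matching Teku: the state root
// recorded in the finalized block itself.
fn event_state_root_new(chain: &Chain, finalized: &Checkpoint) -> Hash256 {
    chain.get_block(&finalized.root).state_root
}

// Option (2), the old Lighthouse behaviour: the state root at the epoch's
// start slot, i.e. option (1) advanced through the skip slots.
fn event_state_root_old(chain: &Chain, finalized: &Checkpoint) -> Hash256 {
    chain.state_root_at_slot(start_slot(finalized.epoch))
}

fn main() {
    let chain = Chain;
    let finalized = Checkpoint { epoch: 10, root: [0; 32] };
    // When `start_slot(n)` is skipped, the two roots differ.
    assert_ne!(
        event_state_root_new(&chain, &finalized),
        event_state_root_old(&chain, &finalized)
    );
}
```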
## Notes for Reviewers
I've renamed `BeaconChain::fork_choice` to `BeaconChain::recompute_head`. Doing this helped ensure I broke all previous uses of fork choice, and I also find it more descriptive. It describes an action and can't be confused with trying to get a reference to the `ForkChoice` struct.
I've changed the ordering of SSE events when a block is received. It used to be `[block, finalized, head]` and now it's `[block, head, finalized]`. It was easier this way and I don't think we were making any promises about SSE event ordering so it's not "breaking".
I've made it so fork choice will run when it's first constructed. I did this because I wanted to have a cached version of the last call to `get_head`. Ensuring `get_head` has been run *at least once* means that the cached values don't need to be wrapped in an `Option`. This was fairly simple; it just involved passing a `slot` to the constructor so it knows *when* it's being run. When loading fork choice from the store and a slot clock isn't handy, I've just used the `slot` that was saved in the `fork_choice_store`. That seems like a faithful representation of the slot when we saved it.
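Roughly, the constructor now looks like this (names and the toy head computation are illustrative, not the real `ForkChoice` API):
```
type Slot = u64;
type Hash256 = [u8; 32];

struct CachedHead {
    root: Hash256,
    slot: Slot,
}

struct ForkChoice {
    // Because the head is computed at construction, this never needs to be
    // an `Option<CachedHead>`.
    cached_head: CachedHead,
}

impl ForkChoice {
    /// The constructor takes a `slot` so it knows *when* it's being run,
    /// then runs the equivalent of `get_head` immediately.
    fn new(current_slot: Slot) -> Self {
        let root = Self::compute_head(current_slot);
        Self {
            cached_head: CachedHead {
                root,
                slot: current_slot,
            },
        }
    }

    // Toy stand-in for the real head computation.
    fn compute_head(_slot: Slot) -> Hash256 {
        [0; 32]
    }
}

fn main() {
    // When loading from disk without a slot clock, the slot persisted in
    // the store can be passed here instead.
    let fc = ForkChoice::new(12_345);
    assert_eq!(fc.cached_head.slot, 12_345);
    assert_eq!(fc.cached_head.root, [0; 32]);
}
```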
I added the `genesis_time: u64` to the `BeaconChain`. It's small, constant and nice to have around.
Since we're using fork choice for the finalized/justified checkpoints, we no longer get the `0x00..00` roots at genesis. You can see I had to remove a work-around in `ef-tests` here: b56be3bc2. I can't find any reason why this would be an issue; if anything, I think it'll be better, since the genesis alias has caught us out a few times (`0x00..00` isn't actually a real root). Edit: I did find a case where the `network` expected the `0x00..00` alias and patched it here: 3f26ac3e2.
You'll notice a lot of changes in tests. Generally, tests should be functionally equivalent. Here are the things creating the most diff-noise in tests (a sketch of the resulting test shape follows this list):
- Changing tests to be `tokio::async` tests.
- Adding `.await` to fork choice, block processing and block production functions.
- Refactor of the `canonical_head` "API" provided by the `BeaconChain`. E.g., `chain.canonical_head.cached_head()` instead of `chain.canonical_head.read()`.
- Wrapping `SignedBeaconBlock` in an `Arc`.
- In `beacon_chain/tests/block_verification`, we can't use the `lazy_static` `CHAIN_SEGMENT` variable anymore since it's generated with an async function. We just generate it in each test; it's not as efficient, but hopefully insignificant.
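For reference, a refactored test now looks roughly like this (`Harness` and its methods are stand-ins for the real test harness API):
```
use std::sync::Arc;

// All types here are hypothetical stubs used to show the *shape* of the
// refactored tests, not the real harness API.
struct SignedBeaconBlock;

struct Harness;

impl Harness {
    fn new() -> Self {
        Harness
    }
    async fn make_block(&self) -> SignedBeaconBlock {
        SignedBeaconBlock
    }
    async fn process_block(&self, _block: Arc<SignedBeaconBlock>) -> Result<(), String> {
        Ok(())
    }
    async fn recompute_head(&self) {}
}

// Tests are now async `tokio::test`s: block production and processing are
// awaited, and the block is wrapped in an `Arc` before processing.
#[tokio::test]
async fn processes_a_block() {
    let harness = Harness::new();
    let block = Arc::new(harness.make_block().await);
    harness.process_block(block).await.unwrap();
    harness.recompute_head().await;
}
```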
I had to disable `rayon` concurrent tests in the `fork_choice` tests. This is because the use of `rayon` and `block_on` was causing a panic.
Co-authored-by: Mac L <mjladson@pm.me>
282 lines · 8.2 KiB · Rust
use beacon_chain::{
    builder::Witness, eth1_chain::CachingEth1Backend, schema_change::migrate_schema,
    slot_clock::SystemTimeSlotClock,
};
use beacon_node::{get_data_dir, get_slots_per_restore_point, ClientConfig};
use clap::{App, Arg, ArgMatches};
use environment::{Environment, RuntimeContext};
use slog::{info, Logger};
use store::{
    errors::Error,
    metadata::{SchemaVersion, CURRENT_SCHEMA_VERSION},
    DBColumn, HotColdDB, KeyValueStore, LevelDB,
};
use strum::{EnumString, EnumVariantNames, VariantNames};
use types::EthSpec;

pub const CMD: &str = "database_manager";

pub fn version_cli_app<'a, 'b>() -> App<'a, 'b> {
    App::new("version")
        .visible_aliases(&["v"])
        .setting(clap::AppSettings::ColoredHelp)
        .about("Display database schema version")
}

pub fn migrate_cli_app<'a, 'b>() -> App<'a, 'b> {
    App::new("migrate")
        .setting(clap::AppSettings::ColoredHelp)
        .about("Migrate the database to a specific schema version")
        .arg(
            Arg::with_name("to")
                .long("to")
                .value_name("VERSION")
                .help("Schema version to migrate to")
                .takes_value(true)
                .required(true),
        )
}

pub fn inspect_cli_app<'a, 'b>() -> App<'a, 'b> {
    App::new("inspect")
        .setting(clap::AppSettings::ColoredHelp)
        .about("Inspect raw database values")
        .arg(
            Arg::with_name("column")
                .long("column")
                .value_name("TAG")
                .help("3-byte column ID (see `DBColumn`)")
                .takes_value(true)
                .required(true),
        )
        .arg(
            Arg::with_name("output")
                .long("output")
                .value_name("TARGET")
                .help("Select the type of output to show")
                .default_value("sizes")
                .possible_values(InspectTarget::VARIANTS),
        )
}

pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
    App::new(CMD)
        .visible_aliases(&["db"])
        .setting(clap::AppSettings::ColoredHelp)
        .about("Manage a beacon node database")
        .arg(
            Arg::with_name("slots-per-restore-point")
                .long("slots-per-restore-point")
                .value_name("SLOT_COUNT")
                .help(
                    "Specifies how often a freezer DB restore point should be stored. \
                    Cannot be changed after initialization. \
                    [default: 2048 (mainnet) or 64 (minimal)]",
                )
                .takes_value(true),
        )
        .arg(
            Arg::with_name("freezer-dir")
                .long("freezer-dir")
                .value_name("DIR")
                .help("Data directory for the freezer database.")
                .takes_value(true),
        )
        .subcommand(migrate_cli_app())
        .subcommand(version_cli_app())
        .subcommand(inspect_cli_app())
}

fn parse_client_config<E: EthSpec>(
    cli_args: &ArgMatches,
    _env: &Environment<E>,
) -> Result<ClientConfig, String> {
    let mut client_config = ClientConfig {
        data_dir: get_data_dir(cli_args),
        ..Default::default()
    };

    if let Some(freezer_dir) = clap_utils::parse_optional(cli_args, "freezer-dir")? {
        client_config.freezer_db_path = Some(freezer_dir);
    }

    let (sprp, sprp_explicit) = get_slots_per_restore_point::<E>(cli_args)?;
    client_config.store.slots_per_restore_point = sprp;
    client_config.store.slots_per_restore_point_set_explicitly = sprp_explicit;

    Ok(client_config)
}

pub fn display_db_version<E: EthSpec>(
    client_config: ClientConfig,
    runtime_context: &RuntimeContext<E>,
    log: Logger,
) -> Result<(), Error> {
    let spec = runtime_context.eth2_config.spec.clone();
    let hot_path = client_config.get_db_path();
    let cold_path = client_config.get_freezer_db_path();

    // Opening the DB runs the migration callback, which here just reports
    // the on-disk schema version without migrating anything.
    let mut version = CURRENT_SCHEMA_VERSION;
    HotColdDB::<E, LevelDB<E>, LevelDB<E>>::open(
        &hot_path,
        &cold_path,
        |_, from, _| {
            version = from;
            Ok(())
        },
        client_config.store,
        spec,
        log.clone(),
    )?;

    info!(log, "Database version: {}", version.as_u64());

    if version != CURRENT_SCHEMA_VERSION {
        info!(
            log,
            "Latest schema version: {}",
            CURRENT_SCHEMA_VERSION.as_u64(),
        );
    }

    Ok(())
}

#[derive(Debug, EnumString, EnumVariantNames)]
pub enum InspectTarget {
    #[strum(serialize = "sizes")]
    ValueSizes,
    #[strum(serialize = "total")]
    ValueTotal,
}

pub struct InspectConfig {
    column: DBColumn,
    target: InspectTarget,
}

fn parse_inspect_config(cli_args: &ArgMatches) -> Result<InspectConfig, String> {
    let column = clap_utils::parse_required(cli_args, "column")?;
    let target = clap_utils::parse_required(cli_args, "output")?;

    Ok(InspectConfig { column, target })
}

pub fn inspect_db<E: EthSpec>(
    inspect_config: InspectConfig,
    client_config: ClientConfig,
    runtime_context: &RuntimeContext<E>,
    log: Logger,
) -> Result<(), Error> {
    let spec = runtime_context.eth2_config.spec.clone();
    let hot_path = client_config.get_db_path();
    let cold_path = client_config.get_freezer_db_path();

    let db = HotColdDB::<E, LevelDB<E>, LevelDB<E>>::open(
        &hot_path,
        &cold_path,
        |_, _, _| Ok(()),
        client_config.store,
        spec,
        log,
    )?;

    let mut total = 0;

    for res in db.hot_db.iter_column(inspect_config.column) {
        let (key, value) = res?;

        match inspect_config.target {
            InspectTarget::ValueSizes => {
                println!("{:?}: {} bytes", key, value.len());
                total += value.len();
            }
            InspectTarget::ValueTotal => {
                total += value.len();
            }
        }
    }

    match inspect_config.target {
        InspectTarget::ValueSizes | InspectTarget::ValueTotal => {
            println!("Total: {} bytes", total);
        }
    }

    Ok(())
}

pub struct MigrateConfig {
    to: SchemaVersion,
}

fn parse_migrate_config(cli_args: &ArgMatches) -> Result<MigrateConfig, String> {
    let to = SchemaVersion(clap_utils::parse_required(cli_args, "to")?);

    Ok(MigrateConfig { to })
}

pub fn migrate_db<E: EthSpec>(
    migrate_config: MigrateConfig,
    client_config: ClientConfig,
    runtime_context: &RuntimeContext<E>,
    log: Logger,
) -> Result<(), Error> {
    let spec = &runtime_context.eth2_config.spec;
    let hot_path = client_config.get_db_path();
    let cold_path = client_config.get_freezer_db_path();

    // Capture the current on-disk schema version via the open callback,
    // then migrate from it to the requested version.
    let mut from = CURRENT_SCHEMA_VERSION;
    let to = migrate_config.to;
    let db = HotColdDB::<E, LevelDB<E>, LevelDB<E>>::open(
        &hot_path,
        &cold_path,
        |_, db_initial_version, _| {
            from = db_initial_version;
            Ok(())
        },
        client_config.store.clone(),
        spec.clone(),
        log.clone(),
    )?;

    info!(
        log,
        "Migrating database schema";
        "from" => from.as_u64(),
        "to" => to.as_u64(),
    );

    migrate_schema::<Witness<SystemTimeSlotClock, CachingEth1Backend<E>, _, _, _>>(
        db,
        &client_config.get_data_dir(),
        from,
        to,
        log,
        spec,
    )
}

/// Run the database manager, returning an error string if the operation did not succeed.
pub fn run<T: EthSpec>(cli_args: &ArgMatches<'_>, mut env: Environment<T>) -> Result<(), String> {
    let client_config = parse_client_config(cli_args, &env)?;
    let context = env.core_context();
    let log = context.log().clone();

    match cli_args.subcommand() {
        ("version", Some(_)) => display_db_version(client_config, &context, log),
        ("migrate", Some(cli_args)) => {
            let migrate_config = parse_migrate_config(cli_args)?;
            migrate_db(migrate_config, client_config, &context, log)
        }
        ("inspect", Some(cli_args)) => {
            let inspect_config = parse_inspect_config(cli_args)?;
            inspect_db(inspect_config, client_config, &context, log)
        }
        _ => {
            return Err("Unknown subcommand, for help `lighthouse database_manager --help`".into())
        }
    }
    .map_err(|e| format!("Fatal error: {:?}", e))
}