Update Lighthouse Book for Electra features (#7280)

* #7227
This commit is contained in:
chonghe
2025-04-17 17:31:26 +08:00
committed by GitHub
parent 410af7c5f5
commit 80fe133d2c
19 changed files with 95 additions and 127 deletions

View File

@@ -221,7 +221,7 @@ jobs:
|Non-Staking Users| <TODO>|---|
*See [Update
-Priorities](https://lighthouse-book.sigmaprime.io/installation-priorities.html)
+Priorities](https://lighthouse-book.sigmaprime.io/installation_priorities.html)
for more information about this table.*
## All Changes
@@ -230,7 +230,7 @@ jobs:
## Binaries
-[See pre-built binaries documentation.](https://lighthouse-book.sigmaprime.io/installation-binaries.html)
+[See pre-built binaries documentation.](https://lighthouse-book.sigmaprime.io/installation_binaries.html)
The binaries are signed with Sigma Prime's PGP key: `15E66D941F697E28F49381F426416DC3F30674B0`

View File

@@ -27,7 +27,7 @@ pub const PASSWORD_PROMPT: &str = "Enter the keystore password";
pub const DEFAULT_BEACON_NODE: &str = "http://localhost:5052/";
pub const CONFIRMATION_PHRASE: &str = "Exit my validator";
-pub const WEBSITE_URL: &str = "https://lighthouse-book.sigmaprime.io/voluntary-exit.html";
+pub const WEBSITE_URL: &str = "https://lighthouse-book.sigmaprime.io/validator_voluntary_exit.html";
pub fn cli_app() -> Command {
Command::new("exit")

View File

@@ -353,7 +353,7 @@ async fn bellatrix_readiness_logging<T: BeaconChainTypes>(
if !beacon_chain.is_time_to_prepare_for_capella(current_slot) {
error!(
info = "you need an execution engine to validate blocks, see: \
-https://lighthouse-book.sigmaprime.io/merge-migration.html",
+https://lighthouse-book.sigmaprime.io/archived_merge_migration.html",
"Execution endpoint required"
);
}
@@ -433,7 +433,7 @@ async fn capella_readiness_logging<T: BeaconChainTypes>(
if !beacon_chain.is_time_to_prepare_for_deneb(current_slot) {
error!(
info = "you need a Capella enabled execution engine to validate blocks, see: \
-https://lighthouse-book.sigmaprime.io/merge-migration.html",
+https://lighthouse-book.sigmaprime.io/archived_merge_migration.html",
"Execution endpoint required"
);
}

View File

@@ -22,6 +22,7 @@
* [Doppelganger Protection](./validator_doppelganger.md)
* [Suggested Fee Recipient](./validator_fee_recipient.md)
* [Validator Graffiti](./validator_graffiti.md)
+* [Consolidation](./validator_consolidation.md)
* [APIs](./api.md)
* [Beacon Node API](./api_bn.md)
* [Lighthouse API](./api_lighthouse.md)
@@ -61,6 +62,7 @@
* [Development Environment](./contributing_setup.md)
* [FAQs](./faq.md)
* [Protocol Developers](./developers.md)
+* [Lighthouse Architecture](./developers_architecture.md)
* [Security Researchers](./security.md)
* [Archived](./archived.md)
* [Merge Migration](./archived_merge_migration.md)

View File

@@ -6,7 +6,7 @@ In the Deneb network upgrade, one of the changes is the implementation of EIP-48
1. What is the storage requirement for blobs?
-We expect an additional increase of ~50 GB of storage requirement for blobs (on top of what is required by the consensus and execution clients database). The calculation is as below:
+After Deneb, we expect an additional increase of ~50 GB of storage requirement for blobs (on top of what is required by the consensus and execution client databases). The calculation is as below:
One blob is 128 KB in size. Each block can carry a maximum of 6 blobs. Blobs will be kept for 4096 epochs and pruned afterwards. This means that the maximum increase in storage requirement will be:
@@ -16,6 +16,8 @@ In the Deneb network upgrade, one of the changes is the implementation of EIP-48
However, the blob base fee targets 3 blobs per block and works similarly to how EIP-1559 operates for the Ethereum gas fee. Therefore, in practice it is very likely to average 3 blobs per block, which translates to a storage requirement of 48 GB.
+After Electra, the target number of blobs is increased to 6 per block. This means blob storage is expected to use ~100 GB of disk space.
1. Do I have to add any flags for blobs?
No, you can use the default values for blob-related flags, which means you do not need to add or remove any flags.
@@ -25,7 +27,7 @@ In the Deneb network upgrade, one of the changes is the implementation of EIP-48
Use the flag `--prune-blobs false` in the beacon node. The storage requirement will be:
```text
-2**17 bytes * 3 blobs / block * 7200 blocks / day * 30 days = 79GB / month or 948GB / year
+2**17 bytes * 6 blobs / block * 7200 blocks / day * 30 days = 158GB / month or 1896GB / year
```
To keep blobs for a custom period, you may use the flag `--blob-prune-margin-epochs <EPOCHS>` which keeps blobs for 4096+EPOCHS specified in the flag.
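The storage formula quoted above can be sketched as a small calculation. This is a hedged back-of-envelope illustration only; the constant and function names are not Lighthouse APIs:

```rust
// Back-of-envelope blob storage estimate, mirroring the formula above.
const BLOB_SIZE_BYTES: u64 = 1 << 17; // 2**17 bytes = 128 KB per blob
const BLOCKS_PER_DAY: u64 = 7200;
const GIB: u64 = 1 << 30;

/// Estimated blob storage (in GB) kept over `days` days with no pruning,
/// assuming `blobs_per_block` blobs in every block.
fn blob_storage_gb(blobs_per_block: u64, days: u64) -> u64 {
    BLOB_SIZE_BYTES * blobs_per_block * BLOCKS_PER_DAY * days / GIB
}

fn main() {
    // Pre-Electra target (3 blobs/block) vs post-Electra target (6 blobs/block):
    println!("{} GB / month", blob_storage_gb(3, 30)); // 79
    println!("{} GB / month", blob_storage_gb(6, 30)); // 158
}
```

Doubling the target from 3 to 6 blobs per block doubles the monthly figure, which is where the 158 GB/month number above comes from.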

View File

@@ -7,7 +7,8 @@ been applied automatically and in a _backwards compatible_ way.
However, backwards compatibility does not imply the ability to _downgrade_ to a prior version of
Lighthouse after upgrading. To facilitate smooth downgrades, Lighthouse v2.3.0 and above includes a
-command for applying database downgrades.
+command for applying database downgrades. If a downgrade is available _from_ a schema version,
+it is listed in the table below under the "Downgrade available?" header.
**Everything on this page applies to the Lighthouse _beacon node_, not to the
validator client or the slasher**.
@@ -16,12 +17,8 @@ validator client or the slasher**.
| Lighthouse version | Release date | Schema version | Downgrade available? |
|--------------------|--------------|----------------|----------------------|
+| v7.0.0 | Apr 2025 | v22 | no |
| v6.0.0 | Nov 2024 | v22 | no |
| v5.3.0 | Aug 2024 | v21 | yes |
| v5.2.0 | Jun 2024 | v19 | no |
| v5.1.0 | Mar 2024 | v19 | no |
| v5.0.0 | Feb 2024 | v19 | no |
| v4.6.0 | Dec 2023 | v19 | no |
> **Note**: All point releases (e.g. v4.4.1) are schema-compatible with the prior minor release
> (e.g. v4.4.0).
@@ -209,8 +206,9 @@ Here are the steps to prune historic states:
| Lighthouse version | Release date | Schema version | Downgrade available? |
|--------------------|--------------|----------------|-------------------------------------|
+| v7.0.0 | Apr 2025 | v22 | no |
| v6.0.0 | Nov 2024 | v22 | no |
-| v5.3.0 | Aug 2024 | v21 | yes |
+| v5.3.0 | Aug 2024 | v21 | yes before Electra using <= v7.0.0 |
| v5.2.0 | Jun 2024 | v19 | yes before Deneb using <= v5.2.1 |
| v5.1.0 | Mar 2024 | v19 | yes before Deneb using <= v5.2.1 |
| v5.0.0 | Feb 2024 | v19 | yes before Deneb using <= v5.2.1 |

View File

@@ -1,3 +1,3 @@
# Archived
-This section keeps the topics that are deprecated or less applicable for archived purposes.
+This section keeps topics that are deprecated. Documentation in this section is for informational purposes only and will not be maintained.

View File

@@ -0,0 +1,5 @@
+# Lighthouse architecture
+A technical walkthrough of Lighthouse's architecture can be found at: [Lighthouse technical walkthrough](https://www.youtube.com/watch?v=pLHhTh_vGZ0)
+![Lighthouse architecture](imgs/developers_architecture.svg)

View File

@@ -17,7 +17,6 @@
## [Validator](#validator-1)
- [Why does it take so long for a validator to be activated?](#vc-activation)
- [Can I use redundancy in my staking setup?](#vc-redundancy)
- [I am missing attestations. Why?](#vc-missed-attestations)
- [Sometimes I miss the attestation head vote, resulting in penalty. Is this normal?](#vc-head-vote)
@@ -112,10 +111,7 @@ After checkpoint forwards sync completes, the beacon node will start to download
INFO Downloading historical blocks est_time: --, distance: 4524545 slots (89 weeks 5 days), service: slot_notifier
```
-If the same log appears every minute and you do not see progress in downloading historical blocks, you can try one of the followings:
-- Check the number of peers you are connected to. If you have low peers (less than 50), try to do port forwarding on the ports 9000 TCP/UDP and 9001 UDP to increase peer count.
-- Restart the beacon node.
+If the same log appears every minute and you do not see progress in downloading historical blocks, check the number of peers you are connected to. If you have fewer than 50 peers, try port forwarding on ports 9000 TCP/UDP and 9001 UDP to increase the peer count.
### <a name="bn-duplicate"></a> I proposed a block but the beacon node shows `could not publish message` with error `duplicate` as below, should I be worried?
@@ -154,29 +150,13 @@ This is a normal behaviour. Since [v4.1.0](https://github.com/sigp/lighthouse/re
### <a name="bn-http"></a> My beacon node logs `WARN Error processing HTTP API request`, what should I do?
-This warning usually comes with an http error code. Some examples are given below:
+An example of the log is shown below:
-1. The log shows:
-```text
-WARN Error processing HTTP API request method: GET, path: /eth/v1/validator/attestation_data, status: 500 Internal Server Error, elapsed: 305.65µs
-```
+```text
+WARN Error processing HTTP API request method: GET, path: /eth/v1/validator/attestation_data, status: 500 Internal Server Error, elapsed: 305.65µs
+```
-The error is `500 Internal Server Error`. This suggests that the execution client is not synced. Once the execution client is synced, the error will disappear.
-1. The log shows:
-```text
-WARN Error processing HTTP API request method: POST, path: /eth/v1/validator/duties/attester/199565, status: 503 Service Unavailable, elapsed: 96.787µs
-```
-The error is `503 Service Unavailable`. This means that the beacon node is still syncing. When this happens, the validator client will log:
-```text
-ERRO Failed to download attester duties err: FailedToDownloadAttesters("Some endpoints failed, num_failed: 2 http://localhost:5052/ => Unavailable(NotSynced), http://localhost:5052/ => RequestFailed(ServerMessage(ErrorMessage { code: 503, message: \"SERVICE_UNAVAILABLE: beacon node is syncing
-```
-This means that the validator client is sending requests to the beacon node. However, as the beacon node is still syncing, it is therefore unable to fulfil the request. The error will disappear once the beacon node is synced.
+This warning usually happens when the validator client sends a request to the beacon node, but the beacon node is unable to fulfil the request. This can happen when the execution client is not synced (or is still syncing) and/or when the beacon node itself is syncing. The error should go away once the node is synced.
### <a name="bn-fork-choice"></a> My beacon node logs `WARN Error signalling fork choice waiter`, what should I do?
@@ -190,13 +170,21 @@ This suggests that the computer resources are being overwhelmed. It could be due
### <a name="bn-queue-full"></a> My beacon node logs `ERRO Aggregate attestation queue full`, what should I do?
-An example of the full log is shown below:
+Some examples of the full log are shown below:
```text
ERRO Aggregate attestation queue full, queue_len: 4096, msg: the system has insufficient resources for load, module: network::beacon_processor:1542
+ERRO Attestation delay queue is full msg: system resources may be saturated, queue_size: 16384, service: bproc
```
-This suggests that the computer resources are being overwhelmed. It could be due to high CPU usage or high disk I/O usage. This can happen, e.g., when the beacon node is downloading historical blocks, or when the execution client is syncing. The error will disappear when the resources used return to normal or when the node is synced.
+This suggests that the computer resources are being overwhelmed. It could be due to high CPU usage or high disk I/O usage. Some common reasons are:
+- the beacon node is downloading historical blocks
+- the execution client is syncing
+- disk I/O is being overwhelmed
+- parallel API queries to the beacon node
+If the node is syncing or downloading historical blocks, the error should disappear when the resources used return to normal or when the node is synced.
### <a name="bn-deposit-cache"></a> My beacon node logs `WARN Failed to finalize deposit cache`, what should I do?
@@ -204,77 +192,6 @@ This is a known [bug](https://github.com/sigp/lighthouse/issues/3707) that will
## Validator
### <a name="vc-activation"></a> Why does it take so long for a validator to be activated?
After validators create their execution layer deposit transaction there are two waiting
periods before they can start producing blocks and attestations:
1. Waiting for the beacon chain to recognise the execution layer block containing the
deposit (generally takes ~13.6 hours).
1. Waiting in the queue for validator activation.
Detailed answers below:
#### 1. Waiting for the beacon chain to detect the execution layer deposit
Since the beacon chain uses the execution layer for validator on-boarding, beacon chain
validators must listen to event logs from the deposit contract. Since the
latest blocks of the execution chain are vulnerable to re-orgs due to minor network
partitions, beacon nodes follow the execution chain at a distance of 2048 blocks
(~6.8 hours) (see
[`ETH1_FOLLOW_DISTANCE`](https://github.com/ethereum/consensus-specs/blob/v1.3.0/specs/phase0/validator.md#process-deposit)).
This follow distance protects the beacon chain from on-boarding validators that
are likely to be removed due to an execution chain re-org.
Now we know there's a 6.8 hours delay before the beacon nodes even _consider_ an
execution layer block. Once they _are_ considering these blocks, there's a voting period
where beacon validators vote on which execution block hash to include in the beacon chain. This
period is defined as 64 epochs (~6.8 hours, see
[`ETH1_VOTING_PERIOD`](https://github.com/ethereum/consensus-specs/blob/v1.3.0/specs/phase0/beacon-chain.md#time-parameters)).
During this voting period, each beacon block producer includes an
[`Eth1Data`](https://github.com/ethereum/consensus-specs/blob/v1.3.0/specs/phase0/beacon-chain.md#eth1data)
in their block which counts as a vote towards what that validator considers to
be the head of the execution chain at the start of the voting period (with respect
to `ETH1_FOLLOW_DISTANCE`, of course). You can see the exact voting logic
[here](https://github.com/ethereum/consensus-specs/blob/v1.3.0/specs/phase0/validator.md#eth1-data).
These two delays combined represent the time between an execution layer deposit being
included in an execution data vote and that validator appearing in the beacon chain.
The `ETH1_FOLLOW_DISTANCE` delay causes a minimum delay of ~6.8 hours and
`ETH1_VOTING_PERIOD` means that if a validator deposit happens just _before_
the start of a new voting period then they might not notice this delay at all.
However, if the validator deposit happens just _after_ the start of the new
voting period the validator might have to wait ~6.8 hours for next voting
period. In times of very severe network issues, the network may even fail
to vote in new execution layer blocks, thus stopping all new validator deposits and causing the wait to be longer.
#### 2. Waiting for a validator to be activated
If a validator has provided an invalid public key or signature, they will
_never_ be activated.
They will simply be forgotten by the beacon chain! But, if those parameters were
correct, once the execution layer delays have elapsed and the validator appears in the
beacon chain, there's _another_ delay before the validator becomes "active"
(canonical definition
[here](https://github.com/ethereum/consensus-specs/blob/v1.3.0/specs/phase0/beacon-chain.md#is_active_validator)) and can start producing blocks and attestations.
Firstly, the validator won't become active until their beacon chain balance is
equal to or greater than
[`MAX_EFFECTIVE_BALANCE`](https://github.com/ethereum/consensus-specs/blob/v1.3.0/specs/phase0/beacon-chain.md#gwei-values)
(32 ETH on mainnet, usually 3.2 ETH on testnets). Once this balance is reached,
the validator must wait until the start of the next epoch (up to 6.4 minutes)
for the
[`process_registry_updates`](https://github.com/ethereum/consensus-specs/blob/v1.3.0/specs/phase0/beacon-chain.md#registry-updates)
routine to run. This routine activates validators with respect to a [churn
limit](https://github.com/ethereum/consensus-specs/blob/v1.3.0/specs/phase0/beacon-chain.md#get_validator_churn_limit);
it will only allow the number of validators to increase (churn) by a certain
amount. If a new validator isn't within the churn limit from the front of the queue,
they will need to wait another epoch (6.4 minutes) for their next chance. This
repeats until the queue is cleared. The churn limit for validators joining the beacon chain is capped at 8 per epoch or 1800 per day. If, for example, there are 9000 validators waiting to be activated, this means that the waiting time can take up to 5 days.
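The queue arithmetic above can be sketched in a few lines. This is a hedged illustration of the estimate only (churn limit of 8 activations per epoch, one epoch = 6.4 minutes); the function name is illustrative, not a Lighthouse API:

```rust
/// Rough number of days until an activation queue of `queue_len` validators
/// clears, given `churn_per_epoch` activations per 6.4-minute epoch.
fn activation_wait_days(queue_len: u64, churn_per_epoch: u64) -> f64 {
    // Each epoch activates at most `churn_per_epoch` validators, so round up.
    let epochs = queue_len.div_ceil(churn_per_epoch);
    epochs as f64 * 6.4 / 60.0 / 24.0
}

fn main() {
    // 9000 pending validators at 8 per epoch -> ~5 days, matching the text.
    println!("{:.1} days", activation_wait_days(9000, 8));
}
```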
Once a validator has been activated, congratulations! It's time to
produce blocks and attestations!
### <a name="vc-redundancy"></a> Can I use redundancy in my staking setup?
You should **never** use duplicate/redundant validator keypairs or validator clients (i.e., don't
@@ -299,15 +216,15 @@ Another cause for missing attestations is the block arriving late, or there are
An example of the log (debug logs can be found under `$datadir/beacon/logs`):
```text
-Delayed head block, set_as_head_time_ms: 27, imported_time_ms: 168, attestable_delay_ms: 4209, available_delay_ms: 4186, execution_time_ms: 201, blob_delay_ms: 3815, observed_delay_ms: 3984, total_delay_ms: 4381, slot: 1886014, proposer_index: 733, block_root: 0xa7390baac88d50f1cbb5ad81691915f6402385a12521a670bbbd4cd5f8bf3934, service: beacon, module: beacon_chain::canonical_head:1441
+DEBG Delayed head block, set_as_head_time_ms: 37, imported_time_ms: 1824, attestable_delay_ms: 3660, available_delay_ms: 3491, execution_time_ms: 78, consensus_time_ms: 161, blob_delay_ms: 3291, observed_delay_ms: 3250, total_delay_ms: 5352, slot: 11429888, proposer_index: 778696, block_root: 0x34cc0675ad5fd052699af2ff37b858c3eb8186c5b29fdadb1dabd246caf79e43, service: beacon, module: beacon_chain::canonical_head:1440
```
-The field to look for is `attestable_delay`, which defines the time when a block is ready for the validator to attest. If the `attestable_delay` is greater than 4s which has past the window of attestation, the attestation will fail. In the above example, the delay is mostly caused by late block observed by the node, as shown in `observed_delay`. The `observed_delay` is determined mostly by the proposer and partly by your networking setup (e.g., how long it took for the node to receive the block). Ideally, `observed_delay` should be less than 3 seconds. In this example, the validator failed to attest the block due to the block arriving late.
+The field to look for is `attestable_delay`, which defines the time when a block is ready for the validator to attest. If the `attestable_delay` is greater than 4s then it has missed the window for attestation, and the attestation will fail. In the above example, the delay is mostly caused by a late block observed by the node, as shown in `observed_delay`. The `observed_delay` is determined mostly by the proposer and partly by your networking setup (e.g., how long it took for the node to receive the block). Ideally, `observed_delay` should be less than 3 seconds. In this example, the validator failed to attest to the block due to the block arriving late.
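The deadline check described above amounts to a tiny predicate. This is a hedged sketch; the threshold constant and function name are illustrative, not Lighthouse code:

```rust
/// Attestation deadline: 4 seconds into the slot, per the text above.
const ATTESTATION_DEADLINE_MS: u64 = 4_000;

/// Whether a head block became attestable in time for the validator to vote.
fn attested_in_time(attestable_delay_ms: u64) -> bool {
    attestable_delay_ms <= ATTESTATION_DEADLINE_MS
}

fn main() {
    // Values taken from the example logs in this section:
    println!("{}", attested_in_time(3660)); // within the window
    println!("{}", attested_in_time(4209)); // missed: block arrived too late
}
```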
Another example of the log:
```
-DEBG Delayed head block, set_as_head_time_ms: 22, imported_time_ms: 312, attestable_delay_ms: 7052, available_delay_ms: 6874, execution_time_ms: 4694, blob_delay_ms: 2159, observed_delay_ms: 2179, total_delay_ms: 7209, slot: 1885922, proposer_index: 606896, block_root: 0x9966df24d24e722d7133068186f0caa098428696e9f441ac416d0aca70cc0a23, service: beacon, module: beacon_chain::canonical_head:1441
+DEBG Delayed head block, set_as_head_time_ms: 22, imported_time_ms: 312, attestable_delay_ms: 7052, available_delay_ms: 6874, execution_time_ms: 4694, consensus_time_ms: 232, blob_delay_ms: 2159, observed_delay_ms: 2179, total_delay_ms: 7209, slot: 1885922, proposer_index: 606896, block_root: 0x9966df24d24e722d7133068186f0caa098428696e9f441ac416d0aca70cc0a23, service: beacon, module: beacon_chain::canonical_head:1441
```
@@ -323,7 +240,7 @@ Another possible reason for missing the head vote is due to a chain "reorg". A r
### <a name="vc-exit"></a> Can I submit a voluntary exit message without running a beacon node?
-Yes. Beaconcha.in provides the tool to broadcast the message. You can create the voluntary exit message file with [ethdo](https://github.com/wealdtech/ethdo/releases/tag/v1.30.0) and submit the message via the [beaconcha.in](https://beaconcha.in/tools/broadcast) website. A guide on how to use `ethdo` to perform voluntary exit can be found [here](https://github.com/eth-educators/ethstaker-guides/blob/main/docs/voluntary-exit.md).
+Yes. Beaconcha.in provides the tool to broadcast the message. You can create the voluntary exit message file with [ethdo](https://github.com/wealdtech/ethdo/releases) and submit the message via the [beaconcha.in](https://beaconcha.in/tools/broadcast) website. A guide on how to use `ethdo` to perform voluntary exit can be found [here](https://github.com/eth-educators/ethstaker-guides/blob/main/docs/validator_voluntary_exit.md).
It is also noted that you can submit your BLS-to-execution-change message to update your withdrawal credentials from type `0x00` to `0x01` using the same link.
@@ -345,7 +262,7 @@ If you do not want to stop `lighthouse vc`, you can use the [key manager API](./
### <a name="vc-delete"></a> How can I delete my validator once it is imported?
-Lighthouse supports the [KeyManager API](https://ethereum.github.io/keymanager-APIs/#/Local%20Key%20Manager/deleteKeys) to delete validators and remove them from the `validator_definitions.yml` file. To do so, start the validator client with the flag `--http` and call the API.
+You can use the `lighthouse vm delete` command to delete validator keys, see [validator manager delete](./validator_manager_api.md#delete).
If you are looking to delete the validators in one node and import it to another, you can use the [validator-manager](./validator_manager_move.md) to move the validators across nodes without the hassle of deleting and importing the keys.

File diff suppressed because one or more lines are too long (image, 88 KiB)

View File

@@ -33,7 +33,7 @@ There are five primary steps to become a validator:
1. [Start an execution client and Lighthouse beacon node](#step-2-start-an-execution-client-and-lighthouse-beacon-node)
1. [Import validator keys into Lighthouse](#step-3-import-validator-keys-to-lighthouse)
1. [Start Lighthouse validator client](#step-4-start-lighthouse-validator-client)
-1. [Submit deposit](#step-5-submit-deposit-32eth-per-validator)
+1. [Submit deposit](#step-5-submit-deposit-a-minimum-of-32eth-to-activate-one-validator)
> **Important note**: The guide below contains both mainnet and testnet instructions. We highly recommend that *all* users **run a testnet validator** prior to staking mainnet ETH. By far, the best technical learning experience is to run a testnet validator. You can get hands-on experience with all the tools and it's a great way to test your staking hardware. 32 ETH is a significant outlay and joining a testnet is a great way to "try before you buy".
@@ -151,13 +151,13 @@ Once this log appears (and there are no errors) the `lighthouse vc` application
will ensure that the validator starts performing its duties and being rewarded
by the protocol.
-### Step 5: Submit deposit (32ETH per validator)
+### Step 5: Submit deposit (a minimum of 32ETH to activate one validator)
-After you have successfully run and synced the execution client, beacon node and validator client, you can now proceed to submit the deposit. Go to the mainnet [Staking launchpad](https://launchpad.ethereum.org/en/) (or [Holesky staking launchpad](https://holesky.launchpad.ethereum.org/en/) for testnet validator) and carefully go through the steps to becoming a validator. Once you are ready, you can submit the deposit by sending 32ETH per validator to the deposit contract. Upload the `deposit_data-*.json` file generated in [Step 1](#step-1-create-validator-keys) to the Staking launchpad.
+After you have successfully run and synced the execution client, beacon node and validator client, you can now proceed to submit the deposit. Go to the mainnet [Staking launchpad](https://launchpad.ethereum.org/en/) (or [Holesky staking launchpad](https://holesky.launchpad.ethereum.org/en/) for testnet validator) and carefully go through the steps to becoming a validator. Once you are ready, you can submit the deposit by sending ETH to the deposit contract. Upload the `deposit_data-*.json` file generated in [Step 1](#step-1-create-validator-keys) to the Staking launchpad.
> **Important note:** Double check that the deposit contract for mainnet is `0x00000000219ab540356cBB839Cbe05303d7705Fa` before you confirm the transaction.
-Once the deposit transaction is confirmed, it will take a minimum of ~16 hours to a few days/weeks for the beacon chain to process and activate your validator, depending on the queue. Refer to our [FAQ - Why does it take so long for a validator to be activated](./faq.md#why-does-it-take-so-long-for-a-validator-to-be-activated) for more info.
+Once the deposit transaction is confirmed, it will take a minimum of ~13 minutes to a few days to activate your validator, depending on the queue.
Once your validator is activated, the validator client will start to publish attestations each epoch:

View File

@@ -0,0 +1,30 @@
# Consolidation
With the [Pectra](https://ethereum.org/en/history/#pectra) upgrade, a validator can hold a stake of up to 2048 ETH. This is done by updating the validator withdrawal credentials to type 0x02. With 0x02 withdrawal credentials, it is possible to consolidate two or more validators into a single validator with a higher stake.
Let's take a look at an example: Initially, validators A and B are both with 0x01 withdrawal credentials with 32 ETH. Let's say we want to consolidate the balance of validator B to validator A, so that the balance of validator A becomes 64 ETH. These are the steps:
1. Update the withdrawal credentials of validator A to 0x02. You can do this using [Siren](./ui.md) or the [staking launchpad](https://launchpad.ethereum.org/en/). Select:
- source validator: validator A
- target validator: validator A
> Note: After the update, the withdrawal credential type 0x02 cannot be reverted to 0x01, unless the validator exits and makes a fresh deposit.
2. Perform consolidation by selecting:
- source validator: validator B
- target validator: validator A
and then execute the transaction.
Depending on the exit queue and pending consolidations, the process could take from a day to weeks. The outcome is:
- validator A has 64 ETH
- validator B has 0 ETH (i.e., validator B has exited the beacon chain)
The consolidation process can be repeated to consolidate more validators into validator A.
It is important to note that there are some conditions required to perform consolidation, a few common ones are:
- the **withdrawal address** of the source and target validators **must be the same**.
- the _target validator_ **must** have a withdrawal credential **type 0x02**. The source validator could have a 0x01 or 0x02 withdrawal credential.
- the source validator must be active for at least 256 epochs to be able to perform consolidation.
Note that if a user were to send a consolidation transaction that does not meet the conditions, the transaction can still be accepted by the execution layer. However, the consolidation will fail once it reaches the consensus layer (where the checks are performed). Therefore, it is recommended to check that the conditions are fulfilled before sending a consolidation transaction.
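The precondition list above can be sketched as a single predicate. This is a hedged illustration only; the types, field names, and function below are made up for clarity (the real checks live in the consensus layer, not in these names):

```rust
// Withdrawal credential types referenced in the text above.
#[allow(dead_code)]
#[derive(Clone, Copy, PartialEq)]
enum CredentialType {
    Bls0x00,
    Eth0x01,
    Compounding0x02,
}

// Hypothetical per-validator summary used only for this sketch.
struct ValidatorInfo {
    withdrawal_address: [u8; 20],
    credential: CredentialType,
    epochs_active: u64,
}

// The source validator must have been active for at least 256 epochs.
const MIN_SOURCE_EPOCHS_ACTIVE: u64 = 256;

fn can_consolidate(source: &ValidatorInfo, target: &ValidatorInfo) -> bool {
    // Same withdrawal address on both validators.
    source.withdrawal_address == target.withdrawal_address
        // Target must already have 0x02 credentials; source may be 0x01 or 0x02.
        && target.credential == CredentialType::Compounding0x02
        && matches!(
            source.credential,
            CredentialType::Eth0x01 | CredentialType::Compounding0x02
        )
        && source.epochs_active >= MIN_SOURCE_EPOCHS_ACTIVE
}

fn main() {
    let target = ValidatorInfo {
        withdrawal_address: [0xaa; 20],
        credential: CredentialType::Compounding0x02,
        epochs_active: 300,
    };
    let source = ValidatorInfo {
        withdrawal_address: [0xaa; 20],
        credential: CredentialType::Eth0x01,
        epochs_active: 300,
    };
    println!("{}", can_consolidate(&source, &target));
}
```

Running these checks before sending the transaction avoids the failure mode described above, where the execution layer accepts the transaction but the consolidation is rejected on the consensus layer.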

View File

@@ -151,7 +151,7 @@ ensure their `secrets-dir` is organised as below:
### Manual configuration
The automatic validator discovery process works out-of-the-box with validators
-that are created using the `lighthouse account validator new` command. The
+that are created using the `lighthouse account validator create` command. The
details of this process are only interesting to those who are using keystores
generated with another tool or have non-standard requirements.

View File

@@ -5,6 +5,10 @@ After the [Capella](https://ethereum.org/en/history/#capella) upgrade on 12<sup>
- if a validator has a withdrawal credential type `0x00`, the rewards will continue to accumulate and will be locked in the beacon chain.
- if a validator has a withdrawal credential type `0x01`, any rewards above 32ETH will be periodically withdrawn to the withdrawal address. This is also known as the "validator sweep", i.e., once the "validator sweep" reaches your validator's index, your rewards will be withdrawn to the withdrawal address. The validator sweep is automatic and it does not incur any fees to withdraw.
+## Partial withdrawals via the execution layer
+With the [Pectra](https://ethereum.org/en/history/#pectra) upgrade, validators with 0x02 withdrawal credentials can partially withdraw staked funds via the execution layer by sending a transaction using the withdrawal address. You can withdraw down to a validator balance of 32 ETH. For example, if the validator balance is 40 ETH, you can withdraw up to 8 ETH. You can use [Siren](./ui.md) or the [staking launchpad](https://launchpad.ethereum.org/en/) to execute partial withdrawals.
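The arithmetic above (withdraw down to a 32 ETH floor) can be sketched in one function. This is a hedged illustration only; the constant and function names are not Lighthouse code:

```rust
// The 32 ETH floor a 0x02 validator must keep after a partial withdrawal.
const MIN_ACTIVATION_BALANCE_ETH: u64 = 32;

/// Maximum amount (in ETH) withdrawable via the execution layer while
/// keeping the validator at or above the 32 ETH floor.
fn max_partial_withdrawal_eth(balance_eth: u64) -> u64 {
    balance_eth.saturating_sub(MIN_ACTIVATION_BALANCE_ETH)
}

fn main() {
    // The example from the text: a 40 ETH balance allows withdrawing up to 8 ETH.
    println!("{} ETH", max_partial_withdrawal_eth(40));
}
```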
## FAQ
1. How to know if I have the withdrawal credentials type `0x00` or `0x01`?

View File

@@ -45,7 +45,7 @@ WARNING: WARNING: THIS IS AN IRREVERSIBLE OPERATION
-PLEASE VISIT https://lighthouse-book.sigmaprime.io/voluntary-exit.html
+PLEASE VISIT https://lighthouse-book.sigmaprime.io/validator_voluntary_exit.html
TO MAKE SURE YOU UNDERSTAND THE IMPLICATIONS OF A VOLUNTARY EXIT.
Enter the exit phrase from the above URL to confirm the voluntary exit:
@@ -58,6 +58,10 @@ Please keep your validator running till exit epoch
Exit epoch in approximately 1920 secs
```
+## Exit via the execution layer
+The voluntary exit above is via the consensus layer. With the [Pectra](https://ethereum.org/en/history/#pectra) upgrade, validators with 0x01 and 0x02 withdrawal credentials can also exit their validators via the execution layer by sending a transaction using the withdrawal address. You can use [Siren](./ui.md) or the [staking launchpad](https://launchpad.ethereum.org/en/) to send an exit transaction.
## Full withdrawal of staked fund
After the [Capella](https://ethereum.org/en/history/#capella) upgrade on 12<sup>th</sup> April 2023, if a user initiates a voluntary exit, they will receive the full staked funds to the withdrawal address, provided that the validator has withdrawal credentials of type `0x01`. For more information on how fund withdrawal works, please visit [Ethereum.org](https://ethereum.org/en/staking/withdrawals/#how-do-withdrawals-work) website.

View File

@@ -128,7 +128,7 @@ if [[ "$BEHAVIOR" == "success" ]]; then
# Sleep three epochs, then make sure all validators were active in epoch 2. Use
# `is_previous_epoch_target_attester` from epoch 3 for a complete view of epoch 2 inclusion.
#
-# See: https://lighthouse-book.sigmaprime.io/validator-inclusion.html
+# See: https://lighthouse-book.sigmaprime.io/api_validator_inclusion.html
echo "Waiting three epochs..."
sleep $(( $SECONDS_PER_SLOT * 32 * 3 ))
@@ -156,7 +156,7 @@ if [[ "$BEHAVIOR" == "success" ]]; then
# Sleep two epochs, then make sure all validators were active in epoch 4. Use
# `is_previous_epoch_target_attester` from epoch 5 for a complete view of epoch 4 inclusion.
#
-# See: https://lighthouse-book.sigmaprime.io/validator-inclusion.html
+# See: https://lighthouse-book.sigmaprime.io/api_validator_inclusion.html
echo "Waiting two more epochs..."
sleep $(( $SECONDS_PER_SLOT * 32 * 2 ))
for val in 0x*; do

View File

@@ -66,6 +66,7 @@ Nethermind
NodeJS
NullLogger
PathBuf
+Pectra
PowerShell
PPA
Pre
@@ -236,6 +237,7 @@ validators
validator's
vc
virt
+walkthrough
webapp
withdrawable
yaml