mirror of
https://github.com/sigp/lighthouse.git
synced 2026-05-07 16:55:46 +00:00
Merge remote-tracking branch 'origin/unstable' into tree-states

states to slow down dramatically. A lower _slots per restore point_ value (SPRP) corresponds to more
frequent restore points, while a higher SPRP corresponds to less frequent. The table below shows
some example values.

| Use Case                   | SPRP | Yearly Disk Usage* | Load Historical State |
|----------------------------|------|--------------------|-----------------------|
| Research                   | 32   | more than 10 TB    | 155 ms                |
| Enthusiast (prev. default) | 2048 | hundreds of GB     | 10.2 s                |
| Validator only (default)   | 8192 | tens of GB         | 41 s                  |

*Last update: Dec 2023.

As we can see, it's a high-stakes trade-off! The relationships to disk usage and historical state
load time are both linear – doubling SPRP halves disk usage and doubles load time. The minimum SPRP

The default value is 8192 for databases synced from scratch using Lighthouse v2.5.0 or later, or
2048 for prior versions. Please see the section on [Defaults](#defaults) below.

The values shown in the table are approximate, calculated using a simple heuristic: each
`BeaconState` consumes around 145MB of disk space, and each block replayed takes around 5ms. The
**Yearly Disk Usage** column shows the approximate size of the freezer DB _alone_ (hot DB not included), calculated proportionally using the total freezer database disk usage.
The **Load Historical State** time is the worst-case load time for a state in the last slot
before a restore point.
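
Under this heuristic, the worst-case load time is roughly SPRP × 5 ms of block replay, which reproduces the load figures in the table (a sketch; the 5 ms per-block figure is the approximation stated above):

```bash
# Worst-case historical state load time ≈ SPRP × 5 ms per replayed block.
for sprp in 32 2048 8192; do
  awk -v s="$sprp" 'BEGIN { printf "SPRP %4d: %.2f s\n", s, s * 0.005 }'
done
```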

As an example, we use an SPRP of 4096 to calculate the total size of the freezer database until May 2023. It has been about 900 days since genesis, so the total disk usage by the freezer database is 900/365 × 26.8 GB ≈ 66 GB.
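
The arithmetic can be reproduced with a quick shell calculation (the 900-day and 26.8 GB figures are the approximations used above):

```bash
# Freezer DB size ≈ (days since genesis / 365) × yearly usage for the chosen SPRP.
echo "900 26.8" | awk '{ printf "%.0f GB\n", $1 / 365 * $2 }'
# prints "66 GB"
```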

To run a full archival node with fast access to beacon states and a SPRP of 32, the disk usage will be more than 10 TB per year, which is impractical for many users. As such, users may consider running the [tree-states](https://github.com/sigp/lighthouse/releases/tag/v4.5.444-exp) release, which uses less than 150 GB for a full archival node. The caveat is that it is currently experimental and in alpha release (as of Dec 2023), and thus not recommended for running mainnet validators. Nevertheless, it is suitable for analysis purposes, and if you encounter any issues in tree-states, we do appreciate any feedback. We plan to have a stable release of tree-states in 1H 2024.
### Defaults

### NAT Traversal (Port Forwarding)

Lighthouse, by default, uses port 9000 for both TCP and UDP. Since v4.5.0, Lighthouse will also attempt to make QUIC connections via UDP port 9001 by default. Lighthouse will
still function if it is behind a NAT without any port mappings. Although
Lighthouse still functions, we recommend that some mechanism is used to ensure
that your Lighthouse node is publicly accessible. This will typically improve
the number of peers for your node and overall improve the Ethereum consensus network.

Lighthouse currently supports UPnP. If UPnP is enabled on your router,
Lighthouse will automatically establish the port mappings for you (the beacon
node will inform you of established routes in this case). If UPnP is not
enabled, we recommend manually setting up port mappings to Lighthouse's
TCP and UDP ports (9000 TCP/UDP, and 9001 UDP by default).

> Note: Lighthouse needs to advertise its publicly accessible ports in
> order to inform its peers that it is contactable and how to connect to it.

The steps to do port forwarding depend on the router, but the general steps are given below:

1. Determine the default gateway IP:
   - On Linux: open a terminal and run `ip route | grep default`, the result should look something similar to `default via 192.168.50.1 dev wlp2s0 proto dhcp metric 600`. The `192.168.50.1` is your router management default gateway IP.
   - On MacOS: open a terminal and run `netstat -nr|grep default` and it should return the default gateway IP.
   - On Windows: open a command prompt and run `ipconfig` and look for the `Default Gateway` which will show you the gateway IP.

2. Login to the router management page. The login credentials are usually available in the manual or the router, or they can be found on a sticker underneath the router. You can also try the login credentials for some common router brands listed [here](https://www.noip.com/support/knowledgebase/general-port-forwarding-guide/).

3. Navigate to the port forward settings in your router. The exact step depends on the router, but typically it will fall under the "Advanced" section, under the name "port forwarding" or "virtual server".

4. Configure a port forwarding rule as below:
   - Protocol: select `TCP/UDP` or `BOTH`
   - External port: `9000`
   - Internal port: `9000`
   - IP address: Usually there is a dropdown list for you to select the device. Choose the device that is running Lighthouse.

   Since v4.5.0, port 9001/UDP is also used for QUIC support, so add a second rule:

   - Protocol: select `UDP`
   - External port: `9001`
   - Internal port: `9001`
   - IP address: Choose the device that is running Lighthouse.

5. To check that you have successfully opened the ports, go to [yougetsignal](https://www.yougetsignal.com/tools/open-ports/) and enter `9000` in the `port number`. If it shows "open", then you have successfully set up port forwarding. If it shows "closed", double check your settings, and also check that you have allowed firewall rules on port 9000. Note: this will only confirm that port 9000/TCP is open. You will need to ensure you have correctly set up port forwarding for the UDP ports (`9000` and `9001` by default).

### ENR Configuration

- `--listen-address 0.0.0.0 --port 9909` will listen over IPv4 only, using port `9909` for
TCP and UDP.

- `--listen-address :: --port 9909 --discovery-port 9999` will listen over
IPv6 using port `9909` for TCP and port `9999` for UDP.
- By default, QUIC listens for UDP connections using a port number that is one greater than the specified port.
If the specified port is 9909, QUIC will use port 9910 for IPv6 UDP connections.
This can be configured with `--quic-port`.
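
The QUIC defaults above are simply the corresponding TCP/UDP port plus one; a trivial sketch:

```bash
# QUIC defaults to one port above the TCP/UDP port
# unless --quic-port (or --quic-port6) is set.
for port in 9909 9090; do
  echo "listen port $port -> QUIC UDP port $((port + 1))"
done
```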

To listen over both IPv4 and IPv6:
- Set two listening addresses using the `--listen-address` flag twice, ensuring
that one address is IPv4 and the other IPv6. Note
that this behaviour differs from the IPv6 only case described above.
- If necessary, set the `--port6` flag to configure the port used for TCP and
UDP over IPv6. This flag has no effect when listening over IPv6 only.
- If necessary, set the `--discovery-port6` flag to configure the IPv6 UDP
port. This will default to the value given to `--port6` if not set. This flag
has no effect when listening over IPv6 only.
- If necessary, set the `--quic-port6` flag to configure the port used by QUIC for
UDP over IPv6. This will default to the value given to `--port6` + 1. This flag
has no effect when listening over IPv6 only.

##### Configuration Examples

- `--listen-address :: --listen-address 0.0.0.0 --port 9909` will listen
over IPv4 using port `9909` for TCP and UDP. It will also listen over IPv6 but
using the default value for `--port6` for UDP and TCP (`9090`).
- `--listen-address :: --listen-address 0.0.0.0 --port 9909 --discovery-port6 9999`
will have the same configuration as before except for the IPv6 UDP socket,
which will use port `9999`.

> When using `--listen-address :: --listen-address 0.0.0.0 --port 9909`, listening will be set up as follows:
>
> **IPv4**:
>
> It listens on port `9909` for both TCP and UDP.
> QUIC will use the next sequential port `9910` for UDP.
>
> **IPv6**:
>
> It listens on the default value of `--port6` (`9090`) for both UDP and TCP.
> QUIC will use port `9091` for UDP, which is the default `--port6` value (`9090`) + 1.

> When using `--listen-address :: --listen-address 0.0.0.0 --port 9909 --discovery-port6 9999`, listening will be set up as follows:
>
> **IPv4**:
>
> It listens on port `9909` for both TCP and UDP.
> QUIC will use the next sequential port `9910` for UDP.
>
> **IPv6**:
>
> It listens on the default value of `--port6` (`9090`) for TCP, and port `9999` for UDP.
> QUIC will use port `9091` for UDP, which is the default `--port6` value (`9090`) + 1.

#### Configuring Lighthouse to advertise IPv6 reachable addresses

Lighthouse supports IPv6 to connect to other nodes both over IPv6 exclusively,

The `jq` tool is used to format the JSON data properly. If it returns `jq: command not found`, then you need to install `jq`.

Shows the status of validator at index `1` at the `head` state.

```bash
curl -X GET "http://localhost:5052/eth/v1/beacon/states/head/validators/1" -H "accept: application/json" | jq
```
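
For reference, here is how a single field can be pulled out of the response with `jq` (the inline JSON is an illustrative fragment, not the full response returned by the endpoint):

```bash
# Illustrative fragment of a validator response; the real endpoint returns more fields.
echo '{"data":{"index":"1","status":"active_ongoing"}}' | jq -r '.data.status'
# prints "active_ongoing"
```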

HTTP Path | Description |
| --- | -- |
[`POST /lighthouse/validators/keystore`](#post-lighthousevalidatorskeystore) | Import a keystore.
[`POST /lighthouse/validators/mnemonic`](#post-lighthousevalidatorsmnemonic) | Create a new validator from an existing mnemonic.
[`POST /lighthouse/validators/web3signer`](#post-lighthousevalidatorsweb3signer) | Add web3signer validators.
[`GET /lighthouse/logs`](#get-lighthouselogs) | Get logs.

The query to Lighthouse API endpoints requires authorization, see [Authorization Header](./api-vc-auth-header.md).

Create any number of new validators, all of which will refer to a Web3Signer server:

```json
[
  {
    "enable": true,
    "description": "validator_one",
    "graffiti": "Mr F was here",
    "suggested_fee_recipient": "0xa2e334e71511686bcfe38bb3ee1ad8f6babcc03d",
    "voting_public_key": "0xa062f95fee747144d5e511940624bc6546509eeaeae9383257a9c43e7ddc58c17c2bab4ae62053122184c381b90db380",
    "builder_proposals": true,
    "url": "http://path-to-web3signer.com",
    "root_certificate_path": "/path/to/certificate.pem",
    "client_identity_path": "/path/to/identity.p12",
    "client_identity_password": "pass",
    "request_timeout_ms": 12000
  }
]
```

Some of the fields above may be omitted or nullified to obtain default values (e.g., `graffiti`, `request_timeout_ms`).

Command:

```bash
DATADIR=/var/lib/lighthouse
curl -X POST http://localhost:5062/lighthouse/validators/web3signer \
-H "Authorization: Bearer $(cat ${DATADIR}/validators/api-token.txt)" \
-H "Content-Type: application/json" \
-d "[{\"enable\":true,\"description\":\"validator_one\",\"graffiti\":\"Mr F was here\",\"suggested_fee_recipient\":\"0xa2e334e71511686bcfe38bb3ee1ad8f6babcc03d\",\"voting_public_key\":\"0xa062f95fee747144d5e511940624bc6546509eeaeae9383257a9c43e7ddc58c17c2bab4ae62053122184c381b90db380\",\"builder_proposals\":true,\"url\":\"http://path-to-web3signer.com\",\"root_certificate_path\":\"/path/to/certificate.pem\",\"client_identity_path\":\"/path/to/identity.p12\",\"client_identity_password\":\"pass\",\"request_timeout_ms\":12000}]"
```

| Lighthouse version | Release date | Schema version | Downgrade available? |
|--------------------|--------------|----------------|----------------------|
| v4.6.0             | Dec 2023     | v19            | yes before Deneb     |
| v4.6.0-rc.0        | Dec 2023     | v18            | yes before Deneb     |
| v4.5.0             | Sep 2023     | v17            | yes                  |

Pruning historic states helps in managing the disk space used by the Lighthouse beacon node by removing old beacon
states from the freezer database. This can be especially useful when the database has accumulated a significant amount
of historic data. This command is intended for nodes synced before 4.4.1, as newly synced nodes no longer store historic states by default.

Here are the steps to prune historic states:

```bash
sudo -u "$LH_USER" lighthouse db prune-states --datadir "$LH_DATADIR" --network "$NET"
```

If pruning is available, Lighthouse will log:

```
INFO Ready to prune states
WARN Pruning states is irreversible
WARN Re-run this command with --confirm to commit to state deletion
INFO Nothing has been pruned on this run
```

3. If you are ready to prune the states irreversibly, add the `--confirm` flag to commit the changes:

```bash
sudo -u "$LH_USER" lighthouse db prune-states --confirm --datadir "$LH_DATADIR" --network "$NET"
```

The `--confirm` flag ensures that you are aware the action is irreversible, and historic states will be permanently removed. Lighthouse will log:

```
INFO Historic states pruned successfully
```

4. After successfully pruning the historic states, you can restart the Lighthouse beacon node:

You can run a Docker beacon node with the following command:

```bash
docker run -p 9000:9000/tcp -p 9000:9000/udp -p 9001:9001/udp -p 127.0.0.1:5052:5052 -v $HOME/.lighthouse:/root/.lighthouse sigp/lighthouse lighthouse --network mainnet beacon --http --http-address 0.0.0.0
```
> To join the Goerli testnet, use `--network goerli` instead.

### Ports

In order to be a good peer and serve other peers you should expose port `9000` for both TCP and UDP, and port `9001` for UDP.
Use the `-p` flag to do this:

```bash
docker run -p 9000:9000/tcp -p 9000:9000/udp -p 9001:9001/udp sigp/lighthouse lighthouse beacon
```

If you use the `--http` flag you may also want to expose the HTTP port with `-p
127.0.0.1:5052:5052`.

```bash
docker run -p 9000:9000/tcp -p 9000:9000/udp -p 9001:9001/udp -p 127.0.0.1:5052:5052 sigp/lighthouse lighthouse beacon --http --http-address 0.0.0.0
```
[docker_hub]: https://hub.docker.com/repository/docker/sigp/lighthouse/

- [Does increasing the number of validators increase the CPU and other computer resources used?](#vc-resource)
- [I want to add new validators. Do I have to reimport the existing keys?](#vc-reimport)
- [Do I have to stop `lighthouse vc` when importing new validator keys?](#vc-import)
- [How can I delete my validator once it is imported?](#vc-delete)

## [Network, Monitoring and Maintenance](#network-monitoring-and-maintenance-1)
- [I have a low peer count and it is not increasing](#net-peer)
- [Should I do anything to the beacon node or validator client settings if I have a relocation of the node / change of IP address?](#net-ip)
- [How to change the TCP/UDP port 9000 that Lighthouse listens on?](#net-port)
- [Lighthouse `v4.3.0` introduces a change where a node will subscribe to only 2 subnets in total. I am worried that this will impact my validators' return.](#net-subnet)
- [How to know how many of my peers are connected through QUIC?](#net-quic)

## [Miscellaneous](#miscellaneous-1)
- [What should I do if I lose my slashing protection database?](#misc-slashing)
- [Does Lighthouse have a pruning function like the execution client to save disk space?](#misc-prune)
- [Can I use a HDD for the freezer database and only have the hot db on SSD?](#misc-freezer)
- [Can Lighthouse log in local timestamp instead of UTC?](#misc-timestamp)
- [My hard disk is full and my validator is down. What should I do?](#misc-full)
## Beacon Node

The `WARN Execution engine called failed` log is shown when the beacon node cannot reach the execution engine. A common example of the detailed message is:
`error: Reqwest(reqwest::Error { kind: Request, url: Url { scheme: "http", cannot_be_a_base: false, username: "", password: None, host: Some(Ipv4(127.0.0.1)), port: Some(8551), path: "/", query: None, fragment: None }, source: TimedOut }), service: exec`

which says `TimedOut` at the end of the message. This means that the execution engine has not responded in time to the beacon node. One option is to add the flags `--execution-timeout-multiplier 3` and `--disable-lock-timeouts` to the beacon node. However, if the error persists, it is worth digging further to find out the cause. There are a few reasons why this can occur:
1. The execution engine is not synced. Check the log of the execution engine to make sure that it is synced. If it is syncing, wait until it is synced and the error will disappear. You will see the beacon node logs `INFO Execution engine online` when it is synced.
1. The computer is overloaded. Check the CPU and RAM usage to see if it has overloaded. You can use `htop` to check for CPU and RAM usage.
1. Your SSD is slow. Check if your SSD is in "The Bad" list [here](https://gist.github.com/yorickdowne/f3a3e79a573bf35767cd002cc977b038). If your SSD is in "The Bad" list, it means it cannot keep in sync with the network and you may want to consider upgrading to a better SSD.

If the reason for the error message is caused by no. 1 above, you may want to look further. If the execution engine is out of sync suddenly, it is usually caused by ungraceful shutdown. The common causes for ungraceful shutdown are:
- Power outage. If power outages are an issue at your place, consider getting a UPS to avoid ungraceful shutdown of services.
- The service file is not stopped properly. To overcome this, make sure that the process is stopped properly, e.g., during client updates.
- Out of memory (oom) error. This can happen when the system memory usage has reached its maximum and causes the execution engine to be killed. When this occurs, the log file will show `Main process exited, code=killed, status=9/KILL`. You can also run `sudo journalctl -a --since "18 hours ago" | grep -i "killed process"` to confirm that the execution client has been killed due to oom. If you are using geth as the execution client, a short term solution is to reduce the resources used. For example, you can reduce the cache by adding the flag `--cache 2048`. If the oom occurs rather frequently, a long term solution is to increase the memory capacity of the computer.

### <a name="bn-download-historical"></a> My beacon node is stuck at downloading historical blocks using checkpoint sync. What should I do?

```
INFO Downloading historical blocks est_time: --, distance: 4524545 slots
```

If the same log appears every minute and you do not see progress in downloading historical blocks, you can try one of the following:

- Check the number of peers you are connected to. If you have low peers (less than 50), try to do port forwarding on the ports 9000 TCP/UDP and 9001 UDP to increase peer count.
- Restart the beacon node.
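
For a sense of scale, the `distance` in the example log can be converted to wall-clock history (assuming 12-second slots):

```bash
# distance in slots × 12 s per slot, expressed in days.
echo 4524545 | awk '{ printf "%.0f days of history\n", $1 * 12 / 86400 }'
# prints "628 days of history"
```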

```
INFO Block from HTTP API already known
WARN Could not publish message error: Duplicate, service: libp2p
```

This error usually happens when users are running mev-boost. The relay will publish the block on the network before returning it to you. After the relay publishes the block on the network, it will propagate through nodes, and it often happens that your node receives the block from your connected peers via gossip first, before getting the block from the relay, hence the message `duplicate`.

In short, it is nothing to worry about.

```
WARN Head is optimistic execution_block_hash: 0x47e7555f1d4215d1ad409b1ac1
```

It means the beacon node will follow the chain, but it will not be able to attest or produce blocks. This is because the execution client is not synced, so the beacon chain cannot verify the authenticity of the chain head, hence the word `optimistic`. What you need to do is to make sure that the execution client is up and syncing. Once the execution client is synced, the error will disappear.

### <a name="bn-timeout"></a> My beacon node logs `CRIT Beacon block processing error error: ValidatorPubkeyCacheLockTimeout, service: beacon`, what should I do?

An example of the log is shown below:

```
CRIT Beacon block processing error error: ValidatorPubkeyCacheLockTimeout
WARN BlockProcessingFailure outcome: ValidatorPubkeyCacheLockTimeout, msg: unexpected condition in processing block.
```

A `Timeout` error suggests that the computer may be overloaded at the moment, for example, the execution client is still syncing. You may use the flag `--disable-lock-timeouts` to silence this error, although it will not fix the underlying slowness. Nevertheless, this is a relatively harmless log, and the error should go away once the resources used are back to normal.

### <a name="bn-missing-beacon"></a> My beacon node logs `WARN BlockProcessingFailure outcome: MissingBeaconBlock`, what should I do?

An example of the full log is shown below:

```
WARN BlockProcessingFailure outcome: MissingBeaconBlock(0xbdba211f8d72029554e405d8e4906690dca807d1d7b1bc8c9b88d7970f1648bc), msg: unexpected condition in processing block.
```

`MissingBeaconBlock` suggests that the database is corrupted. You should wipe the database and use [Checkpoint Sync](./checkpoint-sync.md) to resync the beacon chain.

### <a name="bn-download-slow"></a> After checkpoint sync, the progress of `downloading historical blocks` is slow. Why?

The error is `503 Service Unavailable`. This means that the beacon node is still syncing. An example log:

```
ERRO Failed to download attester duties err: FailedToDownloadAttesters("Some endpoints failed, num_failed: 2 http://localhost:5052/ => Unavailable(NotSynced), http://localhost:5052/ => RequestFailed(ServerMessage(ErrorMessage { code: 503, message: \"SERVICE_UNAVAILABLE: beacon node is syncing
```

This means that the validator client is sending requests to the beacon node. However, as the beacon node is still syncing, it is therefore unable to fulfil the request. The error will disappear once the beacon node is synced.

### <a name="bn-fork-choice"></a> My beacon node logs `WARN Error signalling fork choice waiter`, what should I do?

repeats until the queue is cleared. The churn limit is summarised in the table below:

<div align="center" style="text-align: center;">

| Number of active validators | Validators activated per epoch | Validators activated per day |
|-----------------------------|--------------------------------|------------------------------|
| 327679 or less              | 4                              | 900                          |
| 327680-393215               | 5                              | 1125                         |
| 393216-458751               | 6                              | 1350                         |
| 458752-524287               | 7                              | 1575                         |

</div>

For example, the number of active validators on Mainnet is about 574000 in May 2023. This means that 8 validators can be activated per epoch, or 1800 per day (note that the same applies to the exit queue). If, for example, there are 9000 validators waiting to be activated, the waiting time can take up to 5 days.
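
These figures follow from the activation churn limit (a sketch, assuming the churn limit quotient of 65536 and 225 epochs per day):

```bash
# churn per epoch = max(4, active_validators / 65536); per day = churn × 225.
echo 574000 | awk '{ churn = int($1 / 65536); if (churn < 4) churn = 4;
  printf "per epoch: %d, per day: %d\n", churn, churn * 225 }'
# prints "per epoch: 8, per day: 1800"
```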

Once a validator has been activated, congratulations! It's time to
produce blocks and attestations!

duplicate your JSON keystores and don't run `lighthouse vc` twice). This will lead to slashing.

However, there are some components which can be configured with redundancy. See the
[Redundancy](./redundancy.md) guide for more information.

### <a name="vc-missed-attestations"></a> I am missing attestations. Why?

The first thing is to ensure both consensus and execution clients are synced with the network. If they are synced, there may still be some issues with the node setup itself that are causing the missed attestations. Check the setup to ensure that:
- the clock is synced
- the computer has sufficient resources and is not overloaded
- the internet is working well
- you have sufficient peers

You can see more information on the [Ethstaker KB](https://ethstaker.gitbook.io/ethstaker-knowledge-base/help/missed-attestations).
Another cause for missing attestations is delays during block processing. When this happens, the debug logs will show (debug logs can be found under `$datadir/beacon/logs`):
|
||||
|
||||
@@ -311,14 +313,14 @@ Another cause for missing attestations is delays during block processing. When t
```
DEBG Delayed head block set_as_head_delay: Some(93.579425ms), imported_delay: Some(1.460405278s), observed_delay: Some(2.540811921s), block_delay: 4.094796624s, slot: 6837344, proposer_index: 211108, block_root: 0x2c52231c0a5a117401f5231585de8aa5dd963bc7cbc00c544e681342eedd1700, service: beacon
```

The fields to look for are `imported_delay > 1s` and `observed_delay < 3s`. The `imported_delay` is how long the node took to process the block. An `imported_delay` of more than 1 second suggests that there is slowness in processing the block. It could be due to high CPU usage, high disk I/O, or the clients performing background maintenance. The `observed_delay` is determined mostly by the proposer and partly by your networking setup (e.g., how long it took for the node to receive the block). An `observed_delay` of less than 3 seconds means that the block did not arrive late from the block proposer. Combining the above, this implies that the validator should have been able to attest to the block, but failed due to slowness in the node processing the block.

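As a quick sanity check, the three components in the log line sum to the total `block_delay`, and attestations are due 4 seconds into the slot, so a block imported only after ~4.09 s cannot be attested to in time. A rough sketch using the values from the log above:

```python
# Decompose the block delay from the example log line.
observed_delay = 2.540811921     # seconds until the block was received over the network
imported_delay = 1.460405278     # seconds the node spent processing the block
set_as_head_delay = 0.093579425  # seconds to update the head after import

block_delay = observed_delay + imported_delay + set_as_head_delay
ATTESTATION_DEADLINE = 4.0       # attestations are produced 4 s (1/3 slot) into the slot

print(round(block_delay, 9))                  # matches the logged block_delay: 4.094796624
print(block_delay > ATTESTATION_DEADLINE)     # the head was set too late to attest
```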
### <a name="vc-head-vote"></a> Sometimes I miss the attestation head vote, resulting in penalty. Is this normal?
In general, it is unavoidable to have some penalties occasionally. This is particularly the case when you are assigned to attest on the first slot of an epoch: if the proposer of that slot releases the block late, you will get penalised for missing the target and head votes. Your attestation performance does not only depend on your own setup, but also on everyone else's performance.

You could also check the sync aggregate participation percentage on block explorers such as [beaconcha.in](https://beaconcha.in/). A low sync aggregate participation percentage (e.g., 60-70%) indicates that the block that you are assigned to attest to may have been published late. As a result, your validator fails to correctly attest to the block.

Another possible reason for missing the head vote is a chain "reorg". A reorg can happen if the proposer publishes block `n` late and the proposer of block `n+1` builds upon block `n-1` instead of `n`. Due to the reorg, block `n` is never included in the chain. If you are assigned to attest at slot `n`, it is possible you may still attest to block `n` despite most of the network recognizing the block as being late. In this case you will miss the head reward.

@@ -345,6 +347,13 @@ Generally yes.
If you do not want to stop `lighthouse vc`, you can use the [key manager API](./api-vc-endpoints.md) to import keys.

### <a name="vc-delete"></a> How can I delete my validator once it is imported?

Lighthouse supports the [KeyManager API](https://ethereum.github.io/keymanager-APIs/#/Local%20Key%20Manager/deleteKeys) to delete validators and remove them from the `validator_definitions.yml` file. To do so, start the validator client with the flag `--http` and call the API.

If you are looking to delete the validators in one node and import them to another, you can use the [validator-manager](./validator-manager-move.md) to move the validators across nodes without the hassle of deleting and importing the keys.

## Network, Monitoring and Maintenance

### <a name="net-peer"></a> I have a low peer count and it is not increasing

@@ -379,7 +388,7 @@ If the ports are open, you should have incoming peers. To check that you have in
If you have incoming peers, it should return a lot of data containing information about peers. If the response is empty, it means that you have no incoming peers and that the ports are not open. You may want to double check if the port forward was correctly set up.

2. Check that you do not lower the number of peers using the flag `--target-peers`. The default is 80. A lower value will lower the maximum number of peers your node can connect to, which may potentially interrupt the validator performance. We recommend leaving `--target-peers` untouched to keep a diverse set of peers.

3. Ensure that you have a quality router for the internet connection. For example, if you connect the router to many devices including the node, it may be possible that the router cannot handle all routing tasks, hence struggling to keep up the number of peers. Therefore, using a quality router for the node is important to keep a healthy number of peers.

@@ -426,8 +435,8 @@ For these reasons, we recommend that you make your node publicly accessible.
Lighthouse supports UPnP. If you are behind a NAT with a router that supports
UPnP, you can simply ensure UPnP is enabled (Lighthouse will inform you in its
initial logs if a route has been established). You can also manually [set up port mappings/port forwarding](./advanced_networking.md#how-to-open-ports) in your router to your local Lighthouse instance. By default,
Lighthouse uses port 9000 for both TCP and UDP, and optionally 9001 UDP for QUIC support.
Opening these ports will make your Lighthouse node maximally contactable.

### <a name="net-monitor"></a> How can I monitor my validators?

@@ -440,7 +449,7 @@ Monitoring](./validator-monitoring.md) for more information. Lighthouse has also
The setting on the beacon node is the same for both cases below. In the beacon node, specify `lighthouse bn --http-address local_IP` so that the beacon node is listening on the local network rather than `localhost`. You can find the `local_IP` by running the command `hostname -I | awk '{print $1}'` on the server running the beacon node.

1. If the beacon node and validator clients are on different servers *in the same network*, the setting in the validator client is as follows:

Use the flag `--beacon-nodes` to point to the beacon node. For example, `lighthouse vc --beacon-nodes http://local_IP:5052` where `local_IP` is the local IP address of the beacon node and `5052` is the default `http-port` of the beacon node.

If you have a firewall set up, e.g., `ufw`, you will need to allow port 5052 (assuming that the default port is used) with `sudo ufw allow 5052`. Note: this will allow all IP addresses to access the HTTP API of the beacon node. If you are on an untrusted network (e.g., a university or public WiFi) or the host is exposed to the internet, apply IP-address filtering as described later in this section.

@@ -463,7 +472,7 @@ The setting on the beacon node is the same for both cases below. In the beacon n
If you have a firewall set up, e.g., `ufw`, you will need to allow connections to port 5052 (assuming that the default port is used). Since the beacon node HTTP/HTTPS API is public-facing (i.e., the 5052 port is now exposed to the internet due to port forwarding), we strongly recommend applying IP-address filtering to protect the BN/VC connection from malicious actors. This can be done using the command:

```
sudo ufw allow from vc_IP_address proto tcp to any port 5052
```

@@ -476,16 +485,35 @@ It is also worth noting that the `--beacon-nodes` flag can also be used for redu
No. Lighthouse will auto-detect the change and update your Ethereum Node Record (ENR). You just need to make sure you are not manually setting the ENR with `--enr-address` (a flag that is not needed for common use cases).

### <a name="net-port"></a> How to change the TCP/UDP port 9000 that Lighthouse listens on?

Use the flag `--port <PORT>` in the beacon node. This flag can be useful when you are running two beacon nodes at the same time. You can leave one beacon node as the default port 9000, and configure the second beacon node to listen on, e.g., `--port 9100`.
Since v4.5.0, Lighthouse supports QUIC and by default will use the value of `--port` + 1 to listen via UDP (default `9001`).
This can be configured by using the flag `--quic-port`. Refer to [Advanced Networking](./advanced_networking.md#nat-traversal-port-forwarding) for more information.

### <a name="net-subnet"></a> Lighthouse `v4.3.0` introduces a change where a node will subscribe to only 2 subnets in total. I am worried that this will impact my validators' returns.

Previously, having more validators meant subscribing to more subnets. Since the change, a node will now only subscribe to 2 subnets in total. This will bring about significant reductions in bandwidth for nodes with multiple validators.

While subscribing to more subnets can ensure you have peers on a wider range of subnets, these subscriptions consume resources and bandwidth. This does not significantly increase the performance of the node; however, it does benefit other nodes on the network.

If you would still like to subscribe to all subnets, you can use the flag `--subscribe-all-subnets`. This may improve the block rewards by 1-5%, though it comes at the cost of a much higher bandwidth requirement.

### <a name="net-quic"></a> How to know how many of my peers are connected via QUIC?

With `--metrics` enabled in the beacon node, you can find the number of peers connected via QUIC using:

```bash
curl -s "http://localhost:5054/metrics" | grep libp2p_quic_peers
```

An example response is:

```
# HELP libp2p_quic_peers Count of libp2p peers currently connected via QUIC
# TYPE libp2p_quic_peers gauge
libp2p_quic_peers 4
```

which shows that there are 4 peers connected via QUIC.

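If you prefer to read the gauge programmatically rather than with `grep`, a minimal sketch is shown below. The `parse_gauge` helper is hypothetical, not part of Lighthouse; in practice you would fetch the text from `http://localhost:5054/metrics`.

```python
# Extract a gauge value from Prometheus text-format metrics.
def parse_gauge(metrics_text: str, name: str) -> float:
    for line in metrics_text.splitlines():
        # Skip comment lines (# HELP / # TYPE) and match the metric name.
        if not line.startswith("#") and line.startswith(name):
            return float(line.split()[-1])
    raise KeyError(name)

# Sample response from the metrics endpoint, as shown above.
sample = """\
# HELP libp2p_quic_peers Count of libp2p peers currently connected via QUIC
# TYPE libp2p_quic_peers gauge
libp2p_quic_peers 4
"""
print(parse_gauge(sample, "libp2p_quic_peers"))  # 4.0
```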
## Miscellaneous

### <a name="misc-slashing"></a> What should I do if I lose my slashing protection database?

@@ -523,7 +551,7 @@ which says that the version is v4.1.0.
### <a name="misc-prune"></a> Does Lighthouse have a pruning function like the execution client to save disk space?

There is no pruning of the Lighthouse database for now. However, since v4.2.0, a feature to only sync back to the weak subjectivity point (approximately 5 months) when syncing via a checkpoint sync was added. This helps to save disk space, since the previous behaviour was to sync back to genesis by default.

### <a name="misc-freezer"></a> Can I use an HDD for the freezer database and only have the hot db on SSD?

@@ -531,11 +559,13 @@ Yes, you can do so by using the flag `--freezer-dir /path/to/freezer_db` in the
### <a name="misc-timestamp"></a> Can Lighthouse log in local timestamp instead of UTC?

The reason why Lighthouse logs in UTC is due to a dependency on an upstream library that is [yet to be resolved](https://github.com/sigp/lighthouse/issues/3130). Alternatively, using the flag `--disable-log-timestamp` in combination with systemd will suppress the UTC timestamps and print the logs in local timestamps.

### <a name="misc-full"></a> My hard disk is full and my validator is down. What should I do?

A quick way to get the validator back online is by removing the Lighthouse beacon node database and resyncing Lighthouse using checkpoint sync. A guide to do this can be found in the [Lighthouse Discord server](https://discord.com/channels/605577013327167508/605577013331361793/1019755522985050142). With some free space left, you will then be able to prune the execution client database to free up more space.

For a relatively long term solution, if you are using Geth or Nethermind as the execution client, you can consider setting up the online pruning feature. Refer to [Geth](https://blog.ethereum.org/2023/09/12/geth-v1-13-0) and [Nethermind](https://gist.github.com/yorickdowne/67be09b3ba0a9ff85ed6f83315b5f7e0) for details.

@@ -32,11 +32,7 @@ FLAGS:
--disable-deposit-contract-sync Explicitly disables syncing of deposit logs from the execution node. This
overrides any previous option that depends on it. Useful if you intend to
run a non-validating beacon node.
--disable-duplicate-warn-logs This flag is deprecated and has no effect.
-x, --disable-enr-auto-update Discovery automatically updates the nodes local ENR with an external IP
address and port as seen by other peers on the network. This disables
this feature, fixing the ENR's IP/PORT to those specified on boot.

@@ -185,6 +181,9 @@ OPTIONS:
--builder-user-agent <STRING>
The HTTP user agent to send alongside requests to the builder URL. The default is Lighthouse's version
string.
--checkpoint-blobs <BLOBS_SSZ>
Set the checkpoint blobs to start syncing from. Must be aligned and match --checkpoint-block. Using
--checkpoint-sync-url instead is recommended.
--checkpoint-block <BLOCK_SSZ>
Set a checkpoint block to start syncing from. Must be aligned and match --checkpoint-state. Using
--checkpoint-sync-url instead is recommended.

@@ -13,7 +13,7 @@ FLAGS:
--disable-auto-discover
If present, do not attempt to discover new validators in the validators-dir. Validators will need to be
manually added to the validator_definitions.yml file.
--disable-log-timestamp If present, do not include timestamps in logging output.
--disable-malloc-tuning
If present, do not configure the system allocator. Providing this flag will generally increase memory usage,
it should only be provided when debugging specific memory allocation issues.

@@ -21,6 +21,14 @@ FLAGS:
DEPRECATED. Use --broadcast. By default, Lighthouse publishes attestation, sync committee subscriptions and
proposer preparation messages to all beacon nodes provided in the `--beacon-nodes flag`. This option changes
that behaviour such that these api calls only go out to the first available and synced beacon node
--disable-slashing-protection-web3signer
Disable Lighthouse's slashing protection for all web3signer keys. This can reduce the I/O burden on the VC
but is only safe if slashing protection is enabled on the remote signer and is implemented correctly. DO NOT
ENABLE THIS FLAG UNLESS YOU ARE CERTAIN THAT SLASHING PROTECTION IS ENABLED ON THE REMOTE SIGNER. YOU WILL
GET SLASHED IF YOU USE THIS FLAG WITHOUT ENABLING WEB3SIGNER'S SLASHING PROTECTION.
--distributed
Enables functionality required for running the validator in a distributed validator cluster.

--enable-doppelganger-protection
If this flag is set, Lighthouse will delay startup for three epochs and monitor for messages on the network
by any of the validators managed by this client. This will result in three (possibly four) epochs worth of

@@ -32,8 +40,8 @@ FLAGS:
Enable per validator metrics for > 64 validators. Note: This flag is automatically enabled for <= 64
validators. Enabling this metric for higher validator counts will lead to higher volume of prometheus
metrics being collected.
-h, --help Prints help information
--http Enable the RESTful HTTP API server. Disabled by default.
--http-allow-keystore-export
If present, allow access to the DELETE /lighthouse/keystores HTTP API method, which allows exporting
keystores and passwords to HTTP API consumers who have access to the API token. This method is useful for

@@ -47,7 +55,7 @@ FLAGS:
flag unless you're certain that a new slashing protection database is required. Usually, your database will
have been initialized when you imported your validator keys. If you misplace your database and then run with
this flag you risk being slashed.
--log-color Force outputting colors when emitting logs to the terminal.
--logfile-compress
If present, compress old log files. This can help reduce the space needed to store old logs.

@@ -55,7 +63,7 @@ FLAGS:
If present, log files will be generated as world-readable meaning they can be read by any user on the
machine. Note that logs can often contain sensitive information about your validator and so this flag should
be used with caution. For Windows users, the log file permissions will be inherited from the parent folder.
--metrics Enable the Prometheus metrics HTTP server. Disabled by default.
--prefer-builder-proposals
If this flag is set, Lighthouse will always prefer blocks constructed by builders, regardless of payload
value.

@@ -69,7 +77,7 @@ FLAGS:
--use-long-timeouts
If present, the validator client will use longer timeouts for requests made to the beacon node. This flag is
generally not recommended, longer timeouts can cause missed duties when fallbacks are used.
-V, --version Prints version information

OPTIONS:
--beacon-nodes <NETWORK_ADDRESSES>

@@ -209,4 +217,9 @@ OPTIONS:
--validators-dir <VALIDATORS_DIR>
The directory which contains the validator keystores, deposit data for each validator along with the common
slashing protection database and the validator_definitions.yml
--web3-signer-keep-alive-timeout <MILLIS>
Keep-alive timeout for each web3signer connection. Set to 'null' to never timeout [default: 90000]

--web3-signer-max-idle-connections <COUNT>
Maximum number of idle connections to maintain per web3signer host. Default is unlimited.
```

@@ -1,127 +0,0 @@
const NETWORK = "5";
const NETWORK_NAME = "Goerli Test Network";
const DEPOSIT_CONTRACT = "0x07b39F4fDE4A38bACe212b546dAc87C58DfE3fDC";
const DEPOSIT_AMOUNT_ETH = "32";
const GAS_LIMIT = "4000000";
const DEPOSIT_DATA_BYTES = 420;

let PREVIOUS_NON_ERROR_STATE = "";

$(document).ready(function(){
    if (typeof window.ethereum !== 'undefined') {
        ethereum.on('networkChanged', function (accounts) {
            checkNetwork()
        })

        PREVIOUS_NON_ERROR_STATE = "upload";
        checkNetwork()
    } else {
        console.error("No metamask detected!")
        triggerError("Metamask is not installed.<br> <a href='https://metamask.io'>Get Metamask.</a>")
    }

    $("#fileInput").change(function() {
        openFile(this.files[0])
    });

    $("#uploadButton").on("click", function() {
        $("#fileInput").trigger("click");
    });
});

function checkNetwork() {
    if (window.ethereum.networkVersion === NETWORK) {
        setUiState(PREVIOUS_NON_ERROR_STATE)
    } else {
        triggerError("Please set Metamask to use " + NETWORK_NAME + ".")
    }
}

function doDeposit(deposit_data) {
    const ethereum = window.ethereum;
    const utils = ethers.utils;

    let wei = utils.parseEther(DEPOSIT_AMOUNT_ETH);
    let gasLimit = utils.bigNumberify(GAS_LIMIT);

    ethereum.enable()
        .then(function (accounts) {
            let params = [{
                "from": accounts[0],
                "to": DEPOSIT_CONTRACT,
                "gas": utils.hexlify(gasLimit),
                "value": utils.hexlify(wei),
                "data": deposit_data
            }]

            ethereum.sendAsync({
                method: 'eth_sendTransaction',
                params: params,
                from: accounts[0], // Provide the user's account to use.
            }, function (err, result) {
                if (err !== null) {
                    triggerError("<p>" + err.message + "</p><p><a href=''>Reload</a> the window to try again.</p>")
                } else {
                    let tx_hash = result.result;
                    $("#txLink").attr("href", "https://goerli.etherscan.io/tx/" + tx_hash);
                    setUiState("waiting");
                }
            })
        })
        .catch(function (error) {
            triggerError("Unable to get Metamask accounts.<br>Reload page to try again.")
        })

}

function openFile(file) {
    var reader = new FileReader();

    reader.onload = function () {
        let data = reader.result;
        if (data.startsWith("0x")) {
            if (data.length === DEPOSIT_DATA_BYTES * 2 + 2) {
                doDeposit(data)
            } else {
                triggerError("Invalid eth1_deposit_file. Bad length.")
            }
        } else {
            triggerError("Invalid eth1_deposit_file. Did not start with 0x.")
        }
    }

    reader.readAsBinaryString(file);
}

function triggerError(text) {
    $("#errorText").html(text);
    setUiState("error");
}

function setUiState(state) {
    if (state === "upload") {
        $('#uploadDiv').show();
        $('#depositDiv').hide();
        $('#waitingDiv').hide();
        $('#errorDiv').hide();
    } else if (state == "deposit") {
        $('#uploadDiv').hide();
        $('#depositDiv').show();
        $('#waitingDiv').hide();
        $('#errorDiv').hide();
    } else if (state == "error") {
        $('#uploadDiv').hide();
        $('#depositDiv').hide();
        $('#waitingDiv').hide();
        $('#errorDiv').show();
    } else if (state == "waiting") {
        $('#uploadDiv').hide();
        $('#depositDiv').hide();
        $('#waitingDiv').show();
        $('#errorDiv').hide();
    }

    if (state !== "error") {
        PREVIOUS_NON_ERROR_STATE = state;
    }
}

@@ -101,10 +101,6 @@ from this list:
- `none`: Disable all broadcasting. This option only has an effect when provided alone, otherwise
it is ignored. Not recommended except for expert tweakers.

The default is `--broadcast subscriptions`. To also broadcast blocks for example, use
`--broadcast subscriptions,blocks`.

@@ -17,7 +17,7 @@ of the immaturity of the slasher UX and the extra resources required.
The slasher runs inside the same process as the beacon node, when enabled via the `--slasher` flag:

```
lighthouse bn --slasher
```

The slasher hooks into Lighthouse's block and attestation processing, and pushes messages into an

@@ -26,9 +26,6 @@ verifies the signatures of otherwise invalid messages. When a slasher batch upda
messages are filtered for relevancy, and all relevant messages are checked for slashings and written
to the slasher database.

## Configuration

The slasher has several configuration options that control its functioning.

@@ -18,7 +18,7 @@ To enable the HTTP API for the beacon node, utilize the `--gui` CLI flag. This a
If you require accessibility from another machine within the network, configure the `--http-address` to match the local LAN IP of the system running the Beacon Node and Validator Client.

> To access from another machine on the same network (192.168.0.200) set the Beacon Node and Validator Client `--http-address` as `192.168.0.200`. When this is set, the validator client requires the flag `--beacon-nodes http://192.168.0.200:5052` to connect to the beacon node.

In a similar manner, the validator client requires activation of the `--http` flag, along with the optional consideration of configuring the `--http-address` flag. If the `--http-address` flag is set on the Validator Client, then the `--unencrypted-http-transport` flag is required as well. These settings will ensure compatibility with Siren's connectivity requirements.

@@ -10,7 +10,7 @@ The required Api token may be found in the default data directory of the validat
If you receive a red notification with a BEACON or VALIDATOR NODE NETWORK ERROR you can refer to the lighthouse ui configuration and [`connecting to clients section`](./ui-configuration.md#connecting-to-the-clients).

## 4. How do I connect Siren to Lighthouse from a different computer on the same network?

The most effective approach to enable access for a local network computer to Lighthouse's HTTP API ports is by configuring the `--http-address` to match the local LAN IP of the system running the beacon node and validator client. For instance, if the said node operates at `192.168.0.200`, this IP can be specified using the `--http-address` parameter as `--http-address 192.168.0.200`. When this is set, the validator client requires the flag `--beacon-nodes http://192.168.0.200:5052` to connect to the beacon node.

Subsequently, by designating the host as `192.168.0.200`, you can seamlessly connect Siren to this specific beacon node and validator client pair from any computer situated within the same network.

## 5. How can I use Siren to monitor my validators remotely when I am not at home?

@@ -22,7 +22,7 @@ Most contemporary home routers provide options for VPN access in various ways. A
|
||||
In the absence of a VPN, an alternative approach involves utilizing an SSH tunnel. To achieve this, you need remote SSH access to the computer hosting the Beacon Node and Validator Client pair (which necessitates a port forward in your router). In this context, while it is not obligatory to set a `--http-address` flag on the Beacon Node and Validator Client, you can configure an SSH tunnel to the local ports on the node and establish a connection through the tunnel. For instructions on setting up an SSH tunnel, refer to [`Connecting Siren via SSH tunnel`](./ui-faqs.md#6-how-do-i-connect-siren-to-lighthouse-via-a-ssh-tunnel) for detailed guidance.
|
||||
|
||||
## 6. How do I connect Siren to Lighthouse via a ssh tunnel?
|
||||
If you would like to access Siren beyond the local network (i.e across the internet), we recommend using an SSH tunnel. This requires a tunnel for 3 ports: `80` (assuming the port is unchanged as per the [installation guide](./ui-installation.md#docker-recommended), `5052` (for beacon node) and `5062` (for validator client). You can use the command below to perform SSH tunneling:
|
||||
If you would like to access Siren beyond the local network (i.e across the internet), we recommend using an SSH tunnel. This requires a tunnel for 3 ports: `80` (assuming the port is unchanged as per the [installation guide](./ui-installation.md#docker-recommended)), `5052` (for beacon node) and `5062` (for validator client). You can use the command below to perform SSH tunneling:
```bash
# Forward ports 80 (Siren), 5052 (beacon node) and 5062 (validator client);
# replace username@remote_ip with your own SSH login details.
ssh -N -L 80:127.0.0.1:80 -L 5052:127.0.0.1:5052 -L 5062:127.0.0.1:5062 username@remote_ip
```
## 8. How do I change my Beacon or Validator address after logging in?
Once you have successfully arrived at the main dashboard, use the sidebar to access the settings view. In the top right-hand corner there is a `Configuration` action button that will redirect you back to the configuration screen where you can make appropriate changes.
## 9. Why doesn't my validator balance graph show any data?
If your graph is not showing data, it usually means your validator node is still caching data. The application must wait at least 3 epochs before it can render any graphical visualizations. This could take up to 20 minutes.
Monitor the mainnet validators at indices `0` and `1`:
```
lighthouse bn --validator-monitor-pubkeys 0x933ad9491b62059dd065b560d256d8957a8c402cc6e8d8ee7290ae11e8f7329267a8811c397529dac52ae1342ba58c95,0xa1d1ad0714035353258038e964ae9675dc0252ee22cea896825c01458e1807bfad2f9969338798548d9858a571f7425c
```
> Note: The validator monitoring will stop collecting per-validator Prometheus metrics and issuing per-validator logs when the number of validators reaches 64. To continue collecting metrics and logging, use the flag `--validator-monitor-individual-tracking-threshold N` where `N` is a number greater than the number of validators to monitor.
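For instance, assuming automatic discovery of local validators via `--validator-monitor-auto` is also in use, per-validator tracking could be retained for a larger set like so (the threshold `128` is an arbitrary illustrative value):

```bash
lighthouse bn \
  --validator-monitor-auto \
  --validator-monitor-individual-tracking-threshold 128
```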
## Observing Monitoring
## Usage
A remote signing validator is added to Lighthouse in much the same way as one that uses a local keystore, via the [`validator_definitions.yml`](./validator-management.md) file or via the [`POST /lighthouse/validators/web3signer`](./api-vc-endpoints.md#post-lighthousevalidatorsweb3signer) API endpoint.
Here is an example of a `validator_definitions.yml` file containing one validator which uses a remote signer:
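A minimal sketch of such a definition follows; the public key and signer URL below are placeholders, and the field names assume the `web3signer` definition type:

```yaml
---
- enabled: true
  # Placeholder BLS public key: substitute the validator's real voting pubkey.
  voting_public_key: "0xa5566f9ec3c6e1fdf362634ebec9ef7aceb0e460e5079714808388e5d48f4ae1e12897fed1bea951c17fa389d511e477"
  type: web3signer
  # Hypothetical URL of the remote Web3Signer instance.
  url: "https://my-remote-signer.example.com:1234"
```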