Nethermind as of January 2024 takes about 1.1 TiB of space on a snap sync, and then grows by ~ 27 GiB/week.
When it starts to fill its disk, it can be pruned online (no downtime) to free up space again, shrinking the database back to roughly the size of a fresh sync.
- This is not an archive node. Do not try to prune an archive node.
- The volume Nethermind stores its database on should have roughly 300 GiB of free space or more.
Nethermind reportedly uses around 260 GiB of additional space while pruning. A sensible approach is to alert at 350 GiB free and prune then.
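To check how much free space the Nethermind volume currently has, you can query df directly. This is just a sketch: /var/lib/nethermind is an assumed datadir, matching the service example later in this guide; adjust it to your setup.

```shell
#!/bin/sh
# Print free space (in GiB) on the volume holding Nethermind's database.
# /var/lib/nethermind is an assumed datadir; change it to match your setup.
DATADIR="${DATADIR:-/var/lib/nethermind}"
FREE_GIB=$(df -BG --output=avail "$DATADIR" | tail -n 1 | tr -dc '0-9')
echo "Free space on Nethermind volume: ${FREE_GIB} GiB"
if [ "$FREE_GIB" -lt 350 ]; then
    echo "Below 350 GiB free: time to prune"
fi
```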
Expect to see messages in Nethermind's log that it is working on "Full Pruning". This may take a few days; in testing, 42 hours was observed. Pruning requires additional disk space while it runs, just under 250 GiB in testing, and frees disk space once it completes.
You need the admin_ API namespace, and you may want Nethermind to restart after a successful prune and to use only some of your CPU cores. For security, I also suggest keeping the admin_ namespace on a separate port that is only available on localhost.
If you used Somer's guide, then the Nethermind service is in /etc/systemd/system/nethermind.service. Edit it to add some parameters.
sudo nano /etc/systemd/system/nethermind.service
Find the ExecStart line and append to it. In the example below, the changes start at JsonRpc.AdditionalRpcUrls; everything above that stays however you already have it configured. Don't delete any existing lines.
Adjust --Pruning.FullPruningMemoryBudgetMb to be right for your system. With 16 GiB of RAM, set it to 4096; with 32 GiB or more, 16384 gives you the biggest speed increase. This parameter requires Nethermind 1.18.0 or later.
If you have 32 GiB of RAM and you are using Nethermind 1.26.0 or later, you can also add Init.StateDbKeyScheme=HalfPath, which will migrate Nethermind's database to the new HalfPath format during prune.
Take a look at the number of cores your CPU has, and decide on --Pruning.FullPruningMaxDegreeOfParallelism. Half the cores or 2, whichever is greater, is often a good choice. In this example, I will assume a quad core and use 2.
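The half-the-cores-or-2 rule can be computed in shell; this is just a sketch of the arithmetic, not something Nethermind itself requires:

```shell
#!/bin/sh
# Pick a value for --Pruning.FullPruningMaxDegreeOfParallelism:
# half the available cores, with a minimum of 2.
CORES=$(nproc)
HALF=$(( CORES / 2 ))
if [ "$HALF" -lt 2 ]; then
    PARALLELISM=2
else
    PARALLELISM=$HALF
fi
echo "Use --Pruning.FullPruningMaxDegreeOfParallelism $PARALLELISM"
```

On the quad-core assumed in this guide, this prints 2.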
If Restart is currently on-failure, change that to always.
The below is only an example: Adjust the ExecStart to be right for your system and version of Nethermind.
Restart=always
ExecStart=/usr/local/bin/nethermind/Nethermind.Runner \
--config mainnet \
--datadir /var/lib/nethermind \
--Sync.SnapSync true \
--JsonRpc.JwtSecretFile /var/lib/jwtsecret/jwt.hex \
--JsonRpc.AdditionalRpcUrls http://127.0.0.1:1337|http|admin \
--JsonRpc.EnginePort 8551 \
--JsonRpc.EngineHost 127.0.0.1 \
--Pruning.FullPruningCompletionBehavior AlwaysShutdown \
--Pruning.FullPruningTrigger=VolumeFreeSpace \
--Pruning.FullPruningThresholdMb=375810 \
--Pruning.FullPruningMaxDegreeOfParallelism 2 \
--Pruning.FullPruningMemoryBudgetMb=16384 \
--Init.StateDbKeyScheme=HalfPath
Save the file with "Ctrl-X" and tell systemd about the changes, then restart Nethermind.
Without EnginePort and possibly EngineHost, Nethermind may stop syncing, as AdditionalRpcUrls overwrites the default in the mainnet config file.
sudo systemctl daemon-reload
sudo systemctl restart nethermind
Verify Nethermind is still running successfully:
sudo journalctl -fu nethermind
If an error was introduced in the parameters, fix it.
That was the hardest part, and you never have to do that again.
With the above changes to the service, Nethermind will automatically prune when it gets to 350 GiB of free disk space or less.
If you wish to manually start a prune, run one command and then simply wait.
wget -qO- "http://localhost:1337" --header 'Content-Type: application/json' --post-data '{"jsonrpc":"2.0","method":"admin_prune","params":[],"id":1}'
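If you have curl installed rather than wget, the same admin_prune call looks like this. The port 1337 matches the JsonRpc.AdditionalRpcUrls example earlier in this guide; adjust it if you chose a different one.

```shell
#!/bin/sh
# Trigger a manual full prune via the admin_ JSON-RPC namespace using curl.
curl -s -X POST http://localhost:1337 \
    -H 'Content-Type: application/json' \
    -d '{"jsonrpc":"2.0","method":"admin_prune","params":[],"id":1}'
```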
You can observe progress with sudo journalctl -fu nethermind | grep Full.
Once pruning is done, Nethermind will automatically restart.
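To confirm the prune actually freed space, you can compare the database size before and after. The path below assumes the /var/lib/nethermind datadir used in this guide's ExecStart example, with Nethermind's standard nethermind_db/mainnet layout underneath it.

```shell
#!/bin/sh
# Show the on-disk size of Nethermind's mainnet database.
# Path assumes --datadir /var/lib/nethermind, as in the service example above.
du -sh /var/lib/nethermind/nethermind_db/mainnet
```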
If you're using Eth Docker, it's already auto-pruning. You can also run ./ethd prune-nethermind.
With HalfPath, the database grows by roughly 30 GiB over three months with the default 1 GiB pruning cache.