Client node pruning deletes data from the relational database.


Data pruning is irreversible. After data is pruned from the Client nodes, it is no longer available. As a best practice, back up the Client nodes before initiating a pruning process.



  1. Identify the current Ledger offset.
    image=$(docker images --format "{{.Repository}}:{{.Tag}}" | grep "daml-ledger-api");
    sudo docker run -v /config/daml-ledger-api/environment-vars:/config/daml-ledger-api/environment-vars --network blockchain-fabric $image -- --dump-index-metadata | grep WARN | cut -d "|" -f 10 | cut -d "(" -f 1 | grep "ledger_end:" | cut -d ":" -f 2

    You can set up a cron job that runs this command and records the output, scheduled daily or at another designated interval.

    Record the Ledger offset output, time, and results to specify up to what time you can prune.


    You cannot query for past values of the ledger offset; only the current offset is available.

    The ledger offset output has 32 digits with leading and trailing zeros, such as 00000000000074180000000000000000.


    To prune all eligible data, set the prune_up_to parameter value to the current ledger_offset minus 1.
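If you automate the offset capture, the subtraction can be scripted as well. The following is a minimal sketch, assuming the 32-character offset is a zero-padded hexadecimal string as in the example above (verify the encoding on your ledger) and that bc is installed; offset_minus_one is a hypothetical helper name, not part of the product:

```shell
# Hypothetical helper: compute prune_up_to = current ledger offset - 1.
# Assumes the offset is a zero-padded 32-character hexadecimal string.
offset_minus_one() {
  local offset="$1"
  local dec
  # bc requires uppercase hex input; subtract 1 in base 16.
  dec=$(echo "ibase=16; $(echo "$offset" | tr 'a-f' 'A-F') - 1" | bc)
  # Convert back to hex, lowercase it, and re-pad to 32 characters.
  printf "%032s\n" "$(echo "obase=16; $dec" | bc | tr 'A-F' 'a-f')" | tr ' ' '0'
}

offset_minus_one 00000000000074180000000000000000
```

You could call this helper from the same cron job that records the offset, so the prune_up_to value is ready when you schedule a pruning run.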

  2. Initiate the pruning operation using gRPC, passing the recorded offset as the prune_up_to parameter.
    • If TLS is not enabled, run the following command.

      sudo grpcurl -plaintext -d '{"prune_up_to": "<offset>","submission_id": "<identifier_string>"}' <ClientIP>:<ClientPort> com.daml.ledger.api.v1.admin.ParticipantPruningService.Prune
    • If TLS is enabled, run the following command.

      sudo grpcurl -cacert root-ca.crt -cert client.crt -key client.key -d '{"prune_up_to": "<offset>","submission_id": "<identifier_string>"}' <ClientIP/localhost>:<ClientPort> com.daml.ledger.api.v1.admin.ParticipantPruningService.Prune

      All TLS files, root-ca.crt, client.crt, and client.key, must be available in the folder from which you are issuing this command.

    The pruning operation starts from the oldest smart contracts and stops at the provided offset. You can optionally set the submission_id identifier, which is used for logging.

    When the prune request ends successfully, the command returns an empty response.
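Because the payload and TLS flags are easy to get wrong, it can help to assemble the grpcurl invocation in a script and review it before running it. The following sketch only prints the command, it does not call grpcurl; build_prune_cmd is a hypothetical helper, and the certificate file names are the placeholders used in the commands above:

```shell
# Hypothetical helper: assemble (but do not run) the Prune command.
# Pass any non-empty fourth argument to get the TLS variant.
build_prune_cmd() {
  local offset="$1" submission_id="$2" endpoint="$3" tls="${4:-}"
  local payload
  payload=$(printf '{"prune_up_to": "%s","submission_id": "%s"}' \
    "$offset" "$submission_id")
  if [ -n "$tls" ]; then
    printf "grpcurl -cacert root-ca.crt -cert client.crt -key client.key -d '%s' %s com.daml.ledger.api.v1.admin.ParticipantPruningService.Prune\n" \
      "$payload" "$endpoint"
  else
    printf "grpcurl -plaintext -d '%s' %s com.daml.ledger.api.v1.admin.ParticipantPruningService.Prune\n" \
      "$payload" "$endpoint"
  fi
}

build_prune_cmd 0000000000007417ffffffffffffffff prune-2024-01 localhost:6865
```

Reviewing the printed command before executing it (with sudo, as shown in the step above) gives you a chance to catch a malformed offset before the irreversible operation runs.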

  3. If the pruning operation fails, fix the error and rerun the pruning operation.

    You might see one of the following error messages:

    • INVALID_ARGUMENT: The Client node payload, specifically the offset, is faulty or missing.

    • UNIMPLEMENTED: The Client node is based on a ledger that has not implemented pruning.

    • INTERNAL: The Client node encountered a failure and might have only partially completed pruning.
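The triage in this step can be sketched as a small helper that maps each gRPC status code to the action described above. The status string is assumed to have been extracted from grpcurl's error output; prune_error_hint is a hypothetical name:

```shell
# Hypothetical triage helper for the gRPC status codes listed above.
# It only prints the suggested action for a given status string.
prune_error_hint() {
  case "$1" in
    INVALID_ARGUMENT) echo "Check the prune_up_to offset in the request payload." ;;
    UNIMPLEMENTED)    echo "The underlying ledger does not support pruning." ;;
    INTERNAL)         echo "Pruning may be partial; inspect the logs and rerun." ;;
    *)                echo "Unrecognized status: $1" ;;
  esac
}

prune_error_hint INVALID_ARGUMENT
```

Note that after an INTERNAL error, rerunning with the same prune_up_to value is the safe recovery path, since a successful rerun completes whatever the failed attempt left partially done.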

  4. (Optional) Compress the index database to reclaim storage space for reuse.
    sudo docker exec daml_index_db psql -d daml_ledger_api -U indexdb -c "VACUUM FULL"
  5. (Optional) Check that the daml_index_db PostgreSQL statistics in the pg_stat_database view reflect the pruning.
    sudo docker exec daml_index_db psql -d daml_ledger_api -U indexdb -c "SELECT * FROM pg_stat_database WHERE datname in ('indexdb', 'daml_ledger_api');"

    The statistics might span multiple tables. Creating contracts increases the tup_inserted column value, and pruning activity increases the tup_deleted column value. The exact magnitude of the increase varies.
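If you export the query with psql -t -A (tuples only, unaligned), each row comes back pipe-separated and is easy to post-process. The following is a sketch using a fabricated sample row with assumed column positions; with the SELECT * from the step above, the actual positions of tup_inserted and tup_deleted depend on your PostgreSQL version:

```shell
# Hypothetical parser for a "datname|tup_inserted|tup_deleted" row,
# such as one produced by a narrowed SELECT with psql -t -A.
stats_tup_deleted() {
  echo "$1" | awk -F'|' '{ print $3 }'
}

# Fabricated sample row for illustration only.
row='daml_ledger_api|948213|947101'
echo "tup_deleted for daml_ledger_api: $(stats_tup_deleted "$row")"
```

Capturing this value before and after a pruning run lets you confirm that the delete counter actually moved.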


When the pruning process is successful, you can see the following changes in the system:

  • Decreased index database disk size after the PostgreSQL vacuum command was used. You can validate this by running the command du -s /mnt/data/db/base before and after pruning.

  • Active smart contracts are not affected regardless of their age and can be queried by ID.

  • PostgreSQL metrics, especially the tup_deleted column value in pg_stat_database, indicate that the archived Daml smart contracts and transaction records were deleted from the daml_ledger_api database.
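The disk-size comparison reduces to simple arithmetic on the two du -s readings. A sketch with fabricated sizes, assuming du -s reports kilobytes (the GNU default on Linux); reclaimed_mb is a hypothetical helper:

```shell
# Hypothetical helper: difference between two `du -s` readings, in MB.
# Args: size before pruning (KB), size after pruning (KB).
reclaimed_mb() {
  echo $(( ($1 - $2) / 1024 ))
}

# Fabricated before/after sizes for illustration only.
echo "Reclaimed $(reclaimed_mb 10485760 7340032) MB"
```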