Note:
- The retention policies functionality is available only for licensed users.
- This feature is currently in beta and disabled by default. To enable it, set the environment variable `CONVOY_ENABLE_FEATURE_FLAG=retention-policy` or pass the CLI flag `--enable-feature-flag=retention-policy`.
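As a concrete illustration, the environment-variable form can be exported in the shell before starting Convoy (the exact start command for your deployment may differ):

```shell
# Enable the beta retention-policy feature flag for the Convoy process
export CONVOY_ENABLE_FEATURE_FLAG=retention-policy

# Alternatively, the equivalent CLI flag can be passed at startup:
#   convoy server --enable-feature-flag=retention-policy
```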
Add a `storage_policy` object to your database configuration JSON.
Key | Description |
---|---|
`type` | Storage type, either `s3` or `on_prem` |
`s3.prefix` | Prefix for the S3 storage path |
`s3.bucket` | Name of the S3 bucket |
`s3.access_key` | AWS access key for S3 authentication |
`s3.secret_key` | AWS secret key for S3 authentication |
`s3.region` | AWS region where the S3 bucket is located |
`s3.session_token` | Temporary session token for AWS authentication (optional) |
`s3.endpoint` | Custom endpoint URL for S3-compatible storage |
`on_prem.path` | File system path for on-premises storage location |
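For example, an S3-backed `storage_policy` might look like the fragment below. This is a sketch, assuming the dotted keys in the table denote nested JSON objects; the bucket, region, and credential values are placeholders, not defaults:

```json
{
  "storage_policy": {
    "type": "s3",
    "s3": {
      "prefix": "convoy-archive",
      "bucket": "my-webhook-archive",
      "access_key": "<aws-access-key>",
      "secret_key": "<aws-secret-key>",
      "region": "us-east-1"
    }
  }
}
```

For the `on_prem` type, only `type` and `on_prem.path` apply.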
Environment Variable | Description |
---|---|
`CONVOY_STORAGE_POLICY_TYPE` | Storage type, either `s3` or `on_prem` |
`CONVOY_STORAGE_AWS_PREFIX` | Prefix for the S3 storage path |
`CONVOY_STORAGE_AWS_BUCKET` | Name of the S3 bucket |
`CONVOY_STORAGE_AWS_ACCESS_KEY` | AWS access key for S3 authentication |
`CONVOY_STORAGE_AWS_SECRET_KEY` | AWS secret key for S3 authentication |
`CONVOY_STORAGE_AWS_REGION` | AWS region where the S3 bucket is located |
`CONVOY_STORAGE_AWS_SESSION_TOKEN` | Temporary session token for AWS authentication (optional) |
`CONVOY_STORAGE_AWS_ENDPOINT` | Custom endpoint URL for S3-compatible storage |
`CONVOY_STORAGE_PREM_PATH` | File system path for on-premises storage location |
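The same S3 configuration can be expressed as environment variables. All values below are placeholders:

```shell
# S3-backed storage policy via environment variables (placeholder values)
export CONVOY_STORAGE_POLICY_TYPE=s3
export CONVOY_STORAGE_AWS_PREFIX=convoy-archive
export CONVOY_STORAGE_AWS_BUCKET=my-webhook-archive
export CONVOY_STORAGE_AWS_ACCESS_KEY="<aws-access-key>"
export CONVOY_STORAGE_AWS_SECRET_KEY="<aws-secret-key>"
export CONVOY_STORAGE_AWS_REGION=us-east-1
```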
Convoy v25.1.1

Before Convoy v25.1.1, the job that ran the retention policy carried out two tasks:
- `SELECT` queries and uploading webhook event data to S3.
- `DELETE` queries on the tables.

Running `DELETE` queries on moderately large Postgres databases triggers AUTOVACUUM and causes disk pressure. The tables affected are:
- convoy.delivery_attempts
- convoy.event_deliveries
- convoy.events
- convoy.events_search
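The motivation for moving to partitioned tables can be sketched in Postgres SQL. The schema below is purely illustrative and is not Convoy's actual schema; it only shows why dropping a partition is cheaper than row-level deletion:

```sql
-- Illustrative range-partitioned events table (not Convoy's real schema)
CREATE TABLE events (
    id BIGINT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2025_01 PARTITION OF events
    FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');

-- Old approach: row-level deletes leave dead tuples behind,
-- which later trigger AUTOVACUUM and cause disk pressure.
DELETE FROM events WHERE created_at < '2025-02-01';

-- Partitioned approach: dropping an expired partition is a fast
-- metadata operation and produces no dead tuples.
DROP TABLE events_2025_01;
```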
When the above command is run, it attempts to partition all the tables at the same time. This operation can take a while (10 to 60 minutes) if the tables are moderately large, so you may need to partition them one after the other instead.

If you are running Convoy in a Docker container, you can execute the partitioning command using `docker exec`.