
Submitted by Style Pass
2024-09-05 19:30:08

Both the Proxmox VM and file backup clients split backups into many small chunks, and S3 is not known to perform well when reading many small files.

Also think about cost peaks for object read operations. The Proxmox backup clients request the required chunks sequentially, and there is currently no way to optimize this in the proxy.
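To get a feel for the request volume, here is a rough back-of-the-envelope estimate. The chunk size and the per-request price below are illustrative assumptions, not figures from this project; check your provider's actual pricing.

```python
# Rough estimate of GET-request cost when restoring a backup from S3.
# Assumptions (illustrative only): chunks on the order of 4 MiB, and an
# example per-1000-GET price; real values depend on your setup/provider.
backup_size_bytes = 500 * 1024**3      # a 500 GiB backup
chunk_size_bytes = 4 * 1024**2         # assumed ~4 MiB per chunk
price_per_1000_gets = 0.0004           # example USD rate per 1000 GETs

num_chunks = backup_size_bytes // chunk_size_bytes
cost = num_chunks / 1000 * price_per_1000_gets
print(num_chunks)        # 128000 GET requests for a single full restore
print(round(cost, 4))    # 0.0512 (USD), before egress/bandwidth charges
```

The per-request cost itself is small, but because the chunks are fetched sequentially, the request latency (not just the price) tends to dominate restore time over a remote S3 endpoint.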

You should think twice before making the S3 backend your primary backup storage. For local instances with an S3-compatible API (Ceph, MinIO), performance depends largely on your setup.

As of Proxmox 8.2, a feature called "backup fleecing" was introduced (see the release notes), which prevents VM lockups or slowdowns caused by slow backup storage, a situation that is more likely with hosted S3 over slow network connections.

Add the following to your docker-compose.yml, set or update your -endpoint, then run docker compose up -d. The service will be accessible at localhost:8007.
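The compose snippet itself did not survive in this copy of the post; the following is a minimal sketch of what such a service entry could look like. The image name and the exact flag spelling are assumptions here, so adapt them to the project's own documentation. Only the -endpoint flag and port 8007 come from the text above.

```yaml
services:
  pbs-s3-proxy:
    # Placeholder image name: substitute the project's actual image.
    image: example/pbs-s3-proxy:latest
    # Point -endpoint at your S3 (or S3-compatible) endpoint.
    command: ["-endpoint", "s3.example.com"]
    ports:
      - "8007:8007"   # the port the text says the service listens on
    restart: unless-stopped
```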
