s3cmd multipart chunk-size memory management

I was wondering how to predict how much memory s3cmd will use to upload a given file. I'm concerned that for large files it could use too much system memory. And if I increase the chunk size, will it use more memory per chunk/part?

I checked the documentation

https://s3tools.org/s3cmd

but couldn't find anything related to memory management.

I experimented by uploading a 200 GB file with something like

/usr/bin/s3cmd put largefile_200G.txt s3://mybucket/testlargefile/largefile_200G.txt --multipart-chunk-size-mb=250
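
One way to measure the peak memory of a single run (rather than predict it) is to wrap the command in GNU time and read the "Maximum resident set size" line it prints at the end; this assumes /usr/bin/time is the GNU time binary, not the shell builtin:

# Peak memory of one run: GNU time in verbose mode reports
# "Maximum resident set size (kbytes)" when the process exits.
/usr/bin/time -v /usr/bin/s3cmd put largefile_200G.txt \
    s3://mybucket/testlargefile/largefile_200G.txt \
    --multipart-chunk-size-mb=250

Repeating the run with a different --multipart-chunk-size-mb value shows whether the peak scales with the chunk size.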

What I observed is that the memory usage of the resulting process stayed constant and appeared to be chosen in proportion to the memory available on the server (the more memory the server has available, the more the s3cmd process allocates).
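
For anyone who wants to reproduce this, a quick way to watch the resident set size over the whole upload on Linux is to sample /proc while the process runs (the pgrep pattern here is just an example and may need adjusting):

# Sample the resident set size of the running s3cmd process once per second.
pid=$(pgrep -f 's3cmd put' | head -n1)
while kill -0 "$pid" 2>/dev/null; do
    grep VmRSS "/proc/$pid/status"
    sleep 1
done

If every sample reports roughly the same VmRSS, usage is flat; a spike between parts would show up as a jump in the samples.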

Does anyone happen to know roughly how the memory management works, and whether usage will always stay constant as in my experiment (i.e. without spikes)?
