Score:4

What would be the maximum acceptable block size for disk encryption?


AES-256 in XTS mode operates on 512-byte blocks (32 × 16-byte AES blocks), but there are other wide-block modes of operation, such as WCFB, which accept any block size.

My question is:

What is the maximum acceptable block size for disk encryption? "Acceptable" meaning without too many drawbacks.

forest: Why do you want a wide-block mode?
phantomcraft: I will create a LIONESS block-cipher kernel module in the future that uses BLAKE3 for speed on 32-bit machines (and also a SHAKE-256 kernel module for generating the long IVs such block sizes need).
forest: But why do you want a wide-block mode in general? Why not use AES-CBC-ESSIV with dm-integrity for authentication instead?
phantomcraft: In general: my father has many surveillance cameras in his stores, and I want to store the recordings encrypted and accessible through a Raspberry Pi, so I want to create a kernel module of a wide-block cipher for encrypting them. I have tested AES-Adiantum with a 4096-byte block size and I'm not happy with the benchmark I made, so I want to implement something that could be faster. On my PC (AMD Ryzen 5 1400), AES-CBC decrypts at 2771.1 MiB/s but encrypts at only 907.7 MiB/s; encryption is very slow compared to decryption even with AES-NI, and on a Raspberry Pi it should be terrible.
Score:7

AES-XTS is a mode of operation of AES that encrypts large blocks (sectors), but it is not a block cipher over the whole block. It has the security drawback that when information in a large block changes locally (e.g. a single byte), it's possible to tell within which 16-byte sub-block the change occurred. The larger the block size, the more of an information leak that is.
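A toy model can illustrate this leak. This is not real AES-XTS: a keyed hash stands in for the per-sub-block cipher, and `toy_sector_encrypt` is a hypothetical name. What it mimics is that each 16-byte sub-block is processed independently under a position-dependent tweak, so an observer comparing two ciphertexts of the same sector learns exactly which sub-block changed:

```python
import hashlib

def toy_sector_encrypt(key: bytes, sector: bytes, sector_num: int, sub: int = 16) -> bytes:
    # Toy stand-in for XTS: each sub-block gets a pad derived from
    # (key, sector number, sub-block offset), so sub-blocks are
    # encrypted independently of one another. NOT real AES-XTS.
    out = bytearray()
    for i in range(0, len(sector), sub):
        pad = hashlib.blake2b(
            key + sector_num.to_bytes(8, "little") + i.to_bytes(4, "little"),
            digest_size=sub,
        ).digest()
        out += bytes(x ^ y for x, y in zip(sector[i : i + sub], pad))
    return bytes(out)

plain_a = bytes(64)                # one 64-byte "sector" of zeros
plain_b = bytearray(plain_a)
plain_b[20] ^= 0xFF                # flip one byte, inside sub-block 1

ct_a = toy_sector_encrypt(b"key", plain_a, 0)
ct_b = toy_sector_encrypt(b"key", bytes(plain_b), 0)

# Which 16-byte sub-blocks of the ciphertext differ?
changed = [i // 16 for i in range(0, 64, 16) if ct_a[i : i + 16] != ct_b[i : i + 16]]
print(changed)  # -> [1]: the observer learns where the edit happened
```

(Real XTS at least randomizes the whole 16-byte sub-block; this toy even leaks the byte position. The point is the same: locality of the change is visible at sub-block granularity.)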

WCFB is a true tweaked block cipher. As such, there is no inherent security drawback with increasing its block size. The price to pay is that it uses the underlying block cipher about twice as many times.
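The "about twice as many times" figure can be put into a rough cost model. This is a sketch under that stated assumption, with a hypothetical helper name, counting 16-byte AES invocations per $b$-byte block:

```python
def aes_calls(mode: str, b: int) -> int:
    # Rough model: XTS makes ~1 AES call per 16-byte sub-block;
    # a wide-block construction like WCFB makes ~2 (assumption
    # taken from the answer's "about twice as many" estimate).
    per_subblock = b // 16
    return 2 * per_subblock if mode == "wide" else per_subblock

print(aes_calls("xts", 4096))   # ~256 AES calls per 4 KiB block
print(aes_calls("wide", 4096))  # ~512 AES calls per 4 KiB block
```

Either way the cost per block grows linearly in $b$, which is what the drawbacks below build on.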

For both kinds of block encryption, other drawbacks of increasing the block size $b$ are from the standpoint of the information system using the encryption.

Practical large block ciphers with adjustable block size, including WCFB, have a cost of cryptographic operations per block at least (and about) proportional to $b$. Assuming this:

  1. The cryptographic cost of reading (or changing) any small part of a piece of encrypted information increases about proportionally to $b$. For a video storage application, that unavoidably implies that the time for seeking to a certain point in the video grows about linearly with $b$. If the block cipher is used for disk encryption at a block level, the same linear growth applies to most file system operations, like creating a small file. In a sense, system response time to a given stimulus increases linearly with $b$.

  2. The memory buffers needed for each file (and more generally piece of data) must be increased proportionally to $b$, otherwise performance is down the drain due to multiple encryption and decryption operations during linear access. And there are drawbacks to large memory buffers beyond the memory usage and the need to adjust it, including: more information lost in case of power cut, longer time to flush buffers, increased disk activity (and wear for Flash/SSD) per flush, all proportional to $b$.

    Note: If the block cipher is used for disk encryption at a block level, and the file system and applications are unaware of the large block size, then the "down the drain" case is bound to happen. In particular, if a file system uses a certain block size for file allocation and directory chunks, it's critical that $b$ is a divisor of (including: equal to) that, and blockcipher blocks are aligned to file system blocks. That's both for performance and integrity in case of power loss.

  3. If nothing special is made about memory caches built into the CPU, the amount of cache effectively flushed by a block encryption/decryption increases linearly with $b$, perhaps to the point of effectively flushing the whole CPU cache (or a whole layer of that). If we are in the above "down the drain" scenario, that cache flush effect will make things even worse. On the other hand, with large enough buffer, the larger $b$, the less frequent are cache flushes, thus this cache flushing will typically not impact average performance negatively for linear file access workload.
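The divisibility requirement from the note under point 2 can be sketched as a simple check (`is_acceptable_block_size` is a hypothetical helper, not an existing API):

```python
def is_acceptable_block_size(b: int, fs_block: int) -> bool:
    # Per the note under point 2: the cipher block size b must be a
    # divisor of (or equal to) the filesystem block size, so that
    # cipher blocks can align with filesystem blocks.
    return fs_block % b == 0

print(is_acceptable_block_size(512, 4096))   # True: 512 divides 4096
print(is_acceptable_block_size(4096, 4096))  # True: equal is allowed
print(is_acceptable_block_size(4096, 512))   # False: b larger than fs block
print(is_acceptable_block_size(3000, 4096))  # False: not a divisor
```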

For the application at hand: if possible, perform encryption at the application level or file-access layer (e.g. read or fread, write or fwrite, or with an encrypting filesystem that encrypts at the file-access level), where there is no need for a large block size. This gives the best responsiveness, minimizes information loss on power loss, and often makes encryption the least costly overall. If you must perform disk block encryption (e.g. because you can't change an application, and can't slip encryption into the file-access layer nor use an encrypting filesystem), then you must use as block cipher size some divisor of the disk block size known to and used by the file system, and ensure alignment. Usually the best choice is whatever the underlying storage system uses as block size, like 512 bytes for many old consumer hard disks and many SSDs, or 4 KiB for many recent hard disks. If in doubt, use 512, which is unlikely to be disastrous.
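On a POSIX system the block size the filesystem reports can be queried and a compatible cipher block size chosen from it. A minimal sketch, assuming Linux-style `statvfs` semantics (where `f_bsize` is the filesystem's preferred I/O block size, commonly 4096):

```python
import os

# Ask the filesystem what block size it reports for a path.
fs_block = os.statvfs("/").f_bsize
print("filesystem block size:", fs_block)

# Choose the largest candidate cipher block size that divides it,
# falling back to 512 (as suggested above) when nothing matches.
candidates = [65536, 4096, 512]
b = next((c for c in candidates if fs_block % c == 0), 512)
print("chosen cipher block size:", b)
```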

For video encryption, AES (and derivatives) is a good choice only if there is hardware support for it, and it's used. Otherwise, modern ARX ciphers like ChaCha are going to be much faster.
