Score:2

Is a compression function call the same as invoking the hash function itself?

pf flag

In BLAKE2X paper it is said:

BLAKE2X adds a constant overhead of $\lceil\ell/64\rceil$ (resp. $\lceil\ell/32\rceil$ compression function calls compared to the underlying 64-bit (resp. 32-bit) BLAKE2 hash. For example, to compute a 1056-bit (132-byte) hash as required in Ed521 signatures, BLAKE2X adds† $\lceil132/64\rceil=3$ extra compression function calls compared to BLAKE2b. Note that $\operatorname{B2}(i,j,H_0)$ calls can be computed in arbitrary order, and in parallel.

Is a compression function call the same as invoking the hash function itself?

In BLAKE2X there is also this notation: $$\operatorname{B2}(0,64,H_0)\mathbin\|\operatorname{B2}(1,64,H_0)\mathbin\|\ldots\mathbin\|\operatorname{B2}(\lfloor\ell/64\rfloor,\ell\bmod64,H_0)$$

Are these concatenated values successive calls to the hash function itself?
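For intuition, the concatenation above can be sketched with Python's `hashlib`, whose `blake2b` constructor exposes the `node_offset` and `digest_size` tree parameters. This is only an illustration of the structure, not a parameter-exact BLAKE2X implementation (real BLAKE2X also encodes the XOF length in the parameter block):

```python
import hashlib

def blake2x_expand_sketch(h0: bytes, ell: int) -> bytes:
    """Sketch of B2(0,64,H0) || B2(1,64,H0) || ... -- each call hashes
    only the 64-byte root hash H0 under a distinct node offset."""
    out = b""
    i = 0
    while len(out) < ell:
        j = min(64, ell - len(out))  # last block may be shorter than 64 bytes
        out += hashlib.blake2b(h0, digest_size=j, node_offset=i).digest()
        i += 1
    return out

h0 = hashlib.blake2b(b"some input").digest()  # 64-byte root hash of the input
stream = blake2x_expand_sketch(h0, 132)       # e.g. 132 bytes as in the Ed521 example
```

Note that every call in the loop hashes the same short `h0`; the loop counter `i` plays the role of the node offset, which is why the blocks can be computed in any order or in parallel.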

This question may sound obvious, but I am not a very good English reader and I am a little confused about the timing when generating an output stream with BLAKE2X from a 64-byte seed compared to a 1 GiB one:

$ time dd if=/dev/zero count=1 bs=64 2>/dev/null | /usr/local/bin/b2sum -X 8589934592 > /dev/null 

real    0m7.058s
user    0m6.273s
sys 0m0.785s

$ time dd if=/dev/zero count=1 bs=1073741824 2>/dev/null | /usr/local/bin/b2sum -X 8589934592 > /dev/null

real    0m9.295s
user    0m7.669s
sys 0m2.018s

^ That seems too fast and too good to me. Maybe the seed is kept in the L1/L2 cache (which can reach speeds around 1 TiB/s).


† with an obvious fix from the original $\lceil132/3\rceil=3$

fgrieu avatar
ng flag
I see $\operatorname{B2}$ defined on the last third of [page 2](https://www.blake2.net/blake2x.pdf#page=2).
phantomcraft avatar
pf flag
@fgrieu "Create the hash function instance B2 from the BLAKE2 instance used..." I was a little confused about what a compression function call is, and whether it's the same as invoking the hash function.
Score:2
sa flag

Yes, they are. Specifically, as pointed out by @fgrieu,

$B2(i, j, X)$ denotes the hash of $X$ using $i$ as node offset and $j$ as digest length.

And as stated in the paper, the calls need not be successive; they can be computed in any order, or in parallel. Furthermore, each invocation of $B2(i,j,X)$ makes BLAKE2 call its compression function only once, so these calls are cheap compared to a typical hash function invocation, where the compression function is called many times.
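The "one compression call per invocation" point follows from the block size: BLAKE2b consumes input in 128-byte blocks, and each $B2(i,j,H_0)$ hashes only the 64-byte $H_0$, which fits in a single block. A quick check with Python's `hashlib` (illustrative, not the BLAKE2X reference code):

```python
import hashlib

h = hashlib.blake2b()
print(h.block_size)   # 128: BLAKE2b processes one 128-byte block per compression call
print(h.digest_size)  # 64: H0 is a 64-byte digest, so it fits in a single block
```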

fgrieu avatar
ng flag
Addition: each invocation of $\operatorname{B2}(i, j, X)$ makes BLAKE2 call its compression function a single time.
phantomcraft avatar
pf flag
Now I know why the timings are nearly the same for short and long inputs: BLAKE2X condenses the entropy of the source into a key and hashes this key under different node offsets. I thought it was reading the input directly each time, which is why I didn't understand. I'm curious to know the extracted key size: https://github.com/BLAKE2/BLAKE2/blob/54f4faa4c16ea34bcd59d16e8da46a64b259fc07/ref/blake2xb-ref.c -- I tried to take a look at the BLAKE2X reference implementation, but I don't know how to read C.