For other algorithms, big-O notation usually hides the constant factors, making the exact elementary operation an unimportant detail. But cryptographic papers state complexities exactly, which is surprising to me.
You're right to be surprised --- in general, writing down "exact complexities" is hopeless due to what is known as the linear speedup theorem: we can only pin down computational complexities up to an arbitrary multiplicative factor.
This is because we can always increase the size of the underlying alphabet by a constant factor, allowing a larger number of operations to be carried out per unit time step. This (sort of) has a practical analogue --- you can enlarge an instruction set architecture so that more is computed per clock cycle (at the cost of the chip using more silicon).
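As a toy illustration of this ambiguity (my own example, not the speedup theorem itself): the operation count of even a trivial computation depends entirely on what we declare a single primitive operation to be.

```python
def xor_cost_bit_ops(n_bytes: int) -> int:
    """Cost of XORing two n-byte strings when the primitive op is a 1-bit XOR."""
    return 8 * n_bytes

def xor_cost_word_ops(n_bytes: int, word_bits: int = 64) -> int:
    """Cost of the same XOR when the primitive op is a word-sized XOR."""
    return -(-8 * n_bytes // word_bits)  # ceiling division

print(xor_cost_bit_ops(1024))   # 8192 "operations"
print(xor_cost_word_ops(1024))  # 128 "operations": same computation, 64x cheaper on paper
```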
So what cost metric do people use? It really depends on the application.
For example, a well-known attack on AES is the Biclique Attack, which recovers an AES-128 key with complexity roughly $2^{126.1}$. Of course, authors do not tend to claim arbitrary numbers as their results --- so how do they compute this? Quoting the paper:
> The full complexity of the independent biclique approach is evaluated as follows:
> $$C_{full} = 2^{n-2d}[C_{biclique} + C_{precomp} + C_{recomp} + C_{falsepos}],$$
> where
> - $C_{precomp}$ is the complexity of the precomputation in Step 3. It is equivalent to less than $2^d$ runs of the subcipher $g$.
> - $C_{recomp}$ is the complexity of the recomputation of the internal variable $v$ $2^{2d}$ times. It strongly depends on the diffusion properties of the cipher. For AES this value varies from $2^{2d-1.5}$ to $2^{2d-4}$.
>
> The biclique construction is quite cheap in this paradigm. The method in Section 3.1 enables construction of a biclique in only $2^{d+1}$ calls of subcipher $f$. Therefore, usually the full key recovery complexity will be dominated by $2^{n-2d}C_{recomp}$. However, it is dependent on the width of the matching variable and biclique dimension $d$ too. We give more details for the case of AES in further sections. The memory complexity of the key recovery is upper-bounded by storing $2^d$ full computations of the cipher.
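Just to make the formula concrete, here is a small helper (my own sketch, not code from the paper) that evaluates $C_{full}$, with all costs expressed as base-2 logarithms of full-cipher calls:

```python
from math import log2

def biclique_log2_complexity(n, d, c_biclique, c_precomp, c_recomp, c_falsepos):
    """Return log2 of C_full = 2^(n-2d) * (C_biclique + C_precomp + C_recomp + C_falsepos).

    Each c_* argument is the base-2 logarithm of the corresponding cost,
    measured in full-cipher calls, as in the quoted formula."""
    inner = 2**c_biclique + 2**c_precomp + 2**c_recomp + 2**c_falsepos
    return (n - 2 * d) + log2(inner)
```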
For AES in particular, they then say:
> Altogether, we need an equivalent of $3.4375$ SubBytes operations (i.e., 55 S-boxes), $2.3125$ MixColumns operations, and a negligible amount of XORs in the key schedule. The number of SubBytes computations clearly is a larger summand. S-boxes are also the major contributor to the practical complexity of AES both in hardware and software. Therefore, if we aim for a single number that refers to the complexity, it makes sense to count the number of SubBytes operations that we need and compare it to that in the full cipher. The latter number is $10 + 2.5 = 12.5$ as we have to take the key schedule nonlinearity into account. As a result, $C_{recomp}$ is equivalent to $2^{16}\times 3.4375/12.5 = 2^{14.14}$ runs of the full AES-128. The values $C_{biclique}$ and $C_{precomp}$ together do not exceed $2^8$ calls of the full AES-128.
>
> The full computational complexity amounts to about
> $$2^{112} (2^7 + 2^7 + 2^{14.14} + 2^8) = 2^{126.18}.$$
This is to say that the "primitive operation" of the biclique attack is a SubBytes (S-box) call: costs are counted in SubBytes evaluations, normalized so that the brute-force attack has complexity $2^{128}$.
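To check the arithmetic, one can plug the quoted AES-128 numbers into the `biclique_log2_complexity` sketch from above (the split of the $2^8$ term between $C_{biclique}$ and $C_{precomp}$, and the remaining $2^8$ being $C_{falsepos}$, are read off the final sum):

```python
# Continuing the sketch above (biclique_log2_complexity and log2 already defined).
# AES-128 values quoted from the paper: n = 128, d = 8,
# C_recomp = 2^16 * 3.4375/12.5 ≈ 2^14.14 runs of the full AES-128,
# C_biclique + C_precomp <= 2^8 (taken here as 2^7 each, as in the final sum),
# and the remaining 2^8 term presumably being C_falsepos.
print(biclique_log2_complexity(
    n=128, d=8,
    c_biclique=7, c_precomp=7,
    c_recomp=16 + log2(3.4375 / 12.5),   # ≈ 14.14
    c_falsepos=8,
))  # ≈ 126.18, matching the paper's 2^126.18
```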
There are of course other notions of complexity. For example:

- I've heard of an "Area × Time" metric, which measures the complexity of an attack by the time it takes a circuit of a certain size to complete it (though I cannot find references at the moment);
- it is common for attacks to report some notion of computational complexity alongside some notion of memory or sample complexity;
- in lattice-based cryptography, a metric known as the "Core SVP" cost is fairly popular in analyses (a minimal sketch follows below).
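For that last item, here is a minimal sketch of the Core-SVP convention (my own code; the $0.292$ and $0.265$ exponents are the commonly used classical and quantum sieving estimates). The idea is to charge an attack only the cost of a single SVP-oracle call in the BKZ block size $\beta$, ignoring the number of calls and all polynomial factors:

```python
def core_svp_log2_cost(beta: int, model: str = "classical") -> float:
    """Core-SVP estimate: log2 cost of one SVP call in dimension beta,
    ignoring polynomial factors and the number of oracle calls."""
    exponents = {"classical": 0.292, "quantum": 0.265}
    return exponents[model] * beta

print(core_svp_log2_cost(400))             # 116.8 "bits" of security under this model
print(core_svp_log2_cost(400, "quantum"))  # 106.0
```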
In general though, since it is not at all obvious what (for example) "$2^{128}$ primitive operations" means, it is up to authors to define these terms, and they generally do.