Endianness¹ can affect the result of a randomness test, changing the result from pass to fail or vice versa (submitting a different run of the generator can have the same effect, to a somewhat lesser degree). However, if endianness significantly affects the outcome of the test, then (assuming the test is correct and correctly used)
- the generator is broken, since one of the two versions significantly fails the test, and any fixed reordering of the bits at the output of a generator indistinguishable from random yields a generator indistinguishable from random;
- and the test is sensitive to a minor reordering of its input, which is an indication of an overspecialized test (a way to check this is sketched below).
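As a sanity check, one can feed a test both the raw generator output and a copy with the bit order reversed within each byte, and compare verdicts. A minimal C sketch, assuming the output is available as a byte buffer (the helpers `reverse_bits` and `reverse_bits_buf` are mine, not part of any test suite):

    #include <stdint.h>
    #include <stddef.h>

    /* Reverse the bit order within one byte, turning an MSB-first
     * bitstream into an LSB-first one (and vice versa). */
    static uint8_t reverse_bits(uint8_t b)
    {
        b = (uint8_t)((b & 0xF0) >> 4 | (b & 0x0F) << 4);
        b = (uint8_t)((b & 0xCC) >> 2 | (b & 0x33) << 2);
        b = (uint8_t)((b & 0xAA) >> 1 | (b & 0x55) << 1);
        return b;
    }

    /* Apply the per-byte bit reversal to a whole buffer in place.
     * Run the test on the buffer before and after this call: a
     * significant change of verdict points at the generator, the
     * test, or both. */
    static void reverse_bits_buf(uint8_t *buf, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            buf[i] = reverse_bits(buf[i]);
    }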
My advice is thus to ignore the issue of endianness in the input of randomness tests.
Rather, question the motivation for running Dieharder or NIST SP 800-22. Doing so is customary in substandard crypto papers, especially those illustrating visually what encryption does to Lena. But passing such tests is not an argument for (much less a proof or demonstration of) the quality of some encryption, PRNG, or TRNG incorporating a postprocessing stage. For that, an analysis of the method used in the encryption, the PRNG, or the source and postprocessing of the TRNG is necessary.
¹ That is, the big-endian, little-endian, or other-endian order of bits in the bytes, words, or integers at the output of the generator tested, and/or at the input of the test program. For example, the NIST Statistical Test Suite's function convertToBits (in file src/utilities.c) converts bytes to bits per the big-endian convention (contrary to the most common order in asynchronous serial communication). That matters, in theory, if an implementation of a generator that is mathematically defined to output a bitstream (e.g. A5/1) has its output passed to that test in byte mode for efficiency.
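For illustration, a sketch of the two byte-to-bit conventions (my own code, not the NIST implementation):

    #include <stdint.h>

    /* Unpack a byte MSB first: bit i of the stream is bit 7-i of the
     * byte. This is the big-endian convention described above. */
    static void byte_to_bits_msb_first(uint8_t b, uint8_t bits[8])
    {
        for (int i = 0; i < 8; i++)
            bits[i] = (b >> (7 - i)) & 1;
    }

    /* Unpack a byte LSB first: the order in which the bits of a byte
     * typically appear on an asynchronous serial line. */
    static void byte_to_bits_lsb_first(uint8_t b, uint8_t bits[8])
    {
        for (int i = 0; i < 8; i++)
            bits[i] = (b >> i) & 1;
    }

A bitstream generator whose output is packed into bytes with one convention and unpacked by the test with the other yields a bitstream that differs from the one mathematically defined.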