Score:1

"Infinite" (crpytographic) pseudorandom sequence


This is mainly for math purposes, although it would be good for cryptographic purposes too: are there any known algorithms for generating an infinitely long (pseudo)random sequence of numbers (say, bits)? The sequence must not repeat or follow some pattern, and should behave like a "normal" number (i.e. similar to the digits of Pi or some other constant).

kelalaka
For a PRF $F$: $a_1 = F(1),\ a_2 = F(2 \| a_1),\ \ldots,\ a_i = F(i \| a_{i-1})$. I can't prove this; one may be able to give a PRF that fails this construction.
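A minimal sketch of this chaining construction, assuming HMAC-SHA256 under a fixed secret key as the PRF $F$ (the key, counter width, and encoding below are illustrative choices, not part of the comment):

```python
import hmac
import hashlib

def prf_stream(key: bytes):
    """Yield a_1 = F(1), a_2 = F(2 || a_1), ..., a_i = F(i || a_{i-1}),
    with F instantiated as HMAC-SHA256 under a fixed key."""
    prev = b""  # a_0 is empty, so the first block is a_1 = F(1)
    i = 1
    while True:
        # Fixed-width counter encoding so i || a_{i-1} parses unambiguously.
        block = hmac.new(key, i.to_bytes(8, "big") + prev, hashlib.sha256).digest()
        prev = block
        i += 1
        yield block

# Example: print the first three 32-byte blocks (key is a placeholder).
gen = prf_stream(b"illustrative-32-byte-secret-key!")
for _ in range(3):
    print(next(gen).hex())
```

Each block depends on the index and the previous block, so the stream can be extended indefinitely without fixing a length in advance.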
Score:2
bmm6o

It's not possible for a deterministic Finite State Machine to output a sequence that never repeats. Once it returns to a state it has already been in, the output will repeat previous outputs from that point.

This is more of a theoretical concern than a practical one, though. Even a simple CSPRNG has more possible states than it can visit in the lifetime of the universe. So you can use it "forever" before observing that sort of repeat. Any actual use of a pseudo-random sequence is necessarily finite, and so questions like normality that only apply to infinite sequences don't really make sense. Instead, there are criteria like "indistinguishable from random", which seems like the finite counterpart to what you are asking about.
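As a toy illustration of the state-space point (my own sketch, not part of the answer): a counter-mode generator whose entire state is a 256-bit key plus a 128-bit counter. In principle it must cycle, but only after $2^{128}$ blocks, which is unobservable in practice:

```python
import hashlib

def counter_mode_stream(key: bytes):
    """Toy PRG: block_i = SHA-256(key || i) with a fixed-width 128-bit counter.

    The state (key + counter) is finite, so the output must eventually
    repeat - but only after 2**128 blocks, a purely theoretical event.
    """
    i = 0
    while True:
        yield hashlib.sha256(key + i.to_bytes(16, "big")).digest()
        i = (i + 1) % (1 << 128)  # fixed-width counter: finite state

# Example: first block under an all-zero placeholder key.
gen = counter_mode_stream(b"\x00" * 32)
print(next(gen).hex())
```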

J. Linne
@poncho I was sort of thinking about that... Perhaps some PRNG that uses a different modulus every time the old modulus "expires" (i.e. the sequence is about to repeat).
poncho
Note that if we go to a more general computational model, where we allow the state size to grow during generation, we really can have an output sequence that never repeats (and the amount of growth needed is logarithmic in the amount of output generated - that is, it grows quite slowly). Of course, as bmm6o observes, we usually don't care if the sequence repeats only after $2^{1000}$ (or even $2^{256}$) outputs, and so this doesn't come up in practice...
bmm6o
Right, if you were implementing something like this you would likely follow the model poncho describes, where your algorithm allocates more memory as needed (since your moduli will need to increase without bound). The point remains that once you've output $n$ values you need at least $\log(n)$ bits of state to avoid revisiting a previous state.
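To make the $\log(n)$ bound concrete, a hedged sketch of the growing-state model (again my own illustration): the same counter idea, but with an arbitrary-precision counter, so after $n$ outputs the growing part of the state occupies about $\log_2 n$ bits and no state is ever revisited:

```python
import hashlib

def unbounded_stream(key: bytes):
    """block_i = SHA-256(key || i) with an arbitrary-precision counter.

    The counter never wraps, so no state is ever revisited; after n outputs
    it occupies about log2(n) bits, matching the lower bound in the comment.
    """
    i = 0
    while True:
        # Minimal big-endian encoding of i, at least one byte.
        enc = i.to_bytes((i.bit_length() + 7) // 8 or 1, "big")
        # Length-prefix the counter so every (key, i) input is distinct
        # even as the encoding widens over time.
        yield hashlib.sha256(key + len(enc).to_bytes(8, "big") + enc).digest()
        i += 1
```

Python's integers grow as needed, so the state expands by one bit each time the counter doubles - exactly the slow, logarithmic growth described above.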