Score:0

Implementation of the chaotic map to produce (pseudo)-random number


For my project I used the Hénon map to generate (pseudo)-random numbers. I used the following code to generate a matrix of (pseudo)-random bytes.

def dec(bitSequence):
    # Helper assumed from context: interpret the collected bits
    # (taken most-significant-bit first here) as a byte value in 0..255
    value = 0
    for bit in bitSequence:
        value = value * 2 + bit
    return value


def generate_by_henonmap(dimension, key):
    x = key[0]
    y = key[1]
    # Total number of bits produced: 8 per byte, dimension*dimension bytes
    sequenceSize = dimension * dimension * 8
    bitSequence = []  # the 8 bits of the byte currently being built
    byteArray = []    # the bytes of the matrix row currently being built
    Matrix = []       # dimension rows of dimension bytes each
    byteArraySize = dimension * 8  # bits per matrix row

    for i in range(sequenceSize):
        # Classical Henon map parameters: a = 1.4, b = 0.3
        xN = y + 1 - 1.4 * x**2
        yN = 0.3 * x

        x = xN
        y = yN

        # Threshold the x coordinate to extract one output bit
        bit = 0 if xN <= 0.4 else 1
        bitSequence.append(bit)

        # Every 8 bits, pack the bit sequence into one byte
        if i % 8 == 7:
            byteArray.append(dec(bitSequence))
            bitSequence = []

        # Every dimension bytes, close the current matrix row
        if i % byteArraySize == byteArraySize - 1:
            Matrix.append(byteArray)
            byteArray = []

    return Matrix
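
A minimal usage sketch (the seed (0.1, 0.1) and the dimension are arbitrary example values, not fixed by the scheme):

matrix = generate_by_henonmap(dimension=8, key=(0.1, 0.1))
for row in matrix:
    print(row)  # 8 rows of 8 byte values, each in 0..255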

Before using this code in production, I tested its randomness with the NIST test suite from this, but got the following results:

Eligible test from NIST-SP800-22r1a:
-monobit
-frequency_within_block
-runs
-longest_run_ones_in_a_block
-dft
-non_overlapping_template_matching
-serial
-approximate_entropy
-cumulative sums
-random_excursion
-random_excursion_variant
Test results:
- PASSED - score: 0.525 - Monobit - elapsed time: 0 ms
- PASSED - score: 0.999 - Frequency Within Block - elapsed time: 0 ms
- FAILED - score: 0.0 - Runs - elapsed time: 1 ms
- FAILED - score: 0.002 - Longest Run Ones In A Block - elapsed time: 0 ms
- FAILED - score: 0.004 - Discrete Fourier Transform - elapsed time: 2 ms
- PASSED - score: 0.899 - Non Overlapping Template Matching - elapsed time: 8 ms
- FAILED - score: 0.0 - Serial - elapsed time: 54 ms
- FAILED - score: 0.0 - Approximate Entropy - elapsed time: 102 ms
- PASSED - score: 0.887 - Cumulative Sums - elapsed time: 4 ms
- FAILED - score: 0.11 - Random Excursion - elapsed time: 28 ms
- PASSED - score: 0.678 - Random Excursion Variant - elapsed time: 1 ms

I thought a chaotic map could generate enough randomness, but the result was frustrating. Is there a logical error in the code that produces this poor result? I suspect the way it generates the bit sequence is the issue:

        if xN <= 0.4:
            bit = 0
        else:
            bit = 1

Is there a better implementation of a chaotic map to produce (pseudo)-random numbers?

Paul Uszak avatar
cn flag
Hiya. This all feels wrong. The tests took milliseconds? They should take ages, especially in Python. How big was the sample file? Why is `xN` so biased? Run `ent` on the sample file and see what it says; it's the go-to randomness test at this stage.
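
A small sketch of that check, assuming the generate_by_henonmap function above (the seed and file name are arbitrary):

# Dump the generator's bytes to a file so `ent` can be run on it
matrix = generate_by_henonmap(dimension=256, key=(0.1, 0.1))
with open("henon_sample.bin", "wb") as f:
    for row in matrix:
        f.write(bytes(row))
# then, on the command line: ent henon_sample.bin
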
fgrieu avatar
ng flag
Two remarks, not meant as an explanation of why the tests fail: 1) The code uses floating-point approximations of real variables. That invalidates arguments based on the assumption of real variables; in particular, arguments that the transformation leads to chaotic behavior and a long period fall apart, in theory and to some degree in practice. 2) Experimental statistical tests like the NIST suite can sometimes show that a generator is unsuitable, not that it is good for cryptographic usage. It's very easy to make a generator that passes the NIST tests yet is predictable from a few consecutive outputs.
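
A small sketch illustrating the first remark (the seed is arbitrary): iterating the same map in float32 and float64 from the same starting point, the two trajectories separate quickly, so conclusions drawn for real-valued variables need not hold for the floating-point version.

import numpy as np

# Same Henon iteration in two floating-point precisions from the same (arbitrary) seed
x32, y32 = np.float32(0.1), np.float32(0.1)
x64, y64 = 0.1, 0.1
for i in range(100):
    x32, y32 = y32 + 1 - np.float32(1.4) * x32 * x32, np.float32(0.3) * x32
    x64, y64 = y64 + 1 - 1.4 * x64 * x64, 0.3 * x64
    if i % 20 == 19:
        print(i + 1, abs(float(x32) - x64))
# the gap grows to the size of the attractor itself well before 100 iterations
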
Score:2
my flag

The problem with this random number generator is that the bits are correlated; adjacent output bits differ 70% of the time (for a random stream, we would of course expect adjacent bits to differ 50% of the time). This is enough to cause any test that depends on bit correlation to fail.

This RNG is chaos-based, but the Lyapunov exponent is not nearly large enough to disguise the correlation between the states of consecutive samples.
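
This is easy to check directly; a sketch that regenerates the bit stream with the same threshold rule and counts how often adjacent bits differ (the seed is an arbitrary example):

# Measure how often adjacent output bits differ
x, y = 0.1, 0.1
bits = []
for _ in range(100000):
    x, y = y + 1 - 1.4 * x * x, 0.3 * x
    bits.append(0 if x <= 0.4 else 1)

flips = sum(b1 != b2 for b1, b2 in zip(bits, bits[1:]))
print(flips / (len(bits) - 1))  # noticeably above the 0.5 a random stream would give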

WhyMeasureTheory avatar
in flag
Is there any way to improve this RNG, or should I use a different chaotic map? @poncho, your suggestion would be a great help to me. And thanks for your response.
poncho avatar
my flag
How to improve it would depend on what you're trying to do. If you're trying to come up with a statistical RNG, well, the obvious approach would be to compute the Lyapunov exponent and, from that, determine how many 'steps' it takes to turn the floating-point inaccuracies into variances in the bit output (and generate a single output only after that many steps). If you're trying to develop a 'crypto RNG', well, that's more difficult (because of the issues that fgrieu pointed out).
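
One way to sketch that decimation idea in code (the skip count of 16 and the seed are placeholders, not values derived from the Lyapunov exponent; a principled choice would follow poncho's calculation):

# Emit one bit only every `skip` map iterations so consecutive bits
# come from well-separated states (skip=16 is a placeholder value)
def henon_bits_decimated(n_bits, key, skip=16):
    x, y = key
    bits = []
    for _ in range(n_bits):
        for _ in range(skip):
            x, y = y + 1 - 1.4 * x * x, 0.3 * x
        bits.append(0 if x <= 0.4 else 1)
    return bits

sample = henon_bits_decimated(1000, key=(0.1, 0.1))
print(sum(sample) / len(sample))  # rough balance check only; rerun the full test suite to judge it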