Recently, I read the paper "Neural Network Based Cryptography". Section 3.1 says:
The aim is to improve the randomness of the random numbers generated by any algorithm using an NN. In order to improve pseudo-random numbers via a neural network, random numbers are generated by a modified subtract-with-borrow algorithm in MATLAB. The random numbers generated by the modified subtract-with-borrow algorithm are tested for randomness by NIST. Then, these random numbers are used as input values, initial weights, bias values, and the neuron number of hidden layers. The network's output values are evaluated without training. The output values of the NN are neural network-based pseudo-random numbers. Therefore, the algorithm can be called a neural-based pseudo-random number generator (PRNG). The random numbers generated by the NN-based pseudo-random number generator are also tested for randomness by NIST.
I was wondering how the network's output values can be evaluated without training. If it uses the randomness of the input, then the structure of the network changes randomly. I tried to understand the network, but I couldn't find any clear explanation in the paper (e.g., how to build such a network).
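For reference, here is my rough sketch of what I *think* the paper describes, in NumPy. Everything here is my assumption, not the paper's actual method: the layer sizes are guesses, `np.random.default_rng` stands in for the paper's "modified subtract with borrow" generator (which I don't have), and the tanh/sigmoid activations are my choice. The only part taken from the quoted text is the idea that PRNG output supplies the inputs, weights, and biases, and that a single untrained forward pass produces the "NN-based" pseudo-random numbers.

```python
import numpy as np

# Stand-in for the paper's "modified subtract with borrow" PRNG (assumption).
rng = np.random.default_rng(12345)

# Layer sizes are my guess; the paper says the neuron count itself also
# comes from the PRNG output.
n_in, n_hidden, n_out = 8, 16, 8

# PRNG output used as the input vector, initial weights, and biases.
x  = rng.random(n_in)
W1 = rng.random((n_hidden, n_in))
b1 = rng.random(n_hidden)
W2 = rng.random((n_out, n_hidden))
b2 = rng.random(n_out)

def forward(x):
    """Single untrained forward pass; the outputs are the
    'neural-network-based' pseudo-random numbers."""
    h = np.tanh(W1 @ x + b1)                       # hidden layer
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))    # sigmoid -> values in (0, 1)

out = forward(x)

# Threshold at 0.5 to turn the outputs into a bit stream that could be fed
# to the NIST statistical test suite (my assumption about the conversion).
bits = (out > 0.5).astype(int)
print(bits)
```

Is this roughly the right structure, or does "evaluated without training" mean something else entirely here?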
Could anyone point me to a paper or repository that would give me an idea of how to build a similar NN (i.e., a neural network whose inputs come from a PRNG or any chaotic/random source)?