First of all, you have only 540 digits, and 540 digits is a rather small sample; I would have asked the AI to generate as many digits as it can. Nevertheless, by casual observation followed by some statistical analysis, I was able to find the following sources of non-randomness.
The string 295830 appears twice at the end of the 8th row, and the related string 395830 appears at the end of the 9th row. The string 30294583 appears in one place while the similar string 3029583 appears in another.
Out of the 539 adjacent pairs of digits, only 13 are pairs in which a digit is immediately repeated; a truly random string would give about 54 such repeats. It seems like the AI avoids repeating digits, either because repeated digits may appear 'non-random' or because the AI learned incorrectly from humans that random strings rarely repeat characters.
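For concreteness, here is how the repeat count can be computed; the `digits` string below is just a placeholder, so substitute the actual 540 AI-generated digits.

```python
# Count how many of the 539 adjacent pairs consist of the same digit.
digits = "0123456789" * 54  # placeholder; substitute the actual 540 AI-generated digits
repeats = sum(1 for a, b in zip(digits, digits[1:]) if a == b)
print(repeats)  # the AI-generated string gives 13; a random string averages about 54
```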
We can actually calculate the probability of having at most 13 repeated digits among the 539 adjacent pairs if the string were truly random, since the number of repeats follows a binomial distribution. Here, the probability of getting exactly $k$ repeated digits out of 539 pairs is $\binom{539}{k}\cdot 0.1^{k}\cdot 0.9^{539-k}$. Therefore, the probability of getting at most 13 repeated digits out of 539 pairs is
$\sum_{k=0}^{13}\binom{539}{k}\cdot 0.1^{k}\cdot 0.9^{539-k}\approx 4.90903\cdot 10^{-12}$. We can therefore safely conclude that these digits are not random/pseudorandom.
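For reference, this tail probability can be reproduced with SciPy (assuming it is installed):

```python
from scipy.stats import binom

# P(at most 13 repeated-digit pairs among 539 trials, each with probability 0.1)
print(binom.cdf(13, 539, 0.1))  # approximately 4.9e-12
```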
Detecting non-randomness with my word embedding algorithm
So I fed these 540 digits into my own complex matrix-valued word embedding algorithm with dimension $d=30$. I can tell that the word embedding converged to a good local maximum (it does not always do so) for the following reasons:
I trained the word embedding twice, and I got the same fitness level both times (the two runs differed in fitness by only $1.31\cdot 10^{-14}$).
Even though the matrices were complex during initialization and training, after the word embedding was fully trained I checked that the matrices were real up to symmetry by computing the traces of their products (a sketch of this check is given below). If I had trained using real matrices, the word embedding might have converged to a suboptimal level of fitness (it did not in this case, which is good), so I needed to use complex matrices during training; the fitness function has many near-singularities, and complex matrices are good at avoiding these near-singularities while real matrices are not.
I also trained the word embedding using quaternionic and real matrices, and the trained word embedding was the same (up to quaternionic unitary or real orthogonal equivalence) as the one I got when training with complex matrices.
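For illustration, here is a minimal sketch of the reality check mentioned above, assuming the trained matrices are stored in a dictionary `f` mapping each digit to a $d\times d$ complex NumPy array; the function name and sampling scheme are only for exposition, not taken from my actual code. It simply samples random words and verifies that the traces of the corresponding matrix products are numerically real.

```python
import random
import numpy as np

def looks_real_up_to_symmetry(f, trials=1000, max_word_len=6, tol=1e-8, seed=0):
    """Heuristic check: sample random words a_1...a_k and verify that
    trace(f(a_1)...f(a_k)) has a negligible imaginary part for each of them."""
    rng = random.Random(seed)
    symbols = list(f.keys())
    d = next(iter(f.values())).shape[0]
    for _ in range(trials):
        word = rng.choices(symbols, k=rng.randint(1, max_word_len))
        prod = np.eye(d, dtype=complex)
        for a in word:
            prod = prod @ f[a]
        tr = np.trace(prod)
        if abs(tr.imag) > tol * max(1.0, abs(tr)):
            return False  # found a product whose trace is not numerically real
    return True
```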
After training, I got a fitness of about -1.1372531160707808. A smaller fitness value after training means that the sequence is more random, while a larger fitness value means that the sequence is less random. I have been searching for a truly random sequence with a higher fitness than the fitness of the AI-generated sequence, and I have not been able to find one.
I call my word embedding algorithm an MPO word embedding which stands for matrix product optimized. Suppose that $A$ is a set. In our case, $A$ will be the set of digits from 0 to 9, but in natural language processing, $A$ will be the set of tokens. Let $a_1\dots a_n\in A^*$ be a string.
Suppose now that $f:A\rightarrow M_d(\mathbb{C})$ is a function. Then define
the fitness $$L(a_1\dots a_n,f)=\log(\rho(f(a_1)\dots f(a_n))^{1/n})-\log(\|\sum_{a\in A}f(a)f(a)^*\|_p)/2.$$ Here, $\|X\|_p$ denotes the Schatten $p$-norm of a matrix $X$ while $\rho(X)$ denotes its spectral radius. We maximize the fitness using gradient ascent. In our case, $p=2$, so $\|X\|_p$ is just the Frobenius norm.

One reason why we get the same fitness after training multiple times is that the dominant eigenvalue of $\sum_{a\in A}f(a)f(a)^*$ is substantially larger than the other eigenvalues, which means that there is a rank-1 positive semidefinite matrix that somewhat approximates each $f(a)$. In other words, we are making a tradeoff between being able to use all $d=30$ complex dimensions equally and always converging to the same local optimum. The matrices $f(a)$ were also approximately low rank. If the matrices $f(a)$ were all rank-1 matrices, then the MPO word embedding would reduce to a weighted digraph embedding where the weight of the edge $(u,v)$ is the number of times the string $uv$ occurs in the cycle of 540 digits.
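To make the definition concrete, here is a minimal sketch of how the fitness can be evaluated with NumPy. This is illustrative only and not my optimized training code linked below: the product is rescaled as it is accumulated to avoid numerical overflow or underflow, and gradient ascent is not included.

```python
import numpy as np

def mpo_fitness(seq, f, p=2):
    """Evaluate L(a_1...a_n, f) = log(rho(f(a_1)...f(a_n))^(1/n))
                                  - log(||sum_a f(a) f(a)*||_p) / 2."""
    d = next(iter(f.values())).shape[0]
    # Accumulate the product f(a_1)...f(a_n), rescaling at each step and
    # keeping track of the accumulated log of the scale factors.
    prod = np.eye(d, dtype=complex)
    log_scale = 0.0
    for a in seq:
        prod = prod @ f[a]
        s = np.linalg.norm(prod)
        prod /= s
        log_scale += np.log(s)
    n = len(seq)
    # log of the spectral radius of the full product
    log_rho = np.log(max(abs(np.linalg.eigvals(prod)))) + log_scale
    # Schatten p-norm of sum_a f(a) f(a)^*; for p = 2 this is the Frobenius norm.
    S = sum(M @ M.conj().T for M in f.values())
    sv = np.linalg.svd(S, compute_uv=False)
    schatten_p = (sv ** p).sum() ** (1.0 / p)
    return log_rho / n - np.log(schatten_p) / 2

# Example with random complex matrices (d = 30) and a placeholder digit string.
f = {str(k): (np.random.randn(30, 30) + 1j * np.random.randn(30, 30)) / np.sqrt(30)
     for k in range(10)}
print(mpo_fitness("0123456789" * 54, f))
```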
Here is a more detailed description of my word embedding algorithm, and here is a link to the code on my GitHub account.