Sounds more like a test of your tests, using fuzzing.
That could inform you of how sensitive your tests are to statistically inconsequential input transformations. But only if your transformations aren't changing the statistical properties of your sample. If they are, it's not defined a priori what the tests are testing: they may be testing the randomness of your transformations rather than the randomness of your sample.
The paper you cite proposes applying transformations to sample data such that the resulting randomness tests become statistically independent:
Definition 1. Consider a randomness test $T$ and a transformation $σ:L→L$ where $L$ is the set of all $n$-bit binary sequences. $T$ is said to be invariant under $σ$ if for any $S \in L$, $T(S) = T(σ(S))$.$^{(7)}$ Here, we define a new concept of sensitivity to measure the effect of a transformation to output $p$-values. If a test $T$ is invariant under $σ$, sensitivity of $T$ to $σ$ is represented by $0$. If the transformation has small effect on the test results, that is, there is a significant correlation between $T(S)$ and $T(σ(S))$, sensitivity is represented by $1$. Whenever $T(S)$ and $T(σ(S))$ are statistically independent, sensitivity is represented by $2$, in those cases $T(σ(.))$ can be added to the test suite as a new test. – §4
The authors are clearly not considering simple transformations $σ$ whose impact on the statistical properties of samples $S$ is limited. They are only considering transformations which cause statistical independence between tests on $σ(S)$ and tests on $S$. They clarify this point as follows:
It is obvious that the independence of $T(σ(S))$ and $T(S)$ is not enough to add $T(σ(.))$ to the suite. It should also be independent of other tests in the suite. – §4
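To make the sensitivity classification concrete, here is a minimal sketch in Python. It's my own illustration, not the authors' code: the monobit frequency test standing in for $T$ and sequence reversal standing in for $σ$ are assumptions chosen for demonstration. It runs $T$ on many random samples and on their transforms, then reports the correlation between the two sequences of $p$-values.

```python
import math
import random

def monobit_p_value(bits):
    """Frequency (monobit) test p-value, as in NIST SP 800-22."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * n))

def sigma(bits):
    """A hypothetical transformation: reverse the sequence."""
    return bits[::-1]

def estimate_sensitivity(trials=1000, n=1024):
    """Correlation between the p-values of T(S) and T(sigma(S)) over many random S."""
    p_orig, p_trans = [], []
    for _ in range(trials):
        s = [random.getrandbits(1) for _ in range(n)]
        p_orig.append(monobit_p_value(s))
        p_trans.append(monobit_p_value(sigma(s)))
    mean_o = sum(p_orig) / trials
    mean_t = sum(p_trans) / trials
    cov = sum((a - mean_o) * (b - mean_t) for a, b in zip(p_orig, p_trans))
    var_o = sum((a - mean_o) ** 2 for a in p_orig)
    var_t = sum((b - mean_t) ** 2 for b in p_trans)
    return cov / math.sqrt(var_o * var_t)

# The monobit statistic is unchanged by reversal, so this prints 1.0 (up to
# floating point): sensitivity 0 in the paper's terms. A transformation that
# drove the correlation toward 0 would be a sensitivity-2 candidate, subject
# to the authors' further requirement of independence from the other tests.
print(estimate_sensitivity())
```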
Admittedly, there is a subtle distinction between my usage of "statistically independent data" and the authors' idea of transformations on data which lead to statistically independent tests. But I'd conjecture it's a distinction without a difference: if there's no statistical test $T$ which can show a statistical correlation between the data $σ(S)$ and $S$, then the data themselves must be statistically independent. (That said, this conjecture is incorrect in general.)
Now, there are obvious examples of such transformations of a sample which make any subsequent tests statistically irrelevant. For instance, if I bitwise-AND the sample with zeros:
$$ \begin{equation}\begin{aligned} S = 00000110110100100011101&0101001111111011010110010 \newline &\mathrm{AND} \newline 00000000000000000000000&0000000000000000000000000 \newline &\downarrow \newline σ(S) = 00000000000000000000000&0000000000000000000000000 \end{aligned}\end{equation} $$
The now null sample $σ(S)$ will fail all randomness tests, while telling me nothing about $S$.
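As a runnable version of this example (a minimal sketch; the monobit frequency test stands in here for "all randomness tests"):

```python
import math
import random

def monobit_p_value(bits):
    """Frequency (monobit) test p-value, as in NIST SP 800-22."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * n))

S = [random.getrandbits(1) for _ in range(1024)]
sigma_S = [b & 0 for b in S]  # bitwise AND with an all-zero mask

print(monobit_p_value(S))        # typically above 0.01 for a random S
print(monobit_p_value(sigma_S))  # effectively 0.0, regardless of what S was
```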
Likewise, if I apply a randomizing transformation to the sample:
$$ \begin{equation}\begin{aligned} S = 00000000000000000000000&0000000000000000000000000 \newline &\downarrow \newline \mathrm{H}&\mathrm{ASH}\left(S\right) \newline &\downarrow \newline σ(S) = 00100011100111110101000&0011010000001101110100101 \end{aligned}\end{equation} $$
The new sample $σ(S)$ is now endowed with randomness not present in the original, also telling me nearly nothing about $S$.
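Again as a runnable sketch, with SHA-256 standing in for the hash; for simplicity, $σ(S)$ here is just the 256-bit digest rather than a sequence of the same length as $S$:

```python
import hashlib
import math

def monobit_p_value(bits):
    """Frequency (monobit) test p-value, as in NIST SP 800-22."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * n))

S = [0] * 1024                              # a clearly non-random sample
digest = hashlib.sha256(bytes(S)).digest()  # the randomizing transformation
sigma_S = [(byte >> i) & 1 for byte in digest for i in range(8)]  # 256 digest bits

print(monobit_p_value(S))        # effectively 0.0: S fails
print(monobit_p_value(sigma_S))  # expected well above 0.01: sigma(S) looks random
```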
In both cases, the transformations caused any tests on the new sample $σ(S)$ to be statistically independent of tests on the original $S$. These cases are intentionally extreme, and therefore clear examples of how statistical independence can be introduced. The point is: how can you be sure your specific transformations aren't also doing something egregious to the information that can be gained about the original sample?
Generally, if your transformations are making your new samples statistically independent of the originals, it would be strange to learn anything statistically relevant about the originals from them. Doing so may even be a bit paradoxical; recall the definition: "two events or random variables are considered statistically independent if the knowledge of one provides no information about the other."
Of course, there are degrees to which transformations will conjure irrelevance. But it would seem non-trivial to discern the signal from the noise inherent in each type of transformation which introduces even a small degree of statistical independence.
Accounting for that noise, and qualifying the effect each novel transformation has on the insights gained from each test result, may be just as much work as, or more work than, simply inventing a new test that is well justified, whose insights are understood, and which is designed from the start to provide specific value; and the added value is less clear. The former is definitely not as confidence-inspiring as the latter.