Since we are working in a cryptographic setting, we may assume that our two distributions $p$ and $q$ are supported on a finite set, say
$[n]=\{1,\ldots,n\}$. Typically $n=2^b$, where $b$ is a bit length corresponding to some security parameter.
The paper that started this line of research in the CS literature is by Batu et al. (see here). The distance used is the total variation distance
$$
TV(p,q):=\frac{1}{2}\sum_{x \in [n]} |p(x)-q(x)| \;=\; \max_{S\subseteq [n]} |p(S)-q(S)|,
$$
which is exactly the best advantage a (computationally unbounded) distinguisher can achieve from a single sample, so it directly measures the distinguishability of the two distributions.
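For concreteness, here is a minimal Python sketch of this definition for distributions given explicitly as probability vectors of length $n$ (the function name `tv_distance` is ours):

```python
def tv_distance(p, q):
    """Total variation distance between two distributions on [n],
    given as probability vectors p[0..n-1] and q[0..n-1]."""
    assert len(p) == len(q)
    # TV(p, q) = (1/2) * sum_x |p(x) - q(x)|
    return 0.5 * sum(abs(px - qx) for px, qx in zip(p, q))

# Example: a fair coin vs. a slightly biased coin (n = 2).
print(tv_distance([0.5, 0.5], [0.6, 0.4]))  # 0.1 = best single-sample distinguishing advantage
```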
Theorem 1. Given $\epsilon, \delta>0$ and distributions $p,q$ over $[n]$, there is a test which runs in time
$O(\epsilon^{-4} n^{2/3} \log n \log \frac{1}{\delta})$ such that if
$$TV(p,q)\leq \max\left(\frac{\epsilon^2}{32\, n^{1/3}},\frac{\epsilon}{4\, n^{1/2}}\right)\quad(1),$$
then the test outputs PASS (i.e., reports that $p$ and $q$ are at least as close as the bound in (1)) with probability $\geq 1-\delta$.
Conversely, if $TV(p,q)> \epsilon$, the test outputs FAIL (i.e., reports that $p$ and $q$ are more than $\epsilon$ apart in TV distance) with probability $\geq 1-\delta$.
The "runs in time" refers to sampling the distributions in a black box query model which would apply here. Note that the probability of correct answer by the test is equal in both cases.