I think all of that should be possible, provided the algorithm's success probability is independent of (uncorrelated with) the high bits of the discrete logarithm.
Consider a type 1 oracle whose probability of success is, say, about 1 in a million. Given a discrete logarithm problem $y=g^x$ (given $y$, find $x$ with $0\le x<\ell$), we can guess the bottom 4 bits of $x$, repeating the whole procedure for each of the 16 possibilities. Writing $b$ for the guess, set $y'=(yg^{-b})^{16^{-1}\bmod \ell}$ (the inverse exists since $\ell$ is odd); when the guess is correct, our problem becomes equivalent to solving $y'=g^{[x/16]}$, and the bits of $[x/16]$ are the bits of $x$ shifted down by 4. As usual we recover the discrete logarithm bit by bit, shifting down as we go. To recover the low bit of $[x/16]$ we choose a few million random $r$ in the range $[0,\ell-\ell/16]$ and run our algorithm on $y'g^r$, which has discrete logarithm $[x/16]+r$ with $0\le [x/16]+r<\ell$, so no wraparound modulo $\ell$ occurs. We expect the oracle to succeed at least once, and the low bit of $[x/16]$ is then the bit it returns XORed with the least significant bit of $r$ (the low bit of a sum is the XOR of the low bits, since there is no carry into bit 0). Rinse; repeat.
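Here is a minimal sketch of the type 1 case in Python, assuming (as above) an oracle that visibly fails rather than guessing. The toy group, the inflated $10^{-3}$ success rate, and all helper names are illustrative stand-ins, and the "oracle" cheats with a demo-only lookup table:

```python
import random

p, ell, g = 2039, 1019, 4                      # toy group: the order-ell subgroup of Z_p^*
DLOG = {pow(g, e, p): e for e in range(ell)}   # demo-only table so the fake oracle can cheat

def oracle_type1(h, rate=1e-3):
    """Stand-in for the hypothetical type 1 algorithm: with small probability
    it succeeds and returns lsb(dlog_g(h)); otherwise it visibly fails (None)."""
    return DLOG[h] & 1 if random.random() < rate else None

def low_bit_type1(y_prime, bound, tries=20_000):
    """Recover lsb(t) from y' = g^t, assuming 0 <= t < bound."""
    for _ in range(tries):
        r = random.randrange(ell - bound)             # ensures t + r < ell: no wrap mod ell
        bit = oracle_type1(y_prime * pow(g, r, p) % p)
        if bit is not None:
            return bit ^ (r & 1)                      # lsb(t+r) = lsb(t) XOR lsb(r)
    raise RuntimeError("oracle never succeeded")
```

With a $10^{-3}$ success rate and 20,000 tries we expect about 20 successes per bit, so the loop essentially never falls through; with the 1-in-a-million oracle you would scale `tries` up to a few million, exactly as described above.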
Likewise, for a type 2 algorithm with success probability, say, 0.501, we construct the same $y'$ and again sample, say, 100 million values of $r$. We get 100 million predictions for the least significant bit of $[x/16]$, of which about 50,100,000 are correct and about 49,900,000 are incorrect; the expected excess of 100,000 correct votes is about 20 standard deviations ($\sqrt{10^8\cdot 0.25}=5{,}000$), so the chance of getting more incorrect predictions than correct ones is vanishingly small. Rinse; repeat.
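For the type 2 case only the vote-counting changes: every call returns a slightly biased guess and we take the majority. A sketch with the same toy setup, where the simulated advantage is exaggerated from 0.501 to 0.55 so that a few thousand samples suffice for a quick demo (the 0.501 oracle works identically but needs the ~100 million votes stated above):

```python
import random

p, ell, g = 2039, 1019, 4
DLOG = {pow(g, e, p): e for e in range(ell)}   # demo-only lookup, as before

def oracle_type2(h, adv=0.55):
    """Stand-in for the hypothetical type 2 algorithm: returns lsb(dlog_g(h))
    with probability adv, and the flipped bit otherwise."""
    bit = DLOG[h] & 1
    return bit if random.random() < adv else bit ^ 1

def low_bit_type2(y_prime, bound, samples=10_000):
    """Recover lsb(t) from y' = g^t (0 <= t < bound) by majority vote
    over many randomized shifts r."""
    votes = 0
    for _ in range(samples):
        r = random.randrange(ell - bound)
        votes += oracle_type2(y_prime * pow(g, r, p) % p) ^ (r & 1)
    return int(2 * votes > samples)
```

With advantage 0.55 the expected margin over 10,000 samples is 1,000 votes, i.e. 20 standard deviations, matching the 20-sigma margin the 100-million-sample figure gives at advantage 0.501.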
In both cases the inputs to the putative algorithm are uniformly selected from a large set of elements (covering most of our group) whose discrete logarithms lie in a particular interval. Unless our algorithm's power is concentrated on elements outside such sets, we should be able to recover the full discrete logarithm.
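To make the "rinse; repeat" explicit, here is a sketch of the full recovery built on `low_bit_type1` from the first sketch (`low_bit_type2` is a drop-in replacement). Peeling off each recovered bit divides the remaining exponent by 2, which uses $2^{-1}\bmod\ell$ and again relies on $\ell$ being odd; the structure and names are illustrative:

```python
def recover(y, guess4):
    """Recover x from y = g^x given a guess for its bottom 4 bits,
    or return None if the guess was wrong."""
    inv16, inv2 = pow(16, -1, ell), pow(2, -1, ell)          # exist since ell is odd
    t_elem = pow(y * pow(g, ell - guess4, p) % p, inv16, p)  # = g^[x/16] iff guess is right
    bound, t, weight = ell // 16 + 1, 0, 1                   # [x/16] < bound
    while bound > 1:
        bit = low_bit_type1(t_elem, bound)                   # or low_bit_type2
        t += weight * bit
        weight *= 2
        # shift the remaining exponent down one bit: g^t' -> g^((t'-bit)/2)
        t_elem = pow(t_elem * pow(g, ell - bit, p) % p, inv2, p)
        bound = (bound + 1) // 2
    x = (16 * t + guess4) % ell
    return x if pow(g, x, p) == y else None

x_true = random.randrange(ell)
y = pow(g, x_true, p)
for b in range(16):                      # "rinse; repeat" over the 16 guesses
    x = recover(y, b)
    if x is not None:
        print(x == x_true)               # True, except with negligible probability
        break
```

The final check $g^x = y$ filters out the 15 wrong guesses for the bottom 4 bits, so an occasional garbage run on a wrong guess costs only time.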