
Uniform and Non-Uniform PPTs


While reading a paper, I came across a point where it mattered whether the authors were assuming uniform or non-uniform attackers. As far as I know, a non-uniform PPT adversary is essentially an infinite sequence of PPT machines $\mathcal{A}=\{\mathcal{A}_\lambda\}_{\lambda\in\mathbb{N}}$, and on input $x$ of size $|x| = \lambda$, the machine $\mathcal{A}_\lambda$ is invoked. Why is this model "stronger" than the uniform one? Couldn't an attacker $\mathcal{A}$ just embed all the other attackers into its own description and invoke the suitable one? Are there problems we know how to solve in the non-uniform model but not (or only with weaker guarantees) in the uniform one?
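To make the dispatch concrete, here is a minimal Python sketch of the model as I understand it (the table `A` and the helper `run_nonuniform` are just illustrative names):

```python
# Illustrative sketch: a non-uniform adversary is a family of
# per-length machines, and we dispatch on the input length.
# In the model this table simply *exists* for every lambda; no single
# finite program is required to be able to generate its entries.
A = {
    1: lambda x: 0,  # A_1: may hard-code facts about length-1 inputs
    2: lambda x: 1,  # A_2: may hard-code facts about length-2 inputs
    # ... one machine per input length, ad infinitum
}

def run_nonuniform(x):
    lam = len(x)      # security parameter = |x|
    return A[lam](x)  # invoke A_lambda on x
```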

Answer (Mark):

Consider a unary enumeration of all Turing machines, e.g. a machine $M$ is represented by $1^{\lambda_M}$ for some unique integer $\lambda_M$. The halting problem is uncomputable even in this "sparse" representation. But it is non-uniformly computable: each Turing machine $M$ either halts or does not, and we can hard-code that one-bit answer into the corresponding machine $A_{\lambda_M}$, as sketched below.

If we could "uniformly generate" the $A_i$'s, meaning efficiently generate them solely from the description of $i$, you would be right. In the above example you can't, as you don't know whether each Turing machine $M$ halts.
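To make the hard-coding concrete, here is a hypothetical Python sketch (the table `HALTS` stands in for the mathematically well-defined but uncomputable sequence of answers):

```python
# Hypothetical sketch of the non-uniform decider for the "sparse"
# halting problem. HALTS[lam] = 1 if the Turing machine encoded as
# 1^lam halts, and 0 otherwise. The infinite table is well defined,
# but no uniform algorithm can compute its entries -- which is exactly
# why this family cannot be generated uniformly.
HALTS = {1: 1, 2: 0, 3: 1}  # stand-in values; the real table is uncomputable

def A(x):
    """A_lambda on input x = 1^lambda: return the hard-coded bit."""
    lam = len(x)
    return HALTS[lam]  # one bit of advice per input length
```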

Answer:
Another way to see that non-uniform algorithms can do things that uniform algorithms can't: there are uncountably many non-uniform algorithms but countably many uniform algorithms.
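Spelled out: every infinite bit string $b = (b_1, b_2, \dots) \in \{0,1\}^{\mathbb{N}}$ defines a distinct non-uniform family via $\mathcal{A}^b_\lambda(x) = b_\lambda$, so there are $2^{\aleph_0}$ (uncountably many) such families, while a uniform algorithm is a single finite string over a finite alphabet, of which there are only $\aleph_0$ (countably many).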
jacobi_matrix:
Thank you for your answers. I was wondering whether we know of practical problems that become easier in the non-uniform model. Rephrasing: how can an encryption/commitment/signature/... scheme offer a certain level of security when we assume uniform adversaries but a lower level against non-uniform adversaries, if we can build a uniform adversary that hard-codes the others into its own description (while keeping a polynomial-size description)?
Mark:
@jacobi_matrix See section 4 of [another look at tightness 2](https://eprint.iacr.org/2016/360.pdf) for some discussion of non-uniform hardness assumptions. It is worth mentioning that the somewhat related notion of "pre-processing" attacks (where one gets instance-specific advice) has a number of examples where the advice helps (and can lead to practical attacks), for example the [logjam attack](https://en.wikipedia.org/wiki/Logjam_(computer_security)), or many problems in lattice-based cryptography (although most lattice cryptosystems generate fresh instances each time, limiting the danger of precomputation).
Mark:
@jacobi_matrix The issue is that you can't build a uniform adversary hard-coding the others into its description: generating an arbitrary $A_i$ may not be efficiently computable, or even computable at all (see the "sparse halting problem" non-uniform algorithm in my answer). Moreover, you can't just "store" them all in the uniform adversary, as the uniform adversary would then have too large a description.
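A minimal sketch, under the same hypothetical names as above, of why the "embed them all" strategy breaks uniformity:

```python
# Hypothetical sketch: why a uniform adversary cannot absorb a
# non-uniform family. A uniform machine is ONE fixed, finite program
# that must work for every input length.
ADVICE = {1: 1, 2: 0}  # any concrete program text can only ever list
                       # finitely many entries...

def uniform_attempt(x):
    lam = len(x)
    # ...so either ADVICE misses almost all lengths (KeyError below),
    # or the program text grows with lambda and is no longer a single
    # finite description. Computing ADVICE[lam] on demand is no way out
    # either: it may be uncomputable, as with the sparse-halting bits
    # in the answer above.
    return ADVICE[lam]
```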
jacobi_matrix:
Perfect! The section you linked and the logjam attack example really did the trick. The problem was that I thought (wrongly, obviously) that the advice string had to be the input size itself (e.g. encoded in unary). Thanks again.