
Deterministic EC key derivation with anonymity and proofs


Following up on this question.

There are 4 parties:

  • Alice, who needs to prove possession of some statement $m$ unique to her (say a street address, basically a string of some predefined format) to
  • Bob, who consumes the proof.
  • An Oracle, who "helps" Alice prove $m$ by performing some external checks and signing a tuple of ("Alice", $m$, timestamp).
  • A good Samaritan, Sam, who wants to help Alice prove statement $m$ because he actually owns it.

Normal protocol flow is as follows:

  • Alice says "I can prove some unique info about me" to Bob.
  • Bob composes $m$ and asks Alice to prove it.
  • Alice asks Oracle(s) to help her prove $m$.
  • Oracle(s) do their magic and issue a signed tuple ("Alice", $m$, timestamp) to some online storage for using later.
  • Alice shares the ID of the signed tuple to Bob.
  • Bob fetches the tuple from online storage, validates Oracle's signature, timestamp and rewards Alice with whatever they agreed on.

Now, for the attacks:

If Alice does not possess $m$, she might want to ask Sam to pretend he is Alice and pass the Oracle check for her. She might share her "Alice" identity (we'll define it later) with him.
We want to prevent this, or at least make Alice's and Sam's lives very difficult in case they collude.

One way of achieving this is to track the public key that signed $m$ each time and impose a penalty if it changes.

So, we ask Alice to sign each statement with a special ephemeral key before sending it to an Oracle. This ephemeral key has to be derived from her master secret as $KDF(\text{master\_secret}, hash(m))$, which she can prove to Bob using some sort of ZKP. Then the only way for Alice to use Sam would be to share her master secret with him.
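A minimal sketch of what such a derivation could look like, assuming HKDF (RFC 5869) over SHA-256 with the result reduced modulo the secp256k1 group order. All names, the curve, and the hash choice here are illustrative assumptions, not part of the question:

```python
# Sketch (assumptions: HKDF-SHA256, secp256k1 order) of deriving a
# per-statement ephemeral scalar from a master secret and hash(m).
import hashlib, hmac

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order

def hkdf(key: bytes, info: bytes, length: int = 32) -> bytes:
    # RFC 5869: extract with an all-zero salt, then expand.
    prk = hmac.new(b"\x00" * 32, key, hashlib.sha256).digest()
    okm, t, i = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        okm += t
        i += 1
    return okm[:length]

def ephemeral_key(master_secret: bytes, m: str) -> int:
    info = hashlib.sha256(m.encode()).digest()          # hash(m)
    return int.from_bytes(hkdf(master_secret, info), "big") % N

# Same (master_secret, m) always yields the same key;
# different statements yield unlinkable keys.
k1 = ephemeral_key(b"alice-master-secret", "I own a ferrari")
k2 = ephemeral_key(b"alice-master-secret", "I own a ferrari")
k3 = ephemeral_key(b"alice-master-secret", "I own a porsche")
assert k1 == k2 and k1 != k3
```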

It also allows Alice to sign different statements without being doxxed by someone figuring out that $m_1$, $m_2$, $m_3$ were all proved by the same person.

So, the question is, is there a simpler way of achieving these goals than doing complex ZKP for KDFs?

John dow:
Message $m$ is some statement which Alice and verifiers agree on with the help of another protocol, which is out of scope. Yes, they do have an online communication phase prior to signing. These statements have a timestamp attached for expiry, so a single statement is supposed to be signed multiple times by a single $m$-dependent key. We want to make sure that only the owner of the master key did the signing, without revealing his identity.
John dow:
Let $m$ be the statement "I own a Ferrari", and suppose Bob gives a present to every Ferrari owner. An Oracle checks whether a person actually has a Ferrari and signs the tuple (statement, identity, timestamp). If Alice does not have one, she might ask her friend Sam to pass the Oracle check for her and present this to Bob. We want Sam to be penalized for this kind of action: the next time he passes the Oracle check he will have to use his own master key, which will be noticed, because previous signed statements are stored online, and he will be punished.
knaccc:
Would this be accurate: Zoe uses many pseudonyms, one of which is "Alice". Zoe discloses her real identity to the Oracle, and asks the Oracle to sign a timestamped statement declaring that her pseudonym "Alice" owns a Ferrari. "Alice" can then provide this signed statement to Bob to prove that "Alice" has a Ferrari. Zoe also wants to use the pseudonym "Alisha" and provide proof to Bob that "Alisha" owns a Porsche. The Oracle knows that Zoe owns both, but we don't want Bob to know that there is any connection between Zoe, "Alice" and "Alisha".
John dow:
Though there is no "real identity" per se, at least not something we could treat as "soulbound". Each new invocation of the protocol establishes ownership of some statement by some entity, let's say a public key. And we want to leave it up to Alice to use the same signing key per statement, but prevent her from sharing this key under penalty.
John dow:
Well, Alice needs to have some sort of persistent connection to the statements that she proves, so that next time she needs to prove the same statement we know that it is indeed her again, and not some other person whom she allowed to use her data to prove that he has a Ferrari.
John dow:
Because this keypair isn't related to any "master entity", it's possible for Alice to share it with whoever she wants without any consequences, and they will be able to sign her statements in the future.
knaccc:
If I understand correctly, if $a_1$ = KDF(master_secret, hash($m_1$)), then her new identity is the (private, public) key pair $(a_1, A_1 = a_1 \cdot G)$, where $G$ is the base point on the curve. She will sign further messages using this same key pair. But, disclosing $a_1$ to Sam does not disclose the master secret to Sam, due to the KDF construction. Have I missed something? You would need to ensure that disclosing $a_1$ also discloses the master secret.
John dow:
Yes, "disclosing $a_1$ also discloses the master secret" is the desired property. That's why we thought of some KDF (+ maybe timestamp) circuit: with it, Alice would have to also share her master secret with Sam for him to produce a valid proof.
Maarten Bodewes:
knaccc has provided an answer, but it is not clear to me how much of this info should be transferred to the question. Could you edit the question and add the information that is required (at least for knaccc's answer)?
Answer by knaccc:

Alice has a master private key scalar $a$, with corresponding public key $A=aG$. $G$ is a well-known base point on the curve.

Alice deterministically creates a new identity associated with a particular message $m$. She calculates the private key $b = \operatorname{HKDF_s}(a, m)$, where $\operatorname{HKDF_s}$ means to use HKDF to derive a scalar result. Alice calculates the corresponding public key $B = bG$.

Alice encrypts her master private key with a uniformly random scalar $k$, as $a'=a+k$. She calculates $K=kG$.

Alice discloses $A$, $B$, $K$ and $a'$ to the Oracle. The Oracle verifies that $a'G\overset{?}{=} A+K$.
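The Oracle's check $a'G \overset{?}{=} A+K$ can be sketched as follows, using a textbook, non-constant-time implementation of curve arithmetic for illustration only. The answer just says "the curve", so secp256k1 is my assumption here:

```python
# Illustration-only sketch of the blinding and the Oracle's check
# a'·G == A + K, assuming secp256k1 (short Weierstrass, y^2 = x^3 + 7).
import secrets

P = 2**256 - 2**32 - 977                       # field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(p1, p2):
    # None represents the point at infinity.
    if p1 is None: return p2
    if p2 is None: return p1
    x1, y1 = p1; x2, y2 = p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        lam = (3 * x1 * x1) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P      # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def scalar_mult(k, point):
    # Double-and-add (not constant-time; fine for a sketch).
    result = None
    while k:
        if k & 1:
            result = point_add(result, point)
        point = point_add(point, point)
        k >>= 1
    return result

a = secrets.randbelow(N - 1) + 1               # Alice's master private key
k = secrets.randbelow(N - 1) + 1               # uniformly random blinding scalar
A = scalar_mult(a, G)                          # master public key
K = scalar_mult(k, G)
a_blind = (a + k) % N                          # a' = a + k

# The Oracle's verification:
assert scalar_mult(a_blind, G) == point_add(A, K)
```

The check passes precisely because $a'G = (a+k)G = aG + kG = A + K$.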

The Oracle signs the tuple ($B$, $K$, $a'$, $m$, timestamp) and stores that signed tuple as part of the public record. The Oracle does not disclose $A$ publicly. This signed tuple asserts that the Oracle allows the public key $B$ to be associated with $m$.

From now on, when signing messages associated with this tuple, Alice must sign twice - once with $B$ and once with $K$.

This proves that Alice knows both $b$ and $k$.

Alice cannot disclose $b$ and $k$ to Sam, because Sam would be able to recover $a$ as $a = a'-k$.
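At the scalar level, the leak works as follows; this is a sketch with random values, assuming the secp256k1 group order from above:

```python
# Sketch: anyone holding k plus the public a' = a + k (mod n)
# recovers Alice's master private key a.
import secrets

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 order

a = secrets.randbelow(N - 1) + 1      # Alice's master private key
k = secrets.randbelow(N - 1) + 1      # blinding scalar
a_blind = (a + k) % N                 # a', published in the signed tuple

# Sam, given k (which he needs in order to sign with K), recovers a:
recovered = (a_blind - k) % N
assert recovered == a
```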

Note: scalar addition and subtraction are modulo the order of the group generated by the base point.

John dow:
Do you think it's possible to also hide the real $A$ from the Oracle? Maybe using an OPRF, a NIZK proof of DLOG, or even simple blinding. As much as we want to trust oracles, it would be desirable to eliminate the possibility of being doxxed by an oracle.
knaccc:
@Johndow How is the Oracle going to identify the requester and confirm that the requester owns a Ferrari, if the Oracle does not know the identity of the requester? My assumption was that the requester is identified by the public key $A$.
John dow:
In a perfect world an oracle only needs to track the mapping of $K$ to $m$ and a proof that $K$ was derived from something, but should not need $A$ itself.
knaccc:
@Johndow I think it would help if you explained more about what an Oracle does. The Oracle can't verify that Alice's freshly-created pseudonym owns a Ferrari if the Oracle does not know that the pseudonym is associated with Alice.
John dow:
This whole construction is needed to allow Alice to prove $m$ multiple times. That's why we store all those values as public records: so that when Alice comes to the oracle in a week to prove that she still owns the same Ferrari, he would accept that fact. However, if Alice decided to let Sam use her real-world credentials to prove that he owns her Ferrari to get some goods, it should be punishable. Either she would have to share her master secret with him, or he would use his own, but the oracle would notice that and put it on the public record.
knaccc:
How does Alice originally prove she owns a Ferrari? Surely you need a trusted party, in this case the Oracle, to provide a signed statement asserting that Alice owns a Ferrari.
John dow:
She asks the oracle to initiate the verification procedure, which is rather complex but results in the oracle saying "whoever did this owns a Ferrari". And it does not actually matter "who"; there is no preliminary registration step. Alice just presents a (pseudo)random value; it might be a wallet address or, in our case, a public key $K$ which the oracle records together with $m$.
knaccc:
For clarity, in my answer, the pseudo-identity is $B$, and $K$ is the public key associated with a decryption key $k$ that allows for recovery of the master private key $a$. My answer relies on the Oracle knowing that $A$ is a genuine identity, which prevents Alice from creating a one-time master secret key that she does not care about leaking to Sam. I can't think of a way around this, but then again I'm not sure I completely understand your entire scenario.
John dow:
Sorry for the delay, I was trying to rephrase and simplify this again. Looks like for our goals we need to be able to prove two things: 1. That Alice knows _some_ preimage of a key used to sign $m$-related data. 2. That she used the same preimage for $m_1$ and $m_2$. It allows Alice not to reveal the master public key to anyone, but achieves everything we need.
knaccc:
This question is looking a bit bloated already and the comments are full up - please open a new question about how to prove knowledge of pre-images. I suspect there will be more discussion necessary there to understand exactly what would be suitable for your purposes.
John dow:
Ok, thanks anyway, you were of great help.