Score:3

How to formalize part of a protocol in ProVerif?


I'm trying to formalize the following protocol in $\mathsf{ProVerif}$, where $m$ and $p$ are messages, $j$, $k$, and $h$ are private keys, and $\{m\}_k$ is the ciphertext obtained by encrypting $m$ with $k$. Furthermore, $f$ and $g$ are functions whose result reveals no information about either of their arguments. I also want to prove that no adversary that does not know $j$, $k$, or $h$ can determine the value of $m$ or the value of $p$.

$A:\;\mathsf{Send}\;\{m\}_k\;\mathsf{to}\;B$

$A:\;\mathsf{Send}\;\{p\}_h\;\mathsf{to}\;C$

$B:\;\mathsf{Send}\;\{m\}_j\;\mathsf{to}\;C$

$C:\;\mathsf{Send}\;f(m, p)\;\mathsf{to}\;A$

$C:\;\mathsf{Send}\;g(m, p)\;\mathsf{to}\;B$

So far I've written the following code:

free c : channel.

(*Symmetric encryption*)
type key.

fun senc(bitstring, key): bitstring. (*Encryption*)

reduc forall m: bitstring , k: key; sdec(senc(m, k), k) = m. (*Decryption*)

free m, p: bitstring [private].
query attacker(m).
query attacker(p).

not attacker(new sk_J).
not attacker(new sk_K).
not attacker(new sk_H).


(*A: send {m}_k to B and {p}_h to C*)
let processA(sk_K: key, sk_H: key) =
    out(c, senc(m, sk_K));
    out(c, senc(p, sk_H)).


(*B: receive {m}_k, re-encrypt m under j, and send {m}_j to C*)
let processB(sk_K: key, sk_J: key) =
    in(c, x: bitstring);
    let z = sdec(x, sk_K) in
    out(c, senc(z, sk_J)).

(*C: receive {p}_h from A and {m}_j from B*)
let processC(sk_J: key, sk_H: key) =
    in(c, x: bitstring);
    let z1 = sdec(x, sk_H) in
    in(c, y: bitstring);
    let z2 = sdec(y, sk_J) in
    0.


process
    new sk_J: key;
    new sk_K: key;
    new sk_H: key;
    (
        (!processA(sk_K, sk_H)) | 
        (!processB(sk_K, sk_J)) | 
        (!processC(sk_J, sk_H))
    )

This code successfully verifies secrecy (I think) for the first three messages; my question now is: how can I formalize the last two operations, i.e. the functions $f$ and $g$, in $\mathsf{ProVerif}$?

Score:1

You should be able to just specify the functions with

fun f(bitstring,bitstring): bitstring.
fun g(bitstring,bitstring): bitstring.

and then use them in processC with

out(c, f(z2, z1));

and

out(c, g(z2, z1)).

respectively (note that z2 = m and z1 = p, so these correspond to f(m, p) and g(m, p)). Unless you specify destructors for f or g via reduc, the model does not let the adversary learn anything about their arguments. If you also want to prevent the adversary from computing f and g on values it already knows, declare the functions as [private].
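
Putting this together (reusing the channel, key type, and senc/sdec declarations from your code), a minimal sketch of the completed declarations and processC could look like this; nothing here is fixed, it is just one way to write it:

fun f(bitstring, bitstring): bitstring. (*No reduc rules, so f reveals nothing about its arguments*)
fun g(bitstring, bitstring): bitstring.
(*Optionally declare them with [private] if the adversary should not be able to apply f and g itself*)

let processC(sk_J: key, sk_H: key) =
    in(c, x: bitstring);
    let z1 = sdec(x, sk_H) in (*z1 = p, received from A*)
    in(c, y: bitstring);
    let z2 = sdec(y, sk_J) in (*z2 = m, received from B*)
    out(c, f(z2, z1)); (*f(m, p), intended for A*)
    out(c, g(z2, z1)). (*g(m, p), intended for B*)

With these additions, the queries attacker(m) and attacker(p) should still report that m and p stay secret, since without destructors the outputs of f and g give the attacker no way to recover their arguments.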
