
LWE Decryption: Generating errors for (c1, c2) that match the binary message m


In the encryption process, errors e1 and e2 are added to the ciphertexts c1 and c2 respectively, producing the noisy ciphertexts u and v.

c1 = A * r
c2 = b * r + m * (q/2)

u = c1 + e1
v = c2 + e2

However, choosing random values for e1 and e2 can cause u and v to decrypt to the wrong message m. Wikipedia and several research papers suggest drawing e1 and e2 from a discrete Gaussian distribution. The requirement is that the noise distribution χ satisfies ||e|| ≤ q/4 with high probability, for e ← χ. We can choose χ to be a discrete Gaussian distribution that satisfies this constraint. However, I am not sure how to go about that.
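One concrete way to do it, as a sketch (this is my own illustration, not from the question's sources: it approximates the discrete Gaussian by rounding a continuous one, and the values of q and sigma below are example choices):

```python
import numpy as np

def sample_discrete_gaussian(sigma, size=None, rng=None):
    # Rounded continuous Gaussian: a simple stand-in for a true discrete
    # Gaussian sampler, adequate for toy LWE experiments (not constant-time,
    # not suitable for production cryptography).
    rng = np.random.default_rng() if rng is None else rng
    return np.rint(rng.normal(0.0, sigma, size)).astype(int)

q = 3329           # example modulus (illustrative, not from the question)
sigma = q / 24     # 6*sigma = q/4, see note below

e1 = sample_discrete_gaussian(sigma, size=7)  # error vector added to c1
e2 = sample_discrete_gaussian(sigma)          # scalar error added to c2
```

Because a Gaussian sample exceeds 6 standard deviations only with probability about 2e-9, choosing sigma = q/24 keeps each individual error below q/4 with overwhelming probability; if several errors accumulate during decryption you would shrink sigma further.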

These are my Python functions for encryption and for picking e1 and e2. They are incorrect: decryption does not recover the correct binary values.

def encrypt(binaryMessage):

    m = ''
    u_list = []
    v_list = []

    s, q, A, b = getKeyValues()

    binary_list = [int(bit) for bit in binaryMessage]

    print("no. of bits:", len(binary_list))

    for m_bit in binary_list:

        # NOTE: r should be freshly sampled at random for every bit;
        # a fixed r reuses the same mask and is insecure.
        r = np.array([0, 0, 1, 0, 0, 0, 1])

        c1 = np.dot(A.T, r)

        c2 = round(np.dot(b, r) + m_bit * (q / 2))

        # pickE1/pickE2 add the errors e1, e2; reduce mod q afterwards
        u = pickE1(c1) % q
        v = pickE2(c2) % q

        u_list.append(u)
        v_list.append(v)

        # Decrypt from the noisy v, not the noiseless c2, and do not
        # overwrite the loop variable m_bit
        m += computeM(u, v, q, s)

    print(m)

> We can choose $\chi$ to be a discrete Gaussian distribution that satisfies this constraint. However, I am not sure how to go about that.

There are a few ways one can do this. The most straightforward is a theoretical (probabilistic) analysis, though if you are more computationally minded you can numerically compute the relevant probabilities.

Many authors have detailed descriptions of the first approach (it is really a standard exercise in probability theory). The second approach can be found in Vadim Lyubashevsky's notes, see for example section 2.3.
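A hedged sketch of the numerical route (my own code; it assumes a centered discrete Gaussian on the integers with width parameter sigma, truncated far out in the tail, and q below is an illustrative choice): sum the distribution's weights and measure how much mass falls outside the decryption bound.

```python
import math

def tail_prob(sigma, bound, cutoff_sigmas=20):
    # P(|e| > bound) when e is drawn from the discrete Gaussian on the
    # integers, with weights proportional to exp(-x^2 / (2*sigma^2)),
    # truncated at cutoff_sigmas standard deviations.
    cutoff = int(cutoff_sigmas * sigma)
    weights = {x: math.exp(-x * x / (2.0 * sigma ** 2))
               for x in range(-cutoff, cutoff + 1)}
    total = sum(weights.values())
    bad = sum(w for x, w in weights.items() if abs(x) > bound)
    return bad / total

q = 3329
print(tail_prob(sigma=q / 24, bound=q / 4))  # tiny, so decryption failures are negligible
```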

That being said, it's not clear to me that (mis)setting $q$ is your only issue. In particular

c1 = A * r
c2 = b * r + m * (q/2)

u = c1 + e1
v = c2 + e2

looks a little different than I would expect. There are (roughly) two ways to build lattice-based public-key encryption from lattice-based secret-key encryption. Both require "replacing the public key with something that looks uniform". The two approaches are appealing to the Leftover Hash Lemma (using something that is statistically close to uniform), or appealing to the LWE assumption (using something that is computationally close to uniform).
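For contrast, here is a minimal sketch (my own toy code, with illustrative parameters, not a secure implementation) of the textbook Regev-style construction built directly on the LWE assumption: the error term lives in the public key b = A·s + e, and encryption is just a random subset-sum of the published samples, with no fresh e1/e2 added to the ciphertext.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Regev-style scheme; parameters chosen for readability, not security.
n, m, q = 10, 100, 3329
sigma = 4  # error width: |r . e| stays far below q/4 for these sizes

s = rng.integers(0, q, n)                        # secret key
A = rng.integers(0, q, (m, n))                   # public matrix
e = np.rint(rng.normal(0, sigma, m)).astype(int)
b = (A @ s + e) % q                              # public key is (A, b); the errors live here

def enc(bit):
    # Encryption is a random subset-sum of LWE samples;
    # no fresh errors are added to the ciphertext.
    r = rng.integers(0, 2, m)
    u = (r @ A) % q
    v = (int(r @ b) + bit * (q // 2)) % q
    return u, v

def dec(u, v):
    # v - <u, s> = r.e + bit*(q//2) (mod q); round to 0 or q/2.
    d = (v - int(u @ s)) % q
    return 1 if q // 4 < d < 3 * q // 4 else 0

assert all(dec(*enc(bb)) == bb for bb in [0, 1, 1, 0])
```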

Anyway, this looks like mixing the two to me. It's plausible what you're doing is fine, but I just wanted to explicitly mention to you that it seems somewhat sketchy to me, and I would caution you to make sure the source you're working from is correct/reputable (or look for another source, such as the lecture notes I linked to you above).
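As a sanity check on the decryption side: whatever the encryption looks like, the final per-bit decoding is usually a rounding test on $v - \langle s, u \rangle$. A sketch, assuming your `computeM(u, v, q, s)` signature (the body is my guess, not your code):

```python
import numpy as np

def computeM(u, v, q, s):
    # v - <s, u> should be approximately m*(q/2) plus small noise;
    # output '1' when the centered value is closer to q/2 than to 0.
    d = int(v - np.dot(s, u)) % q
    return '1' if q / 4 < d < 3 * q / 4 else '0'
```

With matching parameters this recovers the bit as long as the accumulated noise stays below q/4 in absolute value.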

I am using standard LWE lattice-based public-key cryptography, based on a few research papers. Thanks for pointing that out. Can you clarify what you mean by 'mixing the two'? Is this implementation different from the conventional one?