Score:4

Indistinguishability of symmetric encryption under CCA


I am learning about symmetric encryption and its security properties. One of these security notions is security against chosen-ciphertext attacks (CCA), in particular the IND-CCA notion.

Under this notion, the adversary has access to both an encryption oracle and a decryption oracle. The IND-CCA game/experiment imposes an important restriction on the adversary: he may not submit the challenge ciphertext itself to the decryption oracle; otherwise he could trivially win.
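The experiment just described can be sketched in code. This is a minimal illustration and every class and method name here is my own invention, not from any library; the toy XOR scheme exists only so the oracles have something to call (it is deliberately insecure):

```python
import secrets

class XorScheme:
    """Toy one-time-pad-style scheme, used only to exercise the game below.
    It is malleable, so it fails IND-CCA."""
    @staticmethod
    def keygen():
        return secrets.token_bytes(32)

    @staticmethod
    def enc(key, m):
        return bytes(k ^ b for k, b in zip(key, m))

    dec = enc  # XOR is its own inverse

class INDCCAGame:
    """IND-CCA experiment: the adversary gets encryption and decryption
    oracles; the only restriction is that the challenge ciphertext itself
    may not be sent to the decryption oracle."""
    def __init__(self, scheme):
        self.scheme = scheme
        self.key = scheme.keygen()
        self.challenge_ct = None

    def enc_oracle(self, m):
        return self.scheme.enc(self.key, m)

    def dec_oracle(self, c):
        if c == self.challenge_ct:
            raise ValueError("may not decrypt the challenge ciphertext")
        return self.scheme.dec(self.key, c)

    def challenge(self, m0, m1):
        assert len(m0) == len(m1)
        self.b = secrets.randbelow(2)  # hidden bit the adversary must guess
        self.challenge_ct = self.scheme.enc(self.key, (m0, m1)[self.b])
        return self.challenge_ct
```

Against this malleable scheme the restriction alone doesn't save us: an adversary can flip one bit of the challenge ciphertext (a different, hence permitted, query), decrypt it, flip the bit back, and read off which message was encrypted.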

I understand the need for this restriction when formalizing the notion. But I do not understand how this game/experiment models a real-life scenario. What aspect of reality does this notion capture?

Titanlord:
In [Katz & Lindell's textbook (2nd edition)](https://www.cs.umd.edu/~jkatz/imc.html) in chapter 3.7.2 about Padding Oracle Attacks (page 98) you should find the explanation you are looking for.
Score:7

CCA security always seems extreme to people who are just learning about it. The premise seems ridiculous: why would we give the attacker so much power? Why would we let the attacker decrypt almost anything it wants and learn the entire result of decryption? In the real world, when would we ever act as a decryption oracle for an attacker?

I like to motivate CCA security in two different ways:

(1) If I write down a secret message inside an envelope, and you never get to touch that envelope, you can't tell what's inside my envelope. What if I also agreed to open any other envelope in the world -- would that help you figure out what's inside this special envelope? Of course not. Why would decrypting what's inside ciphertext #1 tell you about what's inside ciphertext #2? I wouldn't want to buy a box of envelopes with that property, and I wouldn't want to use an encryption scheme with that property.

(2) You probably had similar reservations about CPA security: in the real world, when do we ever let an attacker completely choose what things we encrypt? That's a valid question, but suppose we had a security definition that didn't allow the attacker to influence the choice of plaintext at all. Then, each time we encrypt in the real world, we would have to be absolutely sure that the plaintext had zero influence from any attacker -- that's the only way we could be sure that this hypothetical security definition applied to our situation.

Since that's not realistic, our security definition must allow an attacker to have at least some influence on the plaintexts that are being encrypted. But it just doesn't make for a useful definition to allow the attacker to have some weirdly defined partial influence over the choice of plaintexts. The level of influence in the real world depends heavily on the specific application, and we don't want a million different security definitions for a million different application scenarios. The easiest thing to do is what the CPA security definition does: we might as well aim to protect against attackers who completely choose what plaintexts are encrypted! Even though such full adversarial control over plaintexts doesn't reflect a single realistic scenario, it is general enough to apply well to all scenarios where the attacker has some influence over the choice of plaintexts.

A similar situation holds with respect to decryption. Can you imagine a web server that accepts ciphertexts from the outside world, decrypts those ciphertexts, and then does something based on the result of decryption (i.e., if the result of decryption is this, then do this; otherwise do that)? If this sounds natural, then your security definition must give the attacker the ability to learn something about the result of decryption on adversarially crafted ciphertexts. (The CPA definition doesn't capture this situation.) So how much information should the attacker get? In the real world, the answer depends heavily on the particular application. If we want a general-purpose security definition that applies to many realistic scenarios, then that definition should simply give as much power as possible to the attacker. In this case, it should just let the attacker freely decrypt whatever it wants and learn the entire result of decryption (except in ways that obviously trivialize the security game).
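A toy version of such a server, under an assumed malleable stream-cipher-style scheme (everything here is hypothetical and deliberately weak, not a real protocol): the server reveals only one yes/no bit about each decryption, yet that is enough for an attacker to recover an entire plaintext, in the spirit of padding-oracle attacks.

```python
import hashlib
import secrets

KEY = secrets.token_bytes(16)

def keystream(key, n):
    # Toy keystream via repeated hashing -- for illustration only.
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def enc(key, m):
    return bytes(a ^ b for a, b in zip(m, keystream(key, len(m))))

dec = enc  # XOR stream cipher: decryption is the same operation

def server(c):
    """Decrypts and branches on the result. The attacker never sees the
    plaintext -- only which branch was taken (one bit per query)."""
    return dec(KEY, c)[-1] == 0x01

secret = b"attack at dawn"
c = enc(KEY, secret)

# Because the scheme is malleable, mauled prefixes of c are "related"
# ciphertexts, and the server's one-bit answers reconstruct every byte.
recovered = bytearray()
for i in range(1, len(c) + 1):
    for g in range(256):
        mauled = c[:i-1] + bytes([c[i-1] ^ g])
        if server(mauled):
            recovered.append(0x01 ^ g)  # dec(mauled)[-1] == secret[i-1] ^ g
            break

assert bytes(recovered) == secret
```

The attacker makes at most 256 queries per byte, asks only the server's one yes/no question, and still learns everything -- which is exactly the kind of leakage a CCA-secure scheme rules out.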

CCA security is not meant to reflect a single real-world scenario in which we expose a full decryption oracle to the attacker. Rather, it is meant to be general enough to apply to any scenario where the attacker learns something about the result of decrypting adversarially generated ciphertexts.

driewguy:
Great explanation! I now understand some of the rationale behind the security notions, and I can see how IND-CCA1 security would be relevant. But the IND-CCA2 notion, where the adversary can choose ciphertexts to send to the decryption oracle after seeing the challenge ciphertext, still seems far-fetched. What scenarios does IND-CCA2 model? If the adversary is allowed to choose ciphertexts after seeing the challenge ciphertext, what prevents him from submitting the challenge ciphertext itself to the decryption oracle and learning the plaintext? Again, I understand the need for such a restriction in the formal definition of the experiment.
Yet another way to understand CCA: suppose I accept a ciphertext $c$ and I am willing to reveal the answer to one tiny question about $\textsf{Dec}(c)$. CCA security ensures that I reveal only what I intend to. Without CCA security, if I answer that question about $c$ and $c_1$ and $c_2$, etc., the answers might reveal more than I wanted about $c$. Without CCA security, information about $c$ can "leak into" other ciphertexts $c_i$, so that the answer to this question about $c_i$ lets the attacker learn more about what's inside $c$.

