- The general question is: why are ABE schemes usually (or at least sometimes only) proven secure in the selective-set-of-attributes model? Or even in a co-selective model (selective in both the attribute set and the policy function)? Is it just because of difficulties in the security proofs, i.e., in the reductions?
- More precisely, what limits the simulator in answering the adversary's secret-key queries? For instance, how can the simulator generate secret keys without knowing the master secret key?
- Given these limitations on the simulator's ability to generate secret keys, is the secret-key space partitioned into two subspaces: secret keys that the simulator can generate and secret keys that it cannot? As in [ABE slides, pages 14-23, Allison Bishop, The 3rd BIU Winter School 2013: Bilinear Pairings in Cryptography].
- What are the analogous limitations in lattice-based (LWE-based) ABE, given that some schemes achieve only selectively secure lattice-based ABE rather than fully secure, i.e., adaptively secure, lattice-based ABE?
In the following, some historical instances of selective security are quoted.
As in the first KP-ABE scheme of Goyal et al. [Vipul Goyal, Omkant Pandey, Amit Sahai, and Brent Waters. "Attribute-Based Encryption for Fine-Grained Access Control of Encrypted Data". Computer and Communications Security (CCS), pp. 89-98, 2006.]: ... We now discuss the security of an ABE scheme. We define a selective-set model for proving the security of the attribute-based [scheme] under chosen plaintext attack. This model can be seen as analogous to the selective-ID model [16, 17, 8] used in identity-based encryption (IBE) schemes [36, 10, 18]. ...
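For concreteness, the selective-set game from that paper can be sketched roughly as follows (my paraphrase, not a verbatim definition):
- Init: the adversary declares the challenge attribute set $\gamma^*$ before seeing any public parameters.
- Setup: the challenger runs Setup and gives the adversary the public parameters.
- Phase 1: the adversary adaptively queries secret keys for access structures $\mathbb{A}$, restricted to those that $\gamma^*$ does not satisfy.
- Challenge: the adversary submits two equal-length messages $m_0, m_1$; the challenger returns an encryption of $m_b$ under $\gamma^*$ for a random bit $b$.
- Phase 2: further key queries under the same restriction as Phase 1.
- Guess: the adversary outputs $b'$ and wins if $b' = b$.
The only difference from the full (adaptive) model is the Init step: committing to $\gamma^*$ up front is what lets a reduction program the public parameters around $\gamma^*$ so that it can answer every admissible key query without the master secret key.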
[D. Boneh and X. Boyen. Efficient Selective-ID Secure Identity Based Encryption Without Random Oracles. In Advances in Cryptology - Eurocrypt, volume 3027 of LNCS, pages 223-238. Springer, 2004.]:
... Canetti et al. [CHK03, CHK04] recently proposed a slightly weaker security model, called selective identity IBE. In this model, the adversary must commit ahead of time (non-adaptively) to the identity it intends to attack. The adversary can still issue adaptive chosen ciphertext and adaptive chosen identity queries. Canetti et al. are able to construct a provably secure IBE in this weaker model without the random oracle model. ... Canetti, Halevi, and Katz [CHK03, CHK04] define a weaker notion of security in which the adversary commits ahead of time to the public key it will attack. We refer to this notion as selective identity, chosen ciphertext secure IBE (IND-sID-CCA). ...
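The Boneh-Boyen paper also illustrates concretely what limits the simulator. As a rough sketch from memory of their first construction and its selective-ID proof (the exact statement and notation are in the paper): the public parameters are $(g, g_1 = g^\alpha, g_2, h)$, the master key is $g_2^\alpha$, and a key for identity $\mathsf{id}$ is $(g_2^\alpha (g_1^{\mathsf{id}} h)^r, g^r)$ for random $r$. Given a BDH instance $(g, g^a, g^b, g^c)$, the simulator sets $g_1 = g^a$, $g_2 = g^b$ and programs $h = g_1^{-\mathsf{id}^*} g^{y}$ for random $y$, which it can do only because $\mathsf{id}^*$ was committed to before Setup. For any $\mathsf{id} \neq \mathsf{id}^*$ it can then output the correctly distributed key $\big(g_2^{-y/(\mathsf{id}-\mathsf{id}^*)} (g_1^{\mathsf{id}} h)^{r},\ g_2^{-1/(\mathsf{id}-\mathsf{id}^*)} g^{r}\big)$, which implicitly uses randomness $\tilde{r} = r - b/(\mathsf{id}-\mathsf{id}^*)$ and so never requires the master key $g_2^a$. Since the expressions divide by $\mathsf{id}-\mathsf{id}^*$, the one key the simulator cannot produce is the key for $\mathsf{id}^*$ itself; this is exactly the partition of the key space into keys the simulator can generate and keys it cannot, as asked above.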
[R. Canetti, S. Halevi, and J. Katz. Chosen Ciphertext Security from Identity Based Encryption. In Advances in Cryptology - Eurocrypt, volume 3027 of LNCS, pages 207-222. Springer, 2004.]:
... We now give a definition of security for IBE. As mentioned earlier, this definition is weaker than that given by Boneh and Franklin and conforms to the “selective-node” attack considered by Canetti et al. [CHK03]. Under this definition, the identity for which the challenge ciphertext is encrypted is selected by the adversary in advance (i.e., “non-adaptively”) before the public key is generated. ...
[R. Canetti, S. Halevi, and J. Katz. A Forward-Secure Public-Key Encryption Scheme. In Advances in Cryptology - Eurocrypt, volume 2656 of LNCS. Springer, 2003.]:
... The security notion that we present here for binary tree encryption (BTE) requires the attacker to commit to the node to be attacked in advance (i.e., before seeing the public key); we call this attack scenario a selective-node (SN) attack (cf. "selective forgery" of signatures [23]). While this definition is weaker than the corresponding definition for HIBE achieved by [19], it suffices for constructing a forward-secure PKE scheme from any BTE scheme (cf. Section 3) in the standard model. ...
[S. Goldwasser, S. Micali, and R. Rivest. A digital signature scheme secure against adaptive chosen-message attacks. SIAM J. Computing, 17(2):281-308, April 1988.]:
... What Does It Mean To "Break" a Signature Scheme?
One might say that the enemy has "broken" user $A$'s signature scheme if his attack allows him to do any of the following with a non-negligible probability:
A Total Break: Compute $A$'s secret trap-door information.
Universal Forgery: Find an efficient signing algorithm functionally equivalent to $A$'s signing algorithm (based on possibly different but equivalent trap-door information).
Selective Forgery: Forge a signature for a particular message chosen a priori by the enemy.
Existential Forgery: Forge a signature for at least one message. The enemy has no control over the message whose signature he obtains, so it may be random or nonsensical. Consequently, this forgery may only be a minor nuisance to $A$.
...
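To tie this last quote back to the earlier ones: the four attack goals form a hierarchy, total break $\Rightarrow$ universal forgery $\Rightarrow$ selective forgery $\Rightarrow$ existential forgery (each capability implies the next), so security against existential forgery is the strongest of the corresponding notions. The naming analogy invoked by Canetti, Halevi, and Katz is that a selective forger commits to the target message a priori, just as a selective-ID (or selective-set) adversary commits to the target identity (or attribute set) before the public parameters are generated.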