CCA security always seems extreme to people who are just learning about it. The premise seems ridiculous: why would we give the attacker so much power? Why would we just let the attacker decrypt almost anything it wants, and learn the entire result of decryption?
In the real world, when would we ever just act as a decryption oracle to an attacker?
I like to motivate CCA security in two different ways:
(1) If I write down a secret message inside an envelope, and you never get to touch that envelope, you can't tell what's inside my envelope.
What if I also agreed to open any other envelope in the world -- would that help you figure out what's inside this special envelope?
Of course not.
Why would decrypting what's inside ciphertext #1 tell you about what's inside ciphertext #2?
I wouldn't want to buy a box of envelopes with that property, and I wouldn't want to use an encryption scheme with that property.
(2) You probably had similar reservations about CPA security: in the real world, when do we ever let an attacker completely choose what things we encrypt?
That's a valid question, but suppose we had a security definition that doesn't allow the attacker to influence the choice of plaintext at all.
Then each time we encrypt in the real world, we would have to be absolutely sure that the plaintext had zero influence from any attacker -- that's the only way we could be sure that this hypothetical security definition applies to our situation.
Since that's not realistic, our security definition must allow an attacker to have at least some influence on the plaintexts that are being encrypted.
But it just doesn't make for a useful definition to allow the attacker to have some weirdly defined partial influence over the choice of plaintexts.
The level of influence in the real world depends heavily on the specific application, and we don't want a million different security definitions for a million different application scenarios.
The easiest thing to do is what the CPA security definition does: we might as well aim to protect against attackers who completely choose what plaintexts are encrypted!
Even though such full adversarial control over plaintexts doesn't reflect a single realistic scenario, it is general enough to apply well to all scenarios where the attacker has some influence over the choice of plaintexts.
A similar situation holds with respect to decryption.
Can you imagine a web server that accepts ciphertexts from the outside world, decrypts those ciphertexts, and then does something based on the result of decryption (i.e., if the result of decryption is this, then do this; otherwise do that)?
If this sounds natural, then your security definition must give the attacker the ability to learn something about the result of decryption, on adversarially crafted ciphertexts. (The CPA definition doesn't capture this situation.)
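To make the scenario concrete, here is a minimal sketch of such a server, using a deliberately toy scheme (XOR with a fixed key, plus a checksum byte). All names here (`toy_encrypt`, `toy_decrypt`, `handle_request`) are hypothetical, chosen only for illustration -- the point is that the server's two observable responses leak information about the result of decrypting an attacker-supplied ciphertext:

```python
# Toy fixed key, for illustration only -- NOT a real scheme.
KEY = bytes(range(16))

def toy_encrypt(key: bytes, msg: bytes) -> bytes:
    """XOR 'encryption' of msg plus a one-byte checksum."""
    assert len(msg) < len(key)
    ptxt = msg + bytes([sum(msg) % 256])        # append a checksum byte
    return bytes(p ^ k for p, k in zip(ptxt, key))

def toy_decrypt(key: bytes, ctxt: bytes):
    """Returns the message, or None if the checksum is wrong."""
    ptxt = bytes(c ^ k for c, k in zip(ctxt, key))
    body, check = ptxt[:-1], ptxt[-1]
    if sum(body) % 256 != check:
        return None                             # decryption "fails"
    return body

def handle_request(ctxt: bytes) -> str:
    """The server branches on the result of decryption --
    exactly the behavior that CPA security does not model."""
    result = toy_decrypt(KEY, ctxt)
    if result is None:
        return "error: bad ciphertext"          # one observable outcome
    return "ok"                                 # a different observable outcome
```

An attacker who submits a modified ciphertext (say, with one bit flipped) and watches whether the server says "ok" or "error" has learned something about the result of decrypting a ciphertext it crafted itself -- and real attacks (e.g., padding-oracle attacks) bootstrap exactly this kind of one-bit leak into full plaintext recovery.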
So how much information should the attacker get?
In the real world, the answer to this question depends heavily on the particular application.
If we want a general-purpose security definition that applies to many realistic scenarios, then that security definition should simply give as much power as possible to the attacker.
In this case, it should just let the attacker freely decrypt whatever it wants, and learn the entire result of decryption (except in ways that obviously trivialize the security game).
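That attacker interface can be sketched as a small game. The following is a minimal, hypothetical rendering (the class and method names are mine, not standard): the attacker gets a decryption oracle that refuses only the challenge ciphertexts themselves -- the one restriction that prevents trivially winning -- and otherwise returns the entire result of decryption.

```python
import os

def xor_enc(key: bytes, m: bytes) -> bytes:
    """Toy XOR 'encryption' (insecure; only here to exercise the game)."""
    return bytes(a ^ b for a, b in zip(m, key))

class CCAGame:
    """Sketch of a left-or-right CCA game for a symmetric scheme (enc, dec)."""
    def __init__(self, enc, dec):
        self.enc, self.dec = enc, dec
        self.key = os.urandom(16)
        self.b = os.urandom(1)[0] % 2        # hidden challenge bit
        self.challenge_ctxts = set()

    def challenge(self, m_left: bytes, m_right: bytes) -> bytes:
        assert len(m_left) == len(m_right)
        c = self.enc(self.key, m_left if self.b == 0 else m_right)
        self.challenge_ctxts.add(c)          # remember it, to refuse trivial queries
        return c

    def decrypt(self, c: bytes):
        if c in self.challenge_ctxts:
            return None                      # the ONLY restriction
        return self.dec(self.key, c)         # otherwise: the full result of decryption
```

Note how little this restriction protects a bad scheme: against the malleable XOR scheme above, the attacker can flip one bit of the challenge ciphertext, ask the oracle to decrypt this *different* ciphertext, flip the same bit back in the response, and recover the challenge plaintext outright. A CCA-secure scheme must survive even this unrestricted style of attack.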
CCA security is not meant to reflect a single real-world scenario in which we expose a full decryption oracle to the attacker.
Rather, it is meant to be general enough to apply to any scenario where the attacker learns something about the result of decrypting adversarially generated ciphertexts.