I am trying to get into Universally Composable (UC) security, but before diving deeper I would like to confirm my intuition about the framework.
https://eprint.iacr.org/2000/067.pdf
A protocol $\pi$ securely evaluates a function $f$ if for any adversary
$\mathcal{A}$ there exists an "ideal adversary" $\mathcal{S}$ such
that no environment $\mathcal{E}$ can tell with non-negligible
probability whether it is interacting with $\pi$ and $\mathcal{A}$ or
with $\mathcal{S}$ and the ideal process for $f$.
I think this is very clear. I imagine two scenarios.

In the first scenario, we run the protocol $\pi$. All parties communicate with each other, a subset of them is corrupted, and they try to evaluate a function such as the sum of all inputs, without revealing anything beyond the result.

In the second scenario, the ideal process receives the private input of every party, computes $f$, and returns to each party its desired output of $f$.

Now the environment has to guess which scenario it is in, by generating input for all machines, letting the protocol or the ideal process for $f$ run, and reading all machines' outputs. At this point I assume the honest machines output $f$ evaluated on the inputs, but the adversary (or the machines it controls) will try to output some crucial information it has learned, in order to prove to the environment that it is a successful adversary, for example by simply outputting the private input of one of the honest machines (assuming the protocol is very insecure). The simulator cannot do this, since it only has access to the trusted party and to the corrupted machines' inputs/local random choices. So if for every adversary there exists a simulator that is at least "as good as" it despite being this restricted, then the protocol is said to securely realize $f$.
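If I try to condense this into a formula (my own notation, loosely following the paper, so please correct me if it differs):

$$\forall\,\mathcal{A}\ \exists\,\mathcal{S}\ \forall\,\mathcal{E}:\qquad \mathrm{REAL}_{\pi,\mathcal{A},\mathcal{E}} \;\stackrel{c}{\approx}\; \mathrm{IDEAL}_{f,\mathcal{S},\mathcal{E}},$$

i.e. the environment's output in the two scenarios is computationally indistinguishable.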
Now the UC framework. Here the environment is allowed not only to generate inputs at the beginning and read all outputs at the end, but also to interact with the adversary at any time during the execution. What I do not get is what the notion of reactive tasks is about, and what the difference is between the ideal process and the ideal functionality.
From my understanding, we replace the ideal process/trusted party that executes one specific function by something more abstract. The communicating machines (or a subset of them) might first want to use "securely sum all inputs", and depending on the outputs they might then want to "securely find the smallest input" or "the biggest input"... probably bad examples, but the point is that the ideal functionality is dynamic: it always delivers the desired outputs and maintains state across these requests.
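To check my mental model of "reactive", here is a toy sketch (my own Python pseudocode, not from the paper; the class and method names are made up) of the difference between a one-shot ideal process for a fixed function $f$ and a stateful ideal functionality that answers several related requests over time:

```python
# Toy illustration (my own sketch, not from the paper): a one-shot ideal
# process for a fixed function f versus a reactive ideal functionality
# that keeps state and answers several related requests over time.

def ideal_process_sum(private_inputs):
    """One-shot ideal process: collect all inputs, evaluate f once,
    hand every party the same result, and stop."""
    result = sum(private_inputs.values())
    return {party: result for party in private_inputs}


class ReactiveFunctionality:
    """Hypothetical stateful ideal functionality: parties feed it inputs
    over time and may later ask for different outputs (sum, min, max, ...)
    that all refer to the state it maintains internally."""

    def __init__(self):
        self.inputs = {}          # state persists across activations

    def give_input(self, party, value):
        self.inputs[party] = value

    def request(self, query):
        values = self.inputs.values()
        if query == "sum":
            return sum(values)
        if query == "min":
            return min(values)
        if query == "max":
            return max(values)
        raise ValueError(f"unknown query: {query}")


# One-shot: a single evaluation of f and we are done.
print(ideal_process_sum({"P1": 3, "P2": 5, "P3": 7}))

# Reactive: the same functionality instance serves several requests,
# and later requests may depend on earlier outputs.
F = ReactiveFunctionality()
for party, value in [("P1", 3), ("P2", 5), ("P3", 7)]:
    F.give_input(party, value)
print(F.request("sum"))   # 15
print(F.request("min"))   # 3
```

Is this roughly the right picture of what "reactive" means here?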
So we would now say that $\pi$ UC-realizes $\mathcal{F}$, with a definition analogous to the one above.
So far we have talked about one "stand-alone" task, not about composing our protocol $\pi$.
Before stating the universal composition theorem, they introduce a protocol $\rho$ in which the machines are allowed to make multiple calls to some ideal functionality $\mathcal{F}$ (the $\mathcal{F}$-hybrid model).
The instances of the ideal functionality can run concurrently and independently. I do not get why we now work with "multiple sessions" of $\mathcal{F}$: can't a single session of $\mathcal{F}$ take care of all machines' inputs and desired outputs multiple times? I only partially understand the idea here. $\rho$ basically represents our composed protocol. The theorem just states that by replacing every call to the ideal functionality with a call to $\pi$, we end up with a protocol that is at least as secure as $\rho$. That means that in the future we can focus on the "stand-alone" task of finding a protocol that UC-realizes an ideal functionality, and know by the theorem that it also securely realizes it when used inside a composed protocol.
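If I try to write the theorem down in my own notation (so please correct me if I misread it): let $\rho^{\pi/\mathcal{F}}$ denote the protocol obtained from $\rho$ by replacing every call to (an instance of) $\mathcal{F}$ with an invocation of $\pi$. Then the theorem says

$$\pi \text{ UC-realizes } \mathcal{F} \;\Longrightarrow\; \rho^{\pi/\mathcal{F}} \text{ UC-emulates } \rho \quad \text{for every protocol } \rho \text{ in the } \mathcal{F}\text{-hybrid model},$$

and in particular, if $\rho$ UC-realizes some functionality $\mathcal{G}$ in the $\mathcal{F}$-hybrid model, then $\rho^{\pi/\mathcal{F}}$ UC-realizes $\mathcal{G}$ as well. Is that the right reading?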