- There could be other vulnerabilities in the scheme/protocol/construction, for example an unsafe protocol design. How can we be sure that the scheme is at least as difficult as the hard problem? Or, if the scheme/protocol has this kind of vulnerability, should it not be considered as having a reduction to the problem, even if the scheme does intend to use the problem as its underlying hard problem?
Having a reduction to a problem is a formal mathematical theorem: it is either valid or it isn't.
Reductions are used as a guard against the design flaws that you mention --- a protocol with a reduction comes with a guarantee that if the protocol is broken, then, provided the reduction (and its proof!) is valid, you have an algorithm to attack the underlying hard problem.
A scheme "intending" to do anything doesn't really matter. A reduction is a formal procedure to convert attacks on the scheme into (roughly, up to things like the aforementioned tightness gap) equally effective attacks on the problem. None of this has anything to do with intent.
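To make the shape of a reduction concrete, here is a minimal Python sketch, under loud assumptions: the adversary `break_scheme` is a hypothetical black box (not something we know how to build), and the group parameters are toy-sized. It shows the standard conversion of an IND-CPA adversary against ElGamal into a DDH distinguisher.

```python
import secrets

# Toy group: the order-11 subgroup of Z_23^*, generated by g = 4.
# (Hopelessly small; real parameters are thousands of bits.)
p, q, g = 23, 11, 4

def ddh_distinguisher(A, B, C, break_scheme):
    """Decide whether (A, B, C) = (g^a, g^b, g^c) is a DDH tuple (c = a*b)
    or a random tuple (c uniform), given an assumed IND-CPA adversary
    break_scheme(pk, m0, m1, ct) against ElGamal that returns its guess
    of which of m0, m1 the ciphertext ct encrypts."""
    m0, m1 = 1, g                     # two distinct group elements as messages
    bit = secrets.randbelow(2)
    m = (m0, m1)[bit]
    # (B, C*m) is a valid ElGamal encryption of m under pk = A exactly
    # when C = g^{ab}; when C is uniform, the ciphertext hides m perfectly.
    guess = break_scheme(A, m0, m1, (B, (C * m) % p))
    return guess == bit               # True = "looks like a DDH tuple"
```

If `break_scheme` wins the IND-CPA game with advantage $\varepsilon$, this procedure distinguishes DDH tuples from random tuples with advantage $\varepsilon$ as well (when $C$ is uniform the ciphertext is independent of the bit, so the guess is correct exactly half the time). That mechanical conversion of one attack into another is all a reduction is.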
- Is there a situation where solving problem P is used as a subroutine of breaking scheme S? That direction looks more reasonable to me, but all I see is the other way around.
Yes, this almost always holds, but it is the opposite of what you are talking about. Namely, for most cryptographic schemes $\mathsf{S}$ based on problem $\mathsf{P}$, we get that $\mathsf{S} \leq \mathsf{P}$. As an example, if we can take discrete logs, discrete-log based crypto is broken.
The interesting statement is the reverse of this, $\mathsf{P} \leq \mathsf{S}$: if discrete-log based crypto is broken, we at least get an algorithm for taking discrete logs as a consolation prize.
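For the easy direction $\mathsf{S} \leq \mathsf{P}$, here is a sketch in code, assuming a black-box discrete-log oracle (the brute-force oracle and toy parameters below are purely illustrative):

```python
def recover_shared_secret(p, g, A, B, dlog_oracle):
    """Given public Diffie-Hellman values A = g^a mod p and B = g^b mod p,
    use a discrete-log oracle to recover the shared secret g^(a*b) mod p."""
    a = dlog_oracle(p, g, A)   # one oracle call: solve g^x = A for x
    return pow(B, a, p)        # g^(a*b) = B^a mod p

# With toy parameters, brute-force search can play the role of the oracle:
def toy_dlog_oracle(p, g, A):
    return next(x for x in range(p) if pow(g, x, p) == A)

p, g = 23, 5                   # 5 generates Z_23^* (toy-sized, of course)
a, b = 6, 15                   # the parties' secret exponents
assert recover_shared_secret(p, g, pow(g, a, p), pow(g, b, p),
                             toy_dlog_oracle) == pow(g, a * b % (p - 1), p)
```

One oracle call suffices here, so an algorithm for the problem immediately yields an attack on the scheme.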
Note that there are often formal gaps between the problem $\mathsf{P}$ a cryptosystem is based on in high-level conversation (discrete logs, factoring) and the problem that shows up in the formal reduction (DDH, or the RSA assumption, i.e. taking modular roots).
- Why, in the tightness gap, is the definition stated with "greater than or equal to" advantage? I mean, greater advantage and longer time both contribute to breaking the scheme/problem, so why is time bounded from above while advantage is bounded from below? Does it have anything to do with average/worst case or computational complexity?
It doesn't. It is really because the informal way of measuring the complexity of an attack is something like $\log_2\frac{\mathsf{time}}{\mathsf{advantage}}$.
Often this isn't precisely the right expression, e.g. sometimes $\mathsf{advantage}^2$ shows up rather than $\mathsf{advantage}$.
Anyway, with an expression of this form, if we want to upper bound the complexity of an attack (i.e. show the attack is not too bad), we need the following (a worked numeric example appears after the list):
- an upper bound on time, and
- an upper bound on $\frac{1}{\mathsf{advantage}}$, or a lower bound on $\mathsf{advantage}$.
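For instance (made-up numbers): an attack running in time at most $2^{64}$ with advantage at least $2^{-32}$ has complexity at most $\log_2\frac{2^{64}}{2^{-32}} = 64 + 32 = 96$ bits. Bounds in the other directions (time from below, advantage from above) would let the fraction $\frac{\mathsf{time}}{\mathsf{advantage}}$ be arbitrarily large, so they could not upper bound the complexity of the attack at all.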