The referenced paper describes a $(t, n)$ threshold signature scheme, i.e. a signature scheme in which any $t+1$ out of $n$ parties can jointly generate a signature. The security assumption in such a scheme is that at most $t$ out of the $n$ parties are malicious; clearly, if more than $t$ parties are malicious, they can sign whatever they want — by design.
During key generation each party indeed chooses a private value $u_i$ and $(t, n)$ VSS-shares it, which ultimately yields a $(t, n)$ Shamir sharing of the signing key $x$, with each party holding a share $x_i$.
Now, due to the assumption that at most $t$ parties are malicious, the adversary never controls the $t+1$ shares $x_i$ needed to reconstruct the secret key $x$ directly. Nor does it control enough shares of any honest party's $u_i$ to reconstruct those values and thereby recover $x$ indirectly.
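The key-generation step above can be sketched in a few lines. The field, parameters, and helper names below are illustrative choices of mine (a real implementation works modulo the curve's group order and uses a cryptographic RNG), not taken from the paper:

```python
# Toy sketch of threshold key generation via summed Shamir sharings.
# Illustrative only: field P, (t, n), and names are assumptions.
import random

P = 2**127 - 1          # a prime; real schemes use the curve group order
t, n = 2, 5             # degree-t polynomials; t+1 shares reconstruct

def share(secret, t, n):
    """(t, n) Shamir sharing: random degree-t polynomial with f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)]

# Each party i picks a private u_i and shares it with everyone.
u = [random.randrange(P) for _ in range(n)]
all_shares = [share(u_i, t, n) for u_i in u]

# Party j's share x_j of the signing key x = sum(u_i) is simply the sum
# of the shares it received: summing degree-t polynomials gives another
# degree-t polynomial, so this is again a (t, n) Shamir sharing.
x_shares = [sum(all_shares[i][j] for i in range(n)) % P for j in range(n)]
x = sum(u) % P          # never actually computed in the real protocol
```

Any $t+1$ of the resulting `x_shares` determine $x$ by Lagrange interpolation at $0$, while $t$ or fewer shares determine nothing.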
There is another potential source of confusion, based on your message:
> For example, if we suppose 4 participants include [Alice, Bob, Carol and Dave] and we want to have $(4,3)$ TSS. In fact 3 people can perform signing. Alice puts her $u_i$ value on a quadratic polynomial and performs $(4,3)$ Feldman's VSS.
In a $(t, n)$ secret sharing it must hold that $t < n$, so the notation $(4, 3)$ makes no sense. I assume you meant a $(3, 4)$ sharing, where $t + 1 = 4$ (out of $n = 4$) parties are required to reconstruct the value?
If so, the sharing would involve a polynomial of degree $3$ (thus requiring $4$ shares to reconstruct). And since we assume here that at most $t = 3$ parties are malicious, the malicious parties hold at most $3$ shares and cannot reconstruct.
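To see concretely why $3$ shares of a $(3, 4)$ sharing reveal nothing: any $3$ shares are consistent with *every* candidate secret, because some degree-$3$ polynomial always passes through the $3$ shares plus the point $(0, \text{guess})$. A toy check, with the field and names being my own illustrative choices:

```python
# Toy check: 3 colluders' shares of a (3, 4) sharing rule nothing out.
# Illustrative only: the prime P and variable names are assumptions.
import random

P = 2**61 - 1   # a Mersenne prime, standing in for the group order

def interpolate_at(points, x0):
    """Evaluate the unique polynomial through `points` at x0 (Lagrange)."""
    s = 0
    for j, (xj, yj) in enumerate(points):
        num, den = 1, 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                num = num * (x0 - xm) % P
                den = den * (xj - xm) % P
        s = (s + yj * num * pow(den, -1, P)) % P
    return s

secret = random.randrange(P)
coeffs = [secret] + [random.randrange(P) for _ in range(3)]   # degree 3
shares = {i: sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
          for i in range(1, 5)}

# All 4 shares together reconstruct the secret as f(0) ...
recovered = interpolate_at(sorted(shares.items()), 0)

# ... but 3 shares plus ANY guessed secret still define a valid degree-3
# polynomial agreeing with those shares, so the colluders learn nothing.
known = [(i, shares[i]) for i in (1, 2, 3)]
wrong_guess = (secret + 1) % P
consistent = interpolate_at(known + [(0, wrong_guess)], 1) == shares[1]
```

The same counting argument holds for any $(t, n)$: $t$ points plus a free choice of $f(0)$ still underdetermine nothing, which is exactly why $t+1$ shares are the reconstruction threshold.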
(As an aside: the described scheme claims to work even in the case of a dishonest majority, that is, it only requires $t < n$. If embedded in a larger system, other components might impose stricter requirements, such as an honest majority, $t < n/2$, or an honest two-thirds, $t < n/3$.)