You are looking for a fair exchange primitive (fair reconstruction of shares is a slightly more complex variant; I'll focus on the former in this answer). There has been a lot of research on the subject. The bottom line is:
- In the standard model of computation without an honest majority, and without any trusted third party (TTP), fair exchange is impossible (seminal result here, some more recent work here).
- However, with a very limited form of TTP (one that sees nothing and is only called upon when things go wrong), it becomes feasible. The keyword is optimistic fair exchange (here or here).
- With a strict honest majority of parties, general fair secure computation is possible.
Then, there are other ingenious solutions that deviate from the standard model of computation:
- You can use timing assumptions and primitives such as time-lock puzzles or timed commitments. A simplified example: suppose Alice and Bob have respective secrets $a$ and $b$. They first exchange commitments $c_a$ and $c_b$ to their inputs. Then they gradually weaken the openings of the commitments, e.g. by releasing the opening information bit by bit, in rounds. You obtain the following guarantee, which is not quite fairness but can be good enough: suppose Bob aborts early and manages to recover $a$ by brute-forcing the missing part of the opening in time $T$. Alice always has almost as much information as Bob, up to a single bit (since the parties exchange one bit per round), and each extra unknown bit at most doubles the brute-force effort. Therefore, she can herself recover $b$ in time at most $2T$.
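To make the bit-by-bit release concrete, here is a minimal Python sketch of this phase. It is only an illustration: the commitment is a bare hash (binding but not hiding), the 24-bit secrets are deliberately tiny so that the brute force below runs instantly (a real protocol would use secrets large enough that brute-forcing them from scratch is infeasible), and all names are made up.

```python
import hashlib
import secrets

K = 24  # bit length of each secret; tiny toy size so the brute force is instant

def commit(value: int) -> bytes:
    """Toy binding commitment: a hash of the K-bit value (hiding is ignored here)."""
    return hashlib.sha256(value.to_bytes(4, "big")).digest()

def bits_msb_first(value: int) -> list[int]:
    """The K bits of `value`, released one per round, most significant bit first."""
    return [(value >> i) & 1 for i in reversed(range(K))]

def brute_force(commitment: bytes, received_bits: list[int]) -> int:
    """Recover the committed value from a prefix of its bits: 2^(K - len) work."""
    missing = K - len(received_bits)
    prefix = 0
    for b in received_bits:
        prefix = (prefix << 1) | b
    for suffix in range(1 << missing):
        candidate = (prefix << missing) | suffix
        if commit(candidate) == commitment:
            return candidate
    raise ValueError("no matching value found")

# Alice and Bob pick secrets and exchange commitments.
a, b = secrets.randbits(K), secrets.randbits(K)
c_a, c_b = commit(a), commit(b)

# Rounds: in round i, Alice sends bit i of a, then Bob sends bit i of b.
a_bits, b_bits = bits_msb_first(a), bits_msb_first(b)
abort_round = 10                     # Bob walks away after receiving round 10
bob_view   = a_bits[:abort_round]    # Bob holds one more bit than Alice
alice_view = b_bits[:abort_round - 1]

# Bob's remaining work is 2^(K-10); Alice's is 2^(K-9), i.e. twice Bob's.
assert brute_force(c_a, bob_view) == a
assert brute_force(c_b, alice_view) == b
```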
More involved constructions try to ensure that the "brute-force time" is necessarily sequential, so that even a cheater with a large amount of parallel computational power cannot significantly speed it up. See more details here; there are many follow-ups.
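For a feel of how "inherently sequential" work can be set up, here is a toy sketch of the classic repeated-squaring time-lock puzzle of Rivest, Shamir and Wagner: whoever knows the factorization of $N$ can shortcut the computation, while everyone else must perform the squarings one after the other. The hard-coded Mersenne primes and the value of $t$ are toy choices for illustration only.

```python
import secrets

# Toy RSA modulus built from two known (Mersenne) primes; a real time-lock
# puzzle would use a large, freshly generated modulus whose factorization is kept secret.
p, q = 2**31 - 1, 2**61 - 1
n, phi = p * q, (p - 1) * (q - 1)

t = 100_000                          # number of sequential squarings (the "delay")
x = secrets.randbelow(n - 2) + 2

# Puzzle creator: knowing phi(n), reduce the exponent and finish with one pow().
shortcut = pow(x, pow(2, t, phi), n)

# Solver: without phi(n), the best known strategy is t squarings, one after the
# other -- each step needs the previous result, so extra processors do not help.
y = x
for _ in range(t):
    y = pow(y, 2, n)

assert y == shortcut
```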
A downside of these timed solutions is that there is no upper bound on the effort honest parties might have to invest: if the adversary is willing to spend a very large time $T$, then honest parties may have to spend up to twice that time. This can be somewhat fixed if you are willing to settle for partial fairness, where you accept a small probability $1/n$ (for some polynomial $n$) that the adversary breaks fairness. In this setting, we showed in this paper that you can get a (partially) fair exchange protocol with a clear bound on the maximum computation the participants might have to invest (even when someone cheats).
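For intuition about the $1/n$ guarantee, here is a generic "hidden special round" illustration of partial fairness (not the protocol from the paper linked above): the real output is released only in a single round $i^*$ chosen at random among $n$ rounds, and earlier rounds carry dummy values that look just like it, so an aborting party gains an unfair advantage only if it stops exactly at round $i^*$.

```python
import secrets

def fairness_broken(n_rounds: int, abort_round: int) -> bool:
    """One toy execution of a hidden-special-round protocol: the real output is
    released only in a secret round i*, and earlier rounds carry dummy values.
    Because rounds before i* look identical to the adversary, its choice of
    abort round is effectively independent of i*; it gains an unfair advantage
    only when it happens to abort exactly at i*."""
    i_star = secrets.randbelow(n_rounds)   # hidden from the aborting party
    return abort_round == i_star

n, trials = 100, 100_000
wins = sum(fairness_broken(n, secrets.randbelow(n)) for _ in range(trials))
print(f"fairness broken in ~{wins / trials:.4f} of runs (about 1/n = {1 / n})")
```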
- Or you can replace the TTP by using a blockchain as your trust assumption. This, in some sense, decentralizes the TTP. Some proposals along these lines can actually be implemented in practice on the Ethereum blockchain. A starting point for this line of work is this paper.
Variants of the above use monetary incentives, with smart contracts that guarantee that you either play honestly and fairly, or you lose money (here is a sample from this line of work, though I don't think it explicitly addresses fairness).
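As a flavor of these penalty mechanisms, here is a toy Python model of a "reveal-or-forfeit" deposit. The class name, interface, and deadline handling are all illustrative and not taken from the cited works: Bob locks a deposit that he only recovers by publicly revealing his share before a deadline, and the very act of revealing it is what hands the share to Alice.

```python
import hashlib
import time

class RevealOrForfeit:
    """Toy model of a penalty deposit: Bob locks coins that he recovers only by
    publicly revealing the preimage of a hash (his share) before a deadline;
    otherwise the deposit is forfeited to Alice as compensation."""

    def __init__(self, depositor: str, beneficiary: str, amount: int,
                 hash_lock: bytes, deadline: float):
        self.depositor, self.beneficiary = depositor, beneficiary
        self.amount, self.hash_lock, self.deadline = amount, hash_lock, deadline
        self.paid_to = None
        self.revealed = None             # once set, everyone (Alice) can read it

    def reveal(self, preimage: bytes, now: float) -> bool:
        """Bob publishes his share in time and gets his deposit back."""
        if self.paid_to is None and now <= self.deadline \
                and hashlib.sha256(preimage).digest() == self.hash_lock:
            self.revealed, self.paid_to = preimage, self.depositor
            return True
        return False

    def forfeit(self, now: float) -> bool:
        """Past the deadline with nothing revealed: Alice collects the deposit."""
        if self.paid_to is None and now > self.deadline:
            self.paid_to = self.beneficiary
            return True
        return False

# Setup: Bob's share is hash-committed, and Bob locks a 10-coin deposit on it.
bob_share = b"bob's share of the secret"
contract = RevealOrForfeit("bob", "alice", amount=10,
                           hash_lock=hashlib.sha256(bob_share).digest(),
                           deadline=time.time() + 3600)

# Honest run: Bob reveals on time, Alice reads the share, Bob keeps his money.
assert contract.reveal(bob_share, now=time.time())
assert contract.paid_to == "bob" and contract.revealed == bob_share
```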
The introduction to my paper here provides a more extended discussion and further pointers, which you might find of interest.