Score:1

Is it possible to generate a random hash that is a preimage of the current hash?

eh flag

On one online crash-game website (a casino betting game), every game has a hash that is made public after the plane crashes, and the crash coefficient is supposed to be random. The coefficient can be derived from the game hash (so if you know the game hash, you know the crash coefficient).

But what I find unclear is that the hash of the current game is nothing other than the SHA-256 hash of the next game's hash. That means that for every game, the next game's hash (for a game that has not yet started) is the preimage of the current hash. How are they able to generate a random hash that is the preimage of a given hash? Or maybe the process isn't as random as it is supposed to be?

jp flag
Finding preimages of random SHA-256 hashes is not computationally practical. A clearer description (and a link to the game and its documentation) would help. They might be doing something like taking a random value, hashing it repeatedly, and then using the Nth hash for game 1, the (N-1)th hash for game 2, the (N-2)th for game 3, etc.
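For illustration, here is a minimal sketch of such a reverse hash chain in Python (the seed, chain length, and names are made up for the example, not taken from any real operator's code):

    import hashlib

    def sha256_hex(s: str) -> str:
        # SHA-256 of a UTF-8 string, returned as a hex digest.
        return hashlib.sha256(s.encode("utf-8")).hexdigest()

    def build_chain(seed: str, n: int) -> list[str]:
        # Hash the seed repeatedly, collecting all n intermediate hashes.
        chain = [sha256_hex(seed)]
        for _ in range(n - 1):
            chain.append(sha256_hex(chain[-1]))
        return chain

    # Build a short chain from a made-up seed (a real operator would use
    # a high-entropy secret and a chain with millions of links).
    chain = build_chain("secret seed", 10)

    # Publish the chain in reverse: game 1 gets the last hash, game 2 the
    # one before it, and so on. Hashing game k's value reproduces game
    # (k-1)'s value, so the published history is verifiable, but predicting
    # game (k+1)'s value would require a SHA-256 preimage attack.
    for game_no, game_hash in enumerate(reversed(chain), start=1):
        print(game_no, game_hash)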
dddr rddd
eh flag
They now have over 4 million games and counting (it's a live game), so could it be that they have precalculated this many hashes? If so, won't they run out of hashes one day? Also, here is their website: https://stake.com/casino/games/crash
dddr rddd
eh flag
Here is the function to find the predecessor game hash of a given game hash:

    import hashlib

    def get_prev_game(hash_code):
        m = hashlib.sha256()
        m.update(hash_code.encode("utf-8"))
        return m.hexdigest()
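Applying it repeatedly walks back through the whole game history; for example (the starting value below is just a placeholder, not a real published hash):

    # Walk back 3 games from a (hypothetical) published game hash.
    h = "0" * 64  # placeholder; use an actual game hash from the site
    for _ in range(3):
        h = get_prev_game(h)
        print(h)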
Score:1
kr flag

Only the authors can explain how they did it; this is just a guess. But I think @GordonDavisson is right: I think they have some initial value and generate the hash sequence from it up to some position, then up to the previous position for the next game, and so on.

Nowadays a single GPU can generate about 1 000 000 000 hashes per second (see, for example, published GPU hash-rate benchmarks). When you run it for 100 000 seconds, a bit more than 24 hours, you will generate 100 000 000 000 000 hashes. Even if there are 1 000 000 000 games a day, this would be sufficient for about 300 years.
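As a quick sanity check of that arithmetic (the rate and game count are the rough figures above, not measurements):

    rate = 10**9             # hashes per second on one GPU
    seconds = 10**5          # a bit more than a day of computation
    total = rate * seconds   # 10**14 precomputed hashes
    games_per_day = 10**9    # deliberately generous estimate
    print(total / games_per_day / 365)  # ~274 years of games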

They would not need to compute all the hashes every time. It would be sufficient to store, say, every 1 000th, 1 000 000th, 1 000 000 000th, etc. hash as a checkpoint. Then, when a hash is needed, they would compute fewer than 1 000 hashes from the nearest checkpoint. When such a "pool" of checkpoints is used up, they would take a step back and compute a longer stretch of 1 000 000 hashes, giving them a new base for the next 1 000-hash segments, and so on.
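A rough sketch of that checkpointing idea in Python (STEP, the function names, and the seed handling are illustrative assumptions, not the site's actual code):

    import hashlib

    def sha256_hex(s: str) -> str:
        return hashlib.sha256(s.encode("utf-8")).hexdigest()

    STEP = 1_000  # store every 1,000th hash as a checkpoint

    def build_checkpoints(seed: str, n: int) -> list[str]:
        # Walk the chain once, keeping only every STEP-th hash.
        checkpoints = []
        h = seed
        for i in range(1, n + 1):
            h = sha256_hex(h)
            if i % STEP == 0:
                checkpoints.append(h)
        return checkpoints

    def hash_at(checkpoints: list[str], seed: str, i: int) -> str:
        # Recompute the i-th hash (1-indexed) from the nearest earlier
        # checkpoint; at most STEP hash invocations are needed.
        k = (i - 1) // STEP
        if k == 0:
            h, done = seed, 0
        else:
            h, done = checkpoints[k - 1], k * STEP
        for _ in range(i - done):
            h = sha256_hex(h)
        return h

    # Spot check: recomputing from a checkpoint matches the full walk.
    seed = "secret seed"
    cps = build_checkpoints(seed, 5_000)
    full = seed
    for _ in range(2_500):
        full = sha256_hex(full)
    assert hash_at(cps, seed, 2_500) == full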

Thus, computationally it is feasible even for a single GPU.

I find their idea very nice, because the first impression is "it is impossible" and because it does not require many resources.

fgrieu
ng flag
If I understand their method correctly, it can convince players that the outcome is not chosen as a function of their (or anyone's) play, but not that other players with insider knowledge do not know the outcome. As far as "high scores" or similar go, that's an issue. I have not examined the rules to determine whether it's a monetary issue.