I'm writing my term paper on bandwidth restrictions across different generations of PCI-E when using the x1 interface, and exploring modern bandwidth restrictions when mining with the latest GPUs.
GPU risers themselves use only an x1 lane, and I've been looking at the PCI Express Wikipedia article to analyze these restrictions:
https://en.wikipedia.org/wiki/PCI_Express
Now, I know my motherboard supports PCI-E 2.0 (x12) and PCI-E 3.0 (x1), and all my GPUs are plugged into the 2.0 slots.
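As a sanity check on the per-lane numbers from that Wikipedia article, here's a quick Python sketch computing effective per-lane bandwidth from the published transfer rates and line encodings (5 GT/s with 8b/10b for PCI-E 2.0, 8 GT/s with 128b/130b for PCI-E 3.0); the function name is just mine:

```python
# Effective per-lane PCI-E bandwidth from raw transfer rate and line encoding.
# Transfer rates and encodings are the published figures for each generation.

def lane_bandwidth_gbs(transfer_rate_gts, payload_bits, total_bits):
    """Effective bandwidth of one lane in GB/s (1 GB = 1e9 bytes)."""
    # Each "transfer" moves one raw bit; encoding overhead shrinks the payload.
    payload_bits_per_s = transfer_rate_gts * 1e9 * payload_bits / total_bits
    return payload_bits_per_s / 8 / 1e9  # bits -> bytes -> GB

# PCI-E 2.0: 5 GT/s, 8b/10b encoding  -> 0.5 GB/s per lane
# PCI-E 3.0: 8 GT/s, 128b/130b encoding -> ~0.985 GB/s per lane
print(lane_bandwidth_gbs(5, 8, 10))     # 0.5
print(lane_bandwidth_gbs(8, 128, 130))  # ~0.9846
```

So a riser on a 2.0 slot tops out at roughly 0.5 GB/s each way, which is the number my question below hinges on.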
I sat down and did the math, but ended up confused:
Assume we're mining Ethereum.
An Ethereum hash is 64 hexadecimal characters, or 256 bits.
RTX 3090: ~125 MH/s (megahashes per second), close to average.
How many bits per second is that?
1 MegaByte (MB) = 1,000,000 bytes, and the mega prefix works the same for hashes, so 1 MH = 1,000,000 hashes.
125 MH/s = 125 × (1,000,000 H / 1 MH) = 125,000,000 H/s.
Now convert hashes to bits:
125,000,000 H/s × (256 bits / 1 hash) = 32,000,000,000 b/s (bits per second).
Finally, convert bits per second to MegaBytes and GigaBytes.
Remember 1,000,000 bytes = 1 MB and there are 8 bits in a byte.
Therefore
1 MB (MegaByte) = 8,000,000 b.
32,000,000,000 b/s × (1 MB / 8,000,000 b) = 4,000 MB/s.
MB: 4,000 MB/s.
GB: 4 GB/s.
Therefore the RTX 3090 (125 MH/s) requires a data bandwidth of 4 GB/s.
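The arithmetic above can be reproduced in a few lines of Python (the 256-bit hash size and the 125 MH/s figure are the same ones I used above):

```python
# Reproduce the hash-rate -> bandwidth conversion from the question.
HASH_BITS = 256      # one Ethereum hash = 64 hex chars = 256 bits
HASHRATE_MHS = 125   # RTX 3090, roughly average, in MH/s

hashes_per_s = HASHRATE_MHS * 1_000_000   # 125,000,000 H/s
bits_per_s = hashes_per_s * HASH_BITS     # 32,000,000,000 b/s
mb_per_s = bits_per_s / 8 / 1_000_000     # 4,000 MB/s
gb_per_s = mb_per_s / 1000                # 4 GB/s

print(mb_per_s, gb_per_s)  # 4000.0 4.0
```

Note this assumes every computed hash actually crosses the bus, which is exactly the assumption I'm asking about below.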
There are definitely some assumptions baked in here, and I'd appreciate any clarification on:
- How is it pulling 4 GB/s when PCI-E 2.0 only supports up to 0.5 GB/s on the x1 interface when using a riser?
- Am I missing anything, or have I overestimated the size of the hashes, etc.?
Thanks! Any help is appreciated.