Not really an answer, but it would be hard to read as comments. Feel free to downvote.
Pretty much all personal (consumer) computers sold in the last decade have CPUs with special instructions for AES (AES-NI on x86) and usually also carry-less multiply for GCM/GMAC, so when SSL/TLS/HTTPS uses those algorithms (and it usually does) the CPU load is minimal. On the other hand, such computers have almost never used the CPU to render graphics (or video) in the last two decades; they use one or more GPUs instead, so comparing 'CPU cycles' across those two jobs is meaningless. What does eat CPU in modern web browsing is running megabytes upon megabytes of 'JavaScript' (much of it really ECMAScript, or these days sometimes WASM) and 'responsive' (i.e. constantly moving, to make it difficult to read or control) design.
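You can check both claims on your own machine; here's a quick sketch for Linux with the OpenSSL CLI (the `/proc/cpuinfo` flag name is Linux/x86-specific, and the exact `openssl speed` output format varies by OpenSSL version):

```shell
# Does the CPU advertise AES instructions? (Linux-specific check;
# on other OSes look up the CPU model's feature list instead)
grep -o -m1 -w aes /proc/cpuinfo || echo "no aes flag found (or not Linux)"

# Benchmark AES-GCM through the EVP layer, which picks up the hardware
# acceleration; expect several GB/s on a modern desktop CPU
openssl speed -seconds 1 -evp aes-128-gcm
```

Compare the throughput `openssl speed` reports against your internet downlink: the cipher is typically orders of magnitude faster than the network can feed it.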
As a first approximation, try downloading some large files, like Unix distro images, to local storage. Even if your network connection is fast enough for this to pin one or two cores, I bet you'll find the volume of data you can download (but not process) is at least 10 times what you can effectively handle from a website. That means encryption/decryption and authentication (which is also required, though you didn't ask about it), plus TCP/IP processing (much of which is now often offloaded to the NIC), plus filesystem work, together come to less than 10% of the total, and probably more like 1-3%. Unless you have antivirus or similar doing realtime scanning, in which case it may eat as much as everything else combined -- or not.
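A hedged way to put your own number on the bulk-crypto cost, without any network at all (file names are arbitrary, and `openssl enc` with a CTR cipher here is just a stand-in for the symmetric encryption TLS does, not an actual TLS session; `-pbkdf2` needs OpenSSL 1.1.1 or later):

```shell
# Make a 256 MiB test file
dd if=/dev/zero of=/tmp/aes_test.bin bs=1M count=256 2>/dev/null

# Time bulk AES encryption of it; on AES-NI hardware the CPU ("user") time
# should be a fraction of a second -- far less than downloading 256 MiB takes
time openssl enc -aes-256-ctr -pbkdf2 -pass pass:demo \
     -in /tmp/aes_test.bin -out /tmp/aes_test.enc

# Compare against a plain copy to see how little the cipher itself adds
time cp /tmp/aes_test.bin /tmp/aes_test.copy
rm -f /tmp/aes_test.bin /tmp/aes_test.enc /tmp/aes_test.copy
```

Divide 256 MiB by the `user` time from the first `time` to get an encryption rate, then compare that with your downlink speed.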
But note that the things you identify as computers -- desktops, laptops, and tablets -- are not really 'commodity' hardware. The computers embedded in things like your car's braking system, refrigerator, stove/oven, furnace/boiler and/or air conditioner, doorbell, lawn sprinkler, and dog collar are far more numerous and far cheaper, making them much more of a commodity than anything you could ever use to display a website.