I've been implementing an asymmetric key exchange for creating a symmetric key.
My question is more of a philosophical/legal one about key-exchange responsibility: what happens when, let's say, a commonly known key is used?
I'd like to set these base variables:
- The client has the server's public key hardcoded in memory
- The server has its private key hardcoded in memory
- There is a hardcoded `nonce_size`
- There is a hardcoded `public_key_size`
- There is a hardcoded `mac_size`
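For concreteness, here is a minimal sketch of what those hardcoded sizes might be, assuming libsodium-style `crypto_box` primitives (Curve25519 keys, XSalsa20 nonces, Poly1305 MACs); the actual values depend on whichever cipher suite is really in use:

```python
# Assumed sizes, matching libsodium's crypto_box constants.
NONCE_SIZE = 24       # crypto_box_NONCEBYTES
PUBLIC_KEY_SIZE = 32  # crypto_box_PUBLICKEYBYTES
MAC_SIZE = 16         # crypto_box_MACBYTES

# Total size of the first key-exchange message described below.
EXCHANGE_MSG_SIZE = NONCE_SIZE + PUBLIC_KEY_SIZE + MAC_SIZE  # 72 bytes
```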
The steps of communication for exchange are as follows:
- After a successful TCP connection, the client generates a new key pair with public key `client_pubkey`, encrypts `client_pubkey` using a fresh nonce `temp_nonce_1` and the hardcoded `server_pubkey`, and sends a message where the first bytes are the nonce and the rest are the encrypted `client_pubkey` (including its MAC). This has a known size of `nonce_size + public_key_size + mac_size`.
- The server receives a message and checks that its size is exactly `nonce_size + public_key_size + mac_size`. If it is, it assumes it is a key exchange and decrypts `client_pubkey`. The server then generates a new `symmetric_key` and sends a message back containing a new nonce `temp_nonce_2`, plus the symmetric key and its MAC encrypted using the received `client_pubkey`.
- The client receives this message and knows exactly what size it should be; otherwise it rejects it. The client now has a symmetric key to use.
- The server and client then communicate using the symmetric key.
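The framing in the steps above can be sketched as follows. This is stdlib-only: `encrypt_to` is a placeholder standing in for the real public-key encryption (an assumption for illustration, not actual crypto), and the sizes are assumed libsodium-style values:

```python
import os

NONCE_SIZE, PUBLIC_KEY_SIZE, MAC_SIZE = 24, 32, 16  # assumed sizes


def encrypt_to(pubkey: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    """Placeholder for the real public-key encryption; a real
    implementation would return ciphertext plus an authenticating MAC."""
    return plaintext + os.urandom(MAC_SIZE)  # NOT real crypto


def build_exchange_msg(client_pubkey: bytes, server_pubkey: bytes) -> bytes:
    """Client side: nonce || encrypted client_pubkey || MAC."""
    nonce = os.urandom(NONCE_SIZE)  # temp_nonce_1
    boxed = encrypt_to(server_pubkey, nonce, client_pubkey)
    return nonce + boxed


def looks_like_key_exchange(msg: bytes) -> bool:
    """Server side: the only cheap pre-check is the exact message length."""
    return len(msg) == NONCE_SIZE + PUBLIC_KEY_SIZE + MAC_SIZE
```

A message built this way is exactly 72 bytes under these assumed sizes, so `looks_like_key_exchange` accepts it and rejects anything shorter or longer.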
So here is my attempt at understanding why this is secure:
- In the case of random garbage bytes being sent to the server: the server generates a key, encrypts that key using the garbage as a public key, producing more garbage, and sends it back. So in reality we have just sent out more garbage; we then never receive a valid message, eventually time out, and close the connection. The only harm here is a few seconds of wasted CPU time.
- The client specifically sends a `public_key` of all zeros (or some other obvious bit pattern, like all ones) and a valid (or also all-zero) nonce. In that case the server treats the zeros as the public key and sends a symmetric key encrypted with that all-zeros public key. However, this should still be safe, because making use of that response assumes you know the private key the all-zeros public key is derived from, which could take billions of years to reverse-engineer.
- Assuming the client and server can restart this process infinitely (and there is no retry limit on the server), and assuming that by sheer incalculable luck random bytes produce a key exchange and a valid message of whatever data the server was expecting is sent and parsed by the server, we throw our arms up in the air, as this is about as close to impossible as it gets and basically *shit happens*.
- In this case, we can't say anyone is liable, can we? There is nothing the server can do to recognize it as a malicious packet.
- Even if we had a retry limit of 2 attempts, it could theoretically occur on the very first connection from an IP address, and again there is nothing we can do about it.
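To put a number on the "sheer incalculable luck" case: for random bytes to be accepted as a valid message they must pass MAC verification, which for an assumed 16-byte MAC succeeds with probability 2^-128 per attempt. A back-of-the-envelope estimate:

```python
MAC_SIZE = 16  # bytes, assumed

# Probability that one random message passes MAC verification.
forgery_prob = 2 ** -(8 * MAC_SIZE)  # 2**-128

# Expected number of attempts before one forgery succeeds.
expected_attempts = 2 ** (8 * MAC_SIZE)  # 2**128

# Even at a billion attempts per second, the expected wait in years:
attempts_per_second = 10 ** 9
seconds_per_year = 60 * 60 * 24 * 365
years = expected_attempts / (attempts_per_second * seconds_per_year)
# On the order of 10**22 years, far beyond the age of the universe.
```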
So basically, everything is secure because the very first message is expected to be encrypted with a `public_key` whose matching `private_key` only the server should know, right? And in the sheer off-chance that something, somehow, ever does get through, the outcome is chalked up to the undefined?
I am sorry if this is really terse; I am just trying to verify both my understanding of the theory and that there is no mistake in my implementation of such a protocol.