This is not specific to Kubernetes; it is normal LACP behaviour. LACP does not provide a true throughput increase for a single flow: its effect is better described as "deterministic distribution of connections" (not of individual packets) across links, plus fault tolerance.
The bond extracts certain header fields from each packet (which fields depends on the hash mode) and hashes them. For instance, the "layer3+4" hash mode takes OSI layer 3 and 4 information, e.g. IP addresses and ports. The hash directly determines which LACP leg the packet egresses on. Whichever hash mode you choose, all packets belonging to the same connection hash to the same leg, so no single connection can exceed the throughput of a single leg.
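
To make this concrete, here is a minimal Python sketch of the idea behind a layer3+4 hash. The Linux bonding driver uses its own fixed hash for `xmit_hash_policy=layer3+4`, so treat this only as an illustration; the addresses and ports are made up:

```python
import hashlib

def pick_leg(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
             n_legs: int = 2) -> int:
    """Hash the L3/L4 tuple of a packet and map it onto one of the legs."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % n_legs

# Every packet of this connection carries the same tuple,
# so every packet egresses on the same leg:
print(pick_leg("10.0.0.1", "10.0.0.2", 40000, 6443))
```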
When another connection appears, with luck it hashes to the other LACP leg; in that case the two connections are distributed between the legs and you get twice the total throughput between the hosts. This is not guaranteed: both may end up on the same leg. But when you have many connections (as is usually the case with converged clusters), on average all legs will be utilized.
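
The averaging effect is easy to see with a quick simulation (again, only an illustration with a stand-in hash, not the real bonding hash):

```python
import hashlib
import random
from collections import Counter

def pick_leg(flow: tuple, n_legs: int = 2) -> int:
    digest = hashlib.sha256(repr(flow).encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_legs

random.seed(0)
per_leg = Counter()
for _ in range(10_000):
    # Many connections between the same two hosts, each with a random
    # ephemeral source port, as a busy cluster would produce.
    flow = ("10.0.0.1", "10.0.0.2", random.randint(32768, 60999), 6443)
    per_leg[pick_leg(flow)] += 1

print(per_leg)  # roughly 5000 connections per leg, so both legs carry traffic
```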
I can compare this to Kubernetes, if you wish. Adding nodes (and scaling the deployment accordingly) increases the number of clients the cluster can serve, but it does not improve the response latency (time to service) of any particular request (assuming the cluster was not overloaded to begin with).