Score:1

How new cache designs stop cache-based side channel attacks


How do new cache designs aid in defending against cache-based side-channel attacks? When mounting a side-channel attack, we have the virtual address of the T-table/S-box. Address mapping is normally handled by the operating system, and an adversary can always determine the virtual address. How will a new cache design help?

New cache design - https://memlab.ece.gatech.edu/papers/MICRO_2018_2.pdf

The code for cache-based side channel attack- https://github.com/ECLab-ITU/Cache-Side-Channel-Attacks/blob/master/AES%20-%20HalfKey/Flush%2BReload/spy.cpp

Score:3

An adversary can always determine the virtual address. How will a new cache design help?

Of course, the virtual address is translated into a physical address via the MMU tables - this does not attempt to disguise that.

Instead, what this idea attempts to do is disguise the mapping between physical address and location within the cache.

The idea behind cache attacks is that the memory references the program under attack makes affect the cache (for example, if the program references an sbox entry, that sbox entry will be pulled into the cache if it wasn't already there). Hence, if the attacker monitors the cache (which is shared between different programs running on the same CPU), they can deduce what memory references the program makes (and if the program makes secret-dependent references, then the attacker gets some information about those secrets).

The idea in the paper is to randomize the mapping between physical addresses and cache entries; that is, if the program loads in an sbox entry, the attacker could potentially deduce that something was loaded into the cache, but wouldn't be able to determine which entry was loaded (and so he couldn't deduce the sbox entry).

That said, there are several obvious issues with the cited paper:

  • The attacker couldn't necessarily deduce where things got loaded, but they could determine that something was, and that gives them some information. For example, the attacker would be able to distinguish between 'two different sbox entries in two different cache lines were referenced' and 'only the sbox entries in one cache line were referenced', and that obviously is some information.

  • Modern CPUs have multiple levels of cache; this proposal does not address the L1 cache (which is internal to the CPU), and the attacker can obviously exploit that.

  • The "two cycle" cipher they propose is, to put it kindly, dreadful. It is entirely linear (see figure 6 of the paper); that is, $\text{Encrypt}_K(M)$ is equivalent to $\text{Encrypt}_0(M) \oplus \text{F}(K)$ (for some function $F$, which is easy to compute, but which we can ignore) - the attack cares only about collisions, and the value of the secret key $K$ does not affect what collides with what. Hence, the attacker can just proceed with his normal attack, just taking into account how $\text{Encrypt}_0$ works, and he proceeds as efficiently as before.

b degnan
It's weird when you know the research group; that paper is unusually bad for them. One of the things that I could do on PowerPCs was cache locking: I could just put a lookup table into the cache. Also, why don't people just explicitly calculate the S-box without using lookups? That would pretty much mask what was calculated, as it would just be a series of XORs and other ALU operations.
nivedita
How will this affect the code (as a countermeasure)? Please refer to lines 85 and 96: we are flushing the address from the cache. The mapping from physical addresses to the cache happens in the backend. I am not able to understand how new cache designs can stop this.
nivedita
"what this idea attempts to do is disguise the mapping between physical address and location within the cache" -- We use the mmap function to get the virtual address, which is then flushed. What role do cache addresses play in all this?
poncho
@nivedita: I hadn't looked at the github code; instead, I was reviewing the paper. Yes, if you're running in the same thread as the code under attack, this idea doesn't help at all. I was working on the weaker assumption that the program under attack was benign, and that the attacker's code happened on another thread running in parallel (say, within a cloud environment) - my point was that it didn't help much in that weaker scenario either...