
A tensor-based Diffie-Hellman exchange


Below is a description of a "cube" Diffie-Hellman, based on commuting matrix actions on tensor products. Some questions:

  1. References for something similar?
  2. Obvious flaws, is this a terrible idea?
  3. Any other comments?

Definitions

Let $S$ be a finite ring, let $n$ and $k$ be positive integers, and let $R=M_k(S)^n$ be the $n$-fold product of the ring of $k\times k$ matrices over $S$. Let $M$ be the $R$-module $(S^k)^{\otimes n}$, the $n$-fold tensor power of the free $S$-module of rank $k$. $R$ acts diagonally on $M$ (say from the left), the $i$th coordinate acting on the $i$th tensor factor.

[One could also let $k=k_i$ vary with the index $1\leq i\leq n$, but we fix $k$ for simplicity.]

Cube Diffie-Hellman

Let $T\in M$ be fixed and public (perhaps of a particular form?). Alice and Bob randomly choose $A, B\in R$ that commute (or commute with some probability). Alice sends Bob $T_A=A\cdot T$ and Bob sends Alice $T_B=B\cdot T$. Their shared key is $$ T_{AB}=T_{BA}=A\cdot T_B=B\cdot T_A $$ assuming $A$ and $B$ commute.

For instance, Alice could choose a random index $a$ and take $A\in M_k(S)$ as the $a$th coordinate of $R$ (with $I_k$ in the other coordinates); similarly, Bob could choose a random index $b$ and $B\in M_k(S)$ as the $b$th coordinate. Other variations are possible, e.g. Alice acts on the first half of the coordinates and Bob on the second half: $$ T_A = (A_1,\ldots,A_{n/2},I_k,\ldots, I_k)\cdot T, \quad T_B = (I_k,\ldots, I_k,B_1,\ldots,B_{n/2})\cdot T. $$
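The split-coordinates variant can be sketched concretely. Below is a minimal numpy toy (not a secure implementation), assuming $S=\mathbb{Z}/p\mathbb{Z}$ and representing $T$ as a dense $k\times\cdots\times k$ array; Alice and Bob act on disjoint tensor modes, so their elements of $R$ commute exactly and the two key computations agree.

```python
import numpy as np

p, k, n = 97, 2, 4           # S = Z/pZ; T lives in (S^k)^{tensor n}
rng = np.random.default_rng(0)

def act(As, T, p):
    """Apply (A_1, ..., A_n) in R = M_k(S)^n diagonally to T:
    each A_j acts along mode j of the k x ... x k array."""
    for j, A in enumerate(As):
        T = np.moveaxis(np.tensordot(A, T, axes=([1], [j])), 0, j) % p
    return T

I = np.eye(k, dtype=np.int64)
T = rng.integers(0, p, size=(k,) * n)   # public tensor

# Alice acts on the first n/2 coordinates, Bob on the last n/2,
# so their elements of R commute automatically.
A = [rng.integers(0, p, (k, k)) for _ in range(n // 2)] + [I] * (n // 2)
B = [I] * (n // 2) + [rng.integers(0, p, (k, k)) for _ in range(n // 2)]

T_A = act(A, T, p)           # Alice -> Bob
T_B = act(B, T, p)           # Bob -> Alice
K_alice = act(A, T_B, p)
K_bob = act(B, T_A, p)
assert np.array_equal(K_alice, K_bob)
```

The disjoint-support choice is the easiest way to guarantee commutativity; choices with overlapping support would only commute with some probability, as noted above.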

Is the action easy to compute?

If $T$ is a random tensor, then one has to go through all $k^n$ basis elements $e_{i_1}\otimes\cdots\otimes e_{i_n}$ (where $\{e_i\}_{i=1}^k$ is the standard basis for $S^k$), apply the coordinate maps (e.g. $A=(A_1,\ldots, A_n)\in R$) and simplify. This is basically $nk^n$ replacements ($e_{i_j}\mapsto A_je_{i_j}$) and collecting coefficients to rewrite in the standard basis.

[This computation can be reduced if $T$ is taken to be, say, a "pure" tensor, $v_1\otimes\cdots\otimes v_n$ (now only $n$ replacements). However, the final shared key computation will be large again.]
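The pure-tensor shortcut is easy to verify numerically: acting on $v_1\otimes\cdots\otimes v_n$ needs only $n$ matrix-vector products, and expanding afterwards agrees with the generic dense action. A small sketch under the same $S=\mathbb{Z}/p\mathbb{Z}$ assumption:

```python
import numpy as np
from functools import reduce

p, k, n = 97, 2, 3
rng = np.random.default_rng(1)

def full_act(As, T, p):
    # generic action on a dense tensor: n mode-j products, ~ n k^n work
    for j, A in enumerate(As):
        T = np.moveaxis(np.tensordot(A, T, axes=([1], [j])), 0, j) % p
    return T

def expand(vs, p):
    # expand a pure tensor v_1 (x) ... (x) v_n into a dense array
    return reduce(lambda T, v: np.tensordot(T, v, axes=0) % p, vs)

vs = [rng.integers(0, p, k) for _ in range(n)]       # T = v_1 (x) ... (x) v_n
As = [rng.integers(0, p, (k, k)) for _ in range(n)]

# only n matrix-vector products instead of touching all k^n basis elements
ws = [(A @ v) % p for A, v in zip(As, vs)]
assert np.array_equal(expand(ws, p), full_act(As, expand(vs, p), p))
```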

Is the action hard to invert?

Problem: Given $T$ and $X\cdot T= T_X$, find $X$. When $n=2$, this is easy: just (pseudo)invert $T$. For larger $n$ (say over a field), I think one can "reduce" $T$ to some standard form, e.g. compute $Y\in R$ such that $Y\cdot T=T_0$, with $T_0$ something like a higher-order identity tensor $\sum_i e_i\otimes\cdots\otimes e_i$. If $X$ and $Y$ "mostly" commute then $X$ might be computable from $Y\cdot T_X$.
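To illustrate the easy $n=2$ case: a tensor in $S^k\otimes S^k$ is just a $k\times k$ matrix, and the action of $(X_1,X_2)$ is $T\mapsto X_1 T X_2^{\mathsf T}$. A toy sketch over $\mathbb{F}_p$ with $k=2$ (a hand-rolled adjugate inverse, not a library call), recovering $X_1$ from a single-coordinate action $T_X = X_1 T$:

```python
import numpy as np

p, k = 97, 2
rng = np.random.default_rng(2)

def inv2_mod(M, p):
    """Inverse of a 2x2 matrix over GF(p) via the adjugate formula."""
    a, b, c, d = int(M[0, 0]), int(M[0, 1]), int(M[1, 0]), int(M[1, 1])
    det_inv = pow((a * d - b * c) % p, -1, p)   # raises if det = 0 mod p
    return (det_inv * np.array([[d, -b], [-c, a]])) % p

# pick an invertible public T
while True:
    T = rng.integers(0, p, (k, k))
    if (int(T[0, 0]) * int(T[1, 1]) - int(T[0, 1]) * int(T[1, 0])) % p:
        break

X1 = rng.integers(0, p, (k, k))   # secret: acts on the first factor only
T_X = (X1 @ T) % p                # the published value X . T

# the attack: X1 = T_X . T^{-1}
assert np.array_equal((T_X @ inv2_mod(T, p)) % p, X1 % p)
```

For $n>2$ there is no single matrix inverse to apply, which is where the proposed reduction to a standard form $T_0$ would come in.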
