Score:2

Discrepancy $δ$ in the Berlekamp-Massey Algorithm


I have a question regarding the Berlekamp–Massey algorithm. Can someone guide me through the idea/intuition behind this algorithm?

According to the explanation on Wikipedia, in each iteration the algorithm computes the discrepancy $δ$.

If $δ≠0$, the algorithm updates the error-locator polynomial using an update polynomial $B(x)$. I understand that the resulting error-locator polynomial makes the $δ$ of the current iteration zero. But what about the $δ$ of all the previous iterations? It is very hard to see that the algorithm actually keeps every previous $δ=0$.

Score:1

Actually, the BMA is a recursive algorithm, and the discrepancy $$d_{n+1}=\hat{s}_{n+1}-s_{n+1}$$ is the difference between

  • What the current polynomial $C^{\{n\}}$, determined by the BMA from the sequence $(s_0,\cdots,s_n)$, predicts as the next symbol $\hat{s}_{n+1}$, and
  • The actual symbol $s_{n+1}$.

This is what is used to update the polynomial if necessary. By induction, all discrepancies are then zero.
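For concreteness, here is a minimal sketch of that prediction step in the binary case (GF(2), where subtraction is XOR). The names `c`, `L`, `s`, and `n` are just illustrative, with `c` the current connection polynomial stored as a coefficient list and `c[0] = 1`:

```python
def discrepancy(c, L, s, n):
    """Predict s[n] from the previous L symbols using the current
    connection polynomial c (GF(2) coefficients, c[0] = 1), and return
    the XOR of the prediction and the actual symbol."""
    prediction = 0
    for j in range(1, L + 1):
        prediction ^= c[j] & s[n - j]   # \hat{s}_n predicted by the current LFSR
    return prediction ^ s[n]            # zero iff the LFSR already produces s[n]
```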

ytj_banana:
But I can't see why adding the update polynomial $B(x)$ actually makes the new error-locator polynomial generate all the previous symbols $s_0,\cdots,s_{n-1}$ with 0 discrepancies. Do you mind giving me a small proof showing that this is true?
ytj_banana:
The equation that bothers me is $C(x) \leftarrow C(x) + (d/b)\,x^m B(x)$ (note: the notation is from the Wikipedia article). I know that after this modification $C(x)$ will generate the symbol $s_n$ with 0 discrepancy. But how can the algorithm make sure that it still generates all the previous symbols $s_0,\cdots,s_{n-1}$ with 0 discrepancies?
kodlu:
Don't use Wikipedia as an authoritative source; it's not easy to determine where they err, if they do. The proof goes by induction, but it is a bit delicate and I don't have time to describe it now. Blahut's book *Fast Algorithms for Signal Processing* has a good description in Section 7.4. The book can be found online.
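In outline, the induction step rests on linearity. A rough sketch (in the Wikipedia notation, glossing over the length bookkeeping that makes the full argument delicate): write the discrepancy of a polynomial $P$ at position $j$ as $$\delta_P(j)=\sum_i P_i\,s_{j-i}.$$ The update $C'(x)=C(x)-\tfrac{d}{b}\,x^m B(x)$ (over GF(2) the sign is immaterial) is linear in the coefficients, so $$\delta_{C'}(j)=\delta_C(j)-\tfrac{d}{b}\,\delta_B(j-m).$$ For $j<n$, the term $\delta_C(j)$ is $0$ by the induction hypothesis, and $\delta_B(j-m)=0$ because $B(x)$ generated every symbol before step $n-m$, the step at which it was set aside with discrepancy $b$. For $j=n$, one gets $\delta_{C'}(n)=d-\tfrac{d}{b}\,b=0$.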
ytj_banana:
Thank you for recommending the book. It's very useful! I got it now!
Score:1

The Berlekamp-Massey algorithm is a procedure for LFSR synthesis: it finds the shortest LFSR that will produce the given sequence $s_0, s_1, s_2, \cdots$. The algorithm is iterative (not recursive, since it doesn't call itself) in that it examines the sequence one symbol at a time until it has processed the entire given sequence. At the end of the $i$-th iteration, the algorithm has examined $s_0, s_1, \cdots, s_{i-1}$ (examination of $s_i, s_{i+1}, \cdots$ is yet to come) and has synthesized the shortest LFSR that will generate $s_0, s_1, \cdots, s_{i-1}$.

Then, at the beginning of the $(i+1)$-th iteration, the algorithm determines whether $s_i$ will be generated by the LFSR it has just found, by computing what the next output $\hat{s}_i$ of the just-synthesized LFSR would be and comparing it to what we want the LFSR to generate, namely the given $s_i$. The difference is called the discrepancy $\delta_i$, and if $\delta_i$ happens to be nonzero, the previously synthesized LFSR is updated so that the updated LFSR will generate $s_0, s_1, \cdots, s_{i-1}, \mathbf{s_i}$ (emphasis added). This update might increase the LFSR length and also change the feedback taps, or it might change only the feedback taps. It can be proven that if an iteration resulted in an increase in LFSR length and a change of taps, then on the very next iteration only the feedback taps might change; the LFSR cannot increase in length.

In short, there is no need to worry about what happens to previous discrepancies; the current LFSR is guaranteed to generate all of the sequence examined thus far without any discrepancies creeping in during the generation process.
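To make the iteration above concrete, here is a minimal sketch of the binary (GF(2)) version of the algorithm. The variable names (`C`, `B`, `L`, `m`) follow the Wikipedia pseudocode rather than any particular textbook, and the code is meant as an illustration under those conventions, not a vetted implementation:

```python
def berlekamp_massey_gf2(s):
    """Shortest-LFSR synthesis over GF(2).

    s is a list of bits (0/1).  Returns (L, C) where L is the length of
    the shortest LFSR generating s and C is its connection polynomial
    as a coefficient list with C[0] = 1.
    """
    n = len(s)
    C = [1] + [0] * n          # current connection polynomial C(x)
    B = [1] + [0] * n          # copy of C saved at the last length change
    L, m = 0, 1                # current LFSR length; shift since last change
    for i in range(n):
        # discrepancy: what the current LFSR would output next, XOR the actual s[i]
        d = s[i]
        for j in range(1, L + 1):
            d ^= C[j] & s[i - j]
        if d == 0:
            # the current LFSR already generates s[0..i]; keep it
            m += 1
        elif 2 * L <= i:
            # the LFSR must grow: C(x) <- C(x) + x^m B(x), after saving the old C
            T = C[:]
            for j in range(n + 1 - m):
                C[j + m] ^= B[j]
            L = i + 1 - L
            B = T
            m = 1
        else:
            # same length, only the feedback taps change
            for j in range(n + 1 - m):
                C[j + m] ^= B[j]
            m += 1
    return L, C[:L + 1]
```

For example, `berlekamp_massey_gf2(bits)` applied to a list of 0/1 values returns the linear complexity `L` of the sequence together with the taps of one shortest LFSR that reproduces it.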
