When we invent a more complex encoder, the task of decoding the received vector becomes less straightforward. (The reader who is eager to see the denouement of the plot may skip ahead to section 1.2.5.)
If we assume that the channel is a binary symmetric channel and that all source vectors are equiprobable, a priori, then clearly the optimal decoder is one that identifies the source vector whose encoding differs from the received vector in the fewest bits. Is there an efficient way of finding this source vector?
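To make this concrete, here is a minimal brute-force sketch in Python (an illustration, not part of the original text; it assumes the generator matrix of equation (1.2) is that of the (7,4) Hamming code, with parity bits $t_5 = s_1{+}s_2{+}s_3$, $t_6 = s_2{+}s_3{+}s_4$, $t_7 = s_1{+}s_3{+}s_4$). It enumerates all $2^4$ source vectors and picks the one whose encoding is closest to the received vector.

```python
import itertools

import numpy as np

# Assumed generator matrix G^T of the (7,4) Hamming code (equation (1.2)):
# the first four rows copy the source bits, the last three form the parities.
G_T = np.array([[1, 0, 0, 0],
                [0, 1, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1],
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [1, 0, 1, 1]])

def brute_force_decode(r):
    """Optimal decoder for a binary symmetric channel with equiprobable
    sources: return the source vector whose encoding differs from the
    received vector r in the fewest bits."""
    best_s, best_dist = None, 8          # 8 exceeds any possible distance
    for s in itertools.product([0, 1], repeat=4):
        t = G_T @ np.array(s) % 2        # encode the candidate source vector
        dist = int(np.sum(t != r))       # Hamming distance to r
        if dist < best_dist:
            best_s, best_dist = np.array(s), dist
    return best_s
```

This decoder is optimal under the stated assumptions, but it costs $2^K$ encodings per received block, which is what motivates the more efficient approach that follows.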
In the special case of a linear code and a binary symmetric channel, the decoding task can be re-expressed as syndrome decoding. The first four received bits, $r_1 r_2 r_3 r_4$, purport to be the four source bits; and the received bits $r_5 r_6 r_7$ purport to be the parities of the source bits, as defined by the generator matrix $\mathbf{G}$. So why not evaluate the three parity-check bits for the received bits $r_1 r_2 r_3 r_4$, and see if they match the three received bits $r_5 r_6 r_7$? The differences (modulo 2) between these two triplets are called the `syndrome' of the received vector. If the syndrome is zero, that is, all three parity checks agree with the corresponding received bits, then the received vector is a codeword, and the most probable decoding is given by reading out its first four bits. If the syndrome is non-zero, then we are certain that the noise sequence for the present block was non-zero, and the syndrome is our pointer to the most probable underlying error pattern.
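A sketch of this check, reusing the assumed G_T above: recompute the parities implied by the first four received bits, then take their mod-2 differences from the three received parity bits.

```python
def syndrome(r):
    """Syndrome of a received 7-bit vector r (a numpy array of 0s and 1s)."""
    expected = G_T[4:] @ r[:4] % 2   # parities the purported source bits imply
    return (expected + r[4:]) % 2    # mod-2 differences from received parities
```

If syndrome(r) is the zero vector, r is a codeword and its first four bits, r[:4], give the most probable decoding.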
The operation of finding the syndrome vector can be described by a linear operation. If we define the $3 \times 4$ matrix $\mathbf{P}$ such that the generator matrix of equation (1.2) is
\[
\mathbf{G}^{\mathsf{T}} = \begin{bmatrix} \mathbf{I}_4 \\ \mathbf{P} \end{bmatrix},
\]
where $\mathbf{I}_4$ is the $4 \times 4$ identity matrix, then the syndrome vector is $\mathbf{z} = \mathbf{H}\mathbf{r}$, where the parity-check matrix $\mathbf{H}$ is given by $\mathbf{H} = \begin{bmatrix} -\mathbf{P} & \mathbf{I}_3 \end{bmatrix}$; in the case of modulo-2 arithmetic, $-1 \equiv 1$, so
\[
\mathbf{H} = \begin{bmatrix} \mathbf{P} & \mathbf{I}_3 \end{bmatrix}.
\]
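Expressed in code (again a sketch building on the assumed G_T above), the syndrome is a single matrix-vector product:

```python
P = G_T[4:]                                            # the 3x4 parity part of G^T
H = np.concatenate([P, np.eye(3, dtype=int)], axis=1)  # H = [P I_3]

def syndrome_matrix(r):
    """Syndrome as a single linear operation: z = Hr (mod 2)."""
    return H @ r % 2
```

This agrees with the bit-by-bit syndrome function above: the $\mathbf{P}$ block of $\mathbf{H}$ recomputes the parities from the first four received bits, and the identity block adds in the three received parity bits.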
Since the received vector is given by $\mathbf{r} = \mathbf{G}^{\mathsf{T}}\mathbf{s} + \mathbf{n}$, and $\mathbf{H}\mathbf{G}^{\mathsf{T}}\mathbf{s} = \mathbf{0}$ for every source vector $\mathbf{s}$, the syndrome depends only on the noise: $\mathbf{z} = \mathbf{H}\mathbf{r} = \mathbf{H}\mathbf{n}$. The syndrome-decoding problem is then to find the most probable noise vector $\mathbf{n}$ satisfying the equation
\[
\mathbf{H}\mathbf{n} = \mathbf{z}.
\]
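For a binary symmetric channel with flip probability $f < 1/2$, lower-weight noise vectors are more probable, so the most probable solution of $\mathbf{H}\mathbf{n} = \mathbf{z}$ is the one of minimum weight, and decoding reduces to a table mapping each syndrome to that solution. A sketch, continuing with the hypothetical H above:

```python
def build_syndrome_table():
    """Map each syndrome (as a tuple) to the minimum-weight noise vector
    that produces it -- the most probable solution of Hn = z when f < 1/2."""
    table = {}
    for n in sorted(itertools.product([0, 1], repeat=7), key=sum):
        z = tuple(H @ np.array(n) % 2)
        table.setdefault(z, np.array(n))   # keep the first (lowest-weight) n
    return table

SYNDROME_TABLE = build_syndrome_table()

def syndrome_decode(r):
    """Subtract (equivalently, add mod 2) the most probable noise vector;
    the decoded source bits are the first four bits of the result."""
    n_hat = SYNDROME_TABLE[tuple(H @ r % 2)]
    return (r + n_hat) % 2
```

For the (7,4) Hamming code, each of the eight possible syndromes is produced by a unique noise vector of weight at most one, so the table simply locates the single flipped bit, if any.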