DECODING OF 2-D CONVOLUTIONAL CODES BASED ON ALGEBRAIC APPROACH

In this paper, we apply the decoding matrix for 2-D convolutional codes to reconstruct information sequences. The method is suitable for non-square matrices with multivariate polynomial elements. Next, the development of a syndrome decoder for 2-D convolutional codes based on Gröbner bases is introduced. The computation of the syndrome vector employs the computation of the syzygy module, found by means of the Gröbner basis of a certain module. The estimated error vector can then be identified by using the m-variate division algorithm. Simulation results show the error-correcting capability of the decoding process. AMS Subject Classification: 94B10, 94B35


Introduction
The representation of codes can be given in terms of either a generator matrix or a parity-check matrix. In daily life, there are several applications of channel coding, such as image storage on magnetic disks, data storage in silicon memory, and data transmission over optical media. These applications require reliable processes in order to correctly recover the original information; these processes include encoding, detecting, and decoding. In particular, an efficient decoding process is not straightforward to develop, and in general the decoding process is more difficult and more complex than the encoding process. The theory and application of m-D convolutional codes developed over the last two decades can potentially address these problems, especially the error-correcting part. The overview of applications of m-D error-correcting codes given in [2] described potential practical implementations such as transmission of image and video signals over noisy channels.
In a noise-free channel, the decoded information can be retrieved from the convolutional encoded codewords without error correction by using a pseudoinverse encoder, a consequence of the non-square polynomial nature of the encoding matrix. One classical pseudoinverse is the Moore-Penrose generalized inverse. However, most communication channels cannot be assumed noise-free, and methods for recovering the information in the presence of noise are an active research area. In the last decade, research on m-D convolutional decoding with error-correcting capability has been reported in [7, 10], and the theory of m-D signals and systems continues to be a flourishing field. The literature on this field is vast; this research focuses on 2-D convolutional codes based on an algebraic approach. Path-breaking results in this area can be found in Weiner [13], where several fundamental concepts and aspects of m-D convolutional codes were proposed systematically. During the past decade, Rosenthal and his research group [11, 12] have developed several fundamental concepts of convolutional coding theory by deriving the relationship between convolutional codes and linear systems theory, using a behavior-based approach. Another independent approach [5] investigates the problems of m-D convolutional codes by using Gröbner basis/module theory; several of its results concern m-variate polynomial matrix factorization, and an example of such applications was reported in [6]. Fornasini and Valcher [8] considered 2-D convolutional codes over the Laurent polynomial ring, using both the behavioral approach and a state-space procedure to construct encoders and decoders; convolutional code aspects were reported as well.
In this paper, we apply an approach based on [4, 14] for computing the decoding matrix in the 2-D case. The purpose of this approach is to reconstruct information sequences, and it is suitable for non-square matrices with multivariate polynomial elements; an example is carried out to illustrate it. Next, the computation of the syndrome vector employs the computation of the syzygy module, found by means of the Gröbner basis of a certain module. The development of a syndrome decoder for 2-D convolutional codes based on Gröbner bases depends on the choice of term ordering: in our error-correcting experiments, degree reverse lexicographical ordering performs best for this problem, although universal Gröbner bases or other term orderings may also achieve good decoding accuracy. The computer algebra system Singular [9] is chosen to implement the algebraic procedures.

Decoding Matrix
Let F = F_q be the finite field with q elements and let R = F[D_1, D_2, . . ., D_m] be the ring of m-variate polynomials whose coefficients belong to the field F. An m-dimensional convolutional code of length n over F is a submodule of R^n. In the 2-D case, there are two directions of delays (shift registers), named the horizontal delay (D_1) and the vertical delay (D_2) [5]. The realizations of 2-D convolutional encoders illustrate that the correlations between delay elements in the generator matrix can be constructed with row and column locations in a circuit diagram. Next, we consider encoder primeness, or matrix primeness, a property that must be imposed on the m-variate polynomial matrices generating a convolutional code. Many properties of convolutional codes have been introduced based on these primeness notions; for example, the papers [4, 5, 8] introduced the right inverse of a polynomial matrix, the syndrome decoder, and matrix factorizations, and provided complete descriptions of primeness. Moreover, the paper by Youla and Gnavi [14] on primeness notions is fundamental. It is therefore necessary to discuss and summarize them as follows.
Definition 1. A k × n m-variate polynomial matrix G(D) is said to be:
1. left zero prime (LZP) if the k × k minors of G(D) generate the unit ideal in R;
2. left minor prime (LMP) if all the k × k minors of G(D) have no common divisors in R except for units;
3. left factor prime (LFP) if, in every polynomial factorization G(D) = T(D)G_1(D) with T(D) a k × k polynomial matrix, T(D) is necessarily a unimodular matrix, i.e., det(T) is a unit.
Generally, the relationship between the codeword sequence, represented by the vector v, and the corresponding information sequence, represented by the vector u, can be expressed as v = u · G, where G is a k × n generator matrix whose elements are m-variate polynomials. If one can find a polynomial matrix Z such that G · Z equals the identity matrix, then one can directly retrieve the original information sequence in a noise-free environment. The matrix Z is called a right inverse of the matrix G, or the decoding matrix. A suitable method for computing Z based on the algebraic approach and Gröbner bases is investigated. By using the constructive proof of Theorem 2 of Youla and Gnavi [14], the algorithm for finding a right inverse of an m-variate polynomial matrix was derived by Charoenlarpnopparut [4], where an appropriate example is also provided.
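As a small sketch of the right-inverse condition G · Z = I, the following uses sympy (rather than the paper's Singular) with a hypothetical 1 × 3 generator matrix over F_2[D_1, D_2]; the matrix is illustrative and is not the paper's G^(1).

```python
import sympy as sp

D1, D2 = sp.symbols('D1 D2')

def mod2(expr):
    # reduce polynomial coefficients into GF(2)
    return sp.Poly(sp.expand(expr), D1, D2, modulus=2).as_expr()

# hypothetical 1x3 generator (k = 1, n = 3); its full-size minors are
# its entries, which generate the unit ideal, so it is LZP and a right
# inverse exists by Proposition 1
G = sp.Matrix([[1 + D1, D1, 1]])

# candidate right inverse Z (3x1): G*Z = (1 + D1) + D1 = 1 over GF(2)
Z = sp.Matrix([[1], [1], [0]])

prod = (G * Z).applyfunc(mod2)
print(prod)  # Matrix([[1]])
```

The same check scales to larger k: one verifies that G · Z reduces entrywise to the k × k identity over F_2.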
Catastrophic encoders for codes over the m-D polynomial ring have been studied by Weiner [13]. To test whether an encoder is catastrophic, one evaluates the gcd of the full-size minors of the generator matrix: in the 1-D and 2-D cases, a convolutional encoder is noncatastrophic if and only if the gcd of the full-size minors of the encoder is of the form D^l, for l ≥ 0.

Proof of Proposition 1. If G is LZP, all the k × k minors of G generate the unit ideal in R. One obtains a diagonal matrix Λ with elements {λ_1, λ_2, . . ., λ_k}, and takes K to be any n × k constant matrix whose k × k minors are denoted by k_j, j = 1, 2, . . ., (n choose k); note that such a K always exists. From these, a matrix Z that is a right inverse of G is obtained, as constructed in [14].

Example 2. Consider the 2-D generator matrix G^(1):
By referring to Definition 1, G^(1) is both LZP and LMP. Also, G^(1) is noncatastrophic, since the gcd of the full-size minors of G^(1) is 1. As a result, one can obtain Z such that G^(1) Z = I_2, for instance:
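The noncatastrophicity test above can be sketched in sympy with a hypothetical 2 × 3 encoder (again not the paper's G^(1)); one computes all full-size (2 × 2) minors and their gcd, here over the rationals for illustration.

```python
import sympy as sp
from itertools import combinations
from functools import reduce

D1, D2 = sp.symbols('D1 D2')

# hypothetical 2x3 encoder for the test gcd(full-size minors) = D^l
G = sp.Matrix([[1, D1, D2],
               [D1, 1, 0]])

# the three 2x2 minors, one per choice of two columns
minors = [G[:, list(cols)].det() for cols in combinations(range(3), 2)]

g = reduce(sp.gcd, minors)
print(minors, g)  # gcd 1 = D^0 -> noncatastrophic (and indeed LMP)
```

A gcd of the form D_1^a D_2^b with a + b > 0 would still pass the noncatastrophicity test, while any other nontrivial common factor would flag a catastrophic encoder.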

Syndrome Decoder
Due to various kinds of interference in the transmission channel, the transmitted codeword is subject to errors, and hence the received word can be written as ṽ = v + e, where ṽ is called the received codeword and e is called the error vector. The received codeword is then passed to the parity-check matrix H for syndrome computation: s = ṽ · H^T, where s is called the syndrome vector. If the syndrome vector is nonzero, the presence of an error vector is detected; a zero syndrome implies that ṽ is a correct codeword, and therefore the error vector is assumed to be null, i.e., no error correction is needed. The relationship between the syndrome vector and the error vector follows from v · H^T = 0 for every codeword v: s = ṽ · H^T = (v + e) · H^T = e · H^T. The error-correcting process is performed by first estimating the error vector, i.e., solving for e in the above syndrome equation, which has an infinite number of solutions. The estimated error vector ê that corresponds to the computed syndrome is then subtracted from the received vector for correction purposes.
In the binary field, addition and subtraction are interchangeable. The decoding process can be performed by the equations v̂ = ṽ + ê and û = v̂ · Z, where v̂ is called the estimated codeword, û is called the estimated information sequence, and Z is the decoding matrix, computed as a pseudoinverse of the generator matrix G by using Proposition 1. If ê is determined correctly, i.e., ê = e, then the decoded information contains no errors, i.e., û = u. The technique is to calculate ê directly from the syndrome equation and use it in the error-correcting process to recover the original information. The m-D generalized version of this scheme is proposed in this section, based on Gröbner bases and the theory of syzygies. The evaluation of ê is the main objective of a syndrome decoder.
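The syndrome relation s = e · H^T can be sketched in sympy with the same hypothetical 1 × 3 encoder as before and a matching parity-check matrix H satisfying G · H^T = 0 over GF(2); both matrices and the chosen error are illustrative, not the paper's.

```python
import sympy as sp

D1, D2 = sp.symbols('D1 D2')

def mod2(expr):
    # reduce polynomial coefficients into GF(2)
    return sp.Poly(sp.expand(expr), D1, D2, modulus=2).as_expr()

# hypothetical encoder (k = 1, n = 3) and a parity-check matrix with
# G * H^T = 0 over GF(2)
G = sp.Matrix([[1 + D1, D1, 1]])
H = sp.Matrix([[1, 0, 1 + D1],
               [0, 1, D1]])

u = sp.Matrix([[1 + D2]])            # information vector
v = (u * G).applyfunc(mod2)          # transmitted codeword
e = sp.Matrix([[D1, 0, 0]])          # channel error
v_tilde = (v + e).applyfunc(mod2)    # received word

# syndrome: since v * H^T = 0, s equals e * H^T
s = (v_tilde * H.T).applyfunc(mod2)
print(s)  # nonzero syndrome -> an error is detected
```

Here s = e · H^T = (D_1, 0) ≠ 0, so the decoder proceeds to estimate ê from s rather than accepting ṽ as a codeword.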
Gröbner bases are powerful tools for dealing with polynomial ideals. The computation of a Gröbner basis is generally done by employing the Buchberger algorithm [1, 3]; the extension of Gröbner bases to the module case can be found in [1]. The syzygy module is a module whose members annihilate the given generators, analogous to the null space of a matrix; its computation generally proceeds by first computing a Gröbner basis.

Definition 3. Let g_1, g_2, . . ., g_k be m-variate polynomial row vectors in R^n, the rows of the k × n generator matrix G. A syzygy of G is a polynomial vector a = (a_1, a_2, . . ., a_k) such that a_1 g_1 + a_2 g_2 + · · · + a_k g_k = 0.
The set of all such syzygies is called the syzygy module of G and is denoted by Syz(g_1, g_2, . . ., g_k) or Syz(G).
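Full syzygy-module computation is done in the paper with Singular; the underlying Gröbner-basis step can be illustrated with sympy's Buchberger implementation on a small ideal (the polynomials are illustrative, not drawn from the codes above).

```python
import sympy as sp

D1, D2 = sp.symbols('D1 D2')

# reduced Groebner basis of an illustrative ideal under the degree
# reverse lexicographical ordering; Buchberger's algorithm appends
# nonzero S-polynomial remainders until every S-polynomial reduces to 0
gb = sp.groebner([D1**2 + D2, D1*D2 + 1], D1, D2, order='grevlex')
print(list(gb))  # [D1**2 + D2, D1*D2 + 1, D2**2 - D1]
```

The single S-polynomial remainder D_2^2 − D_1 completes the basis; with it, every polynomial in the ideal reduces to zero under the m-variate division algorithm.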
Definition 4. The bit error probability (P_e) is the probability that an information bit of a codeword is erroneously transmitted to the destination.

Definition 5. The correctable percentage of a code is defined as the ratio of the number of detected and corrected codewords (v̂ = v) to the number of received codewords ṽ.

Proposition 2. If the syzygy module of the parity-check matrix H can be expressed as Syz(H) = ⟨p_1, p_2, . . ., p_t⟩, where the subscript t denotes the number of syzygies, and a polynomial vector q is a solution of s = q · H^T, then the estimated error vector ê is obtained by completely reducing q modulo {p_1, p_2, . . ., p_t}.
Proof. By the m-variate division algorithm [1], the syndrome vector s = ṽ · H^T is reduced modulo H to a remainder vector r. The computation returns quotients, represented by a polynomial vector q, and the remainder r = 0, i.e., s = q · H^T, since s belongs to the module generated by the rows of H^T. The vector q is a particular solution for the estimated error vector. We find Syz(H) = ⟨p_1, p_2, . . ., p_t⟩, and then the expression of all possible estimated error vectors can be written as [4]: ê = q + α_1 p_1 + α_2 p_2 + · · · + α_t p_t, where α_1, α_2, . . ., α_t are polynomials in F[D_1, D_2]. Equivalently, q = α_1 p_1 + α_2 p_2 + · · · + α_t p_t + ê. Completely reducing the vector q with respect to the module generated by the vectors p_1, p_2, . . ., p_t returns the quotients α_1, α_2, . . ., α_t and the remainder vector ê.

Example 6. This example simulates the error-correcting performance of the decoding process based on Proposition 2 for three different polynomial generator matrices G^(1), G^(2), G^(3) [10]. These matrices are as follows:

It is assumed that the binary information sequences have 10,000 random symbols, each consisting of 8 binary bits, and that the transmission channel has a bit error probability P_e, as defined in Definition 4; we assume all bits of a transmitted codeword have the same probability, with P_e ≤ 1. The generated binary information sequences are transformed into polynomial vectors. The performance of this decoding process is measured in terms of the correctable percentage, given in Definition 5. Table 1 illustrates the correctable percentage of errors in decoding the convolutional codes corresponding to various term orderings. For this decoding problem, the degree reverse lexicographical ordering is the most efficient; however, for any particular problem, a different term ordering may be the most effective.
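The reductions in the proof (s reduced modulo H, and q reduced modulo the syzygies) both rely on the m-variate division algorithm. A minimal sketch with sympy's `reduced`, using illustrative polynomials rather than the code's syndrome vectors:

```python
import sympy as sp

D1, D2 = sp.symbols('D1 D2')

# m-variate division: divide f by {g1, g2} under lex ordering,
# returning quotients q and a remainder r with
#   f = q[0]*g1 + q[1]*g2 + r,
# where no term of r is divisible by the leading terms of g1, g2
f = D1**2*D2 + D1*D2**2 + D2**2
g1 = D1*D2 - 1
g2 = D2**2 - 1

q, r = sp.reduced(f, [g1, g2], D1, D2, order='lex')
assert sp.expand(q[0]*g1 + q[1]*g2 + r - f) == 0
print(q, r)
```

In the decoder, the same operation with r = 0 yields the particular solution q of s = q · H^T, and a second complete reduction of q against the syzygy generators leaves ê as the remainder.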
Example 7. The original image for testing has 512 × 512 pixels, each pixel with 8 bits, as shown in Fig. 1(a). Using the encoder G^(1) given in Example 6, the ratio of binary input to binary output bits is 8/48. We assume all bits of the transmitted codeword have the same P_e. In Fig. 1(b), many random erroneous bits appear in the encoded image.

Table 1: Correctable percentage of errors based on Proposition 2 for G^(1), G^(2), and G^(3) with different term orderings.

Once applied to the encoded image, the decoding procedure is required to retrieve the original image. Consequently, using both the syndrome decoder with lexicographical ordering (this section) and the decoding matrix (previous section), almost all of these random erroneous bits in the image are corrected, as shown in Fig. 1(c). The peak signal-to-noise ratio (PSNR) is used to evaluate the image quality and is here defined as PSNR = 10 log_10(255² / MSE), where the mean square error is MSE = (Σ_{i=1}^{M} Σ_{j=1}^{N} [I_ij − Î_ij]²) / (M · N), with M and N the numbers of rows and columns in the images, and I_ij and Î_ij the original and the noisy/reconstructed images, respectively. The higher the PSNR, the better the quality of the reconstructed image.

Proposition 1. A k × n polynomial matrix G whose elements are m-variate polynomials in the ring F[D_1, D_2, . . ., D_m] has a right inverse if and only if G is LZP.
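The PSNR and MSE definitions above translate directly to code; a minimal numpy sketch for 8-bit images, with a toy 2 × 2 array standing in for the test image:

```python
import numpy as np

def psnr(original, reconstructed):
    # PSNR = 10 * log10(255^2 / MSE) for 8-bit images, with
    # MSE the mean of the squared pixel differences
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

# toy 2x2 "images": every pixel off by 1 gives MSE = 1
I = np.array([[10, 20], [30, 40]], dtype=np.uint8)
I_hat = I + 1
print(round(psnr(I, I_hat), 2))  # 48.13
```

With MSE = 1 the PSNR is 10 log_10(255²) ≈ 48.13 dB; larger errors lower the MSE denominator's ratio and thus the PSNR.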

Figure 1: Decoding image with noise