ALGEBRAIC DECODING IN TWO-DIMENSIONAL LINEAR CODES

The tools of the algebraic approach are used to decode a two-dimensional linear block code defined by a two-step operation, namely a column-wise encoding matrix and a row-wise encoding matrix. A binomial ideal is essential to initiate the decoding algorithm, which is based on the Gröbner basis of this ideal and an m-variate division algorithm. The efficiency of the decoding process is measured in terms of the correctable percentage under several parameters of interest: bit error probability, term ordering, and computational time. By testing erroneous bits in received codewords for several different encoding matrices of 2-D linear block codes, the algorithm proves reliable in correcting up to the error-correcting capability of the code. A difficulty is the large number of variables (indeterminates) in the binomial ideal, since most of the computational time is spent finding its Gröbner basis.

AMS Subject Classification: 94B05, 94B35


Introduction
Received: December 18, 2014. © 2015 Academic Publications, Ltd. (url: www.acadpubl.eu)

For decades, 1-D linear block codes have been applied to protect digital data from transmission errors caused by the imperfection of the communication channel. When there is a need to transmit 2-D data such as images and field arrays, designers typically transform the 2-D data array into a 1-D data vector by scanning row-by-row or column-by-column. As a result, the 2-D correlation among neighboring bits is lost and ignored. In many coding schemes with memory, such as 2-D convolutional codes, this correlation can be exploited to obtain better error estimates. Although a 2-D linear block code is a memoryless coding scheme, studying its properties and behavior can lead to a better understanding of 2-D convolutional code designs.

The application of Gröbner bases to linear codes was first introduced by Cooper (1993) [1]. The author proposed polynomial expressions for cyclic codes to derive a decoder based on the Gröbner basis approach. The decoder is built on a set of syndrome equations that are m-variate polynomials; if the number of errors affecting the received codeword does not exceed the error-correcting capability, the roots of the syndrome equations can be obtained. In Borges-Quintana et al. [2,3], the authors introduced a binomial ideal I_C constructed from a binary linear encoder and derived the reduced Gröbner basis of this ideal with respect to a total degree term ordering; this can be applied to the decoding problem of binary linear codes. Bounded distance decoding of arbitrary linear codes using Gröbner bases was formulated by Bulygin and Pellikaan in [4]; solutions of the syndrome equations associated with a certain ideal are essential for this approach. In recent work, Saleemi and Zimmermann (2010) [5,6] showed that binary linear codes can be associated with binomial ideals, and studied the Gröbner bases of these ideals.

In this paper, we present a novel decoding method for 2-D linear block codes based on Gröbner bases, and also provide useful insight for implementing a code search algorithm. Moreover, the merit of our work lies in an alternative decoding method that can be generalized to a non-memoryless coding scheme such as a 2-D convolutional code based on the tail-biting technique.

Matrix Description of Encoder
A k1 × k2 matrix U of input information is encoded, and an n1 × n2 matrix V of codeword bits is produced by the encoding matrices. Every codeword in C can then be written as

V = G1 U G2,

where G1 (of size n1 × k1) is defined as the column-wise encoding matrix and G2 (of size k2 × n2) as the row-wise encoding matrix. All entries of U and V are called bits; in later discussion, each entry of a matrix is likewise called a bit. For a binary channel, a bit takes the value 0 or 1. Without loss of generality, both G1 and G2 are assumed to be systematic; this means

G1 = [ I_{k1} ; P1 ]  (identity stacked above P1)  and  G2 = [ I_{k2} | P2 ],

where P1 and P2 are matrices whose entries are all bits. As a result, the corresponding parity-check matrices can be expressed as

H1 = [ P1 | I_{n1−k1} ]  and  H2 = [ P2^T | I_{n2−k2} ],

where the superscript T denotes the transpose of a matrix. Note that plus and minus are interchangeable over the binary field. These parity-check matrices satisfy the constraints

H1 G1 = 0  and  G2 H2^T = 0.

Moreover, we define the relationship between the parity-check matrices and a codeword as

H1 V = 0  and  V H2^T = 0.

Definition 1. The code rate R_c is defined as the ratio between the number of information digits and the number of codeword digits. From the above discussion,

R_c = (k1 k2) / (n1 n2).

The code rate is one of the key parameters for evaluating code performance. A high code rate implies that there are few redundant check bits among the codeword bits, but a disadvantage is that it is then harder to correct transmission errors.
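To make the two-step construction concrete, here is a minimal sketch in Python. The matrices G1 and G2 are small illustrative systematic encoders assumed for this sketch, not the paper's codes: a 2×2 information matrix U becomes a 3×3 codeword V = G1 U G2 over GF(2).

```python
import numpy as np

# Illustrative systematic encoders (assumptions for this sketch).
G1 = np.array([[1, 0],
               [0, 1],
               [1, 1]])        # n1 x k1 column-wise encoder: identity on top, parity row below
G2 = np.array([[1, 0, 1],
               [0, 1, 1]])     # k2 x n2 row-wise encoder: identity left, parity column right

def encode_2d(U, G1, G2):
    """Two-step encoding: column-wise by G1, then row-wise by G2, mod 2."""
    return (G1 @ U @ G2) % 2

U = np.array([[1, 0],
              [1, 1]])
V = encode_2d(U, G1, G2)
print(V)        # 3x3 binary codeword; code rate R_c = (2*2)/(3*3) = 4/9
```

Note how the systematic structure is visible in V: its top-left 2×2 block reproduces U, and the remaining row and column carry parity checks.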

Minimum Distance
Basic concepts of 1-D linear block codes, such as minimum distance and error correction, are necessary for developing new aspects of 2-D linear block codes. The following definitions summarize these concepts concisely; the reader is referred to the references for further details.

Definition 3. The Hamming weight of a codeword is the number of nonzero elements in the codeword.

Definition 4. The minimum Hamming distance d_min of a code is the smallest Hamming distance between distinct codewords.

The parameter d_min determines the error-correcting capability of a code: a code can correct up to t errors, where t is the upper bound given by

t = ⌊(d_min − 1)/2⌋.

Formulation of the Problem
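For a linear code, d_min equals the minimum Hamming weight over all nonzero codewords, so for a small code both d_min and t can be found by exhaustive enumeration. The sketch below uses a pair of small illustrative systematic encoders (assumptions for this sketch, not the paper's matrices):

```python
import numpy as np
from itertools import product

# Small illustrative systematic encoders (assumptions for this sketch).
G1 = np.array([[1, 0], [0, 1], [1, 1]])    # 3x2 column-wise encoder
G2 = np.array([[1, 0, 1], [0, 1, 1]])      # 2x3 row-wise encoder

def encode_2d(U):
    return (G1 @ U @ G2) % 2

# Linearity: d_min is the minimum weight of a nonzero codeword, so
# enumerate every nonzero 2x2 information matrix.
weights = [int(encode_2d(np.array(bits).reshape(2, 2)).sum())
           for bits in product([0, 1], repeat=4) if any(bits)]

d_min = min(weights)
t = (d_min - 1) // 2     # error-correcting capability
print(d_min, t)
```

For this toy product of two single-parity-check codes the result is d_min = 4 and t = 1, consistent with the well-known fact that the minimum distance of a product code is the product of the component minimum distances (2 × 2 here).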

Binomial Ideal
A binomial ideal associated with a binary linear code plays a crucial role in the decoding process of 2-D linear block codes, since the binomial-ideal techniques developed for 1-D error-correcting codes extend to the 2-D decoding process. Therefore, a short summary of the binomial ideal and its applications is given in the following.
The decoding process associated with the binomial ideal of 1-D linear block codes is given in [3,5,6]. We apply the same fundamental point of view to the decoding of 2-D linear block codes, and work with the binomial ideal

I_C = ⟨ X^V − 1 : V ∈ C ⟩ + ⟨ x_ij^2 − 1 ⟩,

where a codeword matrix V = (v_ij) is mapped to the monomial X^V = ∏_{i,j} x_ij^{v_ij}, and the relations x_ij^2 − 1 keep exponents binary (cf. [2,3]). A matrix V that does not belong to C can be represented as

V = V_m + K,

where V_m ∈ C is the codeword of maximum weight and K is a matrix all of whose entries are nonzero bits; K has the same dimensions as V_m. Then the number of terms in (X^V − 1) equals the number of all possible V_m.
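As a concrete miniature, the binomial ideal and its Gröbner basis can be computed in SymPy. To keep the number of variables small, the sketch uses the 1-D repetition code {000, 111} as a stand-in; the x_i^2 − 1 relations are the binary-exponent assumption following [2,3,5,6]:

```python
from sympy import symbols, groebner

x1, x2, x3 = symbols('x1 x2 x3')

# Binomial ideal of the repetition code {000, 111}: one generator X^c - 1
# per nonzero codeword, plus the relations x_i^2 - 1 that keep exponents binary.
gens = [x1*x2*x3 - 1, x1**2 - 1, x2**2 - 1, x3**2 - 1]

# Groebner basis over GF(2) with a total degree term ordering.
G = groebner(gens, x1, x2, x3, order='grevlex', modulus=2)
print(G)

# X^V for a codeword reduces to 1 modulo G, as membership of
# X^V - 1 in the ideal requires.
quotients, remainder = G.reduce(x1*x2*x3)
print(remainder)   # 1
```

The quotient ring F_2[x1, x2, x3]/I_C has one standard monomial per coset of the code, which is exactly what makes the remainder of the division algorithm usable as an error identifier.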
Definition 5. The bit error probability (P_e) of a binary linear code is the probability that a bit of a transmitted codeword is erroneously received at the destination.
Because the transmission channel is assumed to have the bit error probability of Definition 5, a transmitted codeword may suffer erroneous bits, and the received word can be expressed as

Ṽ = V + E,   (12)

where Ṽ is called the received codeword and E the error matrix introduced by the channel. In the simulations, we assume the erroneous bits e_ij of E, for 1 ≤ i ≤ n1 and 1 ≤ j ≤ n2, occur at random positions and that their number equals the error-correcting capability t of the code. Ṽ, V, and E in Eq. (12) are represented by n1 × n2 matrices, and V and Ṽ can be mapped into monomials as in Eq. (10). The purpose of the decoding process is to find the unknown elements of an estimated error matrix Ê, whose entries are ê_ij for 1 ≤ i ≤ n1 and 1 ≤ j ≤ n2.
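The simulation assumption of Eq. (12), that exactly t randomly placed bits of a codeword are flipped, can be sketched as follows (the 3 × 3 codeword V here is a placeholder, not one of the paper's codes):

```python
import numpy as np

def add_random_errors(V, t, rng):
    """Flip exactly t distinct, randomly chosen bits of the codeword
    matrix V, returning the received word V~ = V + E and the error
    matrix E (addition over GF(2))."""
    n1, n2 = V.shape
    E = np.zeros_like(V)
    positions = rng.choice(n1 * n2, size=t, replace=False)
    E[np.unravel_index(positions, (n1, n2))] = 1
    return (V + E) % 2, E

rng = np.random.default_rng(0)
V = np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1]])   # placeholder codeword
V_tilde, E = add_random_errors(V, t=1, rng=rng)
print(V_tilde)
```

Because addition and subtraction coincide over GF(2), adding E back to Ṽ recovers V, which is exactly the correction step the decoder will perform once Ê has been estimated.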

Algorithm for Decoding 2-D Block Codes
Let G be the Gröbner basis of the binomial ideal I_C, let the received codeword be Ṽ ∈ F_2^n with n = n1 n2, and let X^Ṽ denote the corresponding monomial.
Proposition 2. If X^Ṽ reduces to r modulo G, denoted X^Ṽ →_G^+ r, then the remainder r of the m-variate division algorithm identifies the error matrix E of the received codeword Ṽ.
Proof. If G = {g1, g2, . . ., gk} is a Gröbner basis of the ideal I_C, then for any V ∈ C the binomial X^V − 1 can be expressed as a unique combination of the basis elements; i.e., there exist unique λ1, λ2, . . ., λk with X^V − 1 = λ1 g1 + · · · + λk gk, so that X^V →_G^+ 1. Since Ṽ = V + E and the relations x_ij^2 − 1 lie in I_C, we have X^Ṽ ≡ X^V · X^E and hence X^Ṽ →_G^+ X^E = r. This implies that the remainder identifies E.
If X^Ṽ − 1 reduces to zero modulo G, then Ṽ ∈ C, the estimated codeword equals the received codeword, and hence Ṽ contains no errors; otherwise, Ṽ contains errors. A short summary of the m-variate division algorithm can be found in [7]. The correctable percentage of a code is defined as the ratio of the number of detected and corrected codewords (V̂ = V) to the number of received codewords Ṽ.
Algorithm 1: Decoding of error-correcting code by means of Gröbner bases.
Step 3: Construct I C with respect to any term ordering.
Step 4: Compute Gröbner basis G for I C .
Step 5: Use the m-variate division algorithm to determine Ê by the reduction process.

Step 6: Map the remainder of the m-variate division algorithm to the error positions. For instance, if the remainder is x11 x12 x23 x32 and the received codeword Ṽ is a 3 × 3 matrix, then the estimated error matrix is

Ê = [ 1 1 0 ; 0 0 1 ; 0 1 0 ].

As shown in Fig. 1, Ṽ passes into the 2-D linear block decoder to produce Ê, which is then used to correct Ṽ via V̂ = Ṽ + Ê, where V̂ is the estimated codeword. If V̂ is correct, i.e., V̂ = V, the error correction is successful; otherwise Ṽ still contains errors. (In the binary field, addition and subtraction are interchangeable.) The estimated information matrix Û can be expressed as

Û = Z1 V̂ Z2,

where Z1 is a left inverse of G1 and Z2 is a right inverse of G2; the construction of these matrices is not demonstrated in this work.
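A miniature run of Steps 3-6 can be shown for the repetition code {000, 111}, a 1-D stand-in chosen to keep the Gröbner basis small; the x_i^2 − 1 generators are the binary-exponent assumption following [2,3,5,6]:

```python
from sympy import symbols, groebner

x1, x2, x3 = symbols('x1 x2 x3')

# Steps 3-4: construct the binomial ideal and its Groebner basis over GF(2).
G = groebner([x1*x2*x3 - 1, x1**2 - 1, x2**2 - 1, x3**2 - 1],
             x1, x2, x3, order='grevlex', modulus=2)

# Steps 5-6: the received word 110 (codeword 111 with one flipped bit)
# maps to the monomial x1*x2; the remainder of the division identifies E.
quotients, remainder = G.reduce(x1*x2)
print(remainder)    # x3, i.e. E = 001 and V_hat = 110 + 001 = 111
```

The remainder x3 points to the single erroneous position, so adding E = 001 to the received word recovers the transmitted codeword 111.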

Simulation Result
Two examples are selected to demonstrate the efficiency of the decoding process using Algorithm 1, focusing on the parameters discussed above. Singular [8] is chosen to implement the algebraic procedures: ideals, syzygy modules, the m-variate division algorithm, and Gröbner bases. All computations were run on a Windows operating system with a 2.67 GHz processor and 4.00 GB of RAM.
Example 7. This example simulates the error-correcting performance of the decoding process by means of Algorithm 1 for five different encoding matrices. To simplify the notation, the superscript (i) is introduced to distinguish the codes. All encoding matrices are given in Table 1; we do not list all entries of each code. Assume that the number of random erroneous bits of E in each received codeword equals the error-correcting capability t of the code; for example, in Table 2 the number of random erroneous bits of E for G^(2) is one. There are 16000 received codewords, and the performance of the algorithm is measured in terms of the correctable percentage. The ideal I_C is constructed with respect to the degree lexicographical ordering; for one of the codes it can be expressed as

I_C = ⟨ x13 x14 x22 x23 x31 x33 − 1, x13 x14 x21 x23 x32 x33 − 1, x12 x13 x23 x24 x31 x33 − 1, x12 x13 x21 x23 x33 x34 − 1, x11 x13 x23 x24 x32 x33 − 1, x11 x13 x22 x23 x33 x34 − 1, . . . ⟩.

Table 2 reports the computation results. The columns give the code matrix, the minimum distance, the error-correcting capability, the number of detected and corrected codewords (V̂ = V), the number of received codewords, and the correctable percentage, respectively. As the table shows, the correctable percentage is exactly 100 for all codes except G^(2). Moreover, comparing codes with the same d_min but different encoding-matrix sizes, the size of V may affect the correction performance.
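The correctable-percentage experiment can be imitated on a toy scale. The sketch below substitutes exhaustive minimum-distance decoding for the Gröbner-basis decoder, purely to show how the metric is tallied; the encoders are illustrative assumptions, not the paper's codes:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
G1 = np.array([[1, 0], [0, 1], [1, 1]])    # illustrative 3x2 column-wise encoder
G2 = np.array([[1, 0, 1], [0, 1, 1]])      # illustrative 2x3 row-wise encoder

# All 16 codewords of this tiny code (d_min = 4, so t = 1).
codebook = [(G1 @ np.array(b).reshape(2, 2) @ G2) % 2
            for b in product([0, 1], repeat=4)]

def nearest_codeword(V_tilde):
    """Exhaustive minimum-distance decoding (a stand-in for the
    Groebner-basis decoder, feasible only for tiny codes)."""
    return min(codebook, key=lambda C: int(np.abs(C - V_tilde).sum()))

trials, corrected, t = 2000, 0, 1
for _ in range(trials):
    V = codebook[rng.integers(len(codebook))]
    E = np.zeros(9, dtype=int)
    E[rng.choice(9, size=t, replace=False)] = 1   # exactly t random bit flips
    V_tilde = (V + E.reshape(3, 3)) % 2
    if (nearest_codeword(V_tilde) == V).all():
        corrected += 1

print(100.0 * corrected / trials)   # correctable percentage
```

Since the number of injected errors never exceeds t, every trial decodes correctly and the correctable percentage is 100, mirroring the behavior reported for most codes in Table 2.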
Table 3 indicates that, for this decoding process, degree lexicographical ordering is the most efficient term ordering; here lp, dp, and Dp are the abbreviations for lexicographical ordering, degree reverse lexicographical ordering, and degree lexicographical ordering, respectively. The reader is referred to the term orderings in [7]. However, for other problems, other term orderings can be more effective. As the graph of computational time shows, the time for computing the Gröbner basis grows exponentially, which is clearly seen as the number of variables varies from 18 to 24; this is a consequence of the ideal membership problem. The binomial ideal and the m-variate division algorithm give the final estimated error matrix. By testing erroneous bits in received codewords for several different encoding matrices, the algorithm appears reliable in correcting up to the error-correcting capability of the code. The large number of variables (indeterminates) in the binomial ideal remains a problem, since most of the computational time is spent finding the Gröbner basis; a fast algorithm for computing this basis, or a method for reducing the number of variables in the ideal, is therefore required. Moreover, the approach has the potential to be applied to other codes. Given the several advantages of 2-D codes over 1-D codes, it is feasible to use 2-D codes for the transmission of images and animations in the future [9].
Figure 1: 2-D decoding process with forward error correction.

Table 1: Codes for simulations G

Table 2: Correctable percentage obtained by using Algorithm 1 to decode the error-correcting codes: G

Table 3: Correctable percentage of the decoding process using Algorithm 1 for G