The encoding function is

    E(x7, x6, ..., x1) = (x8, x7, ..., x1),

where x8 = x7 + x6 + ... + x1 with addition in Z2.

Let x = (x1, ..., xn) and y = (y1, ..., yn) be binary n-tuples. The Hamming distance or distance, d(x, y), between x and y is the number of bits in which x and y differ. The distance between two codewords is the minimum number of transmission errors required to change one codeword into the other. The minimum distance for a code, dmin, is the minimum of all distances d(x, y), where x and y are distinct codewords. The weight, w(x), of a binary codeword x is the number of 1's in x. Clearly, w(x) = d(x, 0), where 0 = (00···0).

Example 6. Let x = (10101), y = (11010), and z = (00011) be all of the codewords in some code C. Then we have the following Hamming distances:

    d(x, y) = 4,    d(x, z) = 3,    d(y, z) = 3.

The minimum distance for this code is 3. We also have the following weights:

    w(x) = 3,    w(y) = 3,    w(z) = 2.

The following proposition lists some basic properties of the weight of a codeword and the distance between two codewords. The proof is left as an exercise.

Proposition 7.2. Let x, y, and z be binary n-tuples. Then

1. w(x) = d(x, 0);
2. d(x, y) ≥ 0;
3. d(x, y) = 0 exactly when x = y;
4. d(x, y) = d(y, x);
5. d(x, y) ≤ d(x, z) + d(z, y).

The weights in a particular code are usually much easier to compute than the Hamming distances between all codewords in the code. If a code is set up carefully, we can use this fact to our advantage.

Suppose that x = (1101) and y = (1100) are codewords in some code. If we transmit (1101) and an error occurs in the rightmost bit, then (1100) will be received. Since (1100) is a codeword, the decoder will decode (1100) as the transmitted message. This code is clearly not appropriate for error detection. The problem is that d(x, y) = 1. If x = (1100) and y = (1010) are codewords, then d(x, y) = 2. If x is transmitted and a single error occurs, then y can never be received. Table 7.2 gives the distances between all 4-bit codewords in which the first
three bits carry information and the fourth is an even parity check bit. We can see that the minimum distance here is 2; hence, the code is suitable as a single error-detecting code.

Table 7.2. Distances between 4-bit codewords

           0000  0011  0101  0110  1001  1010  1100  1111
    0000     0     2     2     2     2     2     2     4
    0011     2     0     2     2     2     2     4     2
    0101     2     2     0     2     2     4     2     2
    0110     2     2     2     0     4     2     2     2
    1001     2     2     2     4     0     2     2     2
    1010     2     2     4     2     2     0     2     2
    1100     2     4     2     2     2     2     0     2
    1111     4     2     2     2     2     2     2     0

To determine exactly what the error-detecting and error-correcting capabilities of a code are, we need to analyze the minimum distance for the code. Let x and y be codewords. If d(x, y) = 1 and an error occurs where x and y differ, then x is changed to y. The received codeword is y and no error message is given. Now suppose d(x, y) = 2. Then a single error cannot change x to y. Therefore, if dmin = 2, we have the ability to detect single errors. However, suppose that d(x, y) = 2, y is sent, and a noncodeword z is received such that

    d(x, z) = d(y, z) = 1.

Then the decoder cannot decide between x and y. Even though we are aware that an error has occurred, we do not know what the error is.

Suppose dmin ≥ 3. Then the maximum-likelihood decoding scheme corrects all single errors. Starting with a codeword x, an error in the transmission of a single bit gives y with d(x, y) = 1, but d(z, y) ≥ 2 for any other codeword z ≠ x. If we do not require the correction of errors, then we can detect multiple errors when a code has a minimum distance that is greater than 3.

Theorem 7.3. Let C be a code with dmin = 2n + 1. Then C can correct any n or fewer errors. Furthermore, any 2n or fewer errors can be detected in C.

Proof. Suppose that a codeword x is sent and the word y is received with at most n errors. Then d(x, y) ≤ n. If z is any codeword other than x, then

    2n + 1 ≤ d(x, z) ≤ d(x, y) + d(y, z) ≤ n + d(y, z).

Hence, d(y, z) ≥ n + 1 and y will be correctly decoded as x. Now suppose that x is transmitted and y is received and that at least one error has occurred, but not more than 2n errors. Then 1 ≤ d(x, y) ≤ 2n. Since the minimum distance
between codewords is 2n + 1, y cannot be a codeword. Consequently, the code can detect between 1 and 2n errors.

Example 7. In Table 7.3, the codewords c1 = (00000), c2 = (00111), c3 = (11100), and c4 = (11011) determine a single error-correcting code.

Table 7.3. Hamming distances for an error-correcting code

            00000  00111  11100  11011
    00000       0      3      3      4
    00111       3      0      4      3
    11100       3      4      0      3
    11011       4      3      3      0

Historical Note

Modern coding theory began in 1948 with C. Shannon's paper, "A Mathematical Theory of Communication" [7]. This paper offered an example of an algebraic code, and Shannon's Theorem proclaimed exactly how good codes could be expected to be. Richard Hamming began working with linear codes at Bell Labs in the late 1940s and early 1950s after becoming frustrated because the programs that he was running could not recover from simple errors generated by noise. Coding theory has grown tremendously in the past several years. The Theory of Error-Correcting Codes, by MacWilliams and Sloane [5], published in 1977, already contained over 1500 references. Linear codes (Reed-Muller (32, 6)-block codes) were used on NASA's Mariner space probes. More recent space probes such as Voyager have used what are called convolutional codes. Currently, very active research is being done with Goppa codes, which are heavily dependent on algebraic geometry.
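The definitions of Hamming distance, weight, and minimum distance translate directly into a few lines of Python. The following is a minimal sketch; the helper names hamming_distance, weight, and min_distance are illustrative choices, not from the text.

```python
from itertools import combinations

def hamming_distance(x, y):
    """d(x, y): the number of bit positions in which x and y differ."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

def weight(x):
    """w(x): the number of 1's in x, which equals d(x, 0)."""
    return hamming_distance(x, (0,) * len(x))

def min_distance(code):
    """dmin: the minimum of d(x, y) over all distinct codewords x, y."""
    return min(hamming_distance(x, y) for x, y in combinations(code, 2))

# The codewords of Example 6.
x = (1, 0, 1, 0, 1)
y = (1, 1, 0, 1, 0)
z = (0, 0, 0, 1, 1)

print(hamming_distance(x, y))           # 4
print(hamming_distance(x, z))           # 3
print(hamming_distance(y, z))           # 3
print(weight(x), weight(y), weight(z))  # 3 3 2
print(min_distance([x, y, z]))          # 3
```

Running min_distance over the eight even-parity codewords of Table 7.2 returns 2, matching the table.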
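The even parity check encoding and the maximum-likelihood decoding scheme behind Theorem 7.3 can also be sketched in Python. In this sketch the names encode and ml_decode are hypothetical, and the code used is the single error-correcting code of Example 7, whose minimum distance is 3 = 2·1 + 1, so one error is correctable.

```python
def encode(bits):
    """Append an even parity check bit, computed as the sum of the
    information bits in Z2."""
    return bits + (sum(bits) % 2,)

def hamming_distance(x, y):
    """d(x, y): the number of bit positions in which x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def ml_decode(received, code):
    """Maximum-likelihood decoding: pick the codeword nearest to the
    received word in Hamming distance."""
    return min(code, key=lambda c: hamming_distance(received, c))

# The single error-correcting code of Example 7 (dmin = 3).
code = [(0, 0, 0, 0, 0), (0, 0, 1, 1, 1), (1, 1, 1, 0, 0), (1, 1, 0, 1, 1)]

sent = (0, 0, 1, 1, 1)
received = (0, 1, 1, 1, 1)   # a single transmission error in the second bit
print(ml_decode(received, code) == sent)   # True: the single error is corrected
```

A received word with a single error sits at distance 1 from the transmitted codeword but at distance at least 2 from every other codeword, which is why the nearest-codeword rule above succeeds whenever dmin ≥ 3.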