Wednesday, June 5, 2019

Differences Between Linear Block And Convolutional Codes Computer Science Essay

Generally, coding theory is the detailed analysis of the properties of codes and their suitability for a given application. Codes are mainly applied in cryptography, error correction, and data compression, and nowadays they are also used for network coding. This essay is mostly concerned with linear block codes, convolutional codes, and turbo codes respectively.

LINEAR BLOCK CODES

In coding theory, a linear block code is an error-correcting code for which any linear combination of codewords is again a codeword. In simpler words, a linear block code possesses the linearity property: the sum of any two codewords is also a codeword. Error-correcting codes are broadly divided into block codes and convolutional codes, while turbo codes combine these two types. Linear codes allow more efficient encoding and decoding algorithms than other codes. (http://en.wikipedia.org/wiki/Linear_code)

CONVOLUTIONAL CODES

The main principle of a convolutional code is that each output codeword symbol is a weighted sum of various input message symbols. This resembles the convolution used in LTI systems, where we find the output of a system from its impulse response and the respective input.
Hence the output of a convolutional encoder can be obtained by convolving the input bits with the contents of the convolutional encoder registers. (http://en.wikipedia.org/wiki/Convolutional_code)

TURBO CODES

Turbo codes, developed in 1993, are a class of high-performance forward error correction codes, and they were the first practical codes to come close to the channel capacity: the code rate at which reliable communication is still possible when a specific noise level is present. Berrou, Glavieux and Thitimajshima introduced these codes in 1993 in their paper "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-Codes". After the introduction of these parallel turbo codes many other types were discovered, mainly including repeat-accumulate codes and serial versions. Even conventional FEC systems have since adopted iterative turbo decoding methods. (http://en.wikipedia.org/wiki/Turbo_code)

2. CORE OF THE PAPER

1. PRINCIPLES OF LINEAR BLOCK CODES

A block code of length n with 2^k codewords is said to be a linear (n, k) code if and only if its 2^k codewords form a k-dimensional subspace of the vector space of all n-tuples over GF(2). More generally, a linear code of length n and rank k is a linear subspace C, with dimension k, of the vector space F_q^n, where F_q is the finite field with q elements. Such a code with parameter q is called a q-ary code; it is a binary or ternary code if and only if q = 2 or q = 3 respectively.
Linear block codes are briefly described by their symbol alphabets (e.g. binary, ternary) and by the parameters (n, m, d_min), where
n is the codeword length in symbols,
m is the number of source symbols used for encoding, and
d_min is the code's minimum Hamming distance.

Linear block codes come in many types:
Cyclic codes
Repetition codes
Parity codes
Polynomial codes
Reed-Solomon codes
Algebraic geometric codes
Reed-Muller codes
Perfect codes

A. GENERATOR MATRIX AND PARITY CHECK MATRIX

Since a linear code is a linear subspace C of F_q^n, any codeword c in C can be represented as a linear combination of a set of basis vectors g_1, ..., g_k:

c = m_1 g_1 + m_2 g_2 + ... + m_k g_k = mG

where G is the generator matrix and m is the message.

B. THEORETICAL CONCEPTS

As a linear subspace of F_q^n, the entire code C is spanned by a minimal set of codewords. These codewords are collected as the rows of a matrix G known as the generator matrix of the code C. Its standard form is G = (I_k | A), where I_k is the k x k identity matrix and A is a k x (n - k) matrix.

A check matrix is a matrix representing a linear function H whose kernel is C. If the generator matrix is in standard form, G = (I_k | A), then H = (A^T | I_{n-k}) is a check matrix for C over GF(2).

C. HAMMING CODES

Hamming codes were the first codes developed for the purpose of error correction, and they are widely used in digital communication systems. For any positive integer r >= 2, there exists a [2^r - 1, 2^r - r - 1, 3] Hamming code. For example, with a suitable generator matrix G and parity-check matrix H, the resulting linear block code is the [7, 4, 3] Hamming code.

D. HADAMARD CODE

The Hadamard code is a [2^r, r, 2^{r-1}] linear code capable of correcting many errors: it has minimum distance 2^{r-1} and can therefore correct 2^{r-2} - 1 errors. Example: the linear block code with the appropriate generator matrix is an [8, 3, 4] Hadamard code.

E. MERITS

Linear codes are a very prominent class of error-correcting codes.
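Both the encoding map c = mG and the minimum-distance computation can be sketched in a few lines of Python. The particular submatrix A below is one common textbook choice for a [7, 4, 3] Hamming code and is an illustrative assumption, since the essay does not preserve its matrices; for a linear code the minimum distance equals the minimum weight of a nonzero codeword, which the sketch checks by enumeration.

```python
import itertools
import numpy as np

# Standard-form generator G = (I_4 | A) for a [7,4,3] Hamming code.
# This A is one common textbook choice (an illustrative assumption).
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), A])
# Parity-check matrix H = (A^T | I_3): H c^T = 0 (mod 2) for every codeword c.
H = np.hstack([A.T, np.eye(3, dtype=int)])

def encode(m):
    """Encode a 4-bit message as c = mG over GF(2)."""
    return np.mod(np.array(m) @ G, 2)

def min_distance(G):
    """Minimum weight over all nonzero codewords (= d for a linear code)."""
    k = G.shape[0]
    return min(int(np.mod(np.array(m) @ G, 2).sum())
               for m in itertools.product([0, 1], repeat=k) if any(m))

c = encode([1, 0, 1, 1])
print(c, np.mod(H @ c, 2))   # codeword and its all-zero syndrome
print(min_distance(G))       # 3, so the code corrects (3 - 1) // 2 = 1 error
```

Note that only k = 4 rows specify all 2^4 = 16 codewords, which is the specification advantage of linear codes discussed below.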
Linear codes generally offer simple descriptions, easy encoding procedures, nice structural properties, and conceptually simple decoding.
If C is a linear code, then its minimum distance d(C) is easy to compute, since it equals the minimum weight of a nonzero codeword.
Linear codes have simple specifications: to specify an (n, k) linear code it is sufficient to list k linearly independent codewords, whereas a nonlinear code must generally be specified by listing all of its codewords.
The most important linear block codes are Hamming codes, cyclic codes, Reed-Solomon codes, BCH codes, LDPC codes, and turbo codes respectively.

F. DEMERITS

The main drawback of the Hamming code is its fixed Hamming distance, and it is also very difficult to build coders for large blocks. The fixed Hamming distance gives detection of two error bits and correction of a single errored bit. A code that allows the implementor to select the desired Hamming distance would be more beneficial. A practical Hamming coder allows codewords of only about 31 bits at most.

G. APPLICATIONS

(1) The properties of linear block codes are used in many applications. Trellis shaping is generated by using the syndrome coset uniqueness property of linear block codes. (2) They are used in sensor networks for distributed source coding. (3) With these codes, a phase shift can easily be reversed and detected, and multiple signals can be sent over one channel. (4) Other applications include CDMA-based mobile phone systems. (References: William E. Ryan and Shu Lin (2009), Channel Codes: Classical and Modern, Cambridge University Press; Thomas M. Cover and Joy A.
Thomas (1991), Elements of Information Theory, John Wiley and Sons.)

2. PRINCIPLES OF CONVOLUTIONAL CODES

Convolutional codes are a category of error correction codes in which an n-bit coded symbol is produced from each m-bit information symbol, so that the code rate is m/n (n >= m), and the transformation is a function of the last K information symbols, where K is the constraint length of the code.

A. CONVOLUTIONAL ENCODER

To encode data convolutionally, we begin with K memory registers, each holding one input bit. Unless another value is specified, all memory registers initially hold 0. The encoder has n modulo-2 adders (an adder can be implemented with a single Boolean XOR gate) and n generator polynomials, one per adder. An input bit m1 is fed into the leftmost register. Using the existing values in the remaining registers and the respective generator polynomials, the encoder outputs n bits. All register values are then shifted to the right, and the encoder waits for the next input bit. If there are no remaining input bits, the encoder continues to output bits until all registers have returned to the zero state. The figure below shows a rate-1/3 encoder with constraint length K = 3 and generator polynomials g1 = (1,1,1), g2 = (0,1,1), and g3 = (1,0,1). The output bits are calculated as follows:

n1 = m1 + m0 + m-1
n2 = m0 + m-1
n3 = m1 + m-1

Img. 1. Rate-1/3 non-recursive, non-systematic convolutional encoder with constraint length 3.

B. RECURSIVE AND NON-RECURSIVE CODES

The figure above shows a non-recursive encoder; the one below shows a recursive one: a rate-1/2 recursive, systematic convolutional encoder with constraint length 4. It is easy to observe that the encoded input is included in the output sequence as well. Such codes are called systematic, and codes without this property are called non-systematic.
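The rate-1/3, K = 3 feed-forward encoder just described (generators g1 = (1,1,1), g2 = (0,1,1), g3 = (1,0,1), registers starting at 0) can be sketched in a few lines of Python; this is an illustrative sketch, not an implementation from the essay.

```python
# Rate-1/3, K = 3 convolutional encoder from the text:
# outputs n1 = m1 + m0 + m-1, n2 = m0 + m-1, n3 = m1 + m-1 (mod 2).
def conv_encode(bits, gens=((1, 1, 1), (0, 1, 1), (1, 0, 1))):
    m0, m_1 = 0, 0                    # the two memory registers, initially 0
    out = []
    for m1 in bits:                   # m1 is the incoming bit
        window = (m1, m0, m_1)
        for g in gens:                # one modulo-2 adder per generator
            out.append(sum(b & t for b, t in zip(window, g)) % 2)
        m0, m_1 = m1, m0              # shift the register values to the right
    return out

print(conv_encode([1, 0, 1]))   # three output bits per input bit -> rate 1/3
```

Each input bit yields n = 3 output bits, which is exactly the rate-1/3 behaviour of the encoder in Img. 1.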
Generally, systematic codes are recursive, whereas non-systematic codes are non-recursive. The name "convolutional encoder" arises because the encoder performs a convolution of the input stream with the encoder's impulse responses:

y_i^(j) = sum_{k} h_k^(j) x_{i-k}  (mod 2)

where x is the input sequence, y^(j) is the output sequence of output j, and h^(j) is the impulse response of output j.

A convolutional encoder is a discrete linear time-invariant system. Every output of the encoder is described by its own transfer function, which is connected to the impulse response through the z-transform. For a feed-forward encoder each transfer function H_j(z) = sum_k h_k^(j) z^(-k) is a polynomial in z^(-1); recursive encoders have rational transfer functions. Defining m as the maximum polynomial degree of the H_j(z), the constraint length is K = m + 1.

C. TRELLIS DIAGRAM

A convolutional encoder is also a finite state machine: an encoder with n binary memory cells has 2^n states. Imagine that the encoder shown in figure 1 has a 1 in the left memory cell (m0) and a 0 in the right one (m-1); we refer to such a state as 10. Depending on the input bit, at the next step the encoder can move either to state 11 or to state 01. This graph also gives the idea behind decoding: if a received sequence does not fit a path through the graph, it contains errors, and we should select the nearest correct sequence. All possible transitions can be drawn as a trellis, where dotted lines indicate transitions on input 1 and solid lines transitions on input 0. A path through the trellis represents an actual encoded sequence; one such path is shown in red as an example.

D. FREE DISTANCE AND ERROR CORRECTION CAPABILITY

The minimum distance between different encoded sequences is called the free distance d_free. The number of errors the code can correct is its correcting capability:

t = floor((d_free - 1) / 2).
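The convolution view above can be checked directly: encoding via np.convolve with each impulse response, then interleaving the n output streams, reproduces the shift-register encoder's output. The rate-1/3 generators g1 = (1,1,1), g2 = (0,1,1), g3 = (1,0,1) from the earlier example are assumed.

```python
import numpy as np

# Output j is the input stream convolved (mod 2) with impulse response h_j,
# whose taps are exactly the generator g_j of the feed-forward encoder.
def conv_encode_via_convolution(bits, gens=((1, 1, 1), (0, 1, 1), (1, 0, 1))):
    streams = [np.mod(np.convolve(bits, g), 2) for g in gens]
    # interleave the n output streams, truncated to the length of the input
    return [int(s[i]) for i in range(len(bits)) for s in streams]

print(conv_encode_via_convolution([1, 0, 1]))
```

Because np.convolve computes y[n] = sum_k x[k] g[n-k], the tap g[0] multiplies the current input bit, matching the shift-register description.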
F. DECODING CONVOLUTIONAL CODES

Convolutional codes can be decoded by several algorithms. The Viterbi algorithm is universally used for relatively small values of K, as it is highly parallelizable and provides maximum-likelihood performance. Viterbi decoders are easy to implement in VLSI hardware, and in software on CPUs with SIMD instruction sets.

Longer constraint-length codes are more practically decoded with sequential decoding algorithms, of which the Fano algorithm is the best known. Unlike Viterbi decoding, sequential decoding is not maximum likelihood, but its complexity increases only slightly with constraint length, allowing the use of long, strong constraint-length codes. Such codes were used in the Pioneer program of the early 1970s to Jupiter and Saturn, but were later superseded by shorter, Viterbi-decoded codes, usually concatenated with large Reed-Solomon error correction codes, which steepen the overall bit-error-rate curve and produce extremely low residual undetected error rates.

Both the Viterbi and sequential decoding algorithms give hard decisions: the bits that form the most likely codeword. An approximate confidence measure can be added to each bit by using the soft-output Viterbi algorithm (SOVA); maximum a posteriori (MAP) soft decisions for each bit can be obtained by using the BCJR algorithm.

G. POPULAR CONVOLUTIONAL CODES

A Viterbi-decoded convolutional code with constraint length K = 7 and rate 1/2 was used in the Voyager program. Longer constraint lengths produce more powerful codes, but the complexity of the Viterbi algorithm increases exponentially with constraint length, confining these more powerful codes to deep-space missions where the extra performance is worth the increased decoder complexity.
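Before turning to specific missions, here is a toy hard-decision Viterbi decoder for the rate-1/3, K = 3 code described earlier; a state is the register pair (m0, m-1), the branch metric is Hamming distance, and only the best path per state survives each step. This is an illustrative sketch, not an optimized decoder.

```python
GENS = ((1, 1, 1), (0, 1, 1), (1, 0, 1))   # rate-1/3, K = 3 code from the text

def step(state, bit):
    """One encoder transition from state = (m0, m-1) on an input bit."""
    window = (bit,) + state
    out = tuple(sum(b & t for b, t in zip(window, g)) % 2 for g in GENS)
    return (bit, state[0]), out            # new state, 3 output bits

def viterbi_decode(received, n_bits):
    """Maximum-likelihood input sequence for a hard-decision channel."""
    metrics = {(0, 0): (0, [])}            # best (cost, path) per state
    for i in range(n_bits):
        chunk = received[3 * i: 3 * i + 3]
        nxt = {}
        for state, (cost, path) in metrics.items():
            for bit in (0, 1):
                new_state, out = step(state, bit)
                d = sum(a != b for a, b in zip(out, chunk))  # Hamming metric
                cand = (cost + d, path + [bit])
                if new_state not in nxt or cand[0] < nxt[new_state][0]:
                    nxt[new_state] = cand  # keep the survivor path only
        metrics = nxt
    return min(metrics.values())[1]        # path of the cheapest end state

coded = [1, 0, 1, 1, 1, 0, 0, 1, 0]        # encoding of [1, 0, 1]
coded[4] ^= 1                              # flip one bit: channel noise
print(viterbi_decode(coded, 3))            # recovers [1, 0, 1]
```

The number of survivor states here is 2^(K-1) = 4, which illustrates why complexity grows exponentially with the constraint length.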
The Cassini probe to Saturn, Mars Pathfinder, and the Mars Exploration Rovers use a K = 15, rate 1/6 code; it performs about 2 dB better than the simpler K = 7 code, at a cost of 256x in decoding complexity.

H. MERITS
Convolutional codes are frequently used to correct errors in noisy channels.
They perform well even in very bad channel conditions.
They are often used in satellite communications, mobile telephony, and voice-band modems.
They generally give good results in lower-noise environments.
They are used in voice-band modems (V.32, V.17, V.34) and in GSM mobile phones.

I. DEMERITS
Even though the convolutional encoder itself is very simple, decoding is a complex task.
Convolutional codes cannot by themselves provide strong protection against noise.

J. APPLICATIONS
Convolutional codes are used in numerous applications that aim to achieve reliable data transfer, including radio, mobile communications, digital video, and satellite communications. They are efficient when implemented in concatenation with a hard-decision code (for example, Reed-Solomon). (References: tutorial on convolutional coding and decoding; the online textbook Information Theory, Inference, and Learning Algorithms by David J. C. MacKay.)

3. PRINCIPLES OF TURBO CODES

By using a block code or a convolutional code with a long constraint length and a large block length, it is theoretically possible to approach the Shannon limit, but this approach is impractical because of the processing power required to decode. Turbo codes overcome this drawback by using recursive coders and iterative soft decoders. The recursive coder makes convolutional codes with short constraint lengths behave like block codes with a large constraint length, and the iterative soft decoder efficiently improves the estimate of the received message.
The figure below shows a classical encoder implementation that illustrates the general design of turbo codes. This encoder sends three sub-blocks of bits. The first sub-block is the m-bit block of payload data. The second sub-block carries n/2 parity bits for the payload, computed with a recursive systematic convolutional (RSC) code. The third sub-block carries n/2 parity bits for a known permutation of the payload, once again computed with an RSC code; the total number of transmitted bits is m + n. The interleaver is the device that performs the permutation of the payload data. The turbo encoder includes two identical RSC coders, C1 and C2, connected to each other in a concatenation scheme referred to as parallel concatenation. M acts as a memory register. The delay line and the interleaver force the input bits to appear in different sequences; because of the encoders' systematic nature, the input sequence appears at both outputs after the first iteration.

DECODER

The decoder is built as a serial connection of two elementary decoders, DEC1 and DEC2. DEC1 operates at the lower speed R1 and is intended for the C1 encoder, while DEC2 is for C2. DEC1 yields a soft decision and causes a delay L1; DEC2 causes a delay L2. An interleaver installed between the two decoders scatters the error bursts coming from the DEC1 output. DI stands for the demultiplexing and insertion module; it works as a switch, redirecting input bits to DEC1 at one moment and to DEC2 at another. In the OFF state, it feeds both inputs with padding bits (zeros).

Consider a memoryless AWGN channel, and assume that at the k-th iteration the decoder receives a pair of random variables:

x_k = (2 d_k - 1) + a_k,
y_k = (2 Y_k - 1) + b_k,

where d_k is a data bit, Y_k is the corresponding encoder output bit, and a_k, b_k are independent noise components with the same variance. DEC1 produces a soft decision, the log-likelihood ratio Lambda(d_k), and passes it to DEC2.
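Returning to the encoder side, the parallel concatenation described above can be illustrated with a toy Python sketch. The accumulator-style `rsc_parity`, the payload, and the permutation below are illustrative assumptions, not the actual RSC codes of a standardized turbo code.

```python
# Toy sketch of parallel concatenation (illustrative, not a real turbo code):
# the same payload feeds two recursive systematic (RSC-style) encoders,
# the second one through an interleaver (a fixed permutation).
def rsc_parity(bits):
    """Minimal recursive encoder: an accumulator, p_k = d_k xor p_{k-1}."""
    p, out = 0, []
    for d in bits:
        p ^= d              # feedback makes the encoder recursive
        out.append(p)
    return out

def turbo_encode(payload, perm):
    systematic = list(payload)                        # 1st sub-block: the data
    parity1 = rsc_parity(payload)                     # 2nd: RSC parity
    parity2 = rsc_parity([payload[i] for i in perm])  # 3rd: parity of permuted data
    return systematic, parity1, parity2

payload = [1, 0, 1, 1]
perm = [2, 0, 3, 1]     # the interleaver: a fixed pseudo-random permutation
print(turbo_encode(payload, perm))
```

With m payload bits and two m-bit parity streams, the overall rate of this sketch is 1/3, mirroring the classical turbo configuration.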
The a posteriori probability (APP) of the data bit d_k, written p(d_k = i) for i in {0, 1}, is the probability of interpreting a received bit as i. The Viterbi algorithm is inappropriate for calculating the APP, so the BCJR algorithm is used in DEC1; for DEC2, which produces a hard decision (a decoded bit), the Viterbi algorithm is an appropriate one.

MERITS
Among the many practical error correction methods known today, turbo codes and low-density parity-check codes come closest to the Shannon limit, the theoretical limit on the maximum rate of information transfer over a noisy channel.
Turbo codes increase the data rate without increasing the transmitted power, or equivalently decrease the power needed to transmit at a given data rate.
They generally produce the best performance results in terms of throughput and latency.

DEMERITS
The main disadvantage is the relatively high decoding complexity.
They also have relatively high latency, which makes them unsuitable for some applications, including some satellite links where the transmission distance already adds latency because of the limited speed of light.
The decoding algorithms are complex in nature.

APPLICATIONS
The 3G and 4G mobile telephony standards use turbo codes widely.
Examples: HSPA and LTE. They are used in MediaFLO from Qualcomm. Turbo codes are also used in NASA missions such as the Mars Reconnaissance Orbiter, as an alternative to RS-Viterbi codes. IEEE 802.16 (WiMAX), a wireless metropolitan area standard, uses block turbo coding and convolutional turbo coding (CTC). Turbo codes are used in the design of digital communication systems and in detection and demodulation. In the field of error correcting codes, turbo codes represent a significant development. Cable transmission, short-distance communications, and data storage are future applications of turbo codes. (References: Berrou, Claude; Glavieux, Alain; Thitimajshima, Punya, "Near Shannon limit error-correcting coding and decoding: Turbo-codes"; Berrou, Claude, "The ten-year-old turbo codes are entering into service", France.)

4. CRITICAL COMPARISON

LINEAR BLOCK CODES AND CONVOLUTIONAL CODES

Convolutional codes have a regular, natural trellis structure, which is useful in implementing the Viterbi algorithm; linear block codes have a natural but minimal trellis structure.
For both convolutional codes and linear block codes, the complexity of the Viterbi algorithm can be estimated by the number of trellis edge symbols per encoded bit.
Convolutional codes generally do not provide more protection against noise than comparable linear block codes.
In many cases, convolutional codes offer greater simplicity of implementation than linear block codes.
Convolutional codes may not have a minimal trellis representation, whereas linear block codes do.
The main difference between linear block codes and convolutional codes is that a system using block codes transmits the k data bits and then transmits the n - k redundant bits.
A system using convolutional codes, by contrast, produces n coded bits from k data bits, and the codeword need not contain the k data bits unaltered. The difference between a convolutional encoder and a block encoder is that a convolutional output block depends not only on the current information block but also on the previous m information blocks:

v_i = u_i G_0 + u_{i-1} G_1 + ... + u_{i-m} G_m.

Without memory, a convolutional code is merely a linear block code. The theory of linear block codes can thus be seen as a degenerate case of the theory of convolutional codes; the usual approach in convolutional coding is to take small, fixed values of n and k and vary the memory order m.

TURBO CODES VS CONVOLUTIONAL CODES

There are some differences between the behaviour of turbo codes and convolutional codes. As we know, the performance of convolutional codes increases with increasing constraint length (code complexity). This is not true of turbo codes, where the best codes have small constraint lengths. Also, the performance of convolutional codes does not improve significantly with decreasing code rate.
Turbo codes, on the other hand, achieve a very significant coding gain even at lower coding rates. From the implementation point of view, soft-output decoders and recursive encoders are essential in turbo code schemes, whereas they are much less important for convolutional codes outside concatenated configurations.

Criteria                   Convolutional codes   Turbo codes
Larger constraint length   good                  bad
Lower coding rate          indifferent           good
Larger free distance       good                  indifferent
Soft-output decoders       indifferent           good
Recursive encoders         indifferent           good

Linear block codes and convolutional codes are the types of codes most frequently used in engineering practice for designing useful codes.

TURBO CODES VS LINEAR BLOCK CODES

Turbo codes are a hybrid of linear block codes and convolutional codes.
Linear block codes used in a turbo scheme instead of convolutional codes form a turbo product code (TPC): two Hamming codes concatenated serially, in the absence of an interleaver between them. TPCs perform well at low SNR and can be formed from any block code.
Convolutional codes used in a turbo scheme normally form a parallel concatenated convolutional code (PCCC), which has a small constraint length. These codes are systematic.

5. SIMULATION RESULTS

The simulation results for linear block codes are as follows: the dual adaptive importance sampling (DAIS) technique evaluates the word and bit error rates of linear block codes down to extremely low values. (Reference: S. Lin and D.
Costello, Error Control Coding: Fundamentals and Applications.)

The simulation of convolutional codes is as follows: simulations with two identical recursive convolutional codes, an interleaver of length 400, and k = 50 provide manufacturer interoperability, since this technique achieves an improvement of over 3.5 dB over trellis-coded modulation. (Reference: Concatenated trellis/Reed-Solomon coding in DMT systems, Kschischang.)

The simulation results for turbo codes are as follows: the results are obtained at a rate of R = 1/2, with a data block length of 400 bits (assumed) and a MAP decoder. The results show that the first few iterations yield the most significant improvements in BER for any given value of Eb/N0; after that, the results appear to converge to a BER value for each value of Eb/N0. (Reference: Andersen, J. D. and Zyablov, V. V., Interleaver design for turbo coding.)

6. CONCLUSION

In this research paper we have discussed the principles, applications, merits, and demerits of linear block codes, convolutional codes, and turbo codes respectively. We also discussed the main differences between block codes, convolutional codes, and turbo codes, and presented their respective simulation results. Personally, by doing this research I learned a lot about coding theory and came to know the importance of these codes in the present world. A future application of these codes is to provide substantial improvements in satellite communication.
7. REFERENCES

(1) Code tables: bounds on the parameters of various types of codes.
(2) q-ary code generator program.
(3) Error Correcting Codes (ECC) page.
(4) Wikipedia articles incorporating text from Federal Standard 1037C: convolutional codes.
(5) Turbo decoding as an instance of Pearl's belief propagation algorithm.
(6) IEEE Journal on Selected Areas in Communications: Robert J. McEliece, David J. C. MacKay.
(7) Turbo equalization: principles and new results, an IEEE paper on communications.
