History and some definitions that you need to know

History of coding techniques in deep space communications

Due to the harsh conditions of the deep-space environment, the development of error-correction codes was essential, especially because of the limited power available on space probes and the loss of signal power over very long distances.

Starting in 1968, the first two error-correcting codes implemented in space missions were the convolutional code and the Reed-Muller code.

In 1977, Voyager 1 and Voyager 2 were launched on an ambitious mission to cross the solar system and then continue into interstellar space. These missions were designed to send back color images of Jupiter and Saturn. To do this, the coding was improved by concatenating convolutional codes with a Golay (24, 12, 8) code on Voyager 1, and with a Reed-Solomon code on Voyager 2; the latter has a higher error-correcting capability and was therefore able to deliver information about Uranus and Neptune.

After ECC system upgrades in 1989, both probes used version 2 of the concatenated Reed-Solomon/Viterbi (RSV) coding.

Nowadays, concatenated codes are used less in space missions and are being replaced by more powerful codes such as Turbo codes and LDPC codes. However, since missions range from Earth orbit to deep space and face very different amounts of noise, an ongoing problem is finding a “one size fits all” error-correction system.

Technical definitions

Many technical terms appear in this project, so for the sake of simplicity, the section below gives an idea of some of the definitions and technical terms used.

Error detection: the detection of errors that occur due to noise and impairments in the transmission from the transmitter to the receiver.

Cyclic redundancy check (CRC): an error-detecting code used to detect changes in transmitted data by adding a check value (redundancy) to the system input. On retrieval, the same calculation is performed as on the input; if the check values do not match, an error has occurred.
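As a rough illustration only (using the standard CRC-32 from Python's zlib module, not the CRC of any particular space link), the sender can append the check value and the receiver can repeat the same calculation as follows:

    import zlib

    def append_crc(data: bytes) -> bytes:
        # Sender side: compute CRC-32 over the data and append it as 4 bytes.
        return data + zlib.crc32(data).to_bytes(4, "big")

    def crc_ok(frame: bytes) -> bool:
        # Receiver side: redo the same calculation over the payload and
        # compare it with the received check value.
        payload, received = frame[:-4], int.from_bytes(frame[-4:], "big")
        return zlib.crc32(payload) == received

    frame = append_crc(b"deep space telemetry")
    print(crc_ok(frame))                              # True: no error
    corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]  # flip one bit
    print(crc_ok(corrupted))                          # False: error detected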

Parity bit: a bit added to the source bits in order to obtain an even or odd number of set bits (bits with value 1) in the outcome.
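A minimal sketch of even parity (the odd-parity variant simply inverts the appended bit):

    def add_even_parity(bits):
        # Append one bit so that the total number of 1s becomes even.
        return bits + [sum(bits) % 2]

    def even_parity_ok(word):
        # The received word passes the check if its number of 1s is even.
        return sum(word) % 2 == 0

    word = add_even_parity([1, 0, 1, 1, 0, 1, 0])  # four 1s -> parity bit 0
    print(even_parity_ok(word))                     # True
    word[2] ^= 1                                    # simulate a single-bit error
    print(even_parity_ok(word))                     # False: error detected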

Redundancy: in order to correct a message, additional bits are added to the transmitted data; the receiver uses them to check the consistency of the delivered message and then to recover the erroneous data.

Error correction: the reconstruction of the original data after errors have been detected on the receiver side.

Forward error correction (FEC), or channel coding: a technique in which additional information (redundant bits) is encoded into the message by the sender using an error-correcting code (ECC). This redundancy allows the receiver to detect a limited number of errors and then recover the original data without relying on retransmission. It is used in applications such as mobile communication.
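As a sketch of the FEC idea, the code below uses the simplest possible ECC, a (3, 1) repetition code (chosen only for illustration; real space links use convolutional, Reed-Solomon, Turbo or LDPC codes): every bit is transmitted three times and the receiver decodes by majority vote, so one error per triplet can be corrected without any retransmission.

    def encode_repetition(bits):
        # (3, 1) repetition code: send every information bit three times.
        return [b for bit in bits for b in (bit, bit, bit)]

    def decode_repetition(received):
        # Majority vote over each triplet corrects any single flipped bit.
        return [1 if sum(received[i:i + 3]) >= 2 else 0
                for i in range(0, len(received), 3)]

    message  = [1, 0, 1, 1]
    codeword = encode_repetition(message)          # 12 transmitted bits
    codeword[4] ^= 1                               # channel flips one bit
    print(decode_repetition(codeword) == message)  # True: error corrected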

Systematic code scheme: in this scheme, parity data (check bits) derived from an algorithm are attached to the original message, which is itself transmitted unmodified. The same algorithm is implemented on the receiver: once the data is received, the algorithm is applied again, and by comparing its output with the received check bits the receiver detects an error if the results do not match (the Golay code is a systematic code).

Hamming distance: the number of bit positions in which two binary words differ.
For example, the Hamming distance between the bytes 4Ah and 68h is weight(4Ah XOR 68h) = weight(22h) = 2 bits.

Weight: the number of ones in a binary word.
For example, the byte 11001010 has weight 4, as it contains four 1s.
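A short sketch computing the weight and Hamming distance used in the two definitions above (bin(x).count("1") is used here as a portable popcount):

    def weight(x):
        # Hamming weight: number of 1 bits in the binary word.
        return bin(x).count("1")

    def hamming_distance(a, b):
        # Number of bit positions in which a and b differ.
        return weight(a ^ b)

    print(weight(0b11001010))            # 4
    print(hamming_distance(0x4A, 0x68))  # weight(0x22) = 2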
