# Code rate

In telecommunication and information theory, the code rate (or information rate[1]) of a forward error correction code is the proportion of the data stream that is useful (non-redundant). That is, if the code rate is ${\displaystyle k/n}$, then for every ${\displaystyle k}$ bits of useful information the coder generates a total of ${\displaystyle n}$ bits of data, of which ${\displaystyle n-k}$ are redundant.
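As a minimal sketch of this definition in Python (the Hamming(7,4) code used here is an illustrative choice, not one mentioned above):

```python
from fractions import Fraction

def code_rate(k: int, n: int) -> Fraction:
    """Code rate of an (n, k) code: k useful bits per n transmitted bits."""
    return Fraction(k, n)

# A Hamming(7,4) code carries k = 4 information bits in every n = 7-bit
# codeword, so n - k = 3 of the 7 transmitted bits are redundant.
rate = code_rate(4, 7)
print(rate)      # 4/7
print(7 - 4)     # 3 redundant bits per codeword
```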

If ${\displaystyle R}$ is the gross bitrate or data signalling rate (inclusive of redundant error coding), the net bitrate (the useful bit rate exclusive of error-correction codes) is ${\displaystyle \leq R\cdot k/n}$.
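The bound above can be evaluated directly; a short sketch, using a hypothetical 10 Mbit/s link and a rate-1/2 code as the assumed inputs:

```python
def max_net_bitrate(gross_bitrate: float, k: int, n: int) -> float:
    """Upper bound on the useful (net) bit rate: R * k/n."""
    return gross_bitrate * k / n

# A 10 Mbit/s gross rate protected by a rate-1/2 code leaves at most
# 5 Mbit/s for useful data; the rest carries redundancy.
print(max_net_bitrate(10e6, 1, 2))  # 5000000.0
```

The result is an upper bound rather than an exact net rate, since framing and other overhead may reduce the useful rate further.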

For example: The code rate of a convolutional code will typically be ${\displaystyle 1/2}$, ${\displaystyle 2/3}$, ${\displaystyle 3/4}$, ${\displaystyle 5/6}$, ${\displaystyle 7/8}$, etc., corresponding to one redundant bit inserted after every single, second, third, etc., bit. The code rate of the octet-oriented Reed–Solomon block code denoted RS(204,188) is 188/204, meaning that ${\displaystyle 204-188=16}$ redundant octets (or bytes) are added to each block of 188 octets of useful information.
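The RS(204,188) figures from the paragraph above can be checked with a few lines of Python:

```python
# RS(204,188): n = 204 transmitted octets, k = 188 useful octets.
k, n = 188, 204

redundant = n - k          # octets of redundancy per block
rate = k / n               # code rate as a float

print(redundant)           # 16
print(round(rate, 4))      # 0.9216
```

So roughly 92% of each transmitted block is useful data, with 16 octets of redundancy available for error correction.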

A few error correction codes, such as rateless erasure codes, do not have a fixed code rate.

Note that bit/s is the more widespread unit of measurement for the information rate; in that usage the term is synonymous with net bit rate, i.e. the useful bit rate exclusive of error-correction codes.