Shannon–Hartley Theorem¶
The Shannon–Hartley theorem, also known as the Shannon capacity or the Shannon limit, is a fundamental result in information theory that establishes the maximum rate at which information can be transmitted over a noisy communication channel (specifically, an additive white Gaussian noise channel) with arbitrarily small error probability. It was established by Claude Shannon in 1948, building on earlier work by Ralph Hartley.
Shannon–Hartley theorem¶
The Shannon–Hartley theorem defines the channel capacity C as the theoretical upper limit on the net information rate (excluding error-correction coding overhead) at which data can be transmitted with an arbitrarily low bit error rate. The bound holds over all possible multi-level and multi-phase modulation schemes. It applies to an analog communication channel with a given average signal power S in the presence of additive white Gaussian noise (AWGN) with power N, and is given by:
$$C = B \log_2\left(1 + \frac{S}{N}\right)$$
or,
$$C = B \log_2\left(1 + \text{SNR}\right)$$
Where:

- C is the channel capacity in bits per second (bit/s).
- B is the available bandwidth of the channel in Hz.
- SNR is the signal-to-noise ratio, expressed as a linear power ratio (signal power divided by noise power), not in decibels.
In simpler terms, the greater the bandwidth, the higher the capacity; and the higher the signal power relative to the noise power, the higher the capacity.
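As a quick sketch of the formula in practice, the Python snippet below computes the capacity of a hypothetical 3 kHz telephone-line channel with a 30 dB SNR (both numbers are assumptions chosen for illustration). Note the conversion from decibels to a linear ratio before applying the formula:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
    """Channel capacity C = B * log2(1 + SNR), with SNR given in dB."""
    snr_linear = 10 ** (snr_db / 10)  # convert dB to a linear power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# Assumed example: a 3 kHz channel with a 30 dB SNR (linear SNR = 1000).
print(f"C ≈ {shannon_capacity(3000, 30):,.0f} bit/s")  # ≈ 29,902 bit/s
```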
Additive white Gaussian noise (AWGN)¶
Additive white Gaussian noise (AWGN) is a simple but very useful noise model of a channel. It can describe how noise affects the reception of a signal sent over a channel and processed by the receiver. In this model, noise is:
Additive¶
Given a received sample value $y[k]$ at the $k$th sample time, the receiver interprets it as the sum of two components: the noise-free component $y_0[k]$ and the noise component $n[k]$, the latter assumed independent of the input waveform. We can thus write:
$$y[k]=y_0[k]+n[k]$$
Gaussian¶
The noise component $n[k]$ is random, and we assume it is drawn from a fixed Gaussian distribution. A Gaussian model is chosen because, according to the central limit theorem, the sum of independent random variables is well approximated (under rather mild conditions) by a Gaussian random variable, with the approximation improving as more variables are summed.
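As an illustrative sketch of this reasoning, the snippet below sums k independent uniform random variables and reports the excess kurtosis of the result, which is zero for a true Gaussian. The choice of uniform variables and the sample count are arbitrary assumptions for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sum k independent Uniform(-0.5, 0.5) variables; by the central limit
# theorem the sum approaches a Gaussian as k grows.
for k in (1, 2, 12):
    s = rng.uniform(-0.5, 0.5, size=(100_000, k)).sum(axis=1)
    z = (s - s.mean()) / s.std()
    # Excess kurtosis is 0 for a Gaussian; watch it shrink toward 0 with k.
    print(f"k={k:2d}  excess kurtosis ≈ {np.mean(z**4) - 3:+.3f}")
```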
White¶
This property describes how the noise varies from sample to sample: successive noise samples $n[k]$ are independent and identically distributed, i.e., uncorrelated in time. The term white comes from the frequency-domain interpretation: the noise has a flat power spectral density, meaning it carries equal average power at all frequencies.
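Putting the three properties together, here is a minimal Python sketch of an AWGN channel. The sinusoidal test signal, target SNR, and sample count are arbitrary assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

n_samples = 100_000
signal_power = 1.0  # average power of the noise-free signal (assumed)
snr_db = 20.0       # target SNR in dB (assumed)
noise_power = signal_power / 10 ** (snr_db / 10)

# Noise-free samples y0[k]: a unit-power sinusoid (illustrative choice).
k = np.arange(n_samples)
y0 = np.sqrt(2 * signal_power) * np.sin(2 * np.pi * 0.05 * k)

# AWGN: additive (summed onto y0), Gaussian (normal draws), and white
# (i.i.d. samples, which gives a flat power spectral density).
n = rng.normal(0.0, np.sqrt(noise_power), size=n_samples)
y = y0 + n

measured_snr_db = 10 * np.log10(np.mean(y0**2) / np.mean(n**2))
print(f"measured SNR ≈ {measured_snr_db:.2f} dB")
```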
Trade-off between bandwidth and SNR¶
For a fixed signal power, increasing the bandwidth also admits more noise, which lowers the SNR; conversely, raising the SNR requires more signal power or a narrower bandwidth, which limits the data rate. Link design is therefore a trade-off between signal quality/clarity (high SNR, less data) and speed/capacity (high bandwidth, potentially lower SNR), as the sketch below makes concrete.
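The following sketch assumes a fixed signal power S and a noise power spectral density N0 (both values are hypothetical), so that the total noise power is N = N0 × B. Widening the bandwidth then lowers the SNR, and capacity grows with diminishing returns, approaching the finite limit (S / N0) × log2(e):

```python
import math

S = 1.0    # fixed average signal power in watts (assumed)
N0 = 1e-3  # noise power spectral density in W/Hz (assumed)

# With total noise N = N0 * B, capacity C = B * log2(1 + S / (N0 * B))
# grows with B but saturates toward (S / N0) * log2(e).
for B in (100, 1_000, 10_000, 100_000):
    snr = S / (N0 * B)
    C = B * math.log2(1 + snr)
    print(f"B = {B:>7} Hz  SNR = {snr:>8.2f}  C ≈ {C:>6.0f} bit/s")

print(f"limit as B → ∞ ≈ {(S / N0) * math.log2(math.e):.0f} bit/s")
```

Running this shows capacity climbing from about 346 bit/s at 100 Hz to about 1,436 bit/s at 100 kHz, never exceeding the roughly 1,443 bit/s limit: past a point, extra bandwidth buys very little without extra signal power.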