In mathematics, the Radon–Nikodym theorem is a result in measure theory. It involves a measurable space ${\displaystyle (X,{\mathit {\Sigma }})}$ on which two σ-finite measures are defined, ${\displaystyle \mu }$ and ${\displaystyle \nu }$. It states that, if ${\displaystyle \nu \ll \mu }$ (i.e. ${\displaystyle \nu }$ is absolutely continuous with respect to ${\displaystyle \mu }$), then there is a measurable function ${\displaystyle f:X\rightarrow [0,\infty )}$, such that for any measurable set ${\displaystyle A\subseteq X}$,

${\displaystyle \nu (A)=\int _{A}f\,d\mu }$

The function f is called the Radon–Nikodym derivative and is denoted by ${\displaystyle {\frac {d\nu }{d\mu }}}$.[1]

The theorem is named after Johann Radon, who proved the theorem in 1913 for the special case where the underlying space is ${\displaystyle \mathbb {R} ^{n}}$, and after Otto Nikodym, who proved the general case in 1930.[2] In 1936 Hans Freudenthal generalized the Radon–Nikodym theorem by proving the Freudenthal spectral theorem, a result in Riesz space theory; this contains the Radon–Nikodym theorem as a special case.[3]

If Y is a Banach space and the generalization of the Radon–Nikodym theorem also holds, mutatis mutandis, for functions with values in Y, then Y is said to have the Radon–Nikodym property. All Hilbert spaces have the Radon–Nikodym property.

The function f satisfying the above equality is uniquely defined up to a μ-null set, that is, if g is another function which satisfies the same property, then f = g μ-almost everywhere. f is commonly written ${\displaystyle \scriptstyle {\frac {d\nu }{d\mu }}}$ and is called the Radon–Nikodym derivative. The choice of notation and the name of the function reflect the fact that the function is analogous to a derivative in calculus in the sense that it describes the rate of change of density of one measure with respect to another (the way the Jacobian determinant is used in multivariable integration). A similar theorem can be proven for signed and complex measures: namely, that if μ is a nonnegative σ-finite measure, and ν is a finite-valued signed or complex measure such that ν ≪ μ, i.e. ν is absolutely continuous with respect to μ, then there is a μ-integrable real- or complex-valued function g on X such that for every measurable set A,

${\displaystyle \nu (A)=\int _{A}g\,d\mu .}$

## Examples

In the following examples, the set X is the real interval [0,1], and ${\displaystyle {\mathit {\Sigma }}}$ is the Borel sigma-algebra on X.

1. ${\displaystyle \mu }$ is the length measure on X. ${\displaystyle \nu }$ assigns to each measurable subset Y of X twice the length of Y. Then, ${\textstyle {\frac {d\nu }{d\mu }}=2}$.
2. ${\displaystyle \mu }$ is the length measure on X. ${\displaystyle \nu }$ assigns to each measurable subset Y of X the number of points from the set {0.1, ..., 0.9} that are contained in Y. Then, ${\displaystyle \nu }$ is not absolutely continuous with respect to ${\displaystyle \mu }$, since it assigns non-zero measure to points, which have zero length. Indeed, there is no derivative ${\textstyle {\frac {d\nu }{d\mu }}}$: there is no finite function that, when integrated e.g. from ${\displaystyle (0.1-\epsilon )}$ to ${\displaystyle (0.1+\epsilon )}$, gives ${\displaystyle 1}$ for all ${\displaystyle \epsilon >0}$.
3. ${\displaystyle \mu =\nu +\delta _{0}}$, where ${\displaystyle \nu }$ is the length measure on X and ${\displaystyle \delta _{0}}$ is the Dirac measure on 0 (it assigns a measure of 1 to any set containing 0 and a measure of 0 to any other set). Then, ${\displaystyle \nu }$ is absolutely continuous with respect to ${\displaystyle \mu }$, and ${\textstyle {\frac {d\nu }{d\mu }}=1_{X\setminus \{0\}}}$ – the derivative is 0 at ${\displaystyle x=0}$ and 1 for ${\displaystyle x>0}$.[4]
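On a finite measure space the theorem becomes elementary: each measure is a weight function, and the Radon–Nikodym derivative is a pointwise ratio of weights. The following sketch (the set and weights are made up, purely for illustration) checks the defining identity ${\textstyle \nu (A)=\int _{A}f\,d\mu }$ on every subset.

```python
# Radon-Nikodym derivative on a finite measure space: each measure is a
# weight function, and when nu << mu the derivative d(nu)/d(mu) is the
# pointwise ratio nu({x}) / mu({x}).  All weights below are made up.
from itertools import chain, combinations

X = ["a", "b", "c", "d"]
mu = {"a": 1.0, "b": 2.0, "c": 0.0, "d": 4.0}
nu = {"a": 3.0, "b": 1.0, "c": 0.0, "d": 2.0}   # nu vanishes wherever mu does

# nu << mu: every mu-null point is nu-null
assert all(nu[x] == 0 for x in X if mu[x] == 0)

# d(nu)/d(mu); on the mu-null set the value is arbitrary, take 0
f = {x: (nu[x] / mu[x] if mu[x] > 0 else 0.0) for x in X}

def measure(weights, A):
    """Measure of the subset A under the given weight function."""
    return sum(weights[x] for x in A)

def integral(g, weights, A):
    """Integral of g over A with respect to the measure given by weights."""
    return sum(g[x] * weights[x] for x in A)

# The defining identity nu(A) = integral of f d(mu) holds for every subset A
for A in chain.from_iterable(combinations(X, r) for r in range(len(X) + 1)):
    assert abs(measure(nu, A) - integral(f, mu, A)) < 1e-12
```

Note that on the μ-null set {c} the value of f is irrelevant, matching the uniqueness statement "up to a μ-null set" above.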

## Applications

The theorem is very important in extending the ideas of probability theory from probability masses and probability densities defined over real numbers to probability measures defined over arbitrary sets. It tells whether and how it is possible to change from one probability measure to another. Specifically, the probability density function of a random variable is the Radon–Nikodym derivative of the induced measure with respect to some base measure (usually the Lebesgue measure for continuous random variables).

For example, it can be used to prove the existence of conditional expectation for probability measures. The latter itself is a key concept in probability theory, as conditional probability is just a special case of it.

Amongst other fields, financial mathematics uses the theorem extensively, in particular via the Girsanov theorem. Such changes of probability measure are the cornerstone of the rational pricing of derivatives and are used for converting actual probabilities into risk-neutral probabilities.
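A concrete static analogue of such a change of measure (Girsanov's theorem itself concerns stochastic processes) is reweighting: the expectation of a function under one probability measure can be estimated from samples drawn under another by multiplying with the Radon–Nikodym derivative, i.e. the likelihood ratio. A minimal Monte Carlo sketch, assuming for illustration the Gaussian measures N(0,1) and N(1,1):

```python
# Change of measure by reweighting: E_nu[h(Z)] = E_mu[h(Z) * (dnu/dmu)(Z)].
# Here mu = N(0,1) and nu = N(1,1); the Radon-Nikodym derivative is the
# ratio of their densities.
import math
import random

random.seed(0)

def dnu_dmu(z):
    # Likelihood ratio of the N(1,1) and N(0,1) densities: exp(z - 1/2).
    return math.exp(z - 0.5)

N = 200_000
samples = [random.gauss(0.0, 1.0) for _ in range(N)]   # drawn under mu

h = lambda z: z * z
# Expectation of h under nu, computed using only samples from mu
est = sum(h(z) * dnu_dmu(z) for z in samples) / N

# Under nu = N(1,1): E[Z^2] = Var + mean^2 = 1 + 1 = 2
assert abs(est - 2.0) < 0.15
```

This is the same mechanism (importance sampling) by which actual-measure simulations are converted into risk-neutral expectations in derivative pricing.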

## Properties

• Let ν, μ, and λ be σ-finite measures on the same measure space. If ν ≪ λ and μ ≪ λ (ν and μ are both absolutely continuous with respect to λ), then
${\displaystyle {\frac {d(\nu +\mu )}{d\lambda }}={\frac {d\nu }{d\lambda }}+{\frac {d\mu }{d\lambda }}\quad \lambda {\text{-almost everywhere}}.}$
• If ν ≪ μ ≪ λ, then
${\displaystyle {\frac {d\nu }{d\lambda }}={\frac {d\nu }{d\mu }}{\frac {d\mu }{d\lambda }}\quad \lambda {\text{-almost everywhere}}.}$
• In particular, if μ ≪ ν and ν ≪ μ, then
${\displaystyle {\frac {d\mu }{d\nu }}=\left({\frac {d\nu }{d\mu }}\right)^{-1}\quad \nu {\text{-almost everywhere}}.}$
• If μ ≪ λ and g is a μ-integrable function, then
${\displaystyle \int _{X}g\,d\mu =\int _{X}g{\frac {d\mu }{d\lambda }}\,d\lambda .}$
• If ν is a finite signed or complex measure, then
${\displaystyle {d|\nu | \over d\mu }=\left|{d\nu \over d\mu }\right|.}$
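On a finite set, where each measure reduces to a weight function and the derivatives are pointwise ratios, the sum, chain, and inverse rules above can be verified directly. A small illustrative sketch with arbitrary positive weights:

```python
# Checking the sum, chain, and inverse rules for Radon-Nikodym derivatives
# on a finite set, where d(a)/d(b) is the pointwise ratio of weights
# wherever the denominator is positive.  Weights below are arbitrary.
X = range(5)
lam = {x: float(x + 1) for x in X}          # lambda, strictly positive
mu  = {x: 2.0 * lam[x] for x in X}          # mu << lambda
nu  = {x: (x + 0.5) * lam[x] for x in X}    # nu << lambda (and nu << mu)

d = lambda a, b: {x: a[x] / b[x] for x in X}   # pointwise derivative da/db

# Sum rule: d(nu + mu)/d(lam) = d(nu)/d(lam) + d(mu)/d(lam)
for x in X:
    lhs = (nu[x] + mu[x]) / lam[x]
    assert abs(lhs - (d(nu, lam)[x] + d(mu, lam)[x])) < 1e-12

# Chain rule: d(nu)/d(lam) = d(nu)/d(mu) * d(mu)/d(lam)
for x in X:
    assert abs(d(nu, lam)[x] - d(nu, mu)[x] * d(mu, lam)[x]) < 1e-12

# Inverse: d(mu)/d(nu) = 1 / (d(nu)/d(mu))
for x in X:
    assert abs(d(mu, nu)[x] - 1.0 / d(nu, mu)[x]) < 1e-12
```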

## Further applications

### Information divergences

If μ and ν are measures over X with μ ≪ ν, then:

• The Kullback–Leibler divergence from μ to ν is defined to be
${\displaystyle D_{\text{KL}}(\mu \parallel \nu )=\int _{X}\log \left({\frac {d\mu }{d\nu }}\right)\;d\mu .}$
• For α > 0, α ≠ 1 the Rényi divergence of order α from μ to ν is defined to be
${\displaystyle D_{\alpha }(\mu \parallel \nu )={\frac {1}{\alpha -1}}\log \left(\int _{X}\left({\frac {d\mu }{d\nu }}\right)^{\alpha -1}\;d\mu \right).}$
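For discrete distributions the Radon–Nikodym derivative dμ/dν is simply the ratio of probability masses, and both divergences become finite sums. A short sketch (the two example distributions are arbitrary) that also checks the standard limit Dα → DKL as α → 1:

```python
# KL and Renyi divergences for discrete distributions, written directly
# in terms of the Radon-Nikodym derivative d(mu)/d(nu) = mu({x}) / nu({x}).
import math

mu = {"a": 0.5, "b": 0.3, "c": 0.2}
nu = {"a": 0.25, "b": 0.25, "c": 0.5}   # positive everywhere, so mu << nu

def kl(mu, nu):
    """D_KL(mu || nu) = integral of log(dmu/dnu) d(mu)."""
    return sum(p * math.log(p / nu[x]) for x, p in mu.items() if p > 0)

def renyi(mu, nu, alpha):
    """Renyi divergence of order alpha (alpha > 0, alpha != 1)."""
    s = sum(p * (p / nu[x]) ** (alpha - 1) for x, p in mu.items() if p > 0)
    return math.log(s) / (alpha - 1)

assert kl(mu, nu) >= 0                 # Gibbs' inequality
assert abs(kl(mu, mu)) < 1e-12         # D_KL vanishes when mu = nu
# The Renyi divergence tends to the KL divergence as alpha -> 1
assert abs(renyi(mu, nu, 1.0001) - kl(mu, nu)) < 1e-3
```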

## The assumption of σ-finiteness

The Radon–Nikodym theorem makes the assumption that the measure μ with respect to which one computes the rate of change of ν is σ-finite. Here is an example in which μ is not σ-finite and the Radon–Nikodym theorem fails to hold.

Consider the Borel σ-algebra on the real line. Let the counting measure μ of a Borel set A be defined as the number of elements of A if A is finite, and ∞ otherwise. One can check that μ is indeed a measure. It is not σ-finite, as the real line is not a countable union of finite sets. Let ν be the usual Lebesgue measure on this Borel σ-algebra. Then ν is absolutely continuous with respect to μ, since for a set A one has μ(A) = 0 only if A is the empty set, and then ν(A) is also zero.

Assume that the Radon–Nikodym theorem holds, that is, for some measurable function f one has

${\displaystyle \nu (A)=\int _{A}f\,d\mu }$

for all Borel sets. Taking A to be a singleton set, A = {a}, and using the above equality (note that ν({a}) = 0 while μ({a}) = 1), one finds

${\displaystyle 0=f(a)}$

for all real numbers a. This implies that the function f, and therefore the Lebesgue measure ν, is zero, which is a contradiction.

## Proof

This section gives a measure-theoretic proof of the theorem. There is also a functional-analytic proof, using Hilbert space methods, that was first given by von Neumann.

For finite measures μ and ν, the idea is to consider functions f with ${\textstyle \int _{A}f\,d\mu \leq \nu (A)}$ for every measurable set A. The supremum of all such functions, along with the monotone convergence theorem, then furnishes the Radon–Nikodym derivative. The fact that the remaining part of ν is singular with respect to μ, and hence vanishes under absolute continuity, follows from a technical fact about finite measures. Once the result is established for finite measures, extending to σ-finite, signed, and complex measures can be done naturally. The details are given below.
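The sup-construction at the heart of this proof can be imitated numerically on a finite set, where membership in the family F reduces to a pointwise bound and the supremum is the ratio ν({x})/μ({x}). The following toy sketch (random candidates, made-up weights) shows that the maximum of two members of F stays in F and that running maxima approach the derivative from below:

```python
# Imitating the sup-construction on a finite set.  F is the family of
# functions f with integral_A f d(mu) <= nu(A) for every A; on a finite
# set this is equivalent to the pointwise bound f(x) * mu(x) <= nu(x).
import random

random.seed(1)

X = range(4)
mu = [1.0, 2.0, 1.5, 0.5]            # made-up positive weights
nu = [2.0, 1.0, 3.0, 1.0]
ratio = [nu[x] / mu[x] for x in X]   # the true Radon-Nikodym derivative

def in_F(f):
    # Pointwise criterion for membership in F (finite-set case).
    return all(f[x] * mu[x] <= nu[x] + 1e-12 for x in X)

g = [0.0] * len(mu)                  # the zero function lies in F
for _ in range(2000):
    cand = [random.uniform(0, ratio[x]) for x in X]   # a random member of F
    assert in_F(cand)
    g = [max(g[x], cand[x]) for x in X]   # max of two members of F is in F
    assert in_F(g)

# The running maximum converges upward to the derivative
assert all(abs(g[x] - ratio[x]) < 0.05 for x in X)
```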

### For finite measures

First, suppose μ and ν are both finite-valued nonnegative measures. Let F be the set of those measurable functions f : X → [0, ∞) such that:

${\displaystyle \forall A\in {\mathit {\Sigma }}:\qquad \int _{A}f\,d\mu \leq \nu (A)}$

F ≠ ∅, since it contains at least the zero function. Now let f1,  f2F, let A be an arbitrary measurable set, and define:

${\displaystyle {\begin{aligned}A_{1}&=\left\{x\in A:f_{1}(x)>f_{2}(x)\right\},\\A_{2}&=\left\{x\in A:f_{2}(x)\geq f_{1}(x)\right\}.\end{aligned}}}$

Then one has

${\displaystyle \int _{A}\max \left\{f_{1},f_{2}\right\}\,d\mu =\int _{A_{1}}f_{1}\,d\mu +\int _{A_{2}}f_{2}\,d\mu \leq \nu \left(A_{1}\right)+\nu \left(A_{2}\right)=\nu (A),}$

and therefore, max{ f1,  f2} ∈ F.

Now, let { fn } be a sequence of functions in F such that

${\displaystyle \lim _{n\to \infty }\int _{X}f_{n}\,d\mu =\sup _{f\in F}\int _{X}f\,d\mu .}$

By replacing fn with the maximum of the first n functions, one can assume that the sequence { fn } is increasing. Let g be an extended-valued function defined as

${\displaystyle g(x):=\lim _{n\to \infty }f_{n}(x).}$

By Lebesgue's monotone convergence theorem, one has

${\displaystyle \lim _{n\to \infty }\int _{A}f_{n}\,d\mu =\int _{A}\lim _{n\to \infty }f_{n}(x)\,d\mu (x)=\int _{A}g\,d\mu \leq \nu (A)}$

for each AΣ, and hence, gF. Also, by the construction of g,

${\displaystyle \int _{X}g\,d\mu =\sup _{f\in F}\int _{X}f\,d\mu .}$

Now, since gF,

${\displaystyle \nu _{0}(A):=\nu (A)-\int _{A}g\,d\mu }$

defines a nonnegative measure on Σ. Suppose ν0 ≠ 0; then, since μ is finite, there is an ε > 0 such that ν0(X) > ε μ(X). Let (P, N) be a Hahn decomposition for the signed measure ν0ε μ. Note that for every AΣ one has ν0(AP) ≥ ε μ(AP), and hence,

${\displaystyle {\begin{aligned}\nu (A)&=\int _{A}g\,d\mu +\nu _{0}(A)\\&\geq \int _{A}g\,d\mu +\nu _{0}(A\cap P)\\&\geq \int _{A}g\,d\mu +\varepsilon \mu (A\cap P)\\&=\int _{A}\left(g+\varepsilon 1_{P}\right)\,d\mu ,\end{aligned}}}$

where 1P is the indicator function of P. Also, note that μ(P) > 0; for if μ(P) = 0, then (since ν is absolutely continuous with respect to μ) ν0(P) ≤ ν(P) = 0, so ν0(P) = 0 and

${\displaystyle \nu _{0}(X)-\varepsilon \mu (X)=\left(\nu _{0}-\varepsilon \mu \right)(N)\leq 0,}$

contradicting the fact that ν0(X) > εμ(X).

Then, since

${\displaystyle \int _{X}\left(g+\varepsilon 1_{P}\right)\,d\mu \leq \nu (X)<+\infty ,}$

g + ε 1PF and satisfies

${\displaystyle \int _{X}\left(g+\varepsilon 1_{P}\right)\,d\mu >\int _{X}g\,d\mu =\sup _{f\in F}\int _{X}f\,d\mu .}$

This is impossible; therefore, the initial assumption that ν0 ≠ 0 must be false. Hence, ν0 = 0, as desired.

Now, since g is μ-integrable, the set {xX : g(x) = ∞} is μ-null. Therefore, if f is defined as

${\displaystyle f(x)={\begin{cases}g(x)&{\text{if }}g(x)<\infty \\0&{\text{otherwise,}}\end{cases}}}$

then f has the desired properties.

As for the uniqueness, let f, g : X → [0, ∞) be measurable functions satisfying

${\displaystyle \nu (A)=\int _{A}f\,d\mu =\int _{A}g\,d\mu }$

for every measurable set A. Then, gf is μ-integrable, and

${\displaystyle \int _{A}(g-f)\,d\mu =0.}$

In particular, this holds for A = {xX : f(x) > g(x)} and for A = {xX : f(x) < g(x)}. It follows that

${\displaystyle \int _{X}(g-f)^{+}\,d\mu =0=\int _{X}(g-f)^{-}\,d\mu ,}$

and so, that (gf )+ = 0 μ-almost everywhere; the same is true for (gf ), and thus, f  = g μ-almost everywhere, as desired.

### For σ-finite positive measures

If μ and ν are σ-finite, then X can be written as the union of a sequence {Bn}n of disjoint sets in Σ, each of which has finite measure under both μ and ν. For each n, by the finite case, there is a Σ-measurable function fn : Bn → [0, ∞) such that

${\displaystyle \nu (A)=\int _{A}f_{n}\,d\mu }$

for each Σ-measurable subset A of Bn. The sum ${\textstyle f=\sum _{n}f_{n}}$ of those functions (each extended by zero outside Bn) is then the required function, satisfying ${\textstyle \nu (A)=\int _{A}f\,d\mu }$ for every measurable set A.

As for the uniqueness, since each of the fn is μ-almost everywhere unique, then so is f.

### For signed and complex measures

If ν is a σ-finite signed measure, then it can be Hahn–Jordan decomposed as ν = ν+ν where one of the measures is finite. Applying the previous result to those two measures, one obtains two functions, g, h : X → [0, ∞), satisfying the Radon–Nikodym theorem for ν+ and ν respectively, at least one of which is μ-integrable (i.e., its integral with respect to μ is finite). It is clear then that f = gh satisfies the required properties, including uniqueness, since both g and h are unique up to μ-almost everywhere equality.

If ν is a complex measure, it can be decomposed as ν = ν1 + 2, where both ν1 and ν2 are finite-valued signed measures. Applying the above argument, one obtains two real-valued functions, g and h, satisfying the required properties for ν1 and ν2, respectively. Clearly, f  = g + ih is the required function.

## Generalisation

If the condition that ${\displaystyle \nu }$ is absolutely continuous with respect to ${\displaystyle \mu }$ is dropped, then the following is true. Given a σ-finite measure ${\displaystyle \mu }$ on the measure space ${\displaystyle (X,{\mathit {\Sigma }})}$ and a σ-finite, signed measure ${\displaystyle \nu }$ on ${\displaystyle {\mathit {\Sigma }}}$, there exist unique signed measures ${\displaystyle \nu _{a}}$ and ${\displaystyle \nu _{s}}$ on ${\displaystyle {\mathit {\Sigma }}}$ such that ${\displaystyle \nu =\nu _{a}+\nu _{s}}$, ${\displaystyle \nu _{a}\ll \mu }$, and ${\displaystyle \nu _{s}\perp \mu }$ (this is Lebesgue's decomposition theorem). Moreover, there exists some extended ${\displaystyle \mu }$-integrable function ${\displaystyle f}$ such that

${\displaystyle \nu _{a}(E)=\int _{E}f\,d\mu }$ for every measurable set E. We write ${\displaystyle f={\frac {d\nu _{a}}{d\mu }}}$.
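On a finite set the Lebesgue decomposition is transparent: νa is the part of ν carried by the support of μ, and νs the part carried by the μ-null points. An illustrative sketch with made-up weights:

```python
# Lebesgue decomposition nu = nu_a + nu_s on a finite set: nu_a lives
# where mu is positive (so nu_a << mu), while nu_s lives on the mu-null
# points (so nu_s and mu are mutually singular).  Weights are made up.
X = ["a", "b", "c", "d"]
mu = {"a": 1.0, "b": 0.0, "c": 2.0, "d": 0.0}
nu = {"a": 3.0, "b": 5.0, "c": 1.0, "d": 2.0}

nu_a = {x: (nu[x] if mu[x] > 0 else 0.0) for x in X}
nu_s = {x: (nu[x] if mu[x] == 0 else 0.0) for x in X}

assert all(nu[x] == nu_a[x] + nu_s[x] for x in X)    # nu = nu_a + nu_s
assert all(nu_a[x] == 0 for x in X if mu[x] == 0)    # nu_a << mu
assert all(mu[x] == 0 for x in X if nu_s[x] > 0)     # nu_s singular to mu

# f = d(nu_a)/d(mu), defined on the support of mu
f = {x: (nu_a[x] / mu[x] if mu[x] > 0 else 0.0) for x in X}
assert all(abs(nu_a[x] - f[x] * mu[x]) < 1e-12 for x in X)
```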

## Notes

1. Billingsley, Patrick (1995). Probability and Measure (Third ed.). New York: John Wiley & Sons. pp. 419–427. ISBN 0-471-00710-2.
2. Nikodym, O. (1930). "Sur une généralisation des intégrales de M. J. Radon" (PDF). Fundamenta Mathematicae (in French). 15: 131–179. JFM 56.0922.02. Retrieved 2018-01-30.
3. Zaanen, Adriaan C. (1996). Introduction to Operator Theory in Riesz Spaces. Springer. ISBN 3-540-61989-5.
4. "Calculating Radon Nikodym derivative". Stack Exchange. April 7, 2018.

## References

• Lang, Serge (1969). Analysis II: Real analysis. Addison-Wesley. Contains a proof for vector measures assuming values in a Banach space.
• Royden, H. L.; Fitzpatrick, P. M. (2010). Real Analysis (4th ed.). Pearson. Contains a lucid proof in case the measure ν is not σ-finite.
• Shilov, G. E.; Gurevich, B. L. (1978). Integral, Measure, and Derivative: A Unified Approach. Richard A. Silverman, trans. Dover Publications. ISBN 0-486-63519-8.
• Stein, Elias M.; Shakarchi, Rami (2005). Real analysis: measure theory, integration, and Hilbert spaces. Princeton lectures in analysis. Princeton, N.J: Princeton University Press. ISBN 978-0-691-11386-9. Contains a proof of the generalisation.
• Teschl, Gerald. Topics in Real and Functional Analysis (lecture notes).