# Dynkin system

A Dynkin system, named after Eugene Dynkin, is a collection of subsets of a universal set ${\displaystyle \Omega }$ satisfying a set of axioms weaker than those of a σ-algebra. Dynkin systems are sometimes referred to as λ-systems (Dynkin himself used this term) or d-systems.[1] These set families have applications in measure theory and probability.

A major application of λ-systems is Dynkin's π-λ theorem, discussed below.

## Definitions

Let Ω be a nonempty set, and let ${\displaystyle D}$ be a collection of subsets of Ω (i.e., ${\displaystyle D}$ is a subset of the power set of Ω). Then ${\displaystyle D}$ is a Dynkin system if

1. ${\displaystyle \Omega \in D}$,
2. if ${\displaystyle A,B\in D}$ and ${\displaystyle A\subseteq B}$, then ${\displaystyle B\setminus A\in D}$,
3. if ${\displaystyle A_{1},A_{2},A_{3},\ldots }$ is a sequence of subsets in ${\displaystyle D}$ and ${\displaystyle A_{n}\subseteq A_{n+1}}$ for all ${\displaystyle n\geq 1}$, then ${\displaystyle \bigcup _{n=1}^{\infty }A_{n}\in D}$.

Equivalently, ${\displaystyle D}$ is a Dynkin system if

1. ${\displaystyle \Omega \in D}$,
2. if ${\displaystyle A\in D}$, then ${\displaystyle A^{c}\in D}$,
3. if ${\displaystyle A_{1},A_{2},A_{3},\ldots }$ is a sequence of subsets in ${\displaystyle D}$ such that ${\displaystyle A_{i}\cap A_{j}=\emptyset }$ for all ${\displaystyle i\neq j}$, then ${\displaystyle \bigcup _{n=1}^{\infty }A_{n}\in D}$.

The second definition is generally preferred as it is usually easier to check; a sketch of such a check on a finite universe is given below.
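
The following is a minimal Python sketch of that check. It assumes a finite universe, where closure under countable disjoint unions reduces, by induction, to closure under unions of two disjoint members; the function name `is_dynkin` and the representation of sets as frozensets are illustrative choices only.

```python
from itertools import combinations

def is_dynkin(omega, D):
    """Test the second definition on a finite universe.

    omega: the universal set; D: a set of frozensets.
    """
    omega = frozenset(omega)
    if omega not in D:                        # condition 1: Omega in D
        return False
    if any(omega - A not in D for A in D):    # condition 2: complements
        return False
    for A, B in combinations(D, 2):           # condition 3: disjoint unions
        if not (A & B) and (A | B) not in D:
            return False
    return True

omega = {1, 2, 3, 4}
D = {frozenset(), frozenset({1}), frozenset({2, 3, 4}), frozenset(omega)}
print(is_dynkin(omega, D))  # True
```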

An important fact is that a Dynkin system which is also a π-system (i.e., closed under finite intersections) is a σ-algebra. This can be verified by noting that conditions 2 and 3 of the second definition, together with closure under finite intersections, imply closure under arbitrary countable unions, as sketched below.
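
In outline, this is the standard disjointification argument: given ${\displaystyle A_{1},A_{2},\ldots \in D}$, set

${\displaystyle B_{1}=A_{1},\qquad B_{n}=A_{n}\cap A_{1}^{c}\cap \cdots \cap A_{n-1}^{c},\qquad {\text{so that}}\qquad \bigcup _{n=1}^{\infty }A_{n}=\bigcup _{n=1}^{\infty }B_{n}.}$

Each ${\displaystyle B_{n}}$ lies in ${\displaystyle D}$ by closure under complements and finite intersections, and the ${\displaystyle B_{n}}$ are pairwise disjoint, so their union lies in ${\displaystyle D}$ by condition 3.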

Given any collection ${\displaystyle {\mathcal {J}}}$ of subsets of ${\displaystyle \Omega }$, there exists a unique Dynkin system denoted ${\displaystyle D\{{\mathcal {J}}\}}$ which is minimal with respect to containing ${\displaystyle {\mathcal {J}}}$: if ${\displaystyle {\tilde {D}}}$ is any Dynkin system containing ${\displaystyle {\mathcal {J}}}$, then ${\displaystyle D\{{\mathcal {J}}\}\subseteq {\tilde {D}}}$. It is obtained as the intersection of all Dynkin systems containing ${\displaystyle {\mathcal {J}}}$ and is called the Dynkin system generated by ${\displaystyle {\mathcal {J}}}$. Note that ${\displaystyle D\{\emptyset \}=\{\emptyset ,\Omega \}}$. For another example, let ${\displaystyle \Omega =\{1,2,3,4\}}$ and ${\displaystyle {\mathcal {J}}=\{\{1\}\}}$; then ${\displaystyle D\{{\mathcal {J}}\}=\{\emptyset ,\{1\},\{2,3,4\},\Omega \}}$.
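
For finite universes, the generated system can be computed by brute-force closure under the second definition's operations. The sketch below is illustrative (the name `generate_dynkin` is not standard notation); it reproduces the example just given.

```python
def generate_dynkin(omega, J):
    """Dynkin system generated by a collection J of subsets of a finite
    universe omega: close under complements and pairwise disjoint unions
    (which, for a finite universe, capture countable disjoint unions)
    until a fixed point is reached."""
    omega = frozenset(omega)
    D = {omega} | {frozenset(A) for A in J}
    while True:
        new = {omega - A for A in D}                         # complements
        new |= {A | B for A in D for B in D if not (A & B)}  # disjoint unions
        if new <= D:
            return D
        D |= new

print(sorted(sorted(S) for S in generate_dynkin({1, 2, 3, 4}, [{1}])))
# [[], [1], [1, 2, 3, 4], [2, 3, 4]]
```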

## Dynkin's π-λ theorem

If ${\displaystyle P}$ is a π-system and ${\displaystyle D}$ is a Dynkin system with ${\displaystyle P\subseteq D}$, then ${\displaystyle \sigma \{P\}\subseteq D}$. In other words, the σ-algebra generated by ${\displaystyle P}$ is contained in ${\displaystyle D}$.
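
A sketch of the standard proof: one shows that the generated Dynkin system ${\displaystyle D\{P\}}$ is itself a π-system, hence, by the fact noted above, a σ-algebra; since ${\displaystyle P\subseteq D\{P\}\subseteq D}$, this yields ${\displaystyle \sigma \{P\}\subseteq D\{P\}\subseteq D}$. The π-system property is obtained by considering, for a set ${\displaystyle A\in D\{P\}}$, the family

${\displaystyle D_{A}=\{B\subseteq \Omega :A\cap B\in D\{P\}\},}$

which is itself a Dynkin system. For ${\displaystyle A\in P}$ one has ${\displaystyle P\subseteq D_{A}}$ (as ${\displaystyle P}$ is a π-system), hence ${\displaystyle D\{P\}\subseteq D_{A}}$ by minimality; since ${\displaystyle A\cap B=B\cap A}$, this shows ${\displaystyle P\subseteq D_{B}}$ for every ${\displaystyle B\in D\{P\}}$, hence ${\displaystyle D\{P\}\subseteq D_{B}}$, which is exactly closure of ${\displaystyle D\{P\}}$ under pairwise intersections.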

One application of Dynkin's π-λ theorem is the uniqueness of the measure that assigns to each interval its length (the Lebesgue measure):

Let ${\displaystyle (\Omega ,{\mathcal {B}},\lambda )}$ be the unit interval [0,1] with the Lebesgue measure on Borel sets. Let ${\displaystyle \mu }$ be another measure on ${\displaystyle \Omega }$ satisfying ${\displaystyle \mu [(a,b)]=b-a}$, and let ${\displaystyle D}$ be the family of sets ${\displaystyle S}$ such that ${\displaystyle \mu [S]=\lambda [S]}$. Let ${\displaystyle I=\{(a,b),[a,b),(a,b],[a,b]:0<a\leq b<1\}}$, and observe that ${\displaystyle I}$ is closed under finite intersections, that ${\displaystyle I\subseteq D}$, and that ${\displaystyle {\mathcal {B}}}$ is the σ-algebra generated by ${\displaystyle I}$. It may be shown that ${\displaystyle D}$ satisfies the above conditions for a Dynkin system (verified below). From Dynkin's π-λ theorem it follows that ${\displaystyle D}$ in fact includes all of ${\displaystyle {\mathcal {B}}}$, which is equivalent to showing that the Lebesgue measure is unique on ${\displaystyle {\mathcal {B}}}$.
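
The verification is a short exercise in measure properties, assuming (as the setup implies) that ${\displaystyle \mu }$ is a probability measure, so that ${\displaystyle \mu [\Omega ]=1=\lambda [\Omega ]}$ and the subtraction below is legitimate. For ${\displaystyle A\subseteq B}$ with ${\displaystyle A,B\in D}$, and for an increasing sequence ${\displaystyle A_{n}\uparrow A}$ with each ${\displaystyle A_{n}\in D}$,

${\displaystyle \mu [B\setminus A]=\mu [B]-\mu [A]=\lambda [B]-\lambda [A]=\lambda [B\setminus A],\qquad \mu [A]=\lim _{n}\mu [A_{n}]=\lim _{n}\lambda [A_{n}]=\lambda [A],}$

so ${\displaystyle D}$ contains ${\displaystyle \Omega }$ and is closed under proper differences and increasing unions, i.e., it satisfies the first definition's conditions.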

### Application to probability distributions

The π-λ theorem motivates the common definition of the probability distribution of a random variable ${\displaystyle X\colon (\Omega ,{\mathcal {F}},\operatorname {P} )\rightarrow \mathbb {R} }$ in terms of its cumulative distribution function. Recall that the cumulative distribution function of a random variable is defined as

${\displaystyle F_{X}(a)=\operatorname {P} \left[X\leq a\right],\qquad a\in \mathbb {R} ,}$

whereas the seemingly more general law of the variable is the probability measure

${\displaystyle {\mathcal {L}}_{X}(B)=\operatorname {P} \left[X^{-1}(B)\right],\qquad B\in {\mathcal {B}}(\mathbb {R} ),}$

where ${\displaystyle {\mathcal {B}}(\mathbb {R} )}$ is the Borel σ-algebra. We say that the random variables ${\displaystyle X\colon (\Omega ,{\mathcal {F}},\operatorname {P} )\rightarrow \mathbb {R} }$ and ${\displaystyle Y\colon ({\tilde {\Omega }},{\tilde {\mathcal {F}}},{\tilde {\operatorname {P} }})\rightarrow \mathbb {R} }$ (on two possibly different probability spaces) are equal in distribution (or law), written ${\displaystyle X\,{\stackrel {\mathcal {D}}{=}}\,Y}$, if they have the same cumulative distribution functions, ${\displaystyle F_{X}=F_{Y}}$. The motivation for the definition stems from the observation that ${\displaystyle F_{X}=F_{Y}}$ says exactly that ${\displaystyle {\mathcal {L}}_{X}}$ and ${\displaystyle {\mathcal {L}}_{Y}}$ agree on the π-system ${\displaystyle \left\{(-\infty ,a]\colon a\in \mathbb {R} \right\}}$, which generates ${\displaystyle {\mathcal {B}}(\mathbb {R} )}$, and so by the example above ${\displaystyle {\mathcal {L}}_{X}={\mathcal {L}}_{Y}}$.
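
For completeness: the half-lines form a π-system because

${\displaystyle (-\infty ,a]\cap (-\infty ,b]=(-\infty ,\min\{a,b\}],}$

and they generate ${\displaystyle {\mathcal {B}}(\mathbb {R} )}$ since, for example, every interval ${\displaystyle (a,b]=(-\infty ,b]\setminus (-\infty ,a]}$ is obtained from them by a set difference, and such intervals generate the Borel sets.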

A similar result holds for the joint distribution of a random vector. For example, suppose ${\displaystyle X}$ and ${\displaystyle Y}$ are two random variables defined on the same probability space ${\displaystyle (\Omega ,{\mathcal {F}},\operatorname {P} )}$, and let ${\displaystyle {\mathcal {I}}_{X}=\{X^{-1}((-\infty ,a]):a\in \mathbb {R} \}}$ and ${\displaystyle {\mathcal {I}}_{Y}=\{Y^{-1}((-\infty ,b]):b\in \mathbb {R} \}}$ be the π-systems they generate. The joint cumulative distribution function of ${\displaystyle (X,Y)}$ is

${\displaystyle F_{X,Y}(a,b)=\operatorname {P} \left[X\leq a,Y\leq b\right]=\operatorname {P} \left[X^{-1}((-\infty ,a])\cap Y^{-1}((-\infty ,b])\right],\qquad a,b\in \mathbb {R} .}$

Here ${\displaystyle A=X^{-1}((-\infty ,a])\in {\mathcal {I}}_{X}}$ and ${\displaystyle B=Y^{-1}((-\infty ,b])\in {\mathcal {I}}_{Y}}$. Since

${\displaystyle {\mathcal {I}}_{X,Y}=\{A\cap B:A\in {\mathcal {I}}_{X},\,B\in {\mathcal {I}}_{Y}\}}$

is the π-system generated by the random pair ${\displaystyle (X,Y)}$, the π-λ theorem shows that the joint cumulative distribution function suffices to determine the joint law of ${\displaystyle (X,Y)}$. In other words, two random vectors ${\displaystyle (X,Y)}$ and ${\displaystyle (W,Z)}$ have the same distribution if and only if they have the same joint cumulative distribution function.
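
That ${\displaystyle {\mathcal {I}}_{X,Y}}$ is indeed a π-system follows because ${\displaystyle {\mathcal {I}}_{X}}$ and ${\displaystyle {\mathcal {I}}_{Y}}$ are each closed under intersection:

${\displaystyle (A\cap B)\cap (A'\cap B')=(A\cap A')\cap (B\cap B')\in {\mathcal {I}}_{X,Y}\qquad {\text{for }}A,A'\in {\mathcal {I}}_{X},\ B,B'\in {\mathcal {I}}_{Y}.}$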

In the theory of stochastic processes, two processes ${\displaystyle (X_{t})_{t\in T},(Y_{t})_{t\in T}}$ are known to be equal in distribution if and only if all their finite-dimensional distributions agree; that is, if for all ${\displaystyle t_{1},\ldots ,t_{n}\in T}$ and ${\displaystyle n\in \mathbb {N} }$,

${\displaystyle (X_{t_{1}},\ldots ,X_{t_{n}})\,{\stackrel {\mathcal {D}}{=}}\,(Y_{t_{1}},\ldots ,Y_{t_{n}}).}$

The proof of this is another application of the π-λ theorem.[2]

## Notes

1. Aliprantis, Charalambos D.; Border, Kim C. (2006). Infinite Dimensional Analysis: A Hitchhiker's Guide (Third ed.). Springer.
2. Kallenberg, Olav. Foundations of Modern Probability. Springer. p. 48.
