# Half-normal distribution

In probability theory and statistics, the half-normal distribution is a special case of the folded normal distribution.

Plots of the probability density function and the cumulative distribution function use ${\displaystyle \sigma =1}$.

| Parameters | ${\displaystyle \sigma >0}$ (scale) |
| --- | --- |
| Support | ${\displaystyle x\in [0,\infty )}$ |
| PDF | ${\displaystyle f(x;\sigma )={\frac {\sqrt {2}}{\sigma {\sqrt {\pi }}}}\exp \left(-{\frac {x^{2}}{2\sigma ^{2}}}\right)\quad x>0}$ |
| CDF | ${\displaystyle F(x;\sigma )=\operatorname {erf} \left({\frac {x}{\sigma {\sqrt {2}}}}\right)}$ |
| Quantile | ${\displaystyle Q(F;\sigma )=\sigma {\sqrt {2}}\operatorname {erf} ^{-1}(F)}$ |
| Mean | ${\displaystyle {\frac {\sigma {\sqrt {2}}}{\sqrt {\pi }}}}$ |
| Median | ${\displaystyle \sigma {\sqrt {2}}\operatorname {erf} ^{-1}(1/2)}$ |
| Mode | ${\displaystyle 0}$ |
| Variance | ${\displaystyle \sigma ^{2}\left(1-{\frac {2}{\pi }}\right)}$ |
| Skewness | ${\displaystyle {\frac {{\sqrt {2}}(4-\pi )}{(\pi -2)^{3/2}}}\approx 0.9952717}$ |
| Excess kurtosis | ${\displaystyle {\frac {8(\pi -3)}{(\pi -2)^{2}}}}$ |
| Entropy | ${\displaystyle {\frac {1}{2}}\log \left({\frac {\pi \sigma ^{2}}{2}}\right)+{\frac {1}{2}}}$ |

If ${\displaystyle X}$ follows an ordinary normal distribution ${\displaystyle N(0,\sigma ^{2})}$, then ${\displaystyle Y=|X|}$ follows a half-normal distribution. Thus, the half-normal distribution is obtained by folding an ordinary normal distribution with mean zero at its mean.
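As a minimal illustration of this folding relationship, the following sketch (assuming NumPy and SciPy; the sample size and seed are arbitrary choices) draws samples from ${\displaystyle N(0,\sigma ^{2})}$, takes absolute values, and checks that they are consistent with SciPy's half-normal distribution with scale ${\displaystyle \sigma }$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sigma = 2.0

# Draw from N(0, sigma^2) and fold at the mean (zero): Y = |X|.
x = rng.normal(loc=0.0, scale=sigma, size=100_000)
y = np.abs(x)

# Compare against scipy.stats.halfnorm, whose scale parameter plays the role of sigma.
ks = stats.kstest(y, "halfnorm", args=(0.0, sigma))
print("KS statistic:", ks.statistic)  # small value: the folded samples match a half-normal
```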

## Properties

Using the ${\displaystyle \sigma }$ parametrization of the normal distribution, the probability density function (PDF) of the half-normal is given by

${\displaystyle f_{Y}(y;\sigma )={\frac {\sqrt {2}}{\sigma {\sqrt {\pi }}}}\exp \left(-{\frac {y^{2}}{2\sigma ^{2}}}\right)\quad y\geq 0,}$

where ${\displaystyle E[Y]=\mu ={\frac {\sigma {\sqrt {2}}}{\sqrt {\pi }}}}$.

Alternatively using a scaled precision (inverse of the variance) parametrization (to avoid issues if ${\displaystyle \sigma }$ is near zero), obtained by setting ${\displaystyle \theta ={\frac {\sqrt {\pi }}{\sigma {\sqrt {2}}}}}$, the probability density function is given by

${\displaystyle f_{Y}(y;\theta )={\frac {2\theta }{\pi }}\exp \left(-{\frac {y^{2}\theta ^{2}}{\pi }}\right)\quad y\geq 0,}$

where ${\displaystyle E[Y]=\mu ={\frac {1}{\theta }}}$.
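A short numerical check of the two parametrizations, written as a sketch assuming NumPy and SciPy (the values of ${\displaystyle \sigma }$ and the evaluation grid are arbitrary), confirms that both density formulas agree with each other and with SciPy's `halfnorm` density, whose scale parameter corresponds to ${\displaystyle \sigma }$:

```python
import numpy as np
from scipy import stats

sigma = 1.5
theta = np.sqrt(np.pi) / (sigma * np.sqrt(2))  # scaled-precision parameter
y = np.linspace(0.0, 5.0, 6)

# Sigma parametrization of the half-normal PDF
pdf_sigma = np.sqrt(2) / (sigma * np.sqrt(np.pi)) * np.exp(-y**2 / (2 * sigma**2))

# Theta (scaled precision) parametrization
pdf_theta = 2 * theta / np.pi * np.exp(-(y**2) * theta**2 / np.pi)

# Both forms agree with SciPy's implementation for scale = sigma.
assert np.allclose(pdf_sigma, pdf_theta)
assert np.allclose(pdf_sigma, stats.halfnorm.pdf(y, scale=sigma))
```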

The cumulative distribution function (CDF) is given by

${\displaystyle F_{Y}(y;\sigma )=\int _{0}^{y}{\frac {1}{\sigma }}{\sqrt {\frac {2}{\pi }}}\,\exp \left(-{\frac {x^{2}}{2\sigma ^{2}}}\right)\,dx}$

Using the change-of-variables ${\displaystyle z=x/({\sqrt {2}}\sigma )}$, the CDF can be written as

${\displaystyle F_{Y}(y;\sigma )={\frac {2}{\sqrt {\pi }}}\,\int _{0}^{y/({\sqrt {2}}\sigma )}\exp \left(-z^{2}\right)dz=\operatorname {erf} \left({\frac {y}{{\sqrt {2}}\sigma }}\right),}$

where erf is the error function, a standard function in many mathematical software packages.
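For instance, the erf form of the CDF can be evaluated directly with `scipy.special.erf`; the sketch below (an illustrative check, not part of the article) compares it with SciPy's built-in half-normal CDF:

```python
import numpy as np
from scipy import special, stats

sigma = 1.5
y = np.linspace(0.0, 5.0, 6)

# CDF expressed through the error function
cdf_erf = special.erf(y / (np.sqrt(2) * sigma))

# Matches SciPy's half-normal CDF with scale = sigma.
assert np.allclose(cdf_erf, stats.halfnorm.cdf(y, scale=sigma))
```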

The quantile function (or inverse CDF) is written:

${\displaystyle Q(F;\sigma )=\sigma {\sqrt {2}}\operatorname {erf} ^{-1}(F)}$

where ${\displaystyle 0\leq F\leq 1}$ and ${\displaystyle \operatorname {erf} ^{-1}}$ is the inverse error function.
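The inverse error function is available as `scipy.special.erfinv`, so the quantile formula can be checked against SciPy's percent-point function; this is a minimal sketch with arbitrary probability levels:

```python
import numpy as np
from scipy import special, stats

sigma = 1.5
F = np.array([0.1, 0.5, 0.9, 0.99])

# Quantile function expressed through the inverse error function
q = sigma * np.sqrt(2) * special.erfinv(F)

# Matches SciPy's inverse CDF (ppf) with scale = sigma.
assert np.allclose(q, stats.halfnorm.ppf(F, scale=sigma))
```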

The expectation is then given by

${\displaystyle E(Y)=\sigma {\sqrt {2/\pi }}.}$

The variance is given by

${\displaystyle \operatorname {var} (Y)=\sigma ^{2}\left(1-{\frac {2}{\pi }}\right).}$

Since this is proportional to the variance ${\displaystyle \sigma ^{2}}$ of ${\displaystyle X}$, ${\displaystyle \sigma }$ can be seen as a scale parameter of the new distribution.
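These moment formulas can be verified numerically; the sketch below (assuming NumPy and SciPy, with an arbitrary choice of ${\displaystyle \sigma }$) compares them with the mean and variance reported by SciPy's `halfnorm`:

```python
import numpy as np
from scipy import stats

sigma = 2.0

# Closed-form mean and variance of the half-normal
mean_formula = sigma * np.sqrt(2 / np.pi)
var_formula = sigma**2 * (1 - 2 / np.pi)

# SciPy returns the same first two moments for scale = sigma.
mean_scipy, var_scipy = stats.halfnorm.stats(scale=sigma, moments="mv")
assert np.isclose(mean_formula, mean_scipy)
assert np.isclose(var_formula, var_scipy)
```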

The entropy of the half-normal distribution is exactly one bit less than the entropy of a zero-mean normal distribution with the same second moment about 0. This can be understood intuitively since the magnitude operator reduces information by one bit (if the probability distribution at its input is even). Alternatively, since a half-normal distribution is always positive, the one bit it would take to record whether a standard normal random variable were positive (say, a 1) or negative (say, a 0) is no longer necessary. Thus,

${\displaystyle H(Y)={\frac {1}{2}}\log \left({\frac {\pi \sigma ^{2}}{2}}\right)+{\frac {1}{2}}.}$
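The one-bit relationship can be checked directly in nats (one bit corresponds to ${\displaystyle \ln 2}$ nats); the following sketch, assuming NumPy and SciPy and an arbitrary ${\displaystyle \sigma }$, compares the closed-form entropy with the zero-mean normal entropy and with SciPy's value:

```python
import numpy as np
from scipy import stats

sigma = 2.0

# Differential entropy of the half-normal (in nats)
h_half = 0.5 * np.log(np.pi * sigma**2 / 2) + 0.5

# Entropy of the zero-mean normal with the same second moment, minus one bit (ln 2 nats)
h_normal = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
assert np.isclose(h_half, h_normal - np.log(2))

# SciPy agrees for scale = sigma.
assert np.isclose(h_half, stats.halfnorm.entropy(scale=sigma))
```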

## Parameter estimation

Given numbers ${\displaystyle \{x_{i}\}_{i=1}^{n}}$ drawn from a half-normal distribution, the unknown parameter ${\displaystyle \sigma }$ of that distribution can be estimated by the method of maximum likelihood, giving

${\displaystyle {\hat {\sigma }}={\sqrt {{\frac {1}{n}}\sum _{i=1}^{n}x_{i}^{2}}}}$

The bias is equal to

${\displaystyle b\equiv \operatorname {E} {\bigg [}\;({\hat {\sigma }}_{\mathrm {mle} }-\sigma )\;{\bigg ]}=-{\frac {\sigma }{4n}}}$

which yields the bias-corrected maximum likelihood estimator

${\displaystyle {\hat {\sigma \,}}_{\text{mle}}^{*}={\hat {\sigma \,}}_{\text{mle}}-{\hat {b\,}}.}$
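A minimal sketch of this estimation procedure, assuming NumPy (the true scale, sample size, and seed are illustrative choices), computes the maximum likelihood estimate and then applies the plug-in bias correction ${\displaystyle {\hat {b\,}}=-{\hat {\sigma \,}}_{\text{mle}}/(4n)}$:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_true = 2.0  # illustrative true scale parameter
n = 50

# Simulated half-normal sample: |X| with X ~ N(0, sigma^2)
x = np.abs(rng.normal(scale=sigma_true, size=n))

# Maximum likelihood estimate of sigma
sigma_mle = np.sqrt(np.mean(x**2))

# Plug-in bias estimate and bias-corrected estimator
b_hat = -sigma_mle / (4 * n)
sigma_corrected = sigma_mle - b_hat  # equals sigma_mle * (1 + 1/(4n))

print(sigma_mle, sigma_corrected)
```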