# Generalized normal distribution

The generalized normal distribution or generalized Gaussian distribution (GGD) is either of two families of parametric continuous probability distributions on the real line. Both families add a shape parameter to the normal distribution. To distinguish the two families, they are referred to below as "version 1" and "version 2"; however, this is not standard nomenclature.

## Version 1

| | |
|---|---|
| **Parameters** | ${\displaystyle \mu \,}$ location (real), ${\displaystyle \alpha \,}$ scale (positive, real), ${\displaystyle \beta \,}$ shape (positive, real) |
| **Support** | ${\displaystyle x\in (-\infty ,+\infty )}$ |
| **PDF** | ${\displaystyle {\frac {\beta }{2\alpha \Gamma (1/\beta )}}\;e^{-(|x-\mu |/\alpha )^{\beta }}}$, where ${\displaystyle \Gamma }$ denotes the gamma function |
| **CDF** | ${\displaystyle {\frac {1}{2}}+\operatorname {sgn} (x-\mu ){\frac {\gamma \left[1/\beta ,\left({\frac {|x-\mu |}{\alpha }}\right)^{\beta }\right]}{2\Gamma (1/\beta )}}}$, where ${\displaystyle \gamma }$ denotes the lower incomplete gamma function |
| **Mean** | ${\displaystyle \mu \,}$ |
| **Median** | ${\displaystyle \mu \,}$ |
| **Mode** | ${\displaystyle \mu \,}$ |
| **Variance** | ${\displaystyle {\frac {\alpha ^{2}\Gamma (3/\beta )}{\Gamma (1/\beta )}}}$ |
| **Skewness** | 0 |
| **Excess kurtosis** | ${\displaystyle {\frac {\Gamma (5/\beta )\Gamma (1/\beta )}{\Gamma (3/\beta )^{2}}}-3}$ |
| **Entropy** | ${\displaystyle {\frac {1}{\beta }}-\log \left[{\frac {\beta }{2\alpha \Gamma (1/\beta )}}\right]}$[1] |

Known also as the exponential power distribution, or the generalized error distribution, this is a parametric family of symmetric distributions. It includes all normal and Laplace distributions, and as limiting cases it includes all continuous uniform distributions on bounded intervals of the real line.

This family includes the normal distribution when ${\displaystyle \textstyle \beta =2}$ (with mean ${\displaystyle \textstyle \mu }$ and variance ${\displaystyle \textstyle {\frac {\alpha ^{2}}{2}}}$) and it includes the Laplace distribution when ${\displaystyle \textstyle \beta =1}$. As ${\displaystyle \textstyle \beta \rightarrow \infty }$, the density converges pointwise to a uniform density on ${\displaystyle \textstyle (\mu -\alpha ,\mu +\alpha )}$.

This family allows for tails that are either heavier than normal (when ${\displaystyle \beta <2}$) or lighter than normal (when ${\displaystyle \beta >2}$). It is a useful way to parametrize a continuum of symmetric, platykurtic densities spanning from the normal (${\displaystyle \textstyle \beta =2}$) to the uniform density (${\displaystyle \textstyle \beta =\infty }$), and a continuum of symmetric, leptokurtic densities spanning from the Laplace (${\displaystyle \textstyle \beta =1}$) to the normal density (${\displaystyle \textstyle \beta =2}$).
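As a numerical sketch, the density and distribution function given above can be evaluated with SciPy, whose `gammainc` is the regularized lower incomplete gamma function ${\displaystyle \gamma (a,x)/\Gamma (a)}$ appearing (unregularized) in the CDF formula. The function names are illustrative:

```python
import numpy as np
from scipy.special import gamma, gammainc

def ggd_pdf(x, mu=0.0, alpha=1.0, beta=2.0):
    """Version 1 density: beta / (2 alpha Gamma(1/beta)) * exp(-(|x-mu|/alpha)^beta)."""
    return beta / (2 * alpha * gamma(1 / beta)) * np.exp(-(np.abs(x - mu) / alpha) ** beta)

def ggd_cdf(x, mu=0.0, alpha=1.0, beta=2.0):
    """CDF via the regularized lower incomplete gamma function (scipy's gammainc)."""
    z = (np.abs(x - mu) / alpha) ** beta
    return 0.5 + np.sign(x - mu) * 0.5 * gammainc(1 / beta, z)
```

With ${\displaystyle \beta =2}$ and ${\displaystyle \alpha ={\sqrt {2}}}$ this reproduces the standard normal (density ${\displaystyle 1/{\sqrt {2\pi }}}$ at the mean), and with ${\displaystyle \beta =1}$ the Laplace density ${\displaystyle 1/(2\alpha )}$ at the mean.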

### Parameter estimation

Parameter estimation via maximum likelihood and the method of moments has been studied.[2] The estimates do not have a closed form and must be obtained numerically. Estimators that do not require numerical calculation have also been proposed.[3]

The generalized normal log-likelihood function has infinitely many continuous derivatives (i.e. it belongs to the class ${\displaystyle C^{\infty }}$ of smooth functions) only if ${\displaystyle \textstyle \beta }$ is a positive, even integer. Otherwise, the function has ${\displaystyle \textstyle \lfloor \beta \rfloor }$ continuous derivatives. As a result, the standard results for consistency and asymptotic normality of maximum likelihood estimates of ${\displaystyle \beta }$ only apply when ${\displaystyle \textstyle \beta \geq 2}$.

#### Maximum likelihood estimator

It is possible to fit the generalized normal distribution adopting an approximate maximum likelihood method.[4][5] With ${\displaystyle \mu }$ initially set to the sample mean, ${\displaystyle \textstyle \beta }$ is estimated by a Newton–Raphson iterative procedure, starting from the initial guess

${\displaystyle \beta _{0}={\frac {m_{1}}{\sqrt {m_{2}}}},}$

where

${\displaystyle m_{1}={1 \over N}\sum _{i=1}^{N}|x_{i}|,}$

is the first sample moment of the absolute values and ${\displaystyle m_{2}={1 \over N}\sum _{i=1}^{N}x_{i}^{2}}$ is the second sample moment. The iteration is

${\displaystyle \beta _{i+1}=\beta _{i}-{\frac {g(\beta _{i})}{g'(\beta _{i})}},}$

where

${\displaystyle g(\beta )=1+{\frac {\psi (1/\beta )}{\beta }}-{\frac {\sum _{i=1}^{N}|x_{i}-\mu |^{\beta }\log |x_{i}-\mu |}{\sum _{i=1}^{N}|x_{i}-\mu |^{\beta }}}+{\frac {\log({\frac {\beta }{N}}\sum _{i=1}^{N}|x_{i}-\mu |^{\beta })}{\beta }},}$

and

{\displaystyle {\begin{aligned}g'(\beta )={}&-{\frac {\psi (1/\beta )}{\beta ^{2}}}-{\frac {\psi '(1/\beta )}{\beta ^{3}}}+{\frac {1}{\beta ^{2}}}-{\frac {\sum _{i=1}^{N}|x_{i}-\mu |^{\beta }(\log |x_{i}-\mu |)^{2}}{\sum _{i=1}^{N}|x_{i}-\mu |^{\beta }}}\\[6pt]&{}+{\frac {\left(\sum _{i=1}^{N}|x_{i}-\mu |^{\beta }\log |x_{i}-\mu |\right)^{2}}{\left(\sum _{i=1}^{N}|x_{i}-\mu |^{\beta }\right)^{2}}}+{\frac {\sum _{i=1}^{N}|x_{i}-\mu |^{\beta }\log |x_{i}-\mu |}{\beta \sum _{i=1}^{N}|x_{i}-\mu |^{\beta }}}\\[6pt]&{}-{\frac {\log \left({\frac {\beta }{N}}\sum _{i=1}^{N}|x_{i}-\mu |^{\beta }\right)}{\beta ^{2}}},\end{aligned}}}

and where ${\displaystyle \psi }$ and ${\displaystyle \psi '}$ are the digamma and trigamma functions, respectively.

Given a value for ${\displaystyle \textstyle \beta }$, it is possible to estimate ${\displaystyle \mu }$ as the minimizer

${\displaystyle {\hat {\mu }}=\arg \min _{\mu }\sum _{i=1}^{N}|x_{i}-\mu |^{\beta }.}$

Finally ${\displaystyle \textstyle \alpha }$ is evaluated as

${\displaystyle \alpha =\left({\frac {\beta }{N}}\sum _{i=1}^{N}|x_{i}-\mu |^{\beta }\right)^{1/\beta }.}$
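The procedure above can be sketched in SciPy. For robustness, this sketch solves ${\displaystyle g(\beta )=0}$ with a bracketed root search (`brentq`) instead of the Newton–Raphson step, and fixes ${\displaystyle \mu }$ at the sample mean rather than minimizing ${\displaystyle \sum |x_{i}-\mu |^{\beta }}$; function names are illustrative:

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

def fit_ggd(x, bracket=(0.5, 8.0)):
    """Estimate (mu, alpha, beta) of the version-1 GGD.

    Simplifications versus the text: mu is fixed at the sample mean, and
    g(beta) = 0 is solved by a bracketed root search instead of
    Newton-Raphson (avoids transcribing the long g'(beta) expression).
    """
    mu = x.mean()
    a = np.abs(x - mu)
    a = a[a > 0]                 # drop exact zeros so log|x_i - mu| is finite
    n = a.size
    loga = np.log(a)

    def g(beta):
        # g(beta) exactly as defined in the text
        ab = a ** beta
        s = ab.sum()
        return (1 + digamma(1 / beta) / beta
                - (ab * loga).sum() / s
                + np.log(beta * s / n) / beta)

    beta = brentq(g, *bracket)
    # alpha from the closed-form expression given beta and mu
    alpha = (beta / n * (a ** beta).sum()) ** (1 / beta)
    return mu, alpha, beta
```

For data drawn from a standard normal, the fit should recover ${\displaystyle \beta \approx 2}$ and ${\displaystyle \alpha \approx {\sqrt {2}}}$ (since the normal with variance 1 corresponds to ${\displaystyle \alpha ^{2}/2=1}$).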

### Applications

This version of the generalized normal distribution has been used in modeling when the concentration of values around the mean and the tail behavior are of particular interest.[6][7] Other families of distributions can be used if the focus is on other deviations from normality. If the symmetry of the distribution is the main interest, the skew normal family or version 2 of the generalized normal family discussed below can be used. If the tail behavior is the main interest, the Student's t family can be used, which approaches the normal distribution as the degrees of freedom grow to infinity. The t distribution, unlike this generalized normal distribution, obtains heavier-than-normal tails without acquiring a cusp at the origin.

### Properties

#### Moments

Let ${\displaystyle X_{\beta }}$ be a zero-mean generalized Gaussian distribution with shape ${\displaystyle \beta }$ and scale parameter ${\displaystyle \alpha }$. The moments of ${\displaystyle |X_{\beta }|}$ exist and are finite for any ${\displaystyle k>-1}$. For any non-negative integer ${\displaystyle k}$, the plain central moments are[8]

${\displaystyle \operatorname {E} \left[X_{\beta }^{k}\right]={\begin{cases}0&{\text{if }}k{\text{ is odd,}}\\\alpha ^{k}\Gamma \left({\frac {k+1}{\beta }}\right){\Big /}\,\Gamma \left({\frac {1}{\beta }}\right)&{\text{if }}k{\text{ is even.}}\end{cases}}}$
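The closed form above can be checked against SciPy's `gennorm`, which uses the same version-1 parameterization (with `scale` playing the role of ${\displaystyle \alpha }$); the function name is illustrative:

```python
from scipy.special import gamma
from scipy import stats

def ggd_moment(k, alpha, beta):
    """Central moment E[X^k] of the zero-mean version-1 GGD: 0 for odd k,
    alpha^k * Gamma((k+1)/beta) / Gamma(1/beta) for even k."""
    if k % 2 == 1:
        return 0.0
    return alpha ** k * gamma((k + 1) / beta) / gamma(1 / beta)
```

For ${\displaystyle \beta =2}$ this gives variance ${\displaystyle \alpha ^{2}/2}$ (normal), and for ${\displaystyle \beta =1}$ variance ${\displaystyle 2\alpha ^{2}}$ (Laplace).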

#### Connection to positive-definite functions

The probability density function of this version of the generalized normal distribution is a positive-definite function for ${\displaystyle \beta \in (0,2]}$.[9][10]

#### Infinite divisibility

This version of the generalized Gaussian distribution is infinitely divisible if and only if ${\displaystyle \beta \in (0,1]\cup \{2\}}$.[11]

### Generalizations

The multivariate generalized normal distribution, i.e. the product of ${\displaystyle n}$ exponential power distributions with the same ${\displaystyle \beta }$ and ${\displaystyle \alpha }$ parameters, is the only probability density that can be written in the form ${\displaystyle p(\mathbf {x} )=g(\|\mathbf {x} \|_{\beta })}$ and has independent marginals.[12] The result for the special case of the multivariate normal distribution was originally attributed to Maxwell.[13]
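The characterization can be illustrated numerically: the product of i.i.d. version-1 marginals is ${\displaystyle \textstyle \prod _{i}c\,e^{-(|x_{i}|/\alpha )^{\beta }}=c^{n}e^{-(\|\mathbf {x} \|_{\beta }/\alpha )^{\beta }}}$, so two points with the same ${\displaystyle \beta }$-norm get the same density. A sketch using SciPy's `gennorm` (the function name is illustrative):

```python
import numpy as np
from scipy.stats import gennorm

def mv_ggd_pdf(x, beta, alpha):
    """Joint density of i.i.d. version-1 GGD coordinates; it depends on x
    only through the beta-norm ||x||_beta."""
    return np.prod(gennorm.pdf(x, beta, scale=alpha))

# two points with the same beta-norm:
beta, alpha = 1.5, 1.0
a = np.array([1.0, 0.5])
r = (np.abs(a) ** beta).sum() ** (1 / beta)   # ||a||_beta
b = np.array([r, 0.0])                        # same norm, on an axis
```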

## Version 2

| | |
|---|---|
| **Parameters** | ${\displaystyle \xi \,}$ location (real), ${\displaystyle \alpha \,}$ scale (positive, real), ${\displaystyle \kappa \,}$ shape (real) |
| **Support** | ${\displaystyle x\in (-\infty ,\xi +\alpha /\kappa ){\text{ if }}\kappa >0}$; ${\displaystyle x\in (-\infty ,\infty ){\text{ if }}\kappa =0}$; ${\displaystyle x\in (\xi +\alpha /\kappa ,+\infty ){\text{ if }}\kappa <0}$ |
| **PDF** | ${\displaystyle {\frac {\phi (y)}{\alpha -\kappa (x-\xi )}}}$, where ${\displaystyle y={\begin{cases}-{\frac {1}{\kappa }}\log \left[1-{\frac {\kappa (x-\xi )}{\alpha }}\right]&{\text{if }}\kappa \neq 0\\{\frac {x-\xi }{\alpha }}&{\text{if }}\kappa =0\end{cases}}}$ and ${\displaystyle \phi }$ is the standard normal pdf |
| **CDF** | ${\displaystyle \Phi (y)}$, with ${\displaystyle y}$ as above and ${\displaystyle \Phi }$ the standard normal CDF |
| **Mean** | ${\displaystyle \xi -{\frac {\alpha }{\kappa }}\left(e^{\kappa ^{2}/2}-1\right)}$ |
| **Median** | ${\displaystyle \xi \,}$ |
| **Variance** | ${\displaystyle {\frac {\alpha ^{2}}{\kappa ^{2}}}e^{\kappa ^{2}}\left(e^{\kappa ^{2}}-1\right)}$ |
| **Skewness** | ${\displaystyle {\frac {3e^{\kappa ^{2}}-e^{3\kappa ^{2}}-2}{(e^{\kappa ^{2}}-1)^{3/2}}}\operatorname {sign} (\kappa )}$ |
| **Excess kurtosis** | ${\displaystyle e^{4\kappa ^{2}}+2e^{3\kappa ^{2}}+3e^{2\kappa ^{2}}-6}$ |

This is a family of continuous probability distributions in which the shape parameter can be used to introduce skew.[14][15] When the shape parameter is zero, the normal distribution results. Positive values of the shape parameter yield left-skewed distributions bounded to the right, and negative values of the shape parameter yield right-skewed distributions bounded to the left. Only when the shape parameter is zero is the density function for this distribution positive over the whole real line: in this case the distribution is a normal distribution, otherwise the distributions are shifted and possibly reversed log-normal distributions.
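The density and distribution function of version 2 can be sketched directly from the normalizing transform ${\displaystyle y}$ defined above, using SciPy's standard normal helpers (function names are illustrative):

```python
import numpy as np
from scipy.stats import norm

def gno_pdf(x, xi=0.0, alpha=1.0, kappa=0.0):
    """Version-2 density phi(y) / (alpha - kappa*(x - xi))."""
    if kappa == 0.0:
        return norm.pdf((x - xi) / alpha) / alpha
    u = 1 - kappa * (x - xi) / alpha
    if u <= 0:                      # x outside the support
        return 0.0
    y = -np.log(u) / kappa
    return norm.pdf(y) / (alpha * u)   # alpha*u == alpha - kappa*(x - xi)

def gno_cdf(x, xi=0.0, alpha=1.0, kappa=0.0):
    """Version-2 CDF Phi(y)."""
    if kappa == 0.0:
        return norm.cdf((x - xi) / alpha)
    u = 1 - kappa * (x - xi) / alpha
    if u <= 0:                      # at or beyond the finite bound
        return 1.0 if kappa > 0 else 0.0
    return norm.cdf(-np.log(u) / kappa)
```

At ${\displaystyle x=\xi }$ the transform gives ${\displaystyle y=0}$, so the CDF is ${\displaystyle \Phi (0)=1/2}$ for every ${\displaystyle \kappa }$, consistent with the median ${\displaystyle \xi }$; for ${\displaystyle \kappa >0}$ the density vanishes beyond the upper bound ${\displaystyle \xi +\alpha /\kappa }$.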

### Parameter estimation

Parameters can be estimated via maximum likelihood estimation or the method of moments. The parameter estimates do not have a closed form, so numerical calculations must be used to compute the estimates. Since the sample space (the set of real numbers where the density is non-zero) depends on the true value of the parameter, some standard results about the performance of parameter estimates will not automatically apply when working with this family.

### Applications

This family of distributions can be used to model values that may be normally distributed, or that may be either right-skewed or left-skewed relative to the normal distribution. The skew normal distribution is another distribution that is useful for modeling deviations from normality due to skew. Other distributions used to model skewed data include the gamma, lognormal, and Weibull distributions, but these do not include the normal distributions as special cases.

The two generalized normal families described here, like the skew normal family, are parametric families that extend the normal distribution by adding a shape parameter. Due to the central role of the normal distribution in probability and statistics, many distributions can be characterized in terms of their relationship to the normal distribution. For example, the lognormal, folded normal, and inverse normal distributions are defined as transformations of a normally-distributed value, but unlike the generalized normal and skew-normal families, these do not include the normal distributions as special cases.
Indeed, all distributions with finite variance are, in the limit, closely related to the normal distribution. The Student's t distribution, the Irwin–Hall distribution, and the Bates distribution also extend the normal distribution and include it as a limiting case. Thus there is no strong reason to prefer the "generalized" normal distribution of type 1 over, for example, a combination of Student's t and a normalized extended Irwin–Hall distribution, which would also cover cases such as the triangular distribution (something the generalized Gaussian of type 1 cannot model).
A symmetric distribution that can model both tail behavior (long or short) and center behavior (flat, triangular, or Gaussian) completely independently could be derived, for example, by using ${\displaystyle X={\text{IH}}/\chi }$.

## References

1. Nadarajah, Saralees (September 2005). "A generalized normal distribution". Journal of Applied Statistics. 32 (7): 685–694. doi:10.1080/02664760500079464.
2. Varanasi, M.K.; Aazhang, B. (October 1989). "Parametric generalized Gaussian density estimation". Journal of the Acoustical Society of America. 86 (4): 1404–1415. doi:10.1121/1.398700.
3. Domínguez-Molina, J. Armando; González-Farías, Graciela; Rodríguez-Dagnino, Ramón M. "A practical procedure to estimate the shape parameter in the generalized Gaussian distribution" (PDF). Retrieved 2009-03-03.
4. Varanasi, M.K.; Aazhang B. (1989). "Parametric generalized Gaussian density estimation". J. Acoust. Soc. Am. 86 (4): 1404–1415. doi:10.1121/1.398700.
5. Do, M.N.; Vetterli, M. (February 2002). "Wavelet-based Texture Retrieval Using Generalised Gaussian Density and Kullback-Leibler Distance". Transaction on Image Processing. 11 (2): 146–158. doi:10.1109/83.982822. PMID 18244620.
6. Liang, Faming; Liu, Chuanhai; Wang, Naisyin (April 2007). "A robust sequential Bayesian method for identification of differentially expressed genes". Statistica Sinica. 17 (2): 571–597. Archived from the original on 2007-10-09. Retrieved 2009-03-03.
7. Box, George E. P.; Tiao, George C. (1992). Bayesian Inference in Statistical Analysis. New York: Wiley. ISBN 978-0-471-57428-6.
8. Nadarajah, Saralees (2005). "A generalized normal distribution". Journal of Applied Statistics. 32 (7): 685–694. doi:10.1080/02664760500079464.
9. Dytso, Alex; Bustin, Ronit; Poor, H. Vincent; Shamai, Shlomo (2018). "Analytical properties of generalized Gaussian distributions". Journal of Statistical Distributions and Applications. 5 (1): 6. doi:10.1186/s40488-018-0088-5.
10. Bochner, Salomon (1937). "Stable laws of probability and completely monotone functions". Duke Mathematical Journal. 3 (4): 726–728. doi:10.1215/s0012-7094-37-00360-0.
11. Dytso, Alex; Bustin, Ronit; Poor, H. Vincent; Shamai, Shlomo (2018). "Analytical properties of generalized Gaussian distributions". Journal of Statistical Distributions and Applications. 5 (1): 6. doi:10.1186/s40488-018-0088-5.
12. Sinz, Fabian; Gerwinn, Sebastian; Bethge, Matthias (May 2009). "Characterization of the p-Generalized Normal Distribution". Journal of Multivariate Analysis. 100 (5): 817–820. doi:10.1016/j.jmva.2008.07.006.
13. Kac, M. (1939). "On a characterization of the normal distribution". American Journal of Mathematics. 61 (3): 726–728. doi:10.2307/2371328. JSTOR 2371328.
14. Hosking, J.R.M.; Wallis, J.R. (1997). Regional Frequency Analysis: An Approach Based on L-Moments. Cambridge University Press. ISBN 0-521-43045-3. Section A.8.
15. Documentation for the lmomco R package