Statistical risk is a quantification of a situation's risk using statistical methods. These methods estimate a probability distribution for the outcome of a variable of interest, or at least one or more key parameters of that distribution. A risk function applied to the estimated distribution then yields a single non-negative number representing a particular conception of the situation's risk.
One measure of the statistical risk of a continuous variable, such as the return on an investment, is the estimated variance of the variable, or equivalently its square root, the standard deviation. Another measure used in finance, which treats upside risk as unimportant compared to downside risk, is the downside beta. For a binary variable, a simple statistical measure of risk is the probability that the variable takes on the lower of its two values.
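As a minimal sketch of these measures, the following computes the variance and standard deviation of a series of returns, together with a downside-only measure (the semivariance of below-mean returns, in the spirit of measures like downside beta that ignore upside deviations). The return figures are illustrative, not from the text.

```python
import statistics

# Hypothetical daily returns on an investment (illustrative numbers).
returns = [0.02, -0.01, 0.03, -0.04, 0.01, 0.00, 0.02, -0.02]

variance = statistics.variance(returns)  # sample variance
std_dev = statistics.stdev(returns)      # standard deviation

# Downside-only measure: semivariance of returns below the mean.
# Only below-mean deviations contribute; upside deviations are ignored.
mean = statistics.mean(returns)
downside = [r - mean for r in returns if r < mean]
semivariance = sum(d * d for d in downside) / len(returns)

print(variance, std_dev, semivariance)
```

Because the semivariance discards upside deviations, it is never larger than the full variance; two investments with equal variance can differ sharply in semivariance if one's dispersion is mostly on the upside.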
There is a sense in which one risk A can be said to be unambiguously greater than another risk B (that is, greater for any reasonable risk function): namely, if A is a mean-preserving spread of B. This means that the probability density function of A can be formed, roughly speaking, by "spreading out" that of B. However, this criterion gives only a partial ordering: most pairs of risks cannot be unambiguously ranked in this way, and different risk functions applied to the estimated distributions of two such unordered risky variables may give different answers as to which is riskier.
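Both points can be sketched with discrete distributions (the distributions below are illustrative assumptions, not from the text). A is a mean-preserving spread of B, so any reasonable risk function ranks A as riskier; C and D are not ordered by mean-preserving spread, and variance and a tail-loss probability disagree about which of them is riskier.

```python
def variance(dist):
    """Variance of a discrete distribution given as [(outcome, prob), ...]."""
    mean = sum(x * p for x, p in dist)
    return sum(p * (x - mean) ** 2 for x, p in dist)

def tail_prob(dist, threshold):
    """Probability of an outcome at or below the threshold."""
    return sum(p for x, p in dist if x <= threshold)

# B puts equal mass on -1 and +1; A moves each of B's outcomes further
# from the common mean of 0, so A is a mean-preserving spread of B.
B = [(-1, 0.5), (1, 0.5)]
A = [(-2, 0.25), (0, 0.5), (2, 0.25)]
print(variance(B), variance(A))  # 1.0 < 2.0: A is unambiguously riskier

# C and D are not ordered by mean-preserving spread: variance ranks D
# as riskier, while the probability of a large loss ranks C as riskier.
C = [(-10, 0.02), (0, 0.98)]
D = [(-2, 0.5), (2, 0.5)]
print(variance(C), variance(D))            # 1.96 < 4.0
print(tail_prob(C, -5), tail_prob(D, -5))  # 0.02 > 0.0
```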
In the context of statistical estimation itself, the risk involved in estimating a particular parameter is a measure of the degree to which the estimate is likely to be inaccurate: formally, the expected value of a loss function applied to the estimation error, such as the mean squared error.