Maximum Likelihood Method and Liquidity Stress in Simple Terms

Maximum likelihood estimation is a statistical technique used to estimate the parameters of a probabilistic model from observed data. It aims to find the parameter values that make the observed data most probable, i.e., those that maximize the probability of observing these data given the model.

 

Defining a likelihood function

 

The first step is to define a likelihood function, which is a function of the unknown parameters of the model based on the observed data. If we have a set of data X₁, X₂, …, Xₙ that follows a probability distribution with density f(x; θ), where θ is a vector of unknown parameters, the likelihood function is the joint probability of observing this set of data.

 

The likelihood function is therefore:

 

L(θ) = P(X₁ = x₁, X₂ = x₂, …, Xₙ = xₙ ; θ)

 

If the observations X₁, …, Xₙ are independent, the likelihood function reduces to the product of the individual probability densities:

 

L(θ) = Π_{i=1}^{n} f(X_i; θ)

 

Maximizing the likelihood function

 

The goal is to find the values of the parameters θ that maximize this function. Since the likelihood function is often a product of probabilities, it is common to work with its logarithm, called the log-likelihood, because the logarithm turns the product into a sum and simplifies the calculations:

 

log L(θ) = Σ_{i=1}^{n} log f(X_i; θ)
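As a quick numerical check, the product of densities and the sum of log-densities lead to the same value. A minimal sketch in Python; the sample points and the candidate parameter value θ = 1 are made up for illustration:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical sample and parameter value, for illustration only
x = np.array([1.2, 0.7, 1.9, 1.4])
theta = 1.0  # candidate mean of a unit-variance normal

L = np.prod(norm.pdf(x, loc=theta))       # likelihood: product of densities
logL = np.sum(norm.logpdf(x, loc=theta))  # log-likelihood: sum of log-densities

print(np.isclose(np.log(L), logL))        # the two routes agree
```

For large samples the raw product underflows to zero in floating point, which is a practical reason (beyond algebraic convenience) to work with the log-likelihood.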

 

Calculating derivatives and solving

 

To maximize the log-likelihood, we take the derivative of this function with respect to each parameter of the model and solve the equation by setting this derivative to zero. This allows us to find the values of the parameters that maximize the log-likelihood, and therefore the likelihood.

 

If θ is a vector of parameters, we solve the following system to find the estimates of θ:

 

∂(log L(θ)) / ∂θ = 0
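This derivative-and-solve step can be carried out symbolically. A minimal sketch with sympy, using an i.i.d. Exponential(θ) sample as a stand-in model (the exponential choice and the symbol names are assumptions for illustration, not part of the text above):

```python
import sympy as sp

theta, n, xbar = sp.symbols('theta n xbar', positive=True)

# Log-likelihood of n i.i.d. Exponential(theta) observations,
# written in terms of the sample mean xbar: sum(x_i) = n * xbar
logL = n * sp.log(theta) - theta * n * xbar

# Score equation: d log L / d theta = 0
score = sp.diff(logL, theta)
theta_hat = sp.solve(sp.Eq(score, 0), theta)[0]
print(theta_hat)  # 1/xbar, the usual MLE of the exponential rate

# Second derivative is negative, so the solution is indeed a maximum
assert sp.diff(logL, theta, 2).subs(theta, theta_hat) < 0
```

The final assertion is exactly the check described in the next section: confirming that the stationary point is a maximum rather than a minimum or an inflection point.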

 

Checking for a maximum

 

It is necessary to verify that the solutions obtained correspond to a maximum, for example by checking that the second derivative of the log-likelihood is negative (or, for several parameters, that the Hessian is negative definite), to ensure that a minimum or an inflection point has not been found.

Example: estimating the mean of a normal distribution

Take the example where the data come from a normal distribution with mean μ and variance σ², and we wish to estimate μ (assuming σ² is known).

 

The probability density of a normal variable is:

 

f(x; μ, σ²) = (1 / sqrt(2πσ²)) * exp(-(x - μ)² / (2σ²))

 

The likelihood function for a sample is then:

 

L(μ) = Π_{i=1}^{n} (1 / sqrt(2πσ²)) * exp(-(X_i - μ)² / (2σ²))

 

Taking the logarithm, we obtain the log-likelihood:

 

log L(μ) = -(n / 2) * log(2πσ²) - (1 / (2σ²)) * Σ_{i=1}^{n} (X_i - μ)²

 

To maximize this log-likelihood, we take the derivative with respect to μ and set it to zero:

 

d(log L(μ)) / dμ = (1 / σ²) * Σ_{i=1}^{n} (X_i - μ) = 0

 

This gives the maximum likelihood estimator of μ:

 

μ̂ = (1 / n) * Σ_{i=1}^{n} X_i

 

This is simply the average of the observations, which is the estimate that maximizes the likelihood.
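The same conclusion can be checked numerically: minimizing the negative log-likelihood over μ recovers the sample mean. A minimal sketch with numpy/scipy, where the simulated data and the known σ are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
sigma = 2.0                                   # known standard deviation (assumed)
x = rng.normal(loc=5.0, scale=sigma, size=1000)

# Negative log-likelihood in mu; terms that do not involve mu are dropped
def neg_log_lik(mu):
    return np.sum((x - mu) ** 2) / (2 * sigma ** 2)

mu_hat = minimize_scalar(neg_log_lik).x
print(mu_hat, x.mean())                       # numerical optimum vs sample mean
```

The two values coincide up to the optimizer's tolerance, as the closed-form derivation predicts.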

 

Application to liquidity stress tests

 

The maximum likelihood method can be used to estimate the parameters of the Generalized Pareto Distribution (GPD) to model values beyond a critical threshold in the context of extreme redemptions of investment funds.

 

The density of the GPD is given by:

 

f(y; σ, ξ) = (1 / σ) * (1 + ξ * (y / σ))^(-1 / ξ - 1)

 

where σ > 0 is the scale parameter and ξ is the shape parameter.

 

The scale parameter is a statistical parameter that controls the spread or range of a probability distribution. It influences the width of the distribution without changing its general shape. In different distributions, the scale parameter essentially determines the “range” of values a random variable can take.

 

The larger the scale parameter, the more the distribution will be “stretched” horizontally, and the more extreme values will be farther from the mean or center of the distribution.

 

Conversely, a small scale parameter compresses the distribution, with values concentrated around the center.

 

In the normal distribution, the standard deviation σ acts as the scale parameter. If σ is large, the distribution spreads over a wide range of values, and if σ is small, the data are concentrated around the mean.

 

The log-likelihood is then:

 

log L(ξ, σ) = -n * log σ - (1 + 1 / ξ) * Σ_{i=1}^{n} log(1 + ξ * (y_i / σ))

 

By setting the derivatives of this log-likelihood with respect to ξ and σ to zero, we obtain the estimated values of ξ and σ that maximize the likelihood of the data; for the GPD these equations have no closed-form solution and are solved numerically. These parameters are then used to calculate risk metrics such as Value-at-Risk (VaR) and Expected Shortfall (ES).
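In practice, this numerical fit is a one-liner with scipy's genpareto. A minimal sketch, where the simulated exceedances and the "true" parameter values are illustrative assumptions rather than fund redemption data:

```python
import numpy as np
from scipy.stats import genpareto

# Simulated exceedances above the threshold; parameter values are made up
xi_true, sigma_true = 0.3, 1.5
y = genpareto.rvs(c=xi_true, scale=sigma_true, size=5000,
                  random_state=np.random.default_rng(42))

# Maximum likelihood fit; floc=0 pins the location at the threshold,
# matching the density f(y; sigma, xi) above
xi_hat, _, sigma_hat = genpareto.fit(y, floc=0)
print(xi_hat, sigma_hat)      # estimates close to the values used to simulate

# Tail quantile of the fitted exceedance distribution, the building
# block for VaR and Expected Shortfall at the chosen confidence level
q99 = genpareto.ppf(0.99, c=xi_hat, scale=sigma_hat)
```

With a few thousand exceedances the estimates land close to the simulating values, and the fitted quantile feeds directly into the VaR/ES calculations mentioned above.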

