Predicting Heavy and Extreme Losses in Real-Time for Portfolio Holders (2)

This part is awesome. Trust me! Previously, in Part 1, we examined two independent methods to estimate the probability of a very rare event (a heavy or extreme loss) that an asset could experience on the next day. The first one was based on the computation of the tail probability, i.e.: given the upper threshold $L_{thr}$, find the corresponding $\alpha$ such that $\mbox{Pr}(L < -L_{thr}) = \alpha$, where the random variable $L$ was referred to as a daily asset return (in percent).

In that case, we showed that the approach was strongly dependent on the number of similar events in the past that would contribute to the changes of $\alpha$'s in time. The second method was based on the classical concept of conditional probability. For example, given that the index (benchmark, etc.) drops by $L'$, what is the probability that an asset will lose $L\le L'$? We found that this method fails if the benchmark never experienced a loss of $L'$.

In both cases, the lack of information on the occurrence of a rare event was the major problem. Therefore, is there any tool we can use to predict an event that has not happened yet?!

This post addresses an appealing solution and delivers a black-box that every portfolio holder can use as an integrated part of a risk-management computer system.

1. Bayesian Conjugates

Bayesian statistics treats the problem of finding the probability of an event a little bit differently than what we know from school. For example, if we flip a fair coin, in the long run we expect the same proportion of heads and tails. Formally, this is known as the frequentist interpretation. The Bayesian interpretation assumes nothing about the coin (whether it is fair or weighted) and, with every toss, it builds up a picture of the underlying probability distribution. And it's fascinating, because we can start with the assumption that the coin is made such that we should observe heads more frequently and yet, after $N$ tosses, we may find that the coin is fair.
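As a toy illustration (hypothetical prior, simulated tosses), we can watch a prior belief that heads are more likely being corrected by the data from a fair coin:

import numpy as np

rng = np.random.default_rng(7)

# prior belief that heads come up more often: Beta(8, 2), i.e. mean Pr(heads) = 0.8
a0, b0 = 8, 2

# toss a fair coin N times and perform the conjugate Beta update
for N in [10, 100, 1000]:
    tosses = rng.integers(0, 2, size=N)   # 1 = heads, 0 = tails
    heads = tosses.sum()
    a1, b1 = a0 + heads, b0 + N - heads
    print("N = %4d   posterior mean Pr(heads) = %.3f" % (N, a1/(a1 + b1)))

As $N$ grows, the posterior mean drifts away from the biased prior towards 0.5.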

Let me skip the theoretical part on that aspect of Bayesian statistics; it has been beautifully explained by Mike Halls-Moore of QuantStart.com, and I recommend you study it before reading this post further down.

Looking along the return-series of any asset (spanning $M$ days), we may denote by $x_i = 1$ ($i=1,…,M$) a day when a rare event took place (e.g. the asset lost more than 16% in a single day, or so) and by $x_i = 0$ the lack of such an event. Treating the days as a series of Bernoulli trials with unknown probability $p$, if we start with an uninformative prior, such as Beta($\alpha, \beta$) with $\alpha=\beta=1$, then the concept of Bayesian conjugates can be applied in order to update the prior belief. The update is given by:
$$
\alpha' = \alpha + n_e = \alpha + \sum_{i=1}^{N\le M} x_i \\
\beta' = \beta + N - n_e = \beta + N - \sum_{i=1}^{N\le M} x_i
$$ which simply means that after $N$ days from some initial moment in the past, having observed a rare event $n_e$ times, the probability distribution is given by Beta($\alpha', \beta'$). Therefore, the posterior predictive mean for the Bernoulli likelihood can be interpreted as the new probability of an event taking place on the next day:
$$
\mbox{Pr}(L < -L_{thr}) = \frac{\alpha'}{\alpha' + \beta'}
$$ And that's the key concept we are going to use now for the estimation of the probability of heavy or extreme losses that an asset can experience in trading.
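As a minimal numerical sketch of that update, with a hypothetical 10-day Bernoulli record rather than real market data, the posterior parameters and the predictive mean follow directly from the formulas above:

import numpy as np

# hypothetical record: x_i = 1 marks a day with a rare event, x_i = 0 no event
x = np.array([0, 0, 0, 1, 0, 0, 0, 0, 0, 0])   # N = 10 days, one event

# uninformative prior Beta(1, 1)
alpha0 = beta0 = 1

# conjugate update
N = len(x)
ne = np.sum(x)
alpha1 = alpha0 + ne          # alpha' = alpha + n_e
beta1 = beta0 + N - ne        # beta'  = beta + N - n_e

# posterior predictive mean: probability of an event on the next day
pr = alpha1/(alpha1 + beta1)
print("posterior: Beta(%g, %g);  Pr = %.4f" % (alpha1, beta1, pr))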

As a quick visualisation, imagine you analyse the last 252 trading days of the AAPL stock. The stock never lost more than 90% in one day; no such extreme event took place. According to our new method, the probability that AAPL will plummet 90% (or more) in the next trading session is not zero(!) but

$$
\mbox{Pr}(L < -90\%) = \frac{\alpha'}{\alpha' + \beta'} = \frac{1+0}{(1+0) + (1+252-0)} = \frac{1}{254} = 0.3937\%
$$ with a 90% credible interval:
$$
\left[ B^{-1}(0.05, \alpha', \beta');\ B^{-1}(0.95, \alpha', \beta') \right] $$ where $B^{-1}$ denotes the percent point function (quantile function) of the Beta distribution.

The expected number of events corresponding to a loss of $L < -L_{thr}$ during the next 252 trading days we find by:
$$
E(n_e) = \left\lceil 252\, B^{-1}(0.95, \alpha', \beta') \right\rceil
$$ where $\lceil \cdot \rceil$ denotes rounding to the nearest integer.
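The same numbers can be reproduced with a minimal sketch; no price data are required, only the assumption of $n_e=0$ events over the last 252 days:

import numpy as np
from scipy.stats import beta

N, ne = 252, 0                        # 252 trading days, no -90% losses observed
alpha1 = 1 + ne                       # posterior Beta(alpha', beta'), Beta(1, 1) prior
beta1 = 1 + N - ne

pr = alpha1/(alpha1 + beta1)          # posterior predictive mean, ca. 0.39%
cl1 = beta.ppf(0.05, alpha1, beta1)   # lower bound of the 90% credible interval
cl2 = beta.ppf(0.95, alpha1, beta1)   # upper bound
ne252 = np.round(252*cl2)             # expected number of events in next 252 days

print("Pr(L < -90%%) = %.4f%%  [%.4f%%, %.4f%%]  E(ne) = %g"
      % (pr*100, cl1*100, cl2*100, ne252))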

2. Analysis of Facebook since its IPO

Having a tool that incorporates the information on whether an event took place or not is powerful. You may also notice that the more information we gather (i.e. $N\to\infty$), the closer we get to the “true” probability function. It also means that the spread around the new posterior predictive mean shrinks:
$$
\sigma = \sqrt{ \frac{\alpha' \beta'} { (\alpha'+\beta')^2 (\alpha'+\beta'+1) } } \ \to 0 .
$$
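To see how quickly that spread narrows, a minimal sketch (with hypothetical sample lengths and zero observed events) compares the posterior standard deviation of a young return-series with that of a long-established one:

import numpy as np

def posterior_sigma(N, ne, alpha0=1, beta0=1):
    # standard deviation of the posterior Beta(alpha', beta') distribution
    a, b = alpha0 + ne, beta0 + N - ne
    return np.sqrt(a*b/((a + b)**2*(a + b + 1)))

# no rare events observed in either case; only the sample length differs
for N in [252, 252*19]:
    print("N = %5d   sigma = %.6f" % (N, posterior_sigma(N, ne=0)))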
It is logical that the analysis of an asset traded daily for the past 19 years will deliver a “more trustworthy” estimate of $\mbox{Pr}(L < -L_{thr})$ than one based on only a limited period of time. However, what if the stock is a newbie in the market?

Let’s investigate the case of Facebook (NASDAQ:FB). The record we have spans a bit over 3.5 years, back to Facebook’s initial public offering (IPO) on May 18th, 2012. From that day until today (December 6, 2015) the return-series (daily data) is $M=892$ points long. Using the analytical formulations given above and writing a Python code:

# Predicting Heavy and Extreme Losses in Real-Time for Portfolio Holders (2)
# (c) 2015 Pawel Lachowicz, QuantAtRisk.com

import numpy as np
from scipy.stats import beta
import matplotlib.pyplot as plt
import pandas_datareader.data as web


# Fetching Yahoo! Finance for Facebook
data = web.DataReader("FB", data_source='yahoo',
                  start='2012-05-18', end='2015-12-04')['Adj Close']
cp = np.array(data.values)  # daily adj-close prices
ret = cp[1:]/cp[:-1] - 1    # compute daily returns
N = len(ret)

# Plotting FB price- and return-series
plt.figure(num=2, figsize=(9, 6))
plt.subplot(2, 1, 1)
plt.plot(cp)
plt.axis("tight")
plt.ylabel("FB Adj Close [USD]")
plt.subplot(2, 1, 2)
plt.plot(ret, color=(.6, .6, .6))
plt.axis("tight")
plt.ylabel("Daily Returns")
plt.xlabel("Time Period [days]")
#plt.show()

# provide a threshold
Lthr = -0.21

# how many events of L < -21% occurred?
ne = np.sum(ret < Lthr)
# of what magnitude?
ev = ret[ret < Lthr]
# avgloss = np.mean(ev)  # if ev is non-empty array


# prior
alpha0 = beta0 = 1
# posterior
alpha1 = alpha0 + ne
beta1 = beta0 + N - ne
pr = alpha1/(alpha1+beta1)
cl1 = beta.ppf(0.05, alpha1, beta1)
cl2 = beta.ppf(0.95, alpha1, beta1)
ne252 = np.round(252*cl2)

print("ne = %g" % ne)
print("alpha', beta' = %g, %g" % (alpha1, beta1))
print("Pr(L &lt; %3g%%) = %5.2f%%\t[%5.2f%%, %5.2f%%]" % (Lthr*100, pr*100,
                                                       cl1*100, cl2*100))
print("E(ne) = %g" % ne252)

first we visualise both the price- and return-series:
[Figure 2: FB adjusted close price (top) and daily returns (bottom)]
and from the computations we find:

ne = 0
alpha', beta' = 1, 893
Pr(L < -21%) =  0.11%	[ 0.01%,  0.33%]
E(ne) = 1

There is no record of an event, within the 892 days of FB trading, of the stock losing 21% or more in a single day ($n_e = 0$). However, the probability that such an event may happen at the end of the next NASDAQ session is 0.11%. Non-zero, but nearly zero.

A quick verification of the probability estimate returned by our Bayesian method can be obtained by changing the value of the threshold from -0.21 to Lthr = 0. Then we get:

alpha', beta' = 422, 472
Pr(L <   0%) = 47.20%	[44.46%, 49.95%]
E(ne) = 126

i.e. a 47.2% chance that the stock will close lower on the next day. It is in excellent agreement with what can be calculated directly from the return-series: there were 421 days with negative daily returns, thus $421/892 = 0.472$. Really cool, don’t you think?!
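The same sanity check can be appended as one extra line at the end of the first script, assuming it has been run with Lthr = 0 so that ret and pr are still in scope:

# empirical frequency of down days vs. the Bayesian posterior predictive mean
print("empirical: %.4f   Bayesian: %.4f" % (np.mean(ret < 0), pr))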

Now, imagine for a second that, starting a few days after the Facebook IPO, we recalculate the posterior predictive mean day-by-day, eventually reaching the most current estimate of the probability that the stock may lose 21% or more at the end of the next trading day. Such a procedure allows us to plot the evolution of $\mbox{Pr}(L < -L_{thr})$ over the whole trading history of FB. We achieve that by running a modified version of the previous code:

import numpy as np
from scipy.stats import beta
import matplotlib.pyplot as plt
import pandas_datareader.data as web


data = web.DataReader("FB", data_source='yahoo',
                  start='2012-05-18', end='2015-12-05')['Adj Close']
cp = np.array(data.values)  # daily adj-close prices
returns = cp[1:]/cp[:-1] - 1   # compute daily returns

Lthr = -0.21

Pr = CL1 = CL2 = np.array([])
for i in range(2, len(returns)):
    # data window
    ret = returns[0:i]
    N = len(ret)
    ne = np.sum(ret < Lthr)
    # prior
    alpha0 = beta0 = 1
    # posterior
    alpha1 = alpha0 + ne
    beta1 = beta0 + N - ne
    pr = alpha1/(alpha1+beta1)
    cl1 = beta.ppf(0.05, alpha1, beta1)
    cl2 = beta.ppf(0.95, alpha1, beta1)
    #
    Pr = np.concatenate([Pr, np.array([pr*100])])
    CL1 = np.concatenate([CL1, np.array([cl1*100])])
    CL2 = np.concatenate([CL2, np.array([cl2*100])])

plt.figure(num=1, figsize=(10, 5))
plt.plot(Pr, "k")
plt.plot(CL1, "r--")
plt.plot(CL2, "r--")
plt.axis("tight")
plt.ylim([0, 5])
plt.grid(True)
plt.show()

which displays:
[Figure: evolution of Pr(L < -21%) for FB with the 90% credible interval]
i.e. after 892 days we reach the value of 0.11% found earlier. The black line tracks the change of the probability estimate while both red dashed lines denote the 90% credible interval. The black line is smooth: because of the lack of a -21% event in the past, the recalculated probability never changed its value “drastically”.

This is not the case when we consider the scenario for Lthr = -0.11, delivering:
[Figure: evolution of Pr(L < -11%) for FB with the 90% credible interval]
which stays in agreement with:

ne = 1
alpha', beta' = 2, 892
Pr(L < -11%) =  0.22%	[ 0.04%,  0.53%]
E(ne) = 1

and confirms that only one daily loss of $L < -11\%$ took place during the entire, so far happy, life of Facebook on NASDAQ.

Since we can just as easily count the number of losses falling within a given interval, we also have an opportunity to estimate:
$$
\mbox{Pr}(L_1 \le L < L_2)
$$ in the following way:

import numpy as np
from scipy.stats import beta
import matplotlib.pyplot as plt
import pandas_datareader.data as web


data = web.DataReader("FB", data_source='yahoo',
                  start='2012-05-18', end='2015-12-05')['Adj Close']
cp = np.array(data.values)  # daily adj-close prices
ret = cp[1:]/cp[:-1] - 1    # compute daily returns
N = len(ret)

dL = 2
for k in range(-50, 0, dL):
    ev = ret[(ret >= k/100) & (ret < (k/100+dL/100))]
    avgloss = 0
    ne = np.sum((ret >= k/100) & (ret < (k/100+dL/100)))
    #
    # prior
    alpha0 = beta0 = 1
    # posterior
    alpha1 = alpha0 + ne
    beta1 = beta0 + N - ne
    pr = alpha1/(alpha1+beta1)
    cl1 = beta.ppf(0.05, alpha1, beta1)
    cl2 = beta.ppf(0.95, alpha1, beta1)
    if(len(ev) > 0):
        avgloss = np.mean(ev)
        print("Pr(%3g%% < L < %3g%%) =\t%5.2f%%\t[%5.2f%%, %5.2f%%]  ne = %4g"
              "  E(L) = %.2f%%" %
              (k, k+dL, pr*100, cl1*100, cl2*100, ne, avgloss*100))
    else:
        print("Pr(%3g%% < L < %3g%%) =\t%5.2f%%\t[%5.2f%%, %5.2f%%]  ne = "
              "%4g" % (k, k+dL, pr*100, cl1*100, cl2*100, ne))

By doing so, we visualise the distribution of potential losses (that may occur on the next trading day) along with their magnitudes:

Pr(-50% < L < -48%) =	 0.11%	[ 0.01%,  0.33%]  ne =    0
Pr(-48% < L < -46%) =	 0.11%	[ 0.01%,  0.33%]  ne =    0
Pr(-46% < L < -44%) =	 0.11%	[ 0.01%,  0.33%]  ne =    0
Pr(-44% < L < -42%) =	 0.11%	[ 0.01%,  0.33%]  ne =    0
Pr(-42% < L < -40%) =	 0.11%	[ 0.01%,  0.33%]  ne =    0
Pr(-40% < L < -38%) =	 0.11%	[ 0.01%,  0.33%]  ne =    0
Pr(-38% < L < -36%) =	 0.11%	[ 0.01%,  0.33%]  ne =    0
Pr(-36% < L < -34%) =	 0.11%	[ 0.01%,  0.33%]  ne =    0
Pr(-34% < L < -32%) =	 0.11%	[ 0.01%,  0.33%]  ne =    0
Pr(-32% < L < -30%) =	 0.11%	[ 0.01%,  0.33%]  ne =    0
Pr(-30% < L < -28%) =	 0.11%	[ 0.01%,  0.33%]  ne =    0
Pr(-28% < L < -26%) =	 0.11%	[ 0.01%,  0.33%]  ne =    0
Pr(-26% < L < -24%) =	 0.11%	[ 0.01%,  0.33%]  ne =    0
Pr(-24% < L < -22%) =	 0.11%	[ 0.01%,  0.33%]  ne =    0
Pr(-22% < L < -20%) =	 0.11%	[ 0.01%,  0.33%]  ne =    0
Pr(-20% < L < -18%) =	 0.11%	[ 0.01%,  0.33%]  ne =    0
Pr(-18% < L < -16%) =	 0.11%	[ 0.01%,  0.33%]  ne =    0
Pr(-16% < L < -14%) =	 0.11%	[ 0.01%,  0.33%]  ne =    0
Pr(-14% < L < -12%) =	 0.11%	[ 0.01%,  0.33%]  ne =    0
Pr(-12% < L < -10%) =	 0.34%	[ 0.09%,  0.70%]  ne =    2  E(L) = -11.34%
Pr(-10% < L <  -8%) =	 0.67%	[ 0.29%,  1.17%]  ne =    5  E(L) = -8.82%
Pr( -8% < L <  -6%) =	 0.89%	[ 0.45%,  1.47%]  ne =    7  E(L) = -6.43%
Pr( -6% < L <  -4%) =	 2.91%	[ 2.05%,  3.89%]  ne =   25  E(L) = -4.72%
Pr( -4% < L <  -2%) =	 9.84%	[ 8.26%, 11.53%]  ne =   87  E(L) = -2.85%
Pr( -2% < L <   0%) =	33.11%	[30.54%, 35.72%]  ne =  295  E(L) = -0.90%

For example, based on the complete trading history of FB, there is a probability of 0.34% that the stock will close with a loss between -12% and -10%, and if so, the “expected” loss would be -11.3% (estimated based solely on 2 events).

Again: a probability, not a certainty.

 

Comments
  1. If you apply this approach to a stock that’s gone from zero to X, then it will always assume that the price of the stock is more likely to go up than go down… which is just because that’s the information you’re feeding it.

    1. True. By feeding it with what we’ve got, it determines the outcome. It’s a building block, day by day. In the long run, with this method, we believe we “approach” the “true” distribution of daily returns for a stock (or other asset under investigation).

      1. The problem is though that in the long-term this method will just converge to a linear regression of the history (I think, I haven’t gone through it in great detail but it looks a lot like this method presents a probabilistic view of the linear regression).

        1. Yes and no. Imagine a stock that has a significantly higher number of positive daily returns for the first 5 years since its inception (or IPO). The probability of a loss will be much smaller, as expected. Then the stock/company suffers and for the next 3 years its share price travels south. Therefore, after 8 years of data, the update on negative returns shifts the posterior mean (thus the mass of the pdf) to the left. And if after 8 years the stock skyrockets again on good financial results, the updated pdf changes again. No convergence to anything firm is expected. It’s a living organism ;)

  2. You could try to use this approach for shorting volatility when your series show that there is some great probability that no heavy tail event might take place, right?

    Just thinking out loud here.

    1. You could try to backtest it. I view this approach more as a cumulative (or rolling) update on probability of events. If a loss of a great magnitude hasn’t happened yet, it may take place any day. But, statistically, its probability is very low.

  3. Very interesting article! Maybe of lesser interest, but I guess the same approach could be applied for strong / extreme gains?

    1. Surely that’s trivial… you can just adjust the for-loop in the python code to:

      for k in range(0, 50, dL):

      ?
