Although the relationship is not immediately obvious, structural engineering has a lot in common with financial assets. In both cases we deal with objects under stress over their entire lifetimes, and there are two possible outcomes: something can either break or perform well. Engineers run a lot of analyses and tests before a given element is ready for use and service; it is designed to resist structural failures. Can the same approach be applied to, say, crypto-assets?
Let’s dive in.
Gentle Introduction to Structural Design of an Element
If we consider a single structural element, subject to a random force or stress process $S(t)$, the goal of its design is to select an element size such that the probability of failure during the intended service life is acceptably small. For real physical objects, one often considers yielding, excessive deformation, brittle fracture, ductile fracture, buckling (instability under compressive loads), and fatigue as the elementary failure modes associated with structural components.
Structural engineering provides methods of design that avoid level-exceedance or quasi-static failure modes related to the abovementioned possibilities. One of the fundamental methods here is the 3-$\sigma$ design method. It applies to objects or systems that are not likely to see prolonged exposure to the design vibration environment in which a fatigue failure might occur (e.g. spacecraft structures or rockets).
Let $S(t)$ be a stationary Gaussian stress process with a root-mean-square (rms) value of $\sigma_S$, and let the strength $R$ of the object under investigation be deterministic. The Gaussian assumption implies that the sample space for $S$ is $-\infty \lt S \lt \infty$; therefore, no matter how large we make $R$, it is impossible to ensure that the stress $S$ will be less than $R$ for all time $t$ with probability $1$.
In order to design a safe element, one employs the design criterion that
$$
R \ge 3\sigma_S
$$ under an unstated assumption that $\mu_S = 0$. Since $S$ follows a Gaussian distribution, the probability that the stress will exceed three times its rms value is
$$
P\left[ |S(t)| \gt 3\sigma_S \right] = 0.0027
$$
In other words, $S$ exceeds $R$ only about $0.3\%$ of the time if $R=3\sigma_S$. We need to keep in mind that if $R$ is in fact a random variable (rather than deterministic), the design value is usually taken as a characteristically low value, such that the implied risk is likely to be significantly smaller than that given by the above relationship.
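As a quick numerical sanity check of that figure, here is a minimal sketch using scipy.stats; the zero-mean Gaussian stress process and the unit rms value are my own illustrative assumptions (the result is scale-free anyway):

from scipy.stats import norm

sigma_S = 1.0                      # rms of the stress process (any positive value)
R = 3 * sigma_S                    # the 3-sigma design strength

# two-sided exceedance probability P[|S| > 3*sigma_S] for S ~ N(0, sigma_S^2)
p_exceed = 2 * norm.sf(R, loc=0, scale=sigma_S)
print(f"P[|S| > 3 sigma_S] = {p_exceed:.4f}")    # 0.0027, i.e. about 0.3% of the time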
Now, if we allow the mean stress to be non-zero and $R$ to be a random variable following a normal distribution (with mean $\mu_R$ and standard deviation $\sigma_R$, respectively), then we can generalise the 3$\sigma$ criterion for a safe design such that
$$
\mu_R \ge \zeta \sigma_S
$$ where the factor $\zeta$ is a function of $Q=\mu_S/\sigma_S$ and of the coefficient of variation of $R$, i.e. $C_R = \sigma_R/\mu_R$, and has been derived numerically:
All of the above allows us to illustrate the whole concept of a stationary stress process $S(t)$ with a non-zero mean and a random strength $R$ as follows:
We will come back to an exemplary calculation using $\zeta$ shortly.
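As a rough, hedged reconstruction of how such a $\zeta$ factor can be derived numerically, one may solve for the $\zeta$ that keeps the probability of the stress exceeding the strength at the baseline one-sided 3$\sigma$ risk level, assuming $S$ and $R$ are independent and normally distributed. The function name zeta_factor and the choice of matching the $\Phi(-3)$ risk level are my assumptions, not necessarily the exact recipe behind the published chart:

import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def zeta_factor(Q, C_R, base_risk=norm.sf(3)):
    """Solve for zeta = mu_R / sigma_S such that P(S > R) equals the baseline
    one-sided 3-sigma risk, assuming independent normals
    S ~ N(mu_S, sigma_S^2) and R ~ N(mu_R, (C_R * mu_R)^2),
    with Q = mu_S / sigma_S and C_R = sigma_R / mu_R."""
    def risk(zeta):
        # safety margin M = R - S is normal (in units of sigma_S); failure when M < 0
        mu_M = zeta - Q
        sigma_M = np.sqrt(1.0 + (C_R * zeta) ** 2)
        return norm.cdf(0.0, loc=mu_M, scale=sigma_M) - base_risk
    return brentq(risk, Q + 1e-6, 100.0)

print(round(zeta_factor(Q=0.0,  C_R=0.0), 2))   # 3.0, recovers the plain 3-sigma rule
print(round(zeta_factor(Q=0.15, C_R=0.2), 2))   # ~4.0

For instance, the hypothetical zeta_factor(0.15, 0.2) comes out close to 4, which is at least in the same regime as the $\zeta \simeq 3.8$ read off the chart later in this article.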
Now, things get interesting when we consider the first-passage failure concept. Failure, within our framework, is defined as the first time the stress process exceeds the strength $R$, i.e. $|S(t)| \gt R$. If we allow $R$ to be a function of time, $R(t)$, the first-passage failure can be visualised as follows:
The relaxation of $R$ to $R(t)$ is not necessary but, as we will see, makes perfect sense for crypto-assets. In the chart above, a decreasing $R(t)$ as $t\rightarrow \infty$ denotes a deteriorating physical structure.
The goal of design is to ensure that the probability of a first-passage failure in service is acceptably small. If we define $T$ as the time to failure (a random variable), $T_S$ as the service lifetime, and $S(t)$ as a stationary random process for $0 \le t \le T_S$, then the probability of first-passage failure is
$$
p_f = P(T \lt T_S)
$$ Ideally, we wish $R(t)$ to be large relative to $S(t)$ so that crossings of $R(t)$ by $S(t)$ are very rare. Now, if we consider small time intervals, $\Delta t$, one can assume that the up crossings of $S(t)$ past $R(t)$ form a point process whose rate of arrival is described by a Poisson process.
If we denote the rate of occurrence as $\nu^{+}_R(t)$ then
$$
\mbox{Probability of no up crossing in}\ \Delta t = \exp\left[ -\nu^{+}_R(t) \Delta t \right]
$$ Given the element service time $T_S$, which can be divided into $k$ small equal increments $\Delta t$, the probability of no up crossing in $T_S$ is the probability of the intersection of the events of no up crossing in each $\Delta t$. Assuming these events to be mutually independent, we arrive at
$$
\mbox{Probability of no up crossing in}\ T_S = \prod_{i=1}^k \exp\left[ -\nu^{+}_R(t_i) \Delta t \right]
$$
$$
= \exp \left[ - \sum_{i=1}^k \nu^{+}_R(t_i) \Delta t \right]
$$ which in the limit $\Delta t \rightarrow 0$ gives us
$$
\mbox{Probability of no up crossing in}\ T_S = \exp \left[ - \int_0^{T_S} \nu^{+}_R(t)\, dt \right] \ .
$$ The probability of failure is given as the probability of the event of one (or more) up crossings in $T_S$, therefore
$$
p_f = 1 - \exp \left[ - \int_0^{T_S} \nu^{+}_R(t)\, dt \right] \ .
$$ For the special case where $S(t)$ is a Gaussian process, one can show that
$$
\nu^{+}_R(t) = \nu^{+}_0 \exp \left[ -\frac{1}{2} \left( \frac{R(t) - \mu_S}{\sigma_S} \right)^2 \right] \ .
$$
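To make the last two formulas concrete, here is a minimal numerical sketch; all inputs (a linearly deteriorating $R(t)$, the stress moments, and the reference crossing rate $\nu^{+}_0$) are made-up illustrative values rather than estimates from any data:

import numpy as np
from scipy.integrate import quad

# illustrative (made-up) parameters
mu_S, sigma_S = 0.0, 1.0       # stationary Gaussian stress process
nu0 = 0.5                      # reference up crossing rate nu_0^+ [1/time unit]
T_S = 1000.0                   # service lifetime [time units]

def R(t):
    # a slowly deteriorating strength: starts at 5*sigma_S, loses 10% over the service life
    return 5.0 * sigma_S * (1.0 - 0.1 * t / T_S)

def nu_R(t):
    # up crossing rate of the level R(t) for a Gaussian stress process
    return nu0 * np.exp(-0.5 * ((R(t) - mu_S) / sigma_S) ** 2)

integral, _ = quad(nu_R, 0.0, T_S)
p_f = 1.0 - np.exp(-integral)
print(f"first-passage failure probability p_f = {p_f:.4f}")   # ~0.008 for these made-up numbers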
3$\sigma$ Criterion for a Crypto-Asset
The framework outlined above can be mapped onto any financial asset or crypto-asset available for trading in the markets. Here, by the stress process $S(t)$ one can consider the rate of return at a selected time-scale (for example, daily log price changes). The underlying assumption that $S(t)$ is a stationary process should be verified, although it usually holds for return series. In contrast to the stress and strength of material elements, the stress process for crypto-assets $S'(t)$ is such that $-1 \le S'(t) \lt \infty$ (returns), while $R' \sim N(\mu_{R'} = -1, \sigma_{R'})$ with $R' \in [-1, \infty)$ to be precise. The latter assumption that $R'$ follows a normal distribution is arbitrary and serves as an initial approximation.
For obvious reasons it is difficult to impose boundary conditions on the performance of a crypto-asset when it enters trading at the exchanges, both CEX and DEX. By the strength $R'(t)$ one can understand a complex system of actions (incl. buying/selling pressure, market manipulation, or highly variable liquidity) that justifies $R'$ not being constant over time. Interestingly, the abovementioned rudimentary analysis may allow us to build a tracker of the “safe design” of an asset, i.e. one not experiencing rare and painful daily returns.
The entire concept is very similar to the calculation of the VaR or ES risk measures but allows for more freedom in modeling the strength process $R'(t)$.
Let’s have a look at the classical crypto-asset failure of the LUNA coin.
The LUNA coin, the native cryptocurrency of the Terra blockchain, gained significant attention for its innovative algorithmic stablecoin ecosystem, featuring assets like UST (TerraUSD). After reaching all-time highs in April 2022, LUNA collapsed dramatically in early May 2022, when UST lost its peg to the US dollar: the coin shed more than 99% of its value within days. The collapse exposed the fragility of Terra’s algorithmic stablecoin model and its complex governance mechanisms. While the Terra ecosystem has since relaunched on a new chain, the May 2022 collapse serves as a reminder of the inherent volatility and risks associated with the cryptocurrency market.
Using Python we can illustrate what happened by fetching the historical daily price-series and inspecting the corresponding 1-day returns, here making use of Binance resources:
# Design Crypto-Asset to Avoid Structural Failures Due to Random Vibrations
# (c) 2023 QuantAtRisk.com

import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import matplotlib.gridspec as grd
from statsmodels.graphics.tsaplots import plot_acf
from statsmodels.tsa.stattools import adfuller

import cayolargo as cayo   # pip install cayolargo==0.0.23
from mykeys import *

# fetch time-series
luna = cayo.cex.get_binance_timeseries(cpair='LUNABUSD', interval='1d',
                                       t_start='2021-01-01 00:00',
                                       t_end='2022-05-12 20:00',
                                       api_key=binance_api_key,
                                       api_secret=binance_api_secret)

df = luna[['close']].copy()
df['ret'] = df.close.pct_change()   # calculate return-series

# plotting
fig = plt.figure(figsize=(15, 7), dpi=90)
gs = grd.GridSpec(2, 2, height_ratios=[1, 1], width_ratios=[1.5, 1], wspace=0.1)

# price chart
ax = plt.subplot(gs[0])
ax.plot(df.close)
plt.ylabel("LUNA/BUSD", fontsize=11)
ax.grid()

# returns chart
ax2 = plt.subplot(gs[2])
ax2.plot(df.ret)
plt.ylabel(r"1-Day Returns $\rightarrow S'(t)$", fontsize=11)
ax2.grid()
where in the code I made use of a fresh (under development) Python library, cayolargo, dedicated to cryptocurrency market analysis and crypto algo-trading. For setting up the Binance API key, check the details in my previous article.
The daily return-series, corresponding to the stress process of the crypto-asset, $S'(t)$, appears to meet the condition of stationarity. We check that quickly by executing the following piece of code:
# check stationarity
# by inspection of the ACF plot
f, ax = plt.subplots(figsize=(7, 3), dpi=90)
chart = plot_acf(df['ret'].dropna(), lags=10, ax=ax)
ax.grid()
plt.title('ACF', fontsize=11)

# using the Augmented Dickey-Fuller test for stationarity
adftest = adfuller(df['ret'].dropna(), autolag='AIC', regression='ct')

print("ADF Test Results")
print("Null Hypothesis: The series has a unit root (non-stationary)")
print("ADF-Statistic:", adftest[0])
print("P-Value:", adftest[1])
print("Number of lags:", adftest[2])
print("Number of observations:", adftest[3])
print("Critical Values:", adftest[4])
print("Note: If P-Value is smaller than 0.05, we reject the null hypothesis. The series is stationary.")
borrowed from Juan D’Amico’s neat article on How to Test for Stationarity in Time Series Data Using Python. The results for our stress process are:
ADF Test Results
Null Hypothesis: The series has a unit root (non-stationary)
ADF-Statistic: -2.120507516173466
P-Value: 0.5345954048389927
Number of lags: 6
Number of observations: 489
Critical Values: {'1%': -3.97740352965905, '5%': -3.41950649955196, '10%': -3.13235413124263}
Note: If P-Value is smaller than 0.05, we reject the null hypothesis. The series is stationary.
Note that, for this particular sample (which includes the collapse itself), the ADF statistic does not reject the unit-root null at the 5% level, so the stationarity of $S'(t)$ should be treated here as a working approximation rather than a confirmed property. Now, if we focus on the pre-collapse period of the LUNA coin and treat it as our in-sample,
insample = df[df.index < '2022-05-01'].dropna()
insample['e_mean'] = insample['ret'].expanding().mean()
insample['e_std'] = insample['ret'].expanding().std(ddof=1)
display(insample)
then a basic criterion for safety can be written as:
$$
\mu_{R'} \le -\zeta \sigma_{S'}
$$ where
$$
Q = \frac{\mu_{S'}}{\sigma_{S'}} = \frac{0.014775}{0.101026} = 0.146252
$$ Assuming a coefficient of variation $C_{R'} = 0.2$, i.e. $R' \sim N(\mu_{R'}=-1, |C_{R'}\mu_{R'}|)$, from the $\zeta$ chart introduced earlier we read off that
$$
\zeta \simeq 3.8
$$ therefore the mean strength must satisfy
$$
\mu_{R'} \lesssim -3.8\, \sigma_{S'}
$$
for the design to remain free of a first-passage failure. This can be illustrated as follows:
muS, sigmaS = insample.iloc[-1, -2], insample.iloc[-1, -1]
zeta = 3.8

fig, ax = plt.subplots(figsize=(8, 5), dpi=90)
ax.plot(df.ret, color=(.7, .7, .7))
ax.plot(insample['e_mean'], 'r', label=r"Expanding Mean ($\mu_{S'}$)")
ax.plot(insample['e_std'] * -1., label=r"Expanding Std ($1\sigma_{S'}$)")
insample['mu_R'] = sigmaS * -zeta
ax.plot(insample['mu_R'], '--', label=r"$\mu_{R'}$ (lower bound)")
plt.ylabel(r"1-Day Returns $\rightarrow S'(t)$", fontsize=11)
ax.legend(loc=1)
ax.grid()
As you can notice, the lower bound for $\mu_{R'}$ is based on the latest value available (2022-04-30); however, this value does not change much over the lifetime of LUNA/BUSD. The pandas expanding() function employed here replaces a classical rolling window: its size grows in time with every new data row appended to the DataFrame.
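As a rough cross-check of the $\zeta \simeq 3.8$ read-out, the hypothetical zeta_factor sketch from the first part of this article gives a value in the same neighbourhood (remember it rests on my own independent-normal assumptions):

Q = muS / sigmaS                              # ~0.146 for the in-sample expanding estimates
print(round(Q, 3))
print(round(zeta_factor(Q, C_R=0.2), 2))      # roughly 4, versus the ~3.8 read off the chart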
Unfortunately, LUNA experienced its own “structural failure”, hitting a nearly 100% loss in May 2022. It was emotionally damaging for those who held the coin in their wallets.
The model for $R'(t)$ employed above can be improved further. There is no need to assume $R' \sim N(\mu_{R'}=-1, |C_{R'}\mu_{R'}|)$ as we did. Instead, the distribution of the most severe daily losses can be estimated on a periodic basis and modelled more accurately by applying Generalised Extreme Value (GEV) theory. If you are interested in the details and an exemplary use of it for financial time-series, read the Extreme VaR for Portfolio Managers article, where I explained the concept of GEV for protecting your holdings against very rare events.
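As a minimal, hedged sketch of that direction (not the full approach from the referenced article), one could fit a GEV distribution to block minima of the daily returns, e.g. the worst return in each calendar month, using scipy.stats.genextreme; the monthly blocking and the sign convention below are my own choices:

from scipy.stats import genextreme

# block minima: the worst daily return in each calendar month of the in-sample period
# (assumes the insample DataFrame with a 'ret' column and a DatetimeIndex, as built above)
monthly_worst = insample['ret'].resample('M').min().dropna()

# express losses as positive numbers so that the upper tail of the GEV is the "bad" tail
losses = -monthly_worst.values

# fit the GEV distribution; note scipy's shape parameter c corresponds to -xi in the usual notation
c, loc, scale = genextreme.fit(losses)
print(f"GEV fit: xi={-c:.3f}, mu={loc:.3f}, sigma={scale:.3f}")

# the block-worst loss level expected to be exceeded about once every 12 blocks (once a year)
q = genextreme.ppf(1 - 1/12, c, loc=loc, scale=scale)
print(f"estimated once-a-year worst daily loss: {q:.2%}")

# with only ~16 monthly blocks in this sample, treat the estimates as illustrative only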
References
Wirsching, P. H., Paez, T. L., Ortiz, K. (1995), Random Vibrations: Theory and Practice, Dover Publications
Marley, M. J. (1991), Time Variant Reliability Under Fatigue Degradation, Norwegian Institute of Technology
Explore Deeper
→ XRP-based Crypto Investment Portfolio Inspired by Ripple vs SEC Lawsuit
→ Probability of Black Swan Events at NYSE
→ Does It Make Sense to Use 1-Hour 1% VaR and ES for Bitcoin?
→ Recalibrating Expected Shortfall to Match Value-at-Risk for Discrete Distributions
→ Conditional Value-at-Risk for Normal and Student t Linear VaR Model