Risk Management Process
Chainrisk's Risk Management Methodologies have been co-developed with ETH Zurich, University of Alberta & IIT BHU.
The Risk Analysis
Target: Cost of Corruption > Profits
The 3 Measurable Risk Metrics
Quantified Risk: Dollar-Value Impact ($)
Breach Likelihood in the next 3 Mo, 6 Mo & 12 Mo
Risk Parameter Recommendations
Quantified Risk is categorised into three parts:
Value at Risk: conveys capital at risk due to insolvencies when markets are under duress (e.g., Black Thursday). The current VaR in the system is broken down by collateral type. Chainrisk computes VaR (based on a measure of protocol insolvency) at the 99th percentile of our simulation runs.
Liquidations at Risk: conveys capital at risk due to liquidations when markets are under duress (e.g., Black Thursday). The current LaR in the system is broken down by collateral type. Chainrisk computes LaR (based on a measure of protocol liquidations) at the 99th percentile of our simulation runs.
Borrowing Power: measures capital efficiency, representing potential upside for the protocol. Borrowing power represents the total available borrows based on collateral supplied to the protocol, calculated as supplies multiplied by the collateral factors of each collateral asset.
We aggregate this to a system level by taking a weighted sum of all the assets used as collateral.
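To make the aggregation concrete, the following sketch (with purely hypothetical asset names and figures, not Chainrisk's production code) computes borrowing power as the sum of supplies weighted by their collateral factors:

```python
# Minimal sketch of borrowing-power aggregation.
# Asset names and figures are hypothetical, for illustration only.

def borrowing_power(supplies_usd: dict, collateral_factors: dict) -> float:
    """Total available borrows: sum over assets of supply * collateral factor."""
    return sum(supplies_usd[a] * collateral_factors[a] for a in supplies_usd)

supplies_usd = {"ETH": 1_200_000.0, "WBTC": 800_000.0, "USDC": 500_000.0}
collateral_factors = {"ETH": 0.80, "WBTC": 0.70, "USDC": 0.85}

print(borrowing_power(supplies_usd, collateral_factors))  # 1945000.0
```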
Risk Parameter Testing Methodology
To test a set of risk parameters, we determine the protocol’s value at risk in a scenario where the protocol uses those risk parameters.
Definition (Value at Risk). Value at Risk (VaR) is the 99th percentile of the protocol losses that accrue due to underwater accounts over the course of 24 hours. We run simulations to estimate the VaR, and we only recommend parameters that keep VaR below some pre-specified upper bound K.
Estimating Value at Risk
To estimate the value at risk, we take the following steps.
Run a set of parameters, s, for 10,000 iterations of our simulation. Calculate the 99th percentile loss across those 10,000 iterations.
Run another 10,000 simulations with the same parameters, and calculate the 99th percentile loss of the 20,000 total iterations. If the difference between the 99th percentiles is less than some pre-specified noise bound, ε, then we say that the value-at-risk is the 99th percentile over all of the simulations run. Otherwise, we repeat this step until we hit ε convergence.
This approach can be understood as running simulations until we reach convergence on a value for the 99th percentile of losses. We base this on the approach taken by risk desks at top banks, which run simulations until a particular convergence bound is met. Alternatively, one could utilize a statistical testing framework to determine the probability of passing a particular VaR.
VaR Calculation Example.
Suppose we are calculating VaR using 10,000 simulations of bad-debt accrual over a 24-hour period for a certain parameter set s. Denote each batch of simulations as a round. The result of each simulation is the total bad debt accrued over the specified period, and these results are ranked in ascending order. The final VaR estimate for round i is then the 99th-percentile entry of the ordered list (the 9,900th result for the first round of 10,000 simulations); denote this R_i. We compute R_i starting at i = 0. For each subsequent round we compute R_i and check the convergence condition |R_{i−1} − R_i| < ε. If the convergence condition is met, we return R_i as our final VaR estimate for the parameter set s. We now provide more depth on the methodology used to simulate protocol losses.
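As a sketch of this convergence loop, the snippet below assumes a placeholder simulate_round(n) function that returns n simulated 24-hour bad-debt totals for the fixed parameter set s; the simulator itself is described in the next section:

```python
import numpy as np

def estimate_var(simulate_round, epsilon, batch_size=10_000, percentile=99.0):
    """Run rounds of simulations until the 99th-percentile loss converges.

    simulate_round(n) is a placeholder: it should return an array of n
    simulated 24-hour bad-debt totals for a fixed parameter set s.
    """
    losses = np.asarray(simulate_round(batch_size))
    prev = np.percentile(losses, percentile)           # R_0 over the first round
    while True:
        losses = np.concatenate([losses, simulate_round(batch_size)])
        curr = np.percentile(losses, percentile)       # R_i over all runs so far
        if abs(prev - curr) < epsilon:                 # |R_{i-1} - R_i| < epsilon
            return curr
        prev = curr
```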
Simulating Protocol Losses
When estimating the VaR above, we abstracted a “simulation” as some stochastic function L : S → {0} ∪ ℝ+ that simulates the protocol losses. We now describe in greater detail how we compute L. To compute L, we first compute a realization of price trajectories for each of the assets that can be supplied on Aave. We then utilize these price trajectories to estimate the on-chain DEX liquidity that would be present, given the price trajectories. Once the price and liquidity trajectories are determined, we utilize an agent-based model with lender-borrower agents and liquidator agents.
Assumptions
Look-forward period. We only look forward by a single day’s worth of blocks.
Price correlations. We assume that asset prices are correlated, and we use a historical window of 30 days to find this correlation.
Price process. We assume that the log-prices obey a GARCH(1,1) process on short timescales (i.e., minute-by-minute), and that they obey a normal distribution on longer timescales (i.e., hourly and longer). We have validated these assumptions visually by analyzing (partial) autocorrelation plots.
Exogenous prices. Price trajectories are not affected by liquidations or other actions taken by agents in the simulation. That said, the Aave protocol existed at the time our historical price trajectories were sampled, so our price-trajectory distributions should account for price-liquidation feedback loops, even though we do not explicitly model this relationship.
Static liquidity. We assume that over the next day, decentralized exchange liquidity will remain relatively constant.
At most one liquidation per account per block. We assume that liquidators do not execute more than one liquidation on each account per block. Once a liquidation occurs on an account, no other liquidator can liquidate that account within the same block.
Only DEX liquidity. We assume that liquidators only utilize on-chain DEX liquidity to perform liquidations. This assumption is supported by empirical on-chain evidence, where we see that many liquidations utilize on-chain spot liquidity to execute their trades. Although this assumption will become less valid as on-chain lending markets become more developed, it is currently accurate, and it errs on the side of lower-risk parameter recommendations.
One-of-n non-collusion. We assume that there is at least one liquidator that does not collude with other liquidators and that maximizes their profits from liquidations. Specifically, there would not be scenarios where all liquidators decide to wait to liquidate an account’s collateral in the hope that the reward will be larger in the next block. Given the accessibility and low profit margins that arise empirically in on-chain lending markets, we believe this assumption is fair.
Responsive liquidations. If an account does not have a liquidation within c blocks of its health going below one, then its loans should be marked as bad debt. We set c = 2 blocks in our analysis. This assumption errs on the side of lower-risk parameter recommendations.
Finite time-to-liquidation. An account’s collateral will be liquidated by the protocol c_protocol blocks after the account’s loans are marked as bad debt, for a total of c + c_protocol blocks after the account’s health goes below one. This assumption comes from the idea that the protocol will clear its bad debt at some point in the future.
Not statistically testing black-swan events. We do not attempt to reach statistical significance on the probability of black-swan events, since black-swan events are categorically immune to statistical testing. Instead, we perform a separate “stressed VaR” methodology, which we will describe in greater detail in a subsequent paper, that utilizes the Chainrisk r-EVM to model example black-swan events. Modeling events like these is useful for qualitatively determining how bad the fallout would be for protocols in the case of spectacular events, such as a large stablecoin depeg or a mass exodus of on-chain liquidity.
Computing price trajectories
We use discrete-time random-walk stochastic processes to generate asset price trajectories.
Geometric Brownian Motion.
The canonical model for generating price trajectories is geometric Brownian motion (GBM). In a simple GBM model, the price S changes according to the stochastic differential equation
$$dS_t = \mu S_t\,dt + \sigma S_t\,dW_t,$$
where W_t is a Brownian motion, σ is a volatility scaling parameter, and µ is a forcing parameter. This stochastic differential equation can be integrated with Ito’s formula to obtain
$$S_t = S_0 \exp\!\left(\left(\mu - \tfrac{\sigma^2}{2}\right) t + \sigma W_t\right).$$
This analytic solution makes it possible to sample a price trajectory’s outcome without simulating each time step.
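As a sketch of how this closed-form solution can be used, the snippet below samples terminal prices directly; the parameter values are illustrative, not calibrated:

```python
import numpy as np

def sample_gbm_terminal(s0, mu, sigma, t, n_samples, rng=None):
    """Sample S_t from the closed-form GBM solution
    S_t = S_0 * exp((mu - sigma^2 / 2) * t + sigma * W_t), with W_t ~ N(0, t)."""
    rng = rng or np.random.default_rng()
    w_t = rng.normal(0.0, np.sqrt(t), size=n_samples)
    return s0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * w_t)

# Illustrative: 10,000 one-day-ahead prices for an asset starting at $2,000.
terminal_prices = sample_gbm_terminal(s0=2_000.0, mu=0.0, sigma=0.05, t=1.0, n_samples=10_000)
```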
Issues with GBM. Although GBM is a popular model in the finance literature, it does not hold up in practice at short time intervals. We often see erratic price-jump behavior at short timescales, along with short bursts of high or low volatility, which are not reflected by the constant-σ GBM model. We also see moments when new information leads to a short-term jump or drop in returns, which is likewise not captured by the constant-σ GBM model. To correct for this, we forego GBM modeling and instead utilize variable-volatility price trajectories. For clear evidence of the non-normality of this data, see the distribution of price trajectories in figure 2.
We see that the distribution of returns contains outliers that are many-standard-deviation events. To model returns as a GBM process, we would either need to ignore these long-tail events entirely by using the empirical σ, or make them possible by inflating σ, which would overestimate the volatility of returns in the majority of cases. Clearly, a GBM with constant σ is not the right model for returns behavior, and this is of particular importance because long-tail returns have a major impact on the health of Aave accounts. Not only is the returns distribution fat-tailed, but its variance is also autoregressive.
Figure 3 shows a plot of the minutely log returns over a period of more than one month, during which there were a number of high-volatility events. In some of these periods, the median absolute return over a 100-minute window is elevated as well. This demonstrates that a minute with a large absolute return is likely to occur near other high-absolute-return minutes.
If we assume that the mean return is approximately zero, then the returns are simply the residuals of a time-series process with mean 0, and the variance of these residuals demonstrates autoregressive tendencies.
Modelling Volatility with GARCH. We can extend our GBM model to track a more realistic volatility by modeling the autoregressive tendency of the variance of returns. Looking at historical data, we find that a GARCH(1,1) model of the volatilities fits best. From here, we can fit a GARCH model to historical data to find the baseline volatility (ω), the ARCH parameter (α), and the GARCH parameter (β).
We then compute the following (assuming that the drift, µ, is equal to zero):
$$\sigma_t^2 = \omega + \alpha\, r_{t-1}^2 + \beta\, \sigma_{t-1}^2, \qquad r_t = \log S_t - \log S_{t-1} = \sigma_t\, \epsilon_t.$$
The term ε_t here is a white noise term with mean 0, and it is commonly set to ε_t ∼ N(0, 1). This new process is quite similar to the GBM model, except that it is discrete, it assumes zero drift, and it has a time-varying volatility parameter σ_t. With this process, we are able to generate price paths for assets in the same way that we could for the GBM model.
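A minimal sketch of generating one such zero-drift price path is shown below, assuming ω, α, and β have already been fitted; the parameter values used here are illustrative only:

```python
import numpy as np

def simulate_garch_path(s0, omega, alpha, beta, n_steps, rng=None):
    """Simulate a zero-drift price path with GARCH(1,1) volatility:
        sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2
        r_t       = sigma_t * z_t,  z_t ~ N(0, 1)
        S_t       = S_{t-1} * exp(r_t)
    """
    rng = rng or np.random.default_rng()
    var = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    prices, r_prev = [s0], 0.0
    for _ in range(n_steps):
        var = omega + alpha * r_prev**2 + beta * var
        r_prev = np.sqrt(var) * rng.normal()
        prices.append(prices[-1] * np.exp(r_prev))
    return np.array(prices)

# One minute-by-minute path over a day (1,440 minutes); parameters are illustrative, not fitted.
path = simulate_garch_path(s0=2_000.0, omega=1e-6, alpha=0.08, beta=0.90, n_steps=1_440)
```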
Correlated GARCH. Although our GARCH model improves upon the GBM model, we do not yet capture the fact that returns are correlated. For GBM models, this is typically handled by requiring that, for any two assets i and j, their Wiener process terms W_t^i and W_t^j satisfy dW_t^i dW_t^j = ρ_ij dt, where ρ_ij is the Pearson correlation coefficient of the log returns. Instead of using the independent-returns white noise distribution ε_t ∼ N(0, 1), we sample all of the white noises from a multivariate normal distribution with mean 0 and covariance matrix Σ:
$$\boldsymbol{\epsilon}_t \sim \mathcal{N}(\mathbf{0}, \Sigma).$$
In the case where assets’ returns have no linear correlation, our formulation is identical to the white noise distribution for independent returns. In practice, many cryptoassets are correlated. See figure 4 for a comparison of the multivariate normal distribution that we sample for ETH and BTC returns, as compared to the returns that have arisen historically. See also figure 5 for an example of a single day’s change in asset prices.
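As a sketch of this sampling step, the snippet below estimates the covariance matrix from a window of historical log returns (e.g., the 30-day window noted in our assumptions) and draws jointly distributed shocks that replace the independent draws in the per-asset GARCH updates; the exact wiring into the simulator is a simplification here:

```python
import numpy as np

def correlated_shocks(historical_log_returns, n_steps, rng=None):
    """Draw jointly sampled white-noise terms for several assets from a
    multivariate normal with mean 0 and the covariance of historical log returns.

    historical_log_returns: array of shape (T, n_assets), e.g. a 30-day window.
    Returns an array of shape (n_steps, n_assets); each row replaces the
    independent N(0, 1) draws in the per-asset GARCH updates."""
    rng = rng or np.random.default_rng()
    sigma = np.cov(historical_log_returns, rowvar=False)
    return rng.multivariate_normal(np.zeros(sigma.shape[0]), sigma, size=n_steps)
```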
Estimating on-chain liquidity
Calculating the profitability of a liquidation over a randomly generated price trajectory requires estimating the slippage on selling the liquidated collateral. One could attempt to use an analytical model based on historical swaps. However, utilizing assumption (4) and the capability of the Chainrisk platform to interact with on-chain liquidity pools, we instead rely on the agents’ operation, as well as the AMMs’ design, to achieve a better estimate of on-chain liquidity over the course of the simulation, which allows us to calculate slippage more accurately. Liquidator agents close their trades in the liquidity pools along the liquidation path as they would in a real-world scenario, arbitrageurs rebalance the pools back to the market price, and liquidity providers pull liquidity at times of high volatility.
Ideally, we could initialize each pool with the current on-chain liquidity and simply simulate the agents. However, since optimal arbitrage is computationally intensive, we use DEX aggregators, as well as historical liquidations data, to pre-map the possible liquidation routes for each pair of Aave-listed assets. This allows us to avoid the routing problem and limit the number of pools we have to manage. We then initialize each pool with the time-weighted average of its last 14 daily on-chain liquidity snapshots. Once the route to liquidate a pair of assets is known and the pools are initialized, liquidator agents execute swaps across the pools using the relevant AMM swap functions.
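To make the slippage estimation concrete, here is a sketch of a liquidator’s swap against a single Uniswap-v2-style constant-product pool; the 0.3% fee, the reserves, and the single-pool route are assumptions of the example, whereas the platform routes across pre-mapped multi-pool paths:

```python
def constant_product_swap(amount_in, reserve_in, reserve_out, fee=0.003):
    """Output of swapping `amount_in` against an x*y=k pool (Uniswap-v2 style)."""
    amount_in_after_fee = amount_in * (1.0 - fee)
    return reserve_out * amount_in_after_fee / (reserve_in + amount_in_after_fee)

def slippage(amount_in, reserve_in, reserve_out, fee=0.003):
    """Relative shortfall of the execution price versus the pool's spot price."""
    spot_out = amount_in * reserve_out / reserve_in
    actual_out = constant_product_swap(amount_in, reserve_in, reserve_out, fee)
    return 1.0 - actual_out / spot_out

# Selling 500 ETH of seized collateral into a pool with 10,000 ETH / 20,000,000 USDC.
print(slippage(500.0, 10_000.0, 20_000_000.0))   # ~0.05, i.e. about 5% including the fee
```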
Borrower agents
We initialize borrowers’ portfolios based on historical on-chain data. When examining new risk parameters, such as increasing an asset’s liquidation threshold by 5%, we adjust borrower agents back to their original health by increasing their borrows against that asset by 5%. For a reduction of the asset’s liquidation threshold, we would simply withdraw 5% of the collateral asset. Borrower agents in our model are passive: once adjusted to the examined parameter configuration, they are not adjusted further throughout the course of the simulation. Borrowers’ portfolios can only change due to a liquidation event.
We examine only significant borrowers who are at risk (health factor less than 2) and eliminate those with a portfolio value under $1,000.
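A sketch of this adjustment is shown below; the flat, USD-denominated position layout and the per-asset keying of borrows are simplifications for illustration:

```python
def adjust_borrower(position, asset, lt_change):
    """Rescale a borrower position so its health is unchanged after changing
    `asset`'s liquidation threshold by `lt_change` (e.g. +0.05 for +5%).

    position: {"collateral": {asset: usd}, "borrows": {asset: usd}} (simplified layout).
    """
    if lt_change >= 0:
        # Higher threshold: increase the borrows taken against this collateral.
        position["borrows"][asset] *= 1.0 + lt_change
    else:
        # Lower threshold: withdraw the corresponding share of the collateral.
        position["collateral"][asset] *= 1.0 + lt_change
    return position
```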
Liquidator agents
Liquidator agents are simulated as rational agents that interact exclusively with on-chain liquidity sources. At the beginning of each block, each liquidator agent examines each below-1-health account, finds the most profitable liquidation on that account, and executes that liquidation by selling the collateral on-chain and paying back the loan with the proceeds of the on-chain sale. The liquidator will only conduct the liquidation if it is profitable to execute via on-chain liquidity sources.
For the liquidator agents, we compute the optimal liquidations via a single-peak search over the space of liquidation sizes. The proportion of liquidated collateral lies in [0, 1], and we can search this space for each borrowed asset to find the optimal proportion of collateral to liquidate. For m collateral assets and n borrowed assets, our algorithm runs this search over each of the m × n asset pairs to find the collateral and borrowed assets, along with the optimal proportion of liquidated collateral.
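The single-peak search can be implemented as a ternary search over the liquidation proportion; in the sketch below, profit(x) stands in for the (assumed single-peaked) net profit of liquidating a fraction x of the chosen collateral after the on-chain sale:

```python
def optimal_liquidation_fraction(profit, lo=0.0, hi=1.0, tol=1e-4):
    """Ternary search over a single-peaked profit(fraction) function to find the
    proportion of collateral to liquidate that maximizes the liquidator's profit."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if profit(m1) < profit(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2.0

# For m collateral and n borrowed assets, the agent runs this search for each
# (collateral, borrow) pair and keeps the most profitable combination.
```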
We assume that only one liquidation happens on an account in each block. This simplifies and speeds up the simulation, since it allows us to ignore intra-block competitive liquidator behaviors. Furthermore, this assumption leads to a systematic overestimate in our simulation’s estimates of protocol losses, which leads to more conservative parameter estimates.
The assumption that only on-chain liquidity sources are used leads to a systematic underestimate of the number of liquidations that would occur in real life, and thus leads our simulations to overestimate the true value at risk. However, we have verified with on-chain data that this assumption is quite accurate for historical liquidations, and any error introduced by this assumption will only lead to more conservative risk parameter recommendations.
Protocol losses
Calculating an appropriate metric to represent protocol losses is as much of an art as it is a science. There does not exist a failsafe mechanism that caps the downside that the protocol may face on an underwater position that is not liquidatable. It requires a governance vote to clear out bad debt from the protocol, and there is no guarantee that these governance votes reach an actionable conclusion in a finite period of time. Thus, there is no protocol-enforced upper bound on the losses that may be incurred by a bad position, other than marking that position’s collateral to zero.
With that said, we model this uncertainty around flushing out bad debt using a time-delay parameter k. When a position goes underwater and is not liquidated for 2 consecutive blocks, we say that the protocol recognizes the position as bad debt. The protocol then has a delay of k blocks after recognizing the position as bad debt, at which point it liquidates the position at market using on-chain liquidity. Specifically, the protocol sells all of the position’s collateral, pays the proceeds toward the loan, and then covers the remainder of the loan that the sold collateral does not cover. This amount (the repaid loan value minus the sold collateral value) is the protocol’s loss on the position.
To find the protocol’s total loss in a period, we sum up the loss that it incurs from each of its positions in the period. The great majority of positions will not dip below 1 health, and thus will not lead to a loss for the protocol.
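A sketch of the per-position loss computation is shown below, where sell_on_chain is a placeholder for the simulated on-chain sale of the position’s remaining collateral:

```python
def protocol_loss(collateral_usd, debt_usd, sell_on_chain):
    """Loss the protocol realizes when it flushes a bad-debt position:
    sell all remaining collateral on-chain, repay the loan with the proceeds,
    and absorb whatever part of the loan the proceeds do not cover.

    sell_on_chain(collateral_usd) returns the realized proceeds after slippage."""
    proceeds = sell_on_chain(collateral_usd)
    return max(debt_usd - proceeds, 0.0)

# The protocol's total loss over a period is the sum of this quantity across
# all positions marked as bad debt during the period.
```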