Appendix
In Risk Management Process, we discuss our methodology for determining whether a risk parameter s ∈ S leads to a lower VaR than our upper bound K. Although this is important, it is trivial to find risk parameters that satisfy this constraint, e.g. by setting the liquidation threshold to 0. Clearly we would not want to set the liquidation threshold so low, because that would render the protocol extremely capital inefficient. We must balance the protocol’s objectives of a high liquidation threshold and low liquidation bonus against the VaR constraint, which pushes for a low liquidation threshold and high liquidation bonus. Naturally, we can represent this as a mathematical program:

$$\max_{s \in S} \; U(s) \quad \text{subject to} \quad \mathrm{VaR}(s) \le K,$$

where $U(s)$ is the amount that the protocol values parameter configuration s, and $\mathrm{VaR}(s)$ is the value at risk under parameter configuration s. We use the previous definition of value at risk, and we define the utility function U here.
Definition A.1 (Protocol Utility Function). The protocol utility function, $U : S \to \mathbb{R}$, is given by

$$U(s) = \frac{\mathbb{E}[R(s)]}{p_{99}(s)},$$

where $R(s)$ denotes the protocol’s returns under parameter configuration s and $p_{99}(s)$ is the 99th percentile largest amount of losses for the protocol. This value is strictly positive.

This utility function is inspired by the Sharpe ratio, which is a metric used in finance to report a portfolio’s risk-adjusted returns. Here, our utility function aims to grow expected returns while keeping losses as small as possible. Unlike the Sharpe ratio, which uses the standard deviation of returns, we use the $p_{99}$ of protocol losses. Our reasoning is that the distribution of protocol losses is not known to us except through our simulations, and the $p_{99}$ of protocol losses is a more risk-sensitive value for fat-tailed distributions than the standard deviation.
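For concreteness, the estimator below computes this utility from simulation output. This is a minimal sketch under our reading of Definition A.1; the function name `protocol_utility` and the arrays `returns` and `losses` are our own, not identifiers from the protocol’s codebase.

```python
import numpy as np

def protocol_utility(returns: np.ndarray, losses: np.ndarray) -> float:
    """Estimate U(s) = E[R(s)] / p99(s) from simulation output.

    `returns` and `losses` each hold one value per simulation run under
    a fixed parameter configuration s. The 99th percentile of losses is
    assumed strictly positive, as in Definition A.1.
    """
    p99 = np.percentile(losses, 99)  # 99th percentile largest loss
    return float(np.mean(returns)) / p99
```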
Given the utility function U, we can now utilize numerical optimization techniques to optimize our risk parameter configuration s. In particular, since the parameter space S is only two-dimensional, we utilize a simple grid search: for each candidate in S we compute VaR, and whenever the current candidate’s VaR is less than K, we move on in search of parameters with higher utility. We repeat this process to create a small list of parameters, and we then pass these parameters to the on-chain simulation test.
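A minimal sketch of this loop, assuming hypothetical callables `compute_var(s)` and `utility(s)` that wrap the simulation engine and the utility function above (the two grid axes stand in for the liquidation threshold and liquidation bonus):

```python
import itertools

def grid_search(thresholds, bonuses, K, compute_var, utility):
    """Grid search over the 2-D risk parameter space S.

    Keeps only configurations satisfying the constraint VaR(s) <= K,
    then returns the feasible ones sorted by descending utility; the
    top few are passed to the on-chain simulation test.
    """
    feasible = []
    for s in itertools.product(thresholds, bonuses):
        if compute_var(s) <= K:  # the VaR constraint
            feasible.append((utility(s), s))
    feasible.sort(reverse=True)  # highest-utility candidates first
    return [s for _, s in feasible]
```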
Our current parameter search optimization is far from perfect, but it is sufficiently fast for us to pass risk parameters into the on-chain simulation engine to check for statistical significance. There are a number of ways that we may improve the parameter search process in the future: further speeding up the surrogate function calculation, introducing Bayesian optimization techniques to reduce the number of VaR calculations, representing VaR as a component of the utility function rather than as a constraint, and utilizing previous simulations to warm-start our parameter search.
Here we provide a statistical testing framework that can be used to achieve a confidence level on the probability of losses exceeding a bound. In particular, we aim to make a statistical test of a statement of the form

$$\Pr(\text{protocol losses} > K) < \bar{p}$$

for some pre-specified loss bound K and probability bound $\bar{p}$. Furthermore, let α be the significance level for our statistical test, and let $p = \Pr(\text{protocol losses} > K)$. We define our null hypothesis as follows:

$$H_0 : p \ge \bar{p}.$$

The aim of our statistical test is to utilize simulations of protocol losses to reject the null hypothesis with α significance level. Since the protocol losses generated by each simulation run are independent and identically distributed (IID), we can perform our statistical test as a binomial experiment.

Binomial Experiment Testing Algorithm.
We run our simulation n times to generate a vector of protocol losses, $(\ell_1, \ldots, \ell_n)$. We count the number of instances where the loss is greater than K; call this value m.
Suppose, to the contrary, that the null hypothesis is true, so that $p \ge \bar{p}$. Then the probability that we observe m or fewer instances where the protocol losses are greater than K is given by the binomial distribution CDF, F, with probability p:

$$F(m; n, p) = \sum_{i=0}^{m} \binom{n}{i} p^{i} (1-p)^{n-i}.$$
For two binomial probabilities, $p_1$ and $p_2$, we know that the binomial CDF $F(\cdot\,; n, p_1)$ is pointwise less than the binomial CDF $F(\cdot\,; n, p_2)$ if $p_1 > p_2$. We do not provide a proof for this statement here. Thus, we know that since $p \ge \bar{p}$, either (a) $p > \bar{p}$ and $F(\cdot\,; n, p) < F(\cdot\,; n, \bar{p})$, or (b) $p = \bar{p}$ and $F(\cdot\,; n, p) = F(\cdot\,; n, \bar{p})$. Thus, we know that $F(\cdot\,; n, p) \le F(\cdot\,; n, \bar{p})$ pointwise.
Therefore, we know that the probability that we observe m or fewer instances where the protocol losses are greater than K must be less than or equal to $F(m; n, \bar{p})$. We can calculate this value as

$$F(m; n, \bar{p}) = \sum_{i=0}^{m} \binom{n}{i}\, \bar{p}^{\,i} (1-\bar{p})^{\,n-i}.$$
Let $q = F(m; n, \bar{p})$. This is an upper bound on the probability that we observe m or fewer instances where the protocol losses are greater than K. Thus, if q < α, then we can reject the null hypothesis with α significance level. Otherwise, we fail to reject the null hypothesis.
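The test above is short enough to sketch directly. The helper below uses `scipy.stats.binom.cdf` for the binomial CDF; the function name `binomial_test` and its signature are our own:

```python
from scipy.stats import binom

def binomial_test(losses, K, p_bar, alpha=0.05):
    """Test H0: Pr(loss > K) >= p_bar at significance level alpha.

    Returns (q, reject), where q = F(m; n, p_bar) upper-bounds the
    probability of observing m or fewer exceedances under H0.
    """
    n = len(losses)
    m = sum(loss > K for loss in losses)  # exceedance count
    q = binom.cdf(m, n, p_bar)
    return q, q < alpha
```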
Example. Suppose we want to show with a 0.05 significance level (95% confidence level) that Pr(losses greater than $300,000) is less than 0.1%. We run n = 100,000 simulations, and we observe that losses are greater than $300,000 in m = 80 of the simulations. We calculate $q = F(80;\, 100000,\, 0.001) \approx 0.023$, which is less than our significance level of 0.05. Thus, we reject the null hypothesis with 95% confidence.
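Plugging the example’s numbers into the sketch above reproduces this conclusion (the value of q shown in the comment is approximate):

```python
>>> from scipy.stats import binom
>>> q = binom.cdf(80, 100_000, 0.001)  # m = 80, n = 100,000, p_bar = 0.001
>>> q < 0.05  # q ≈ 0.023, below the 0.05 significance level
True
```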