
OSP-5 Control Variables

Techniques

Basic Idea
Suppose we want to estimate $\theta = \mathbb{E}[Y]$. If we have another variable $X$ with known expectation $\mathbb{E}[X] = \mu_X$, then
$$H(b) = Y + b\,(X - \mu_X), \qquad \mathbb{E}[H(b)] = \theta \text{ for any } b,$$
and we can choose $b$ such that $\mathrm{Var}(H(b)) < \mathrm{Var}(Y)$. If so, since $\mu_X$ is already known, $H(b)$ can be simulated and $\theta$ computed.
Note that
$$\mathrm{Var}(H(b)) = \mathrm{Var}(Y) + b^2\,\mathrm{Var}(X) + 2b\,\mathrm{Cov}(X, Y).$$
As long as $\mathrm{Cov}(X, Y) \neq 0$, it is possible to find $b$ such that $\mathrm{Var}(H(b)) < \mathrm{Var}(Y)$.
In the case of multivariate $X \in \mathbb{R}^d$, we take $H(b) = Y + b^\top (X - \mu_X)$ with
$$\mathrm{Var}(H(b)) = \mathrm{Var}(Y) + b^\top \Sigma_X\, b + 2\, b^\top \Sigma_{XY},$$
and thus the minimizer is
$$b^* = -\Sigma_X^{-1} \Sigma_{XY},$$
and
$$\mathrm{Var}(H(b^*)) = \mathrm{Var}(Y) - \Sigma_{XY}^\top \Sigma_X^{-1} \Sigma_{XY}.$$
For the uni-variate case,
$$b^* = -\frac{\mathrm{Cov}(X, Y)}{\mathrm{Var}(X)}, \qquad \mathrm{Var}(H(b^*)) = \mathrm{Var}(Y)\left(1 - \rho_{XY}^2\right).$$
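As a quick numerical check of the uni-variate identity, the following sketch uses an assumed toy pair $Y = e^Z$, $X = Z$ with $Z$ standard normal, so $\mu_X = 0$ is known; the identity $\mathrm{Var}(H(b^*)) = \mathrm{Var}(Y)(1 - \rho_{XY}^2)$ holds exactly for the sample moments:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
X = rng.normal(size=n)                    # control with known mean mu_X = 0
Y = np.exp(X)                             # target variable, correlated with X

var_X = np.var(X)
cov_XY = np.cov(X, Y, ddof=0)[0, 1]
b_star = -cov_XY / var_X                  # optimal coefficient (sample version)

H = Y + b_star * (X - 0.0)
rho2 = cov_XY ** 2 / (var_X * np.var(Y))
print(np.var(H) / np.var(Y), 1 - rho2)   # the two numbers agree
```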
Sample Splitting and Cross-fitting
Note that the optimal
$$b^* = -\Sigma_X^{-1} \Sigma_{XY}$$
is unknown. We can estimate $b^*$ from an independent sample of $(X, Y)$.
We can use the 2-splitting (cross-fitting) method:
  • Generate $2n$ independent observations $(X_i, Y_i)$, $i = 1, \dots, 2n$
  • Split this sample into two parts, each with $n$ observations. Call the first split $I_1$ and the second split $I_2$.
  • Find $\hat b_1$ (the plug-in estimate of $b^*$) using the observations in $I_1$
    • and store $\hat\theta_2 = \frac{1}{n} \sum_{i \in I_2} \left[Y_i + \hat b_1 (X_i - \mu_X)\right]$
  • Then find $\hat b_2$ in the same way using the observations in $I_2$
    • and store $\hat\theta_1 = \frac{1}{n} \sum_{i \in I_1} \left[Y_i + \hat b_2 (X_i - \mu_X)\right]$
  • Return $\hat\theta = \frac{\hat\theta_1 + \hat\theta_2}{2}$
Note that because $\hat b_1$ is independent of the observations in $I_2$, we have
$$\mathbb{E}\!\left[\hat\theta_2\right] = \mathbb{E}\!\left[Y_i + \hat b_1 (X_i - \mu_X)\right] = \theta,$$
and the same for $\hat\theta_1$. Therefore, $\hat\theta$ is unbiased for $\theta$.
Because $\hat b_1, \hat b_2$ are close in probability to $b^*$, the asymptotic variance of $\hat\theta$ can be estimated by $\hat\sigma^2 / (2n)$, where
$$\hat\sigma^2 = \frac{1}{2n - 1} \sum_{j=1}^{2} \sum_{i \in I_j} \left(H_i - \bar H_{I_j}\right)^2, \qquad H_i = Y_i + \hat b_{3-j}\,(X_i - \mu_X) \text{ for } i \in I_j.$$
One template implementation using the cross-fitting method is as below:

```python
import numpy as np

def control_variable_cross_fitting(Y, X, E_X):
    """Estimate E[Y], with X as control variable. E[X] = E_X is known.

    Here n = len(X) is the total sample size (2n in the notation above).
    """
    n = len(X)
    I_1, I_2 = (X[:n//2], Y[:n//2]), (X[n//2:], Y[n//2:])
    splits = [(I_1, I_2), (I_2, I_1)]
    split_means = []
    split_ssq = []
    for (X_1, Y_1), (X_2, Y_2) in splits:
        # compute b_star on one split
        b_star = -np.sum((X_1 - E_X) * (Y_1 - np.mean(Y_1))) / np.sum((X_1 - E_X) ** 2)
        # estimate theta on the other split
        H_i = Y_2 + b_star * (X_2 - E_X)
        H_i_bar = np.mean(H_i)
        ssq = np.sum((H_i - H_i_bar) ** 2)
        split_means.append(H_i_bar)
        split_ssq.append(ssq)
    est_mean = np.mean(split_means)
    est_var = np.sum(split_ssq) / (n * (n - 1))
    return est_mean, est_var
```
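As a quick sanity check of the template, the snippet below runs it on an assumed toy problem with a known answer, $\mathbb{E}[U^2] = 1/3$ for $U \sim \mathrm{Uniform}(0,1)$ with control $X = U$, $\mathbb{E}[X] = 1/2$ (the function is repeated here so the snippet is self-contained):

```python
import numpy as np

def control_variable_cross_fitting(Y, X, E_X):
    # same template as above, repeated so this snippet runs standalone
    n = len(X)
    I_1, I_2 = (X[:n//2], Y[:n//2]), (X[n//2:], Y[n//2:])
    split_means, split_ssq = [], []
    for (X_1, Y_1), (X_2, Y_2) in [(I_1, I_2), (I_2, I_1)]:
        b_star = -np.sum((X_1 - E_X) * (Y_1 - np.mean(Y_1))) / np.sum((X_1 - E_X) ** 2)
        H = Y_2 + b_star * (X_2 - E_X)
        split_means.append(np.mean(H))
        split_ssq.append(np.sum((H - np.mean(H)) ** 2))
    return np.mean(split_means), np.sum(split_ssq) / (n * (n - 1))

rng = np.random.default_rng(7)
U = rng.uniform(size=10_000)
est_mean, est_var = control_variable_cross_fitting(U ** 2, U, 0.5)
print(est_mean)   # close to E[U^2] = 1/3
```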
Leave-one-out version
Instead of splitting into two parts (which introduces extra randomness from the choice of split), we can use the leave-one-out method, based on the fact that
$$\mathbb{E}\!\left[Y_i + \hat b^{(-i)} (X_i - \mu_X)\right] = \theta$$
for any $\hat b^{(-i)}$ computed using $\{(X_j, Y_j)\}_{j \neq i}$.
This yields the estimator
$$\hat\theta = \frac{1}{n} \sum_{i=1}^{n} \left[Y_i + \hat b^{(-i)} (X_i - \mu_X)\right],$$
where
$$\hat b^{(-i)} = -\left(\hat\Sigma_X^{(-i)}\right)^{-1} \hat\Sigma_{XY}^{(-i)}$$
is the plug-in estimate computed from all observations except the $i$-th.
Note that here we do not recompute $\hat b^{(-i)}$ from scratch for each $i$, which would be costly; instead we use correction terms in the denominator (based on the Sherman–Morrison formula; proof omitted).
The variance estimator is $\hat\sigma^2 / n$, where
$$\hat\sigma^2 = \frac{1}{n - 1} \sum_{i=1}^{n} \left(H_i - \bar H\right)^2, \qquad H_i = Y_i + \hat b^{(-i)} (X_i - \mu_X).$$
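A uni-variate sketch of the leave-one-out estimator, with one simplifying assumption: $Y$ is centered at the full-sample mean rather than the leave-one-out mean (an $O(1/n)$ difference), so each $\hat b^{(-i)}$ comes from rank-one downdates of two scalar sums rather than a Sherman–Morrison matrix update:

```python
import numpy as np

def control_variable_loo(Y, X, E_X):
    """Leave-one-out control-variate estimate of E[Y]; E[X] = E_X is known."""
    Y = np.asarray(Y, dtype=float)
    Xc = np.asarray(X, dtype=float) - E_X
    n = len(Y)
    Yc = Y - Y.mean()            # simplification: full-sample centering of Y
    S_xy = np.sum(Xc * Yc)       # full-sample sums, downdated per observation
    S_xx = np.sum(Xc ** 2)
    # b computed without observation i, via rank-one downdates (no per-i refit)
    b_loo = -(S_xy - Xc * Yc) / (S_xx - Xc ** 2)
    H = Y + b_loo * Xc
    est_mean = H.mean()
    est_var = np.sum((H - est_mean) ** 2) / (n * (n - 1))
    return est_mean, est_var

rng = np.random.default_rng(5)
U = rng.uniform(size=5_000)
est_mean, est_var = control_variable_loo(np.exp(U), U, 0.5)
print(est_mean)                  # close to E[e^U] = e - 1
```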

Examples

Example (0)
Suppose we want to estimate $\theta = \mathbb{E}[e^U]$ where $U \sim \mathrm{Uniform}(0, 1)$.
We can use $X = 1 + U$ as a control variable with known expectation of $3/2$.
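A minimal standalone sketch for this example; the concrete instance (target $\mathbb{E}[e^U] = e - 1$ with $U \sim \mathrm{Uniform}(0,1)$, control $X = 1 + U$) is an assumption consistent with the stated expectation $3/2$, and the simple plug-in coefficient is used for brevity instead of cross-fitting:

```python
import numpy as np

rng = np.random.default_rng(3)
U = rng.uniform(size=10_000)
Y = np.exp(U)                   # target: E[Y] = e - 1
X = 1 + U                       # assumed control with known E[X] = 3/2

b = -np.cov(X, Y, ddof=0)[0, 1] / np.var(X)   # plug-in optimal coefficient
H = Y + b * (X - 1.5)
print(H.mean(), np.var(H) / np.var(Y))        # estimate near e - 1; ratio well below 1
```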
Example (1)
Consider European call option pricing, where the price is given by
$$C = e^{-rT}\, \mathbb{E}\!\left[(S_T - K)^+\right].$$
Under the risk-neutral measure, $(e^{-rt} S_t)_{t \ge 0}$ forms a martingale, which implies
$$\mathbb{E}[S_T] = S_0\, e^{rT}.$$
Hence, $S_T$ is a random variable with known mean, and it is reasonable to expect that $S_T$ and $(S_T - K)^+$ have a non-zero covariance.
For example,

```python
import numpy as np

rng = np.random.default_rng()

S_0 = 50
K = 60
r = 0.05
sigma = 0.3
T = 1
n = 10000

f1 = (r - 0.5 * sigma ** 2) * T
f2 = sigma * np.sqrt(T)
z = rng.normal(size=n)
S_T = S_0 * np.exp(f1 + f2 * z)
pv = np.exp(-r * T)
C = pv * np.maximum(S_T - K, 0)  # discounted payoff
E_X = S_0 * np.exp(r * T)  # known E[X]
est_mean, est_var = control_variable_cross_fitting(C, S_T, E_X)
```
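To see the benefit, here is a standalone comparison of the naive estimator against the control-variate one, re-simulating with the same parameter values as above (plug-in coefficient used for brevity instead of the cross-fitting template):

```python
import numpy as np

rng = np.random.default_rng(4)
S_0, K, r, sigma, T, n = 50, 60, 0.05, 0.3, 1, 10_000

z = rng.normal(size=n)
S_T = S_0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
C = np.exp(-r * T) * np.maximum(S_T - K, 0)   # discounted payoff samples
E_X = S_0 * np.exp(r * T)                      # known E[S_T]

b = -np.cov(S_T, C, ddof=0)[0, 1] / np.var(S_T)   # plug-in coefficient
H = C + b * (S_T - E_X)
print(C.mean(), H.mean())        # both estimate the option price
print(np.var(H) / np.var(C))     # variance ratio below 1
```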
Example (2)
An Asian arithmetic average call option has a payoff
$$\left(\frac{1}{m} \sum_{i=1}^{m} S_{t_i} - K\right)^+,$$
whose price does not have a closed-form solution under the BS model.
A geometric average Asian call option, however, has a closed-form price under the BS model. Its payoff is
$$\left(\Big(\prod_{i=1}^{m} S_{t_i}\Big)^{1/m} - K\right)^+,$$
thus we can use its discounted payoff (or just the geometric average $\big(\prod_{i=1}^{m} S_{t_i}\big)^{1/m}$) as a control variable for the arithmetic average option.
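A standalone sketch using the geometric average itself as the control. Assumptions: BS dynamics with $m$ equally spaced monitoring dates $t_i = iT/m$, parameter values invented for illustration, and the plug-in coefficient for brevity. Under these assumptions $\log G$ is normal, which gives the known expectation $\mathbb{E}[G]$ in closed form:

```python
import numpy as np

rng = np.random.default_rng(2)
S_0, K, r, sigma, T = 50, 50, 0.05, 0.3, 1.0   # illustration values
m, n = 12, 20_000                               # monitoring dates, sample paths

dt = T / m
z = rng.normal(size=(n, m))
log_S = np.log(S_0) + np.cumsum((r - 0.5 * sigma ** 2) * dt
                                + sigma * np.sqrt(dt) * z, axis=1)
S = np.exp(log_S)                               # paths at the m monitoring dates

A = S.mean(axis=1)                              # arithmetic average
G = np.exp(log_S.mean(axis=1))                  # geometric average

# log G is normal with the mean and variance below, giving E[G] in closed form
a = (r - 0.5 * sigma ** 2) * T * (m + 1) / (2 * m)
v = sigma ** 2 * T * (m + 1) * (2 * m + 1) / (6 * m ** 2)
E_G = S_0 * np.exp(a + 0.5 * v)

Y = np.exp(-r * T) * np.maximum(A - K, 0)       # discounted arithmetic Asian payoff
b = -np.cov(G, Y, ddof=0)[0, 1] / np.var(G)     # plug-in coefficient
H = Y + b * (G - E_G)
print(H.mean(), np.var(H) / np.var(Y))          # price estimate and variance ratio
```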
