
OSP-7 Conditional Monte Carlo

Techniques

Basic idea:
$$\Bbb{E}[Y]=\Bbb{E}\big[\Bbb{E}[Y|X]\big]\quad\text{and}\quad Var(Y)\ge Var\big(\Bbb{E}[Y|X]\big)$$
The inequality follows from the law of total variance, $Var(Y)=\Bbb{E}[Var(Y|X)]+Var(\Bbb{E}[Y|X])$. Hence,
$$\hat{\theta}=\frac{1}{n}\sum_{i=1}^n\mu(X_i)\quad\text{where}\quad\mu(x)=\Bbb{E}[Y|X=x]$$
has no more variance than the standard Monte Carlo estimator $n^{-1}\sum_{i=1}^n Y_i$ (and strictly less unless $Y$ is a function of $X$).
In other words, part of the expectation calculation can help reduce the variance of the simulation.
Remarks:
  • For any choice of the conditioning variable $X$, the conditional Monte Carlo method does not increase the variance. A good choice of $X$, however, is one for which $\Bbb{E}[Y|X]$ is highly correlated with $Y$
  • We can find $\Bbb{E}[Y|X]$ exactly in analytical form only when we assume a "nice" model for the underlying stochastic process
  • For several classes of SDEs, such conditional expectations are readily computable

Examples

Example (1): Rare event probability
Suppose we want to estimate $\theta=\Bbb{P}(S_T>u)$ for some $u$. This arises in the case of a digital option. For example, $S_T=\exp\big(\sum_{i=1}^T Y_i\big)$ for some random variables $Y_1,\dots,Y_T$. Hence
$$\theta=\Bbb{P}\Big(\sum_{i=1}^T Y_i>\log(u)\Big)$$
Denote $L_T=\sum_{i=1}^T Y_i$ and $M_T=\max_{1\le i\le T}Y_i$. We can then condition on which summand attains the maximum (the maximum is attained at a unique index almost surely when the $Y_i$ are continuous):
$$\bold{1}\{L_T>\log(u)\}=\sum_{j=1}^T\bold{1}\{L_T>\log(u),\,M_T=Y_j\}$$
Therefore, assuming the $Y_i$ are i.i.d. (so the $T$ events are exchangeable),
$$\begin{aligned} \theta &=\Bbb{E}\big[\bold{1}\{L_T>\log(u)\}\big]\\ &=\Bbb{P}\big(L_T>\log(u)\big)\\ &=\sum_{j=1}^T\Bbb{P}\big(L_T>\log(u),\,M_T=Y_j\big)\\ &=T\cdot\Bbb{P}\Big(L_{T-1}+Y_T>\log(u),\ \max_{1\le j\le T-1}Y_j<Y_T\Big)\\ &=T\cdot\Bbb{P}\Big(Y_T>\log(u)-L_{T-1},\ \max_{1\le j\le T-1}Y_j<Y_T\Big)\\ &=T\cdot\Bbb{P}\Big(Y_T>\max\Big\{\log(u)-L_{T-1},\ \max_{1\le j\le T-1}Y_j\Big\}\Big)\\ &=T\cdot\Bbb{E}\Big[\Bbb{P}\Big(Y_T>\max\Big\{\log(u)-L_{T-1},\ \max_{1\le j\le T-1}Y_j\Big\}\ \Big|\ Y_1,\dots,Y_{T-1}\Big)\Big] \end{aligned}$$
Denote by $\bar{F}(t)=\Bbb{P}(Y>t)$ the complementary CDF of the common distribution $F(t)=\Bbb{P}(Y\le t)$; then
$$\theta=T\cdot\Bbb{E}\Big[\bar{F}\Big(\max\Big\{\log(u)-L_{T-1},\ \max_{1\le j\le T-1}Y_j\Big\}\Big)\Big]$$
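The final formula above translates directly into a simulation: draw only $Y_1,\dots,Y_{T-1}$ and evaluate $\bar{F}$ at the required maximum. A minimal sketch, assuming the $Y_i$ are i.i.d. standard normal and an illustrative threshold $\log(u)=8$:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, T = 100_000, 10
log_u = 8.0  # threshold log(u); P(sum of 10 N(0,1) > 8) is a rare event

y = rng.standard_normal((n, T))
L_prev = y[:, :-1].sum(axis=1)   # L_{T-1} per path
M_prev = y[:, :-1].max(axis=1)   # max_{1<=j<=T-1} Y_j per path

# Conditional estimator: theta = T * E[ Fbar(max{log(u) - L_{T-1}, M_{T-1}}) ],
# where Fbar is the standard normal survival function.
fbar = norm.sf(np.maximum(log_u - L_prev, M_prev))
theta_cond = T * fbar.mean()

# Plain Monte Carlo on the same draws, for comparison.
theta_mc = (y.sum(axis=1) > log_u).mean()

print("conditional:", theta_cond)
print("plain MC   :", theta_mc)
```

For this normal case the exact answer is $\Bbb{P}(N(0,T)>\log u)=\bar{\Phi}(8/\sqrt{10})\approx 0.0057$, and the conditional estimator attains it with far fewer zero-valued samples than the plain indicator.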
Example (2): Barrier Options
Suppose we want to find the price of a European option that has a payoff at expiration given by
$$h(S_T)=(S_T-K_1)_+\bold{1}\{S_{T/2}\le L\}+(S_T-K_2)_+\bold{1}\{S_{T/2}>L\}$$
The price of this option is given by
$$\theta=e^{-rT}\Bbb{E}\Big[(S_T-K_1)_+\bold{1}\{S_{T/2}\le L\}+(S_T-K_2)_+\bold{1}\{S_{T/2}>L\}\Big]$$
If we know the transition density of $S_T$ given $S_{T/2}=x$, then we can use the conditional Monte Carlo method:
$$\begin{aligned} \theta =\ &e^{-rT}\Bbb{E}\Big[\bold{1}\{S_{T/2}\le L\}\,\Bbb{E}[(S_T-K_1)_+|S_{T/2}]\Big]\\ &+e^{-rT}\Bbb{E}\Big[\bold{1}\{S_{T/2}>L\}\,\Bbb{E}[(S_T-K_2)_+|S_{T/2}]\Big] \end{aligned}$$
where $e^{-rT/2}\,\Bbb{E}[(S_T-K_1)_+|S_{T/2}]$ and $e^{-rT/2}\,\Bbb{E}[(S_T-K_2)_+|S_{T/2}]$ can be calculated with the Black–Scholes European call formula (with spot $S_0=S_{T/2}$, time to maturity $T/2$, interest rate $r$, and strike price $K_1$ or $K_2$, respectively).
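The recipe above can be sketched as follows: simulate only $S_{T/2}$, pick the strike according to the barrier, and replace the second half of the path with the closed-form Black–Scholes price. The risk-neutral GBM dynamics and all parameter values ($S_0$, $r$, $\sigma$, $K_1$, $K_2$, $L$) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def bs_call(s, k, r, sigma, tau):
    """Black-Scholes price of a European call: spot s, strike k, maturity tau."""
    d1 = (np.log(s / k) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return s * norm.cdf(d1) - k * np.exp(-r * tau) * norm.cdf(d2)

rng = np.random.default_rng(2)
n = 100_000
S0, r, sigma, T = 100.0, 0.05, 0.2, 1.0   # illustrative market parameters
K1, K2, L = 95.0, 105.0, 100.0            # illustrative strikes and barrier

# Simulate S_{T/2} under risk-neutral GBM (assumed model).
z = rng.standard_normal(n)
s_half = S0 * np.exp((r - 0.5 * sigma**2) * (T / 2) + sigma * np.sqrt(T / 2) * z)

# Inner expectations are BS call prices with spot S_{T/2} and maturity T/2;
# bs_call gives the time-T/2 value, so discount once more back to time 0.
strike = np.where(s_half <= L, K1, K2)
theta = np.exp(-r * T / 2) * bs_call(s_half, strike, r, sigma, T / 2).mean()
print("conditional MC price:", theta)
```

Only the first half of each path is random; the payoff's dependence on $S_T$ is integrated out analytically, which removes all of the second-half simulation noise.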
Example (3): Knock-in Option
Consider the digital knock-in option with a payoff
$$h(S_T)=\bold{1}\{S_T>K\}\,\bold{1}\Big\{\min_{1\le k\le m} S(t_k)< H\Big\}$$
With $S(t_n)=S(0)\exp(L_n)$, $L_n=\sum_{i=1}^n X_i$, we get
$$h(S_T)=\bold{1}\{L_m>\log(K/S(0))\}\,\bold{1}\{\tau \le m\}$$
where $\tau$ is the first index $k$ such that $L_k<\log(H/S(0))$.
To find $\Bbb{E}[h(S_T)]$, we can use the importance sampling method, which requires choosing two changes of measure. Alternatively, we have
$$\theta=\Bbb{E}\big[\Bbb{E}[h(S_T)\,|\,(\tau, S_\tau)]\big]$$
On $\{\tau\le m\}$, the increments after $\tau$ are independent of $(\tau, S_\tau)$, so the inner expectation is the tail probability $\Bbb{P}\big(L_m-L_\tau>\log(K/S(0))-L_\tau\,\big|\,\tau, L_\tau\big)$, which is often available in closed form.
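As a sketch of this conditioning, assume the log-increments $X_i$ are i.i.d. $N(\mu,\sigma^2)$ (all parameter values below are illustrative). Each path is simulated only up to the first barrier crossing $\tau$; after that, the knock-in indicator is fixed and the digital payoff's conditional expectation is a normal tail probability.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n, m = 50_000, 50
S0, K, H = 100.0, 110.0, 90.0
mu, sig = 0.0, 0.03                     # assumed i.i.d. N(mu, sig^2) log-increments X_i

log_K, log_H = np.log(K / S0), np.log(H / S0)

x = rng.normal(mu, sig, size=(n, m))
L = np.cumsum(x, axis=1)                # L_k = X_1 + ... + X_k, columns k = 1..m

hit = L < log_H                         # barrier crossed at step k
has_hit = hit.any(axis=1)
tau = hit.argmax(axis=1) + 1            # first crossing step (only valid where has_hit)

est = np.zeros(n)                       # E[h(S_T) | tau, L_tau]; 0 if barrier never hit
idx = np.flatnonzero(has_hit)
L_tau = L[idx, tau[idx] - 1]
rem = m - tau[idx]                      # steps remaining after tau
pos = rem > 0
# Given (tau, L_tau), the remainder L_m - L_tau ~ N(rem*mu, rem*sig^2) is independent
# of the past, so E[1{L_m > log(K/S0)} | tau, L_tau] is a normal tail probability.
est[idx[pos]] = norm.sf((log_K - L_tau[pos] - rem[pos] * mu) / (sig * np.sqrt(rem[pos])))
est[idx[~pos]] = (L_tau[~pos] > log_K).astype(float)

theta = est.mean()
print("conditional MC estimate:", theta)
```

Compared with simulating full paths and averaging the raw 0/1 payoff, each sample here is a smooth probability in $[0,1]$, which is exactly the variance-reduction mechanism from the basic identity at the top of these notes.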