Nov 14, 2012

Model Validation: Risk Management’s First Line of Defense

At last week’s Annual SIFMA meeting, several presenters pointed to ‘better risk management’ as one of the keys to restoring confidence in the financial services industry. For the use and management of derivatives in particular, managing model risk through the validation process is critical to a trading operation’s success. Prudent risk management is not focused solely on portfolio-level models (e.g., VaR) but also on the specific valuation model used for each financial instrument. Given the complexity of derivative types, a wide range of risk factors must be captured by the model, which creates many points in an implementation where things can go awry: not just issues of data quality, but also the model selection process and the calibration of the model. This is true not only for models pricing exotic derivatives and structured products, but for standardized models of ‘vanilla’ instruments as well. Moreover, given the growing importance of portfolio-level analytics such as CVA, even small errors in a trade-level pricing model can produce significant errors in the portfolio calculation, resulting in inaccurate hedging (over- or under-hedging), a higher regulatory cost of capital, and potentially significant trading losses.

Conventional wisdom holds that the best indication of model performance is how closely its price output compares to market prices. While this is important for marking trades to market, a greater economic impact comes from the model producing good hedge ratios, because good hedge ratios are what enable an issuer to hold a large number of trades on the books without endangering the firm, and what enable an investor to understand his exposure and which risk factors he is really exposed to. Hedge ratios can fail when a poor model choice is made. In fact, a poor implementation of a model can lead to very bad hedge ratios even while the model price remains accurate.
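
To illustrate the last point, here is a minimal sketch built entirely on illustrative assumptions of our own (a plain Monte Carlo pricer for a European call, arbitrary parameters, no variance reduction), not on any production model: the price lands well within sampling tolerance of the analytic Black-Scholes value, yet a bump-and-revalue delta computed with independent random draws for each bump is far noisier in relative terms.

```python
# Illustrative sketch only: a plain Monte Carlo pricer can match the analytic
# Black-Scholes price to within its sampling error (roughly 1% here), while a
# bump-and-revalue delta computed with independent random draws for each bump
# is much noisier relative to the true delta, i.e. the price looks fine but
# the hedge ratio is poor.
import numpy as np
from scipy.stats import norm

S0, K, T, r, sigma = 100.0, 100.0, 1.0, 0.02, 0.25   # illustrative parameters
n_paths, bump = 20_000, 1.0

def mc_call_price(spot, seed):
    """European call price by plain Monte Carlo (no variance reduction)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    ST = spot * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

# Analytic Black-Scholes price and delta as benchmarks.
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
bs_price = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))
bs_delta = norm.cdf(d1)

# Independent seeds for the two bumped valuations: the sampling errors do not
# cancel, so the finite-difference delta is noisy even though each individual
# price is acceptable.
price = mc_call_price(S0, seed=1)
delta = (mc_call_price(S0 + bump, seed=2) - mc_call_price(S0 - bump, seed=3)) / (2 * bump)

print(f"price error: {price - bs_price:+.4f}  (true price {bs_price:.4f})")
print(f"delta error: {delta - bs_delta:+.4f}  (true delta {bs_delta:.4f})")
```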

Model validation is not easy to do, because there is very often no known source of verifiably correct answers. With bespoke products it can be difficult or expensive to find any second opinion at all, and when there are multiple sources of prices, they often disagree with one another. Furthermore, regulators are no longer content to rely on a third-party model for validation: they will ask how the third-party model itself has been validated. And any tests that are done are polluted by additional assumptions baked into the inputs and the usage of the outputs: assumptions about the choice of calibration and hedge instruments, calibration methods, data proxies, and hedge ratio computations.

With that in mind, we have sought out tests that do not rely on comparison to other models, and encompass all the additional assumptions on calibration, hedging, and data sources within the test. Our approach is to ask what the derivative model is supposed to accomplish, and test directly whether it accomplishes its purpose.  Because the Black-Scholes-Merton framework is based on finding hedges that eliminate (or minimize, for incomplete markets) the variance of the hedged portfolio, we propose to test models by simulating the hedging process in a Historical VaR setting.  Our test measures the mean and the variance of the difference between the changes in the option value and the changes in the hedge value (the “residuals”).  If the model were perfect, the mean of the difference would be the price of the option at inception, and the variance would be zero.  We can rank models in order of their quality, i.e. their fidelity to true market behavior, by seeing how close they come to this ideal for the mean and variance. 
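
A minimal sketch of this test, under illustrative assumptions only (a Black-Scholes pricer standing in for the model under test, and a simulated path standing in for historical market data), looks like the following: reprice and re-hedge day by day, collect the residuals, and report their mean and variance.

```python
# Illustrative sketch of the hedging-simulation test: the daily residuals are
# the changes in option value minus the P&L of the delta hedge.  A model that
# better captures market behavior produces residuals with smaller variance.
import numpy as np
from scipy.stats import norm

def bs_price_delta(S, K, T, r, sigma):
    """Black-Scholes call price and delta (stands in for the model under test)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    price = S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
    return price, norm.cdf(d1)

def hedging_residuals(spots, K, T0, r, sigma, dt=1.0 / 252):
    """Simulate daily delta hedging along a spot path and collect residuals."""
    residuals = []
    for i in range(len(spots) - 1):
        T = T0 - i * dt
        v0, delta = bs_price_delta(spots[i], K, T, r, sigma)
        v1, _ = bs_price_delta(spots[i + 1], K, T - dt, r, sigma)
        hedge_pnl = delta * (spots[i + 1] - spots[i])
        residuals.append((v1 - v0) - hedge_pnl)
    return np.array(residuals)

# Placeholder path; in the real test this would be historical market data.
rng = np.random.default_rng(0)
spots = 100.0 * np.exp(np.cumsum(0.01 * rng.standard_normal(250)))

res = hedging_residuals(spots, K=100.0, T0=1.0, r=0.02, sigma=0.2)
print("residual mean:", res.mean(), " residual variance:", res.var())
```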

These tests are similar in flavor to the standard “PnL attribution tests” commonly done on trading desks, which regress the changes in option value on the hedge-instrument increments. We have added this computation to our test suite, systematized it, and bolstered it with enhanced statistical methods: we regress the residuals themselves back onto the hedge instruments, or onto other risk factors, to look for places where we are under- or over-hedging, or where important risk factors have been left unhedged altogether.
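
As an illustrative sketch, with hypothetical data and factors rather than any particular desk’s, the residual regression might be set up as follows; a loading significantly different from zero flags systematic under- or over-hedging of that factor, or a risk factor left unhedged entirely.

```python
# Illustrative sketch: regress hedging residuals on hedge-instrument increments
# (and any other candidate risk factors).  Loadings significantly different
# from zero flag systematic under- or over-hedging, or an unhedged risk factor.
import numpy as np

def residual_regression(residuals, factor_increments):
    """Ordinary least squares of residuals on factor increments (with intercept)."""
    X = np.column_stack([np.ones(len(residuals)), factor_increments])
    beta, *_ = np.linalg.lstsq(X, residuals, rcond=None)
    fitted = X @ beta
    r2 = 1.0 - np.sum((residuals - fitted) ** 2) / np.sum((residuals - residuals.mean()) ** 2)
    return beta, r2

# Hypothetical inputs: residuals from a hedging simulation and daily increments
# of two candidate factors (say, spot and an at-the-money volatility).
rng = np.random.default_rng(1)
factors = rng.standard_normal((250, 2))
residuals = 0.3 * factors[:, 1] + 0.05 * rng.standard_normal(250)  # second factor unhedged

beta, r2 = residual_regression(residuals, factors)
print("intercept and factor loadings:", beta)
print("R^2 of residuals on factors:", r2)
```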

These tests bring us toward our goal of being able to validate a model against clearly defined and objective criteria, without reference to other models that must themselves somehow be validated. Other, more standard tests are also in our test suite: tests that measure the calibration errors a model has experienced over history, and the instabilities therein, along with tests that measure the quality of the model’s implementation, such as its smoothness, its convergence rate, and the ability of its calibrator to find optimal calibrations. Taken together, they make it very hard for bad models to get through to production and damage financial institutions.
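
As one example of an implementation-quality check, sketched under simplifying assumptions of our own (a CRR binomial tree standing in for the pricer under test, with the analytic Black-Scholes value as the reference), an empirical convergence-rate estimate could be set up like this:

```python
# Illustrative sketch of one implementation-quality check: estimating the
# empirical convergence rate of a numerical pricer by repeatedly doubling its
# resolution and comparing against a known reference value.
import numpy as np
from scipy.stats import norm

S0, K, T, r, sigma = 100.0, 100.0, 1.0, 0.02, 0.2   # illustrative parameters

def binomial_call(n_steps):
    """CRR binomial tree for a European call (stands in for the pricer under test)."""
    dt = T / n_steps
    u, d = np.exp(sigma * np.sqrt(dt)), np.exp(-sigma * np.sqrt(dt))
    p = (np.exp(r * dt) - d) / (u - d)
    ST = S0 * u ** np.arange(n_steps, -1, -1) * d ** np.arange(0, n_steps + 1)
    value = np.maximum(ST - K, 0.0)
    for _ in range(n_steps):
        value = np.exp(-r * dt) * (p * value[:-1] + (1 - p) * value[1:])
    return value[0]

def empirical_order(pricer, resolutions, reference):
    """Slope of log(error) versus log(resolution) gives the convergence order."""
    errors = np.array([abs(pricer(n) - reference) for n in resolutions])
    slope, _ = np.polyfit(np.log(resolutions), np.log(errors), 1)
    return -slope, errors

# Analytic Black-Scholes value as the reference.
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
reference = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

order, errors = empirical_order(binomial_call, [50, 100, 200, 400, 800], reference)
print("empirical convergence order:", round(order, 2))
print("errors by resolution:", errors)
```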

To learn more about best practices in model validation and managing model risk of derivatives pricing models, watch Dr. David Eliezer’s web conference.
