AI Case Study
Natixis stress-tests trading portfolio for risk and regulatory compliance using machine learning
Fund and asset manager Natixis uses machine learning algorithms to stress-test its models for extreme events, making the process more efficient and supporting regulatory compliance.
Fund And Asset Management
Natixis' "equity derivatives business has utilised this type of ML to detect anomalous projections generated by its stress-testing models. Every night, these models produce over 3 million computations to inform regulatory, internal capital allocations and limit monitoring. A small fraction of these are incorrect, knocked out of the normal distribution of results by a quirk of the computation cycle or faulty data inputs."
"This ML algorithm helps us to determine which results are suspicious, so that we can analyse them and automatically replay the computation in case it was caused by a transient error. All results are scanned and evaluated by the ML regardless of the final use of the projections, whether for regulatory or trading purposes. This use of ML hands validators a valuable tool for the ongoing monitoring of their stress-testing models, as it can help determine whether they are performing within acceptable tolerances or drifting from their original purpose... where the analysis becomes marginally useful is its ability to capture when the bank is moving from between a risk-on and risk-off position across trading desks, which may signify a change in the structural position of the book – for example, in relation to dividends, volatility or correlation. For instance, a trader in Hong Kong may have executed a client trade, going long on a semi-illiquid emerging market bond, creating an unusual non-diversifiable risk concentration for a couple of weeks until the position is resold in the market. Alternatively, a trader in London may be building some inventory to prepare for a client order, creating a similarly unusual risk spike, or an equity options book may be building a vega position in anticipation of bearish/bullish price dynamics. In these cases, the unexpected risk factor will suddenly become a major component of one of the RSTSs and will be clearly flagged to senior management and the trading desk concerned. This is the main contribution of the screening process: an enhanced ability to spot unexpected material risk-taking for a given desk. As most seasoned risk managers would attest, unexplained or unexpected material risk-taking deserves rapid follow-up with the desk concerned. Moreover, the screening fits with other regulatory directives that discourage directional risk-taking, such as the US Dodd-Frank Act's Volcker rule. 
The RSTSs are a welcome addition to the first line of defence against market risk levels that exceed the appetite of the firm."
Results are undisclosed; however, the benefits are described as follows:
"In short, the use of RSTSs allows us to scan, in a dollar-weighted and systematic way, tens of thousands of scenarios generated by the VAR [value at risk] engine, and find the scenarios that have a material impact on tail risk. The overall results are stable, which makes sense, as overall VAR usage tends to be stable, but some explanatory risk factors are volatile. The last of these is the real point of interest for risk managers. The stress tests, besides providing useful additional insight, also benefit from being easy to implement and reportable to senior management. The latter requirement is invaluable for the development of a firm's risk policies, appetite and limits".
"Our objective is now to summarise the 100 scenarios into a limited set that can be analysed and handled by risk managers. To achieve this, we use the robust k-means algorithm for cluster analysis. It determines the intrinsic grouping in a set of unlabelled data. We try to summarise these 100 scenarios into two or three representative scenarios. The algorithm iteratively attempts various groupings into two sets, until the central scenario of each group (the two red dot scenarios) are closest to each specific scenario in their group. Therefore, one can assert the two central scenarios represent and summarise the 100 individual scenarios. We do the same with the results of the Monte Carlo VAR. The handful of clustered scenarios derived by the algorithm from the initial 100 will be called reverse stress test scenarios (RSTSs). The clustered scenarios can be understood as the traditional geometric barycenter of similar scenarios, dividing the 100 most adverse scenarios into an understandable, plausible and manageable list of scenarios that can be processed by risk managers.
On a normal day, one to three scenarios are generated. Next, the P&L result of the RSTSs can be exactly divided into the bank's trading desk structure to isolate which desk is carrying the bulk of the tail risk. Where necessary, this will allow risk mitigation techniques to be put in place. Importantly, the development of this analysis was relatively cheap in terms of IT resources, as it is based on the outputs of the existing Basel II risk infrastructure. A dedicated data scientist can produce good results in less than six months using R or Matlab – applications that will be familiar to most banks' quantitative and statistical teams."
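The desk-level attribution described above amounts to summing each RSTS's P&L over positions grouped by desk. A small sketch under illustrative assumptions (desk names and figures are invented for the example, not taken from the article):

```python
# Hypothetical per-position P&L under one RSTS, tagged by desk.
# Figures are illustrative only.
positions = [
    ("HK-EM-credit",  -8.2), ("HK-EM-credit", -5.1),
    ("LDN-inventory", -1.3), ("EQ-options",   -0.4),
    ("LDN-inventory",  0.6), ("EQ-options",   -2.0),
]

def pnl_by_desk(positions):
    """Aggregate scenario P&L by trading desk, worst first, so the
    desk carrying the bulk of the tail risk can be isolated."""
    totals = {}
    for desk, pnl in positions:
        totals[desk] = totals.get(desk, 0.0) + pnl
    return dict(sorted(totals.items(), key=lambda kv: kv[1]))

print(pnl_by_desk(positions))
```

The worst-ranked desk in the output is the candidate for follow-up and, where necessary, risk mitigation.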
Legal And Compliance
"Regulatory initiatives in the US, European Union and UK have turned the spotlight on to banks’ model risk management processes. The resulting increased workload on model risk managers is sparking interest in automated processes to help alleviate the burden of certain tasks, such as data cleansing and model validation."
"Stress testing on a desk-by-desk basis, or on a company- wide basis, is a common risk methodology used both by regulators and risk managers – both for capital requirements and for risk limit consumption. This tends to be done systematically, on a daily basis, and generally relies on fixed historical data, pre-defined hypothetical scenarios, or a systematic approach for any given desk. But the classical approach to stress testing may miss significant and likely market events – for example, if the portfolio is properly hedged for historical or hypothetical scenarios, but not for likely future scenarios. On the other hand, these ‘fixed scenarios' may lead to very unrealistic results with a market move that looks unlikely to a market participant. The latter could be a result of statistical data mining of the worst-case scenario of each individual risk factor, without considering the likelihood of individual risk factor moves, or the joint likelihood of different risk factor moves. In the equity business, for example, a 25% sell-off with short-term volatility collapsing simultaneously could be considered extreme and beyond even a ‘black swan' event."
"First, we use the regulatory VAR engine to extract all the Monte Carlo-generated scenario vectors and their respective P&Ls. Next, the scenarios are sorted by P&L and theconstituent of the 1% worst-case scenario is identified. The least adverse of those would therefore be the VAR 1% and their average the expected shortfall 1%. If 10,000 scenarios are generated, the new set will have 100 scenarios."