Hedge Fund Operational Risk: How Bad is the Problem?
An operational failure is a mistake in execution such as placing a buy order when you meant to sell or buying 10,000 contracts instead of 1,000. ‘Execution’ here refers to the entire investment plan, not just individual trades, which is why very few operational mistakes are as obvious and trivial as these examples imply. For example, a natural disaster that completely wipes out a hedge fund’s trading records and computer systems is, in part, an operational failure. It is management’s responsibility to provide backups and redundancy.
According to the Capital Markets Company's widely cited and widely believed study, operational failures are the cause of 50% of all hedge fund breakdowns. Even if this study is off by an entire order of magnitude (that is, even if the true percentage is 5% rather than 50%), it is a number well worth paying attention to.
There is low-hanging fruit here. On February 6, 2002, Allied Irish Banks reported a fraud in its Baltimore-based subsidiary Allfirst. According to the report, around 1997 John Rusnak, one of its traders, lost a large amount of money. For five years, Mr. Rusnak covered his tracks by writing non-existent options and booking their equally non-existent profits as income. Compound interest being what it is, Mr. Rusnak’s problems and, hence, the bank’s, eventually grew to $700 million. One Monday morning, Mr. Rusnak failed to show up for work and the entire fraud collapsed. If this sounds familiar, it should. With a few words changed here and there, this would be the story of Nick Leeson and Barings Bank.
Barings Bank had collapsed seven years earlier, and Allied Irish Banks was, at that time, Ireland’s second largest bank. Assuming only that such problems occasionally occur in hedge funds, and that hedge fund management is sometimes as oblivious as Allied Irish Banks was, investors and fund of funds managers can beat the hedge fund averages by doing little more than identifying and avoiding funds with higher-than-average operational risk.
There are at least three issues involved in estimating operational risk. First, we need to develop baseline estimates of the risk. How big and how bad is it? Second, we need to develop a typology of loss. What kinds of operational mistakes do hedge funds tend to make, and how often? Third, what should we look for when we perform an operational audit? I plan to discuss the first issue in this essay and the second and third issues in future essays.
Anna S. Chernobai, Svetlozar T. Rachev, and Frank J. Fabozzi’s new book Operational Risk: A Guide to Basel II Capital Requirements, Models, and Analysis is the place to start in developing baseline estimates. As an alert reader might guess, this book is about bank capital requirements, not hedge fund capital requirements, and the issues are not quite the same. On the one hand, a hedge fund’s accounting and operational issues are simpler and more transparent than a bank’s. On the other hand, banks typically have a radically longer operating history and understand their operational risk issues in much greater depth than we do.
Unfortunately, this book is not much of a start. I do not fault the authors’ scholarship, competence, or work ethic. Mr. Fabozzi is one of the heaviest of the investment industry’s heavy hitters. The problem is that we know almost nothing about modeling operational risk.
For example, early in the book there is a very strange discussion of operational loss severity and frequency, which lists the possibilities as low frequency/low severity, high frequency/low severity, high frequency/high severity, and low frequency/high severity losses. They cite Samad-Khan, who tells us that “high frequency/high severity losses are unrealistic.” The authors tell us, “Recently, the financial industry has agreed that the first group is not feasible.” I guess I missed that meeting. Reading this kind of sludge, I find myself concluding that the authors either believe you and I are truly unobservant and stupid or that they are desperately trying to present anything of even the most marginal value in an area where almost nothing is known. I suppose the authors may be telling a joke that I am too slow to get, but I dismiss that possibility. This book has many virtues, but big yucks are not among them.
At first glance, modeling operational losses wouldn’t seem to be a problem. We have wonderful statistical techniques for modeling most of the operational loss distribution and, if we bother to collect data at all, we quickly find we have more data than we can handle. In practice, losses below a certain size are simply ignored.
In other words, we know how to model the middle of a loss distribution and, by making certain statistical adjustments, we can safely ignore the left-hand side, which is where the small, frequent losses live. It is the far right-hand side of the distribution that causes all the problems. The basic problem is that there is no practical upper limit to the size of an operational loss. Modeling the losses a John Rusnak or a Nick Leeson can produce adds layers of complexity and uncertainty. We have limited data on such losses. Much worse, we don’t even know how operational losses are distributed. We know the distributions have fat tails, but we do not know how fat, and those differences matter a lot.
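To make the point concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the Pareto severity model, the reporting threshold, the sample size, and the three candidate tail indexes. The output shows the typical loss staying in the same ballpark while the 99.9% loss, the kind of number a capital charge cares about, swings by far more than an order of magnitude depending on how fat we assume the tail to be.

```python
# Illustrative only: simulated Pareto losses above an assumed reporting
# threshold, with three different tail indexes (smaller alpha = fatter tail).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                  # hypothetical number of recorded losses
threshold = 10_000          # losses below this size are simply ignored

for alpha in (2.5, 1.5, 1.05):
    losses = threshold * (1.0 + rng.pareto(alpha, n))   # Pareto with minimum = threshold
    print(f"alpha={alpha:<5}  median={np.median(losses):>10,.0f}  "
          f"99.9% quantile={np.quantile(losses, 0.999):>14,.0f}")
```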
The authors provide an excellent introduction to the mathematics involved, to topics such as extreme value theory. But when it comes to applying the theory, the book falls apart or, at least, crumbles a bit. Late in the book, for example, is a chapter on robust estimation of operational loss data. When distributions have fat tails, estimators like the mean and standard deviation are of little value. The mean is heavily influenced by the largest and smallest values in the sample, after all, and those are precisely the values most likely to be wrong. Robust estimators, such as the median, throw away a certain proportion of the largest and smallest observations and use only the remaining values.
Each robust estimator uses a carefully defined set of rules for which data to keep and which data to discard. There are hundreds, if not thousands, of such estimators and they are not all of equal value. The median throws away too much data, for example.
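For readers who have not seen these estimators in action, a small sketch may help. The simulated sample below is my own assumption, as is the 10% trimming fraction; the point is only that the ordinary mean is dragged around by a handful of catastrophic losses, the median discards nearly all of the information, and a trimmed mean sits in between.

```python
# Illustrative comparison of location estimators on a fat-tailed sample:
# mostly ordinary losses plus a few Rusnak-sized disasters (all simulated).
import numpy as np
from scipy.stats import trim_mean

rng = np.random.default_rng(1)
losses = np.concatenate([
    rng.lognormal(np.log(100_000), 0.8, 995),   # the well-behaved middle
    rng.uniform(50e6, 700e6, 5),                # a handful of catastrophic losses
])

print(f"mean:             {np.mean(losses):>15,.0f}")          # pulled up by the tail
print(f"median:           {np.median(losses):>15,.0f}")        # uses only the middle value
print(f"10% trimmed mean: {trim_mean(losses, 0.10):>15,.0f}")  # drops largest and smallest 10%
```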
Statisticians have spent more than thirty years working on these issues and have learned a great deal about which statistics can be expected to work on which kinds of distributions. The fact that we almost never know exactly what distribution we are working with is less important than it might seem. We generally know enough to make a reasonable guess, and that is enough to select a reasonable estimator. In fact, there really is no alternative. Using robust estimators, we will almost always get a better estimate of location or scale than we would by using, say, the mean and standard deviation. Not incidentally, I was the first to introduce robust estimation techniques to the alternative investment industry, and I received a lot of abuse for championing those ideas.
A reasonable person might assume that using such techniques to estimate extreme values makes no sense. After all, when you are trying to estimate extreme values, wouldn’t you want to keep even the most extreme values in your dataset? Bizarrely, this turns out not to be the case. The problem is that making a really good estimate of extreme values demands immense data sets, far, far exceeding what we have to work with. Roughly speaking, the best we can do is estimate the easy parameters, using robust equivalents of the mean and standard deviation, for example, and hope the tails of our distributions are about as thick as we have found them in other, larger samples. If this approach does not make much sense, well, it is the best we can do with the knowledge and techniques we have right now. Have you ever tried to break out of a Chinese finger trap? The straightforward, logical, reasonable approach is to pull your hands apart. That doesn’t work. To escape, you have to push your hands together. Estimating extreme values is something like that.
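Here is a minimal sketch of what that push-your-hands-together approach can look like in practice. Everything in it is an assumption for illustration: the simulated losses, the use of the median and the MAD as robust substitutes for the mean and standard deviation, the threshold choice, and especially the tail index, which is deliberately borrowed from outside the sample rather than estimated from it.

```python
# Illustrative only: estimate the "easy" parameters robustly from our own
# (simulated) data, then borrow the tail thickness from a hypothetical larger
# external sample instead of trusting our own tail observations.
import numpy as np

rng = np.random.default_rng(2)
losses = rng.lognormal(np.log(100_000), 1.0, 2_000)      # stand-in for our own data
log_losses = np.log(losses)

# Robust equivalents of the mean and standard deviation of the log-losses.
mu_hat = np.median(log_losses)
sigma_hat = 1.4826 * np.median(np.abs(log_losses - mu_hat))   # MAD scaled to sigma

# The hard parameter: tail thickness. We do not trust 2,000 points to tell us
# how fat the tail is, so we plug in a value "found in other, larger samples".
borrowed_alpha = 1.5          # hypothetical industry-wide Pareto tail index

# Splice a Pareto tail onto the body above a high threshold and read off an
# extreme quantile of the kind a capital charge might target.
u = np.exp(mu_hat + 2.0 * sigma_hat)      # threshold near the 97.7th percentile
p_exceed = np.mean(losses > u)            # empirical chance of exceeding it
q = 0.999
extreme_quantile = u * ((1 - q) / p_exceed) ** (-1.0 / borrowed_alpha)

print(f"robust mu, sigma of log-losses: {mu_hat:.2f}, {sigma_hat:.2f}")
print(f"threshold {u:,.0f} exceeded {p_exceed:.1%} of the time")
print(f"99.9% loss quantile under the borrowed tail: {extreme_quantile:,.0f}")
```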
The authors give an example in which they toss out the largest 5% of the operational loss values. They then forecast future losses using estimators fit to the trimmed data and estimators fit to the untrimmed data, and they find that the trimmed data works better. These results are interesting, but I have seen too many investment studies with excellent out-of-sample tests fail in the market. I do not suggest their techniques cannot be used; as far as I know, we have nothing better. I just wouldn’t take the results too seriously.
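For what it is worth, the kind of comparison the authors run is easy to sketch. The version below is my own, much cruder stand-in, not their procedure: simulated lognormal losses, a naive lognormal quantile forecast, and a 5% trim of the fitting sample. Which version comes out ahead depends entirely on the data you feed it, which is rather the point of my skepticism.

```python
# A crude stand-in for the trimmed-versus-untrimmed comparison: fit on one
# half of the (simulated) data with and without the largest 5% of losses,
# then compare each forecast with a high quantile of the held-out half.
import numpy as np

rng = np.random.default_rng(3)
losses = rng.lognormal(np.log(100_000), 1.3, 4_000)
fit, test = losses[:2_000], losses[2_000:]

def lognormal_q999(sample):
    """99.9% quantile implied by a naive lognormal fit to the sample."""
    logs = np.log(sample)
    return np.exp(logs.mean() + 3.09 * logs.std())    # z(0.999) is about 3.09

trimmed = np.sort(fit)[: int(0.95 * len(fit))]        # toss the largest 5%
realized = np.quantile(test, 0.999)

for label, sample in [("untrimmed fit", fit), ("trimmed fit", trimmed)]:
    print(f"{label:13s}  forecast={lognormal_q999(sample):>12,.0f}  "
          f"realized (held out)={realized:,.0f}")
```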
The authors argue for their approach, saying that including the most extreme data point would produce ‘unrealistic’ capital requirements. If by ‘unrealistic’ the authors mean that they cannot sell the results to banking industry upper management, then their point is well taken. If they mean they are following the most realistic statistical principles we have right now, I believe them. If they mean that they know what the values really should be and that they are making the necessary corrections, well, then I have my doubts. I have been betrayed before by talk like that.