New York – Earlier this week, the FT reported the disturbing news that “Moody’s awarded incorrect triple A ratings to billions of dollars worth of a type of complex debt product due to a bug in its computer models”. Even worse, investors and clients were not informed when the error was discovered, and the code was corrected simultaneously with a methodological change. To say that this is an embarrassment is an understatement. Moody’s shares have fallen by nearly a quarter since the news broke, and certain legislators, smelling blood in the water, have begun to circle around this case. Ironically enough, S&P has placed Moody’s on ratings watch negative, and it’s possible that we will see lawsuits and regulatory action.
While the brouhaha will focus on whether Moody’s intentionally misled investors, we find the firm’s official explanation, that the error was caused by a “computer bug”, to be fairly damning by itself.
First of all, there is no such thing, strictly speaking, as a computer bug. Computers do what humans tell them to; bugs in an application are a result of human errors in design, coding, or quality control. Using the phrase “computer bug” merely obscures the fact that any error in a Moody’s rating is directly attributable, not to a computer, but to a human being who is paid to design, implement, or check the ratings model. What follows is speculation as to what actually caused this error, and some lessons that research firms and investors should draw from it.
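Before turning to that speculation, it is worth making the point concrete. The sketch below is purely hypothetical – the function names, thresholds, and figures are invented, and Moody’s actual code is not public – but it shows how a single human slip produces exactly the kind of error that gets blamed on “the computer”:

```python
# Purely illustrative: the names, thresholds, and numbers are hypothetical,
# not taken from Moody's CPDO model.

LGD = 0.6  # assumed loss-given-default

def expected_loss(default_probs):
    """Correct version: average expected loss across the simulated paths."""
    return sum(p * LGD for p in default_probs) / len(default_probs)

def expected_loss_buggy(default_probs):
    # The human error: an analyst leaves a hard-coded path count in place of
    # len(default_probs), silently understating the loss.
    return sum(p * LGD for p in default_probs) / 10_000

def rating(loss):
    # Hypothetical mapping from expected loss to a rating band.
    return "Aaa" if loss < 0.001 else "Aa" if loss < 0.01 else "A or lower"

paths = [0.002] * 5_000  # only 5,000 paths were actually run
print(rating(expected_loss(paths)))        # Aa  – what the model should say
print(rating(expected_loss_buggy(paths)))  # Aaa – same inputs, human error
```

The computer executes both versions faithfully; only one of the instructions it was given is correct.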
If Moody’s uses a professional IT services firm, or formally trained software engineers, to write its models, it would seem less likely that this is merely a coder’s error. Those who have formal training in software development and write code for a living generally take testing and quality control very seriously, especially when they know their work will be used by major institutional clients. However, it is quite possible that the models are actually written by amateurs – young analysts with little formal training in programming. Creating and maintaining models is seen as a fairly inglorious task at many financial-services firms, and is regularly shunted onto the most junior members of the team. This may well have been the practice at Moody’s, and, without further information about who actually caused the error, it casts the firm’s entire model-development and quality-control process in a negative light.
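The basic discipline at stake is neither exotic nor expensive. Here is a minimal sketch – built around a hypothetical toy rating function and made-up benchmark deals, not anything from Moody’s – of the kind of golden-master regression test a professionally run model group would gate every release on:

```python
# Hypothetical sketch of a release gate: re-rate a fixed set of benchmark
# deals and fail the build if any rating moves. rate_cpdo and the benchmark
# figures are invented stand-ins.
import unittest

def rate_cpdo(spread_bps, leverage):
    # Toy model used only to make the test runnable.
    cushion = spread_bps * leverage / 1_000
    return "Aaa" if cushion > 0.75 else "Aa"

class GoldenMasterTest(unittest.TestCase):
    BENCHMARKS = [
        # (spread_bps, leverage, rating signed off by the analytics group)
        (50, 15, "Aa"),
        (60, 15, "Aaa"),
    ]

    def test_ratings_unchanged(self):
        for spread, leverage, expected in self.BENCHMARKS:
            self.assertEqual(rate_cpdo(spread, leverage), expected,
                             f"rating drifted for deal ({spread}, {leverage})")

if __name__ == "__main__":
    unittest.main()
```

A code change that moves any signed-off benchmark rating fails the build; a shop without such a gate finds out from its clients instead.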
It has also been suggested that the errors might lie in the actual design or methodology of the ratings model, not just in the code, as CPDOs are notoriously difficult to analyze. Independent credit research firm CreditSights says: “A glitch in computer code is the least of the rating agencies’ concern. Rating a CPDO requires predicting 11 different variables 10 years into the future with a high degree of accuracy. A skill we are not convinced that anyone possesses.” It is possible that the “computer bug” explanation is an attempt to shift the blame towards more expendable parts of the process – external IT consultants or junior coders/analysts – and away from Moody’s core analytical work.
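To get a feel for the CreditSights point, consider the toy Monte Carlo sketch below. It projects a single hypothetical credit-spread index ten years forward and shows how a modest misjudgement of one assumption moves the tail on which a rating hangs; all parameters are invented for illustration and have nothing to do with any agency’s model:

```python
# Illustrative only: one variable, annual lognormal steps, made-up parameters.
import math
import random

def simulate_terminal_levels(n_paths, years=10, drift=0.0, vol=0.20, s0=100.0):
    """Project one variable forward with annual lognormal steps."""
    terminals = []
    for _ in range(n_paths):
        s = s0
        for _ in range(years):
            s *= math.exp(random.gauss(drift - 0.5 * vol ** 2, vol))
        terminals.append(s)
    return terminals

random.seed(42)
for vol in (0.20, 0.25):  # a five-point misjudgement of annual volatility
    terminals = sorted(simulate_terminal_levels(10_000, vol=vol))
    worst_5pct = terminals[len(terminals) // 20]
    print(f"vol = {vol:.2f}: 5th-percentile level after 10 years = {worst_5pct:.1f}")

# A real CPDO model has to do this jointly for roughly 11 interdependent
# variables, and the rating depends on exactly these tail outcomes.
```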
Finally, it seems indisputable at this point that Moody’s failed to exercise sufficient quality control over this model. Possibly because the firm was in a rush to win more business and keep up with S&P, the model seems to have undergone very limited testing before release. This has implications well beyond the CPDO business line. If this is any indication of the overall level of quality assurance practiced at Moody’s, it raises serious questions about the credibility of Moody’s ratings as a whole, and about the future of the franchise. Financial services firms are built on trust; a loss of that key asset can be disastrous.
One way for Moody’s to regain its credibility would be to release all of its model code to investors. As reported by the FT, “at Moody’s the CPDO model – as with most structured product models – came in two parts: the dll and the CDOROM. The dll was the ‘black box’ proprietary part: the secret mathematical model developed to spit out the rating. The ‘error’ in Moody’s code was in the dll.” Releasing the code behind this secret model would encourage smart analysts at investment research firms to go through it themselves and look for errors. If no errors are found, the credibility of a Moody’s rating will only be strengthened by the transparency. Furthermore, the knowledge that any mistakes will be visible to the public will push the firm to take its own quality-control process more seriously, and the availability of the full code will give investment managers an incentive to do more extensive due diligence before they rely on a rating. The overall effect will be greater transparency and efficiency for the market as a whole, and greater trust in the Moody’s franchise.
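What might investors actually do with released code? One hedged sketch – using placeholder deal records and a stand-in for the published rating logic, since none of this is public – is simply to re-rate the deals already on the books and flag any disagreement with the ratings of record:

```python
# Hypothetical sketch of investor-side verification once the dll logic is
# published: moodys_released_model stands in for the released code, and the
# deal records are invented for illustration.

def moodys_released_model(deal):
    # Placeholder for the published rating logic.
    return "Aaa" if deal["cushion"] > 0.75 else "Aa"

deals_on_record = [
    {"name": "CPDO 2006-1", "cushion": 0.80, "published_rating": "Aaa"},
    {"name": "CPDO 2006-2", "cushion": 0.70, "published_rating": "Aaa"},
]

for deal in deals_on_record:
    recomputed = moodys_released_model(deal)
    if recomputed != deal["published_rating"]:
        print(f'{deal["name"]}: on record as {deal["published_rating"]}, '
              f'released code gives {recomputed}; flag for review')
```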
Some might complain that this proposal would endanger Moody’s business model; however, Moody’s revenue is generated primarily from issuers who pay it for its imprimatur. Releasing the algorithms under an appropriate license does not mean that other parties would then be able to use the Moody’s branding or issue their own ratings. Issuers will still have to come to Moody’s to get an official rating, and funds will still be required to use a rating issued by an NRSRO. Even if the code is available, it will not be easy for others to duplicate Moody’s analytical resources and model-development expertise. Given that the users of such ratings are institutional investors, this is not a business where releasing the “secret formula” is likely to encourage customers to use “generic” alternatives to the official rating. In reality, we suspect that more transparent ratings models would only enhance the value of a Moody’s rating, and would help to affirm Moody’s pre-eminence in the field of model development and analysis. At any rate, given this incident, investors now have every right to demand the full source code behind Moody’s ratings.