- Bank examinations and CAMEL ratings
- The data set and the analytical method
- Empirical results
- Conclusions
- References
Bank supervisory agencies, such as the Federal Reserve, need timely and reliable information about banks’ financial conditions in order to conduct effective supervision. On-site examinations of banks are an important source of such information: they not only permit supervisors to confirm the accuracy of regulatory reports that the banks themselves file, but they also allow supervisors to gather additional, confidential information on banks’ financial conditions. However, since these exams absorb considerable resources on the part of both supervisors and banks, there is clearly a trade-off between the timeliness of the supervisory information gathered from bank exams and the costs of obtaining it.
The potential “time decay” of such supervisory information plays an important role in this trade-off and is a concern for policymakers. This Economic Letter reports on research by Hirtle and Lopez (1998) that assesses how the length of time between exams affects the quality of supervisory information; “quality” here refers to how accurately supervisory information from a previous exam reflects a bank’s current condition. The analysis suggests that, on average, supervisory information is of some use for about 6 to 12 quarters (one and a half to three years). For banks with low supervisory ratings, however, the information seems to be of use for about 3 to 6 quarters (nine months to one and a half years).
These results suggest that the yearly on-site examinations required by the 1991 Federal Deposit Insurance Corporation Improvement Act (FDICIA) are quite reasonable. The range of 6 to 12 quarters is an upper bound beyond which, on average, no useful information about a bank’s current condition remains. Thus, it is appropriate to examine banks more often than that, especially if they are financially troubled. The results also indicate that the decay rate of supervisory information is faster during periods of stress in the banking industry.
Bank examinations and CAMEL ratings
On-site, full-scope exams are the most resource-intensive and generally provide the greatest amount of confidential supervisory information. The frequency of such exams has varied over time and across supervisory agencies. For example, during the early to mid-1980s, some supervisory agencies reduced the exam frequency from an average of once a year to once every two years to cut the size of their examination staffs; however, as problems in the banking industry increased in the late 1980s, exams were, on average, conducted more frequently and examination staffs were increased.
Since the advent of FDICIA in 1991, supervisors have had less discretion to lengthen the time period between full-scope exams. However, supervisors can accelerate exams if there are indications that problems are developing at a bank. In fact, supervisors employ extensive off-site monitoring, including the use of statistical models, to help identify banks where problems might be emerging.
At the end of the exam, the examiners assign a CAMEL rating that indicates a bank’s overall financial condition. CAMEL refers to the five components of a bank’s condition that are assessed: Capital adequacy, Asset quality, Management, Earnings, and Liquidity. (A sixth component reflecting a bank’s sensitivity to market risk was added in 1997.) Examiners assign a rating for each component on a scale from 1 to 5, with 1 representing the highest rating, as well as a composite rating for the bank’s overall condition and performance. Banks with composite CAMEL ratings of 1 or 2 are considered to present few supervisory concerns, while banks with ratings of 3 or more present moderate to extreme degrees of supervisory concern. A bank’s CAMEL rating is highly confidential and known only by its senior management and the appropriate supervisory staff. While CAMEL ratings are not a comprehensive indicator of all the supervisory information gathered during a full-scope exam, they serve as a convenient summary measure for analysis.
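As a rough illustration of the rating structure just described, the short Python sketch below captures the five components, the composite rating, and the classification of composite ratings into the two broad supervisory categories. The field names are hypothetical and do not correspond to any official supervisory data format.

```python
from dataclasses import dataclass

# Illustrative structure only; field names are hypothetical, not a supervisory data format.
@dataclass
class CamelRating:
    capital: int      # Capital adequacy, rated 1 (best) to 5 (worst)
    assets: int       # Asset quality
    management: int   # Management
    earnings: int     # Earnings
    liquidity: int    # Liquidity
    composite: int    # Composite rating for overall condition and performance

    def concern_level(self) -> str:
        """Classify the composite rating as described in the text."""
        return "few supervisory concerns" if self.composite <= 2 else "moderate to extreme concern"

# A bank with a composite rating of 3 presents moderate to extreme supervisory concern.
print(CamelRating(2, 3, 3, 2, 2, composite=3).concern_level())
```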
The data set and the analytical method
The data used by Hirtle and Lopez (1998) consist of the CAMEL ratings assigned after full-scope bank exams by the Federal Reserve, the FDIC, the Office of the Comptroller of the Currency, and state bank supervisory agencies from 1989 to 1995. For each rating, the as-of date, which is the date as of which the bank’s condition is evaluated, and the identity of the bank are known. Each rating was matched to the corresponding bank’s income and balance sheet data for the quarter before the as-of date. These data serve as a proxy for the information available from regulatory reports and other public information sources about the bank’s condition at the time of the exam. To assess how quickly the supervisory information from a bank exam decays, each bank’s rating also was linked to the CAMEL rating from its previous exam.
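One way to picture this data construction is sketched below in Python with pandas, using toy frames and hypothetical column names (the confidential exam data are of course not reproduced here). Each exam record is matched to the most recent regulatory report dated before its as-of date and then linked to the bank's prior CAMEL rating.

```python
import pandas as pd

# Toy frames with hypothetical column names standing in for the actual data set.
exams = pd.DataFrame({
    "bank_id": [101, 101, 202],
    "asof":    pd.to_datetime(["1990-03-15", "1991-09-10", "1990-06-20"]),
    "camel":   [2, 3, 1],
}).sort_values("asof")

call_reports = pd.DataFrame({
    "bank_id":   [101, 101, 202],
    "report_dt": pd.to_datetime(["1989-12-31", "1991-06-30", "1990-03-31"]),
    "npl_ratio": [0.02, 0.05, 0.01],   # stand-in for the income and balance sheet variables
}).sort_values("report_dt")

# Match each exam to the most recent regulatory report dated before its as-of date.
matched = pd.merge_asof(
    exams, call_reports,
    left_on="asof", right_on="report_dt", by="bank_id",
    direction="backward", allow_exact_matches=False,
)

# Link each exam to the bank's previous CAMEL rating and how long ago it was assigned.
matched = matched.sort_values(["bank_id", "asof"])
matched["lagged_camel"] = matched.groupby("bank_id")["camel"].shift(1)
matched["quarters_since_exam"] = (
    matched.groupby("bank_id")["asof"].diff().dt.days / 91.25   # rough quarter length
)
print(matched)
```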
Two econometric models were estimated for each year in the sample. The “off-site” model, based on the model used by the Federal Reserve for off-site monitoring purposes, uses banks’ balance sheet data to forecast their CAMEL ratings. The “exam” model includes all the variables used in the off-site model plus variables that control for the time since the most recent exam multiplied by the CAMEL rating for that exam. Because the exam model contains variables that control for information from updated regulatory reports, any additional explanatory power due to introducing the lagged CAMEL rating is assumed to arise from the supervisory information it contains.
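A minimal sketch of how such a pair of models might be set up is given below, assuming an ordered probit specification, which is a common choice for CAMEL forecasting models; the exact specification in Hirtle and Lopez (1998) may differ. The data are simulated stand-ins with hypothetical variable names.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Simulated stand-in data with hypothetical variable names; one row per exam.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "camel": rng.integers(1, 6, n),                  # current composite CAMEL rating, 1-5
    "npl_ratio": rng.uniform(0.0, 0.10, n),          # stand-ins for regulatory report variables
    "capital_ratio": rng.uniform(0.04, 0.15, n),
    "lagged_camel": rng.integers(1, 6, n),           # rating from the previous exam
    "quarters_since_exam": rng.integers(1, 13, n),   # age of that rating, in quarters
})

# "Off-site" model: regulatory report variables only.
offsite_X = df[["npl_ratio", "capital_ratio"]].astype(float)

# "Exam" model: the same variables plus the lagged CAMEL rating interacted with
# indicators for how much time has elapsed since the prior exam.
age_bins = pd.cut(df["quarters_since_exam"], bins=[0, 4, 8, 12],
                  labels=["q1_4", "q5_8", "q9_12"])
interactions = pd.get_dummies(age_bins, prefix="age").mul(df["lagged_camel"], axis=0)
exam_X = pd.concat([offsite_X, interactions], axis=1).astype(float)

# The dependent variable is the ordered composite rating.
y = df["camel"].astype(pd.CategoricalDtype(categories=[1, 2, 3, 4, 5], ordered=True))

offsite_fit = OrderedModel(y, offsite_X, distr="probit").fit(method="bfgs", disp=False)
exam_fit = OrderedModel(y, exam_X, distr="probit").fit(method="bfgs", disp=False)
print(exam_fit.summary())
```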
By comparing the forecast accuracy of the two models, the authors assess how long the supervisory information from a prior exam remains useful for gauging a bank's current condition. The two models were estimated using data from one year, say, 1989, and then were used to forecast the CAMEL ratings for the following year, say, 1990. Each model generates a probability for the rating that actually occurred, and the higher the probability a model places on that rating, the greater its accuracy. For example, a model forecasting an 80% probability for the actual rating is more accurate than one forecasting just a 40% probability. Statistical tests were then used to determine whether the differences in accuracy between the two models were statistically significant.
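Continuing the sketch above, the snippet below computes the probability each fitted model assigns to the rating that actually occurred and runs a simple paired comparison. The test statistic is purely illustrative and not necessarily the one used in Hirtle and Lopez (1998); in practice the regressors and realized ratings would come from the following year's data rather than the estimation sample.

```python
import numpy as np
from scipy import stats

def prob_of_realized(fit_result, X, realized):
    """Probability the fitted ordered model assigns to the rating that occurred."""
    probs = fit_result.model.predict(fit_result.params, exog=np.asarray(X, dtype=float))
    return probs[np.arange(len(realized)), np.asarray(realized) - 1]  # column k = P(rating k+1)

# Here the estimation sample stands in for the following year's observations.
p_offsite = prob_of_realized(offsite_fit, offsite_X, df["camel"])
p_exam = prob_of_realized(exam_fit, exam_X, df["camel"])

# Simple paired comparison of the probabilities placed on the realized ratings.
res = stats.ttest_rel(p_exam, p_offsite)
print(f"mean prob on realized rating - exam model: {p_exam.mean():.3f}, "
      f"off-site model: {p_offsite.mean():.3f}, p-value: {res.pvalue:.3f}")
```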
Empirical results
For 1990 and 1991, the results suggest that the exam model is more accurate than the off-site model for exam ratings up to 6 to 7 quarters old; in other words, the exam model produces more accurate forecasts than the off-site model, and the supervisory information is still of some use up to that point. After 1991, this cut-off point increases to between 10 and 12 quarters. Thus, the results suggest that supervisory information contained in lagged CAMEL ratings provides useful information regarding banks’ current conditions for 6 to 12 quarters after the previous exam. Beyond this upper bound, there appears to be little or no value in the information contained in the prior CAMEL rating.
The empirical results further suggest that there is important variation over the time period in the useful life of supervisory information from prior exams. This variation may reflect changes in the condition of the U.S. banking industry over the sample period. In particular, supervisory information contained in CAMEL ratings decays more rapidly during the early years of the sample period, when the U.S. banking industry was experiencing financial stress, than during the later part of the sample period, when the industry experienced more robust performance. Since the condition of banks is more likely to change rapidly during periods of financial stress, a faster rate of information decay seems reasonable during these periods.
To explore the results further, the authors divided the data into subsets according to the initial financial condition of each bank. Specifically, for each year, the data sample was divided into observations with lagged CAMEL ratings of 1 or 2 and with lagged CAMEL ratings of 3, 4, or 5. The results for both CAMEL forecasting models for each subset were then compared.
The empirical results for the subsample with lagged CAMEL ratings of 1 or 2 are similar to those for the overall sample. They indicate that the lagged CAMEL ratings cease to provide useful information about the current condition of a bank after 6 to 12 quarters and that this information decays faster in the early part of the sample. The similarity between these subsample results and the overall results is not surprising, since most observations have lagged CAMEL ratings of 1 or 2.
The results for observations with lagged CAMEL ratings of 3 or more are considerably different. This subsample consists of between 10% and 30% of the yearly samples. The point at which lagged CAMEL ratings cease to provide useful information about current CAMEL ratings is significantly earlier than for the overall sample: the information in these prior CAMEL ratings is no longer useful after just 3 to 6 quarters. Furthermore, the cyclical pattern evident in both the overall sample and in the subsample with lagged CAMEL ratings of 1 or 2 does not emerge in these results. Taken together, these findings suggest that the rate of decay in supervisory information is considerably faster for banks experiencing some degree of financial difficulty, regardless of the overall condition of the banking industry.
Conclusions
What do these results imply for the basic question of how frequently banks should be examined? To answer this question, it is important to understand that the tests described above provide an upper bound for the length of time that prior CAMEL ratings provide useful information about current conditions. That is, beyond 6 to 12 quarters the lagged CAMEL rating contains little or no useful information about the current condition of a bank. In practice, supervisors should probably examine a bank before this point, when the supervisory information gathered during the prior exam continues to have some, though diminished, value.
Finally, in thinking about the optimal time between exams, the results suggest that this horizon may vary. When the banking industry is facing financial stress, the quality of supervisory information appears to decay faster than when conditions are more stable, suggesting that the optimal time between exams may be shorter in these periods. Further, the rate of information decay is markedly greater for banks that are themselves financially troubled, regardless of the state of the overall industry. This finding implies, rather sensibly, that it is desirable to examine troubled institutions more often than healthy ones, although the optimal exam interval for any particular bank will vary from the averages discussed here.
In light of these results, FDICIA’s requirement for annual, full-scope exams seems reasonable, particularly for banks whose initial financial condition is troubled or when the banking system as a whole is experiencing financial stress.
Jose A. Lopez
Economist
References
Hirtle, B.J., and J.A. Lopez. 1998. “Supervisory Information and the Frequency of Bank Examinations.” Manuscript. Federal Reserve Bank of San Francisco.
Opinions expressed in FRBSF Economic Letter do not necessarily reflect the views of the management of the Federal Reserve Bank of San Francisco or of the Board of Governors of the Federal Reserve System. This publication is edited by Anita Todd and Karen Barnes. Permission to reprint portions of articles or whole articles must be obtained in writing. Please send editorial comments and requests for reprint permission to research.library@sf.frb.org