Tally Ferguson, SVP, Director, Market Risk Management, Bank of Oklahoma will be participating at CFP’s upcoming Stress Testing USA: DFAST Edition Congress. Ahead of the Congress, Tally has kindly provided the following key insights into building an effective governance, control and challenge process, and a look ahead at how stress testing requirements (DFAST & CCAR) will change in the next couple of years.
Tally, can you tell CFP’s readers a bit about yourself and your professional background?
I am a career financial industry professional. From my second summer job in high school through today, I have worked for a bank, a bank consultant or a bank regulator. My family has been “in banking” for three generations, albeit in the community bank space. Ranchers first, they sought more flexible borrowing terms than the industry allowed in the early 20th century, and they founded their own banks. Apparently they weren’t too flexible. Their banks survived the Great Depression. One still exists today.
I went to high school and college in the 80s. That period saw the rise of the Monetarists, the Monetary Control Act, the birth of swaps, and the great savings and loan crisis. How could I not be lured into banking? I started as an international examiner with the Federal Reserve Bank of New York. After about a decade in the regulator orchestra seats watching and evaluating new capital standards like Basel I and new profitability measures like RAROC, I transitioned to the private sector. I took the consulting route at EY, then called Ernst & Young. One of my clients, the Bank of Oklahoma, found me cheaper to hire full time than as a consultant, and I have been there ever since. All 19 years of my time there have been in risk management, mainly capital markets and price risk oriented.
At CFP’s Stress Testing USA: DFAST Edition you will be discussing building an effective governance, control and challenge process. How can institutions ensure effective changes to these processes?
I expect all mid-size banks have taken the essential first step: assign accountability. Without that, DFAST becomes a moonlighting hobby that fails from neglect. Accountability needs to be recognized at the executive and board level. That is key because DFAST building takes resources, and executives are very good at finding and distributing resources.
Having assigned accountability, establish a dedicated DFAST team. Unlike with CCAR, DFAST is unlikely to be a full-time job exclusively for one professional, let alone a team of professionals. That said, for between six and nine months of the year, a small team of finance-oriented, quant-philic professionals needs to make DFAST its priority. This team needn’t develop or even own all the models that comprise DFAST, but it does need to enforce quality, content and timeliness standards.
I am partial to a hub-and-spoke design for this. The small team of finance-oriented, quant-philic professionals is the hub. They are critical. If the hub breaks, the wheel doesn’t turn, the cart stops and we get a ‘non-reliable’ DFAST result. The spokes are important, but not critical. One or two could be replaced or even fail to deliver, and the wheel can keep turning. Spokes reach out to business lines like commercial mortgage and support groups like financial accounting.
Next, create quality, content and timeliness standards for all models comprising DFAST. To be clear, not all models need the same level of content or quality of research, quantitative method or data integrity. All do need some prescribed standard. Here is a rule of thumb I find helpful: perform enough work on a sub-model to get a high degree of confidence that its predictions fall within an acceptable range. Projected losses, PPNR, balance sheet, risk-weighted assets, ALLL and capital need to be materially accurate. The corollary to this rule is that documentation needs to be complete enough for an experienced financial professional or examiner to understand what the model produces, and how and where its Achilles’ heels lie.
To illustrate, contrast a model that predicts loss rates with one that predicts fee income for a small line of business. Small errors in the first can lead to big enough changes in loss estimates to make the DFAST results look like a bank has sufficient capital when, in fact, it does not. For this model, the calculation method’s conceptual soundness needs to be well tested and broadly accepted. The assumptions must be well vetted. The outcome has to be stressed, and so on. Codifying these details will take dozens of pages. In contrast, the fee income prediction model could be wrong by a factor of 10 and not move the capital ratio dial. The entire model could well be captured in an Excel workbook, documented in three paragraphs and validated by a trained monkey.
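The materiality contrast above can be made concrete with a toy capital-ratio calculation. All figures here are hypothetical and purely illustrative, not drawn from any actual bank: a one-percentage-point error in a loss-rate model applied to a large loan book moves the stressed capital ratio far more than a tenfold error in a small fee-income line.

```python
def stressed_capital_ratio(capital, rwa, loan_book, loss_rate, fee_income):
    """Simplified post-stress ratio: starting capital, less projected
    credit losses on the loan book, plus projected fee income, all
    over risk-weighted assets. (Illustrative only -- real DFAST
    projections span nine quarters and many more line items.)"""
    return (capital - loan_book * loss_rate + fee_income) / rwa

capital   = 2_000.0   # $mm starting capital (hypothetical)
rwa       = 20_000.0  # $mm risk-weighted assets (hypothetical)
loan_book = 18_000.0  # $mm loans subject to the loss-rate model
fees      = 5.0       # $mm fee income from a small business line

base = stressed_capital_ratio(capital, rwa, loan_book, 0.040, fees)

# Loss-rate model off by just one percentage point (4% -> 5%):
loss_err = stressed_capital_ratio(capital, rwa, loan_book, 0.050, fees)

# Fee-income model off by a factor of 10 (5 -> 50):
fee_err = stressed_capital_ratio(capital, rwa, loan_book, 0.040, fees * 10)

print(f"base ratio:        {base:.2%}")
print(f"loss rate +1pt:    {loss_err:.2%}")  # roughly a 90bp swing
print(f"fee income x10:    {fee_err:.2%}")   # only about a 22bp swing
```

With these assumed numbers, the small loss-rate error shifts the ratio by about 90 basis points, while the tenfold fee-income error shifts it by roughly 22 basis points, which is why the two models deserve very different standards of documentation and validation.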
This DFAST team plays a key role in credible challenge and enforcing standards, but they need support from the fourth step: building a solid, independent validation team. This is separate from the dedicated DFAST team. Ideally part of your institution’s existing model validation unit, it comprises a core team of quant-oriented professionals with commercial banking experience. The DFAST team and validation team should communicate regularly during development, implementation and running of the models. While the validation team cannot build model components, they can serve as consultants as questions arise concerning quality, content and timeliness standards.
Do you believe organizations have an understanding of what the regulators expect in order to develop effective processes?
I am not convinced the regulators have an understanding of what they expect. Consider this: it took from 1988 until 2004 to go from Basel I to Basel II. Sixteen years to go from a blunt risk-weighted, credit-only environment for determining capital adequacy to a sophisticated approach inclusive of market, operational and credit risks. Sixteen years to recognize that proprietary loss projection models can better determine capital adequacy than broad risk weights. Basel III took another nine years, although events in 2008-2009 may have delayed implementation. Nine years to recognize that broad risk weights can better determine capital adequacy than proprietary loss projection models.
To those of us who never adopted Basel II, it seems as if regulators took 25 years to go from a broad set of risk weights to a more granular set of risk weights. Admittedly, this quarter century was interrupted by a brief interlude when economic capital grew popular. During this interlude, regulators never set economic capital standards for non-Basel II adopters. Now every bank with over $10 billion in assets has to adopt either DFAST or CCAR, which, arguably, regulators intend as a replacement for economic capital. Congress expects the entire mid-size bank community and their regulators to implement a whole new way of thinking about capital adequacy in four years! It should be no surprise to hear that we are a long way from steady-state regulatory expectations for effective DFAST processes.
That said, regulators have done some decent evangelism. They issue annual assessments of DFAST processes and hold symposiums to share ideas and tell us their latest set of standards. Financial institutions appear to be on the same sheet of music that regulators use. Anecdotally, we hear that banks generally improved in their DFAST evaluations from 2014-2015. Regulators have hidden the tempo, and we are unsure which bar they are on, but for only two years of practice, results seem to reflect a decent understanding of regulatory expectations.
You will also be discussing effective documentation of the process. How can organizations prepare for a better understanding of what to include, how much to include and how?
In my response to question 2, I shared my rule of thumb on this topic: documentation needs to be complete enough for an experienced financial professional or examiner to understand what the model produces, and how and where its Achilles’ heels lie. I will elaborate more at my conference presentation, but I see five components of sufficient and effective documentation.
First off, follow the directions. Lots of comments from the first year, and even some this year, suggest that not all banks followed regulatory directions. In some cases, it was difficult for examiners to recognize that directions were followed because of poor document organization.
Second, be clear about what “documentation” is and what it isn’t. For example, documentation is not a shrink-wrapped product you buy from a consultant. Rather, it is a dynamic process that is available to users. I suppose it could start as a product you buy from a consultant. “Documentation” isn’t a PhD thesis on the relative value of cubic splines over linear interpolation for yield curve building. Rather, it is a readable user’s manual. Once we recognize what “documentation” is, we can create it.
Third, recognize documentation’s value proposition. Absent the written word, humans would still be reinventing wheels, competing with one another for scarce resources and repeating our mistakes without iPhones. Because of the written word, we can do all that with iPhones. Documentation’s lowest utility is that it satisfies regulators. Where it becomes really useful is that it makes model output repeatable and turns it into useful action. Recognizing where documentation has value helps us frame the content of that documentation.
Fourth, understand where documentation fits in model risk governance standards. I discuss quality, content and timeliness standards above. These apply to processes and documentation. Regulatory guidance prescribes documentation standards on many fronts, including model development, processes, testing and validation. The DFAST and validation teams I mention above should learn these prescriptions and see how they apply in their organizations to their various DFAST sub-models.
Finally, cater to the uniqueness of DFAST documentation. Describing 2013 DFAST observations, regulators concluded that documentation was a broad shortcoming. In some cases, it was entirely lacking or key details were missing. In other cases, “too much paper” degraded the documentation evaluation. Regulators hold the DFAST process with equal prominence to DFAST results, and documentation delivers both to them. Results are easy to document. Process is less so. Regulators expect us to document our thought process. How did we get to the model we used? What did we try that didn’t work? Write down what readers need to know to understand the process. We need to write down who does what and when. What does each sub-model produce, and how? What are each model’s key assumptions and limitations? What did validators look at? What did they find, and how were findings resolved?
On effective documentation of the process, what remains to be done to make the process more time and resource efficient?
At my presentation at this year’s Stress Testing USA: DFAST Edition, I will share some ideas for prioritizing documentation. I still believe in Tinkerbell, think the Cowboys will win another Super Bowl and maintain that there is a kernel of value in regulatory pronouncements. Armed with those bona fide tenets, I can confidently state that documentation efforts should start at the intersection of documentation’s value proposition and regulators’ documentation prescriptions. Other good ideas include leveraging non-DFAST documentation, cross-referencing repeated processes and tying documentation detail to model scoring.
How do you foresee Stress Testing requirements (DFAST & CCAR) changing in the next couple of years?
The recent requirement trajectory points to more quantification at the expense of management overlays. It seems that at least one analytical arm of the OCC expects each of the 70 or so mid-size banks to have its own Moody’s Analytics division. While regulatory expectations fall short of Hari Seldon’s psychohistory, they do insist on granular loan data with both facility and borrower grades.
I expect better-defined documentation and process standards from regulators. Here, outsourcing will get a cold shoulder, and version control with model lock-down will be expected. Regulators will look for more evidence of credible challenge, more integration across sub-models and a clearer tie-in to capital planning.
Hear more from Tally Ferguson by registering for CFP’s Stress Testing USA: DFAST Edition.