Learning objectives: Describe and distinguish between the different quantitative approaches and models used to analyze operational risk. Estimate operational risk exposures based on the fault tree model given probability assumptions. Describe approaches used to determine the level of operational risk capital for economic capital purposes, including their application and limitations. Describe and explain the steps to ensure a strong level of operational resilience, and to test the operational resilience of important business services.
Questions:
23.5.1. Sally is a risk analyst at an international bank who is leading a project to conduct a causal analysis of the bank's operational risk. The project will coordinate with, and inform modifications to, the bank's loss database. One of the project's goals is to analyze the organization's controls, control layers, key risk indicators (KRIs), and key control indicators (KCIs).
She prefers to use a factor model, and her candidates include the following approaches:
- Fault tree analysis (FTA): this deductive method will build a series of AND/OR conditions describing events that must occur simultaneously or alternatively
- Root-cause analysis with the bowtie tool: this visual tool will employ the 5-whys analysis to probe deeply into the root causes of events
- FAIR methodology: this structured framework will generate scenarios, where each scenario will assume an asset at risk, a threat community, a threat type, and an effect
- Swiss cheese model (aka, cumulative act effects): by anticipating active failures and identifying latent conditions, this model will promote effective control layering
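The four FAIR scenario components listed above can be sketched as a simple record. This is a minimal illustration, not the official FAIR taxonomy: the field names, the example values, and the single-point exposure calculation (frequency times magnitude) are all hypothetical simplifications.

```python
from dataclasses import dataclass

@dataclass
class FairScenario:
    """Hypothetical FAIR-style scenario record (illustrative fields only)."""
    asset_at_risk: str
    threat_community: str
    threat_type: str
    effect: str
    loss_event_frequency: float  # expected loss events per year
    loss_magnitude: float        # expected loss per event

    def annual_loss_exposure(self) -> float:
        # Point estimate only; a fuller treatment would use distributions.
        return self.loss_event_frequency * self.loss_magnitude

# Example scenario with made-up numbers
s = FairScenario("customer database", "external hackers", "malicious breach",
                 "confidentiality loss", 0.25, 400_000.0)
exposure = s.annual_loss_exposure()  # 0.25 * 400,000 = 100,000 per year
```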
a. Fault tree analysis (FTA) is simple and non-reliant on domain expertise; however, it is unrealistic because it can only assume independent events
b. The Bowtie tool is proactive because it assesses the likelihood and/or impact of risks before they materialize by examining controls and indirect causes; however, it natively lacks a quantitative framework such that prioritization is harder
c. The FAIR model is inclusive because it emphasizes subjective identification (aka, findability) of factors; however, it is not a probabilistic approach and may be insufficient when distributions are required
d. The Swiss cheese model is elegant because clusters and their centroids (aka, holes) are quantitatively retrieved via k-means clustering; however, it struggles when incorporating human factors into a holistic risk perspective
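The fault tree estimation named in the learning objectives can be sketched in a few lines. Under the simplifying independence assumption that option a. alludes to, an AND gate multiplies the input probabilities and an OR gate takes the complement of all inputs failing to occur. The event probabilities below are illustrative assumptions, not data from the question.

```python
def and_gate(probs):
    """Output event occurs only if ALL inputs occur: multiply probabilities."""
    p = 1.0
    for x in probs:
        p *= x
    return p

def or_gate(probs):
    """Output event occurs if ANY input occurs: 1 minus P(none occur)."""
    q = 1.0
    for x in probs:
        q *= (1.0 - x)
    return 1.0 - q

# Hypothetical tree: top event = (power loss OR backup failure) AND operator error
p_power, p_backup, p_operator = 0.02, 0.10, 0.05
p_top = and_gate([or_gate([p_power, p_backup]), p_operator])
# or_gate = 1 - 0.98 * 0.90 = 0.118; p_top = 0.118 * 0.05 = 0.0059
```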
23.5.2. To determine his firm's internal operational risk capital requirements, Robert is employing the loss distribution approach (LDA). He will take the following steps: collect and clean data; categorize data according to well-defined risk classes (aka, units of measure, UoM); estimate (aka, fit) the frequency distributions; fit the severity distributions; calculate expected loss (EL) and unexpected loss (UL) for each risk class; and aggregate.
In regard to this LDA, each of the following statements is true EXCEPT which is false?
a. Monte Carlo simulation convolves the frequency and severity distributions into an aggregated loss distribution
b. In practice, LDA frequency distributions tend to be either Poisson or negative binomial; severity distributions tend to be lognormal, Weibull, or Generalized Pareto Distribution (GPD)
c. Within each unit of measure (UoM; aka, risk class), losses should be correlated and heterogeneous; however, the many UoMs should be aggregated according to an independent and identically distributed (i.i.d.) assumption
d. Because the history of internal loss data is insufficient to describe the tails, the dataset likely needs to be completed with extreme value theory's (EVT's) tail distributions and/or external data must be added to the mix, where key methodological decisions include scaling, the cut-off point, and filtering
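The Monte Carlo convolution in option a. can be sketched end-to-end: draw an annual loss count from a frequency distribution (Poisson, per option b.), draw that many severities (lognormal, per option b.), and sum them to get one simulated annual loss. The parameters below are illustrative assumptions, not calibrated values.

```python
import math
import random

random.seed(42)

LAMBDA = 3.0           # assumed mean annual loss count (Poisson)
MU, SIGMA = 10.0, 1.5  # assumed lognormal parameters of single-loss severity

def poisson_draw(lam):
    """Knuth's algorithm for a single Poisson random draw."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# Convolve frequency and severity by simulation: each trial is one year
annual_losses = []
for _ in range(100_000):
    n = poisson_draw(LAMBDA)
    annual_losses.append(sum(random.lognormvariate(MU, SIGMA) for _ in range(n)))

annual_losses.sort()
expected_loss = sum(annual_losses) / len(annual_losses)
var_999 = annual_losses[int(0.999 * len(annual_losses))]  # 99.9th percentile
unexpected_loss = var_999 - expected_loss                 # UL = quantile - EL
```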
23.5.3. Alice is developing her bank's operational resilience framework. Historically, her firm's language did not include the concept of resilience, much less operational resilience. According to the Bank for International Settlements (BIS), "operational resilience is an outcome that benefits from the effective management of operational risk."
Her workflow steps will include the following: identify important business services; set impact tolerances; map the important services end-to-end and identify required resources; design severe but plausible scenarios in order to test vulnerabilities; where a tolerance is exceeded, examine lessons learned and take action; ensure internal/external communications; and get sign-off on the annual self-assessment document.
In regard to operational resilience, each of the following is true EXCEPT which is false?
a. Single points of failure (SPOF) are likely to be key risk indicators (KRI), and whenever possible, they should be mitigated or removed
b. The important business services (IBS) vary by the nature of the institution, and prioritization assumes that not every process is important to an institution
c. The key difference between resilience frameworks and business continuity management (BCM) is that resilience is articulated around important business services (IBS)
d. The framework should be qualitative (because it complements the factor models), non-complex (to achieve successful implementation), and resource-light in generation (because participants already working on the ORM framework should not bias the resilience framework with dual assignments)
Answers here: