In 2012, JPMorgan Chase’s Chief Investment Office lost $6.2 billion through positions built on a flawed Value-at-Risk (VaR) model.

The internal review revealed that the model had been modified to halve the reported risk, and the model change process bypassed the bank’s independent validation function.

The “London Whale” incident became the defining case study for model risk management, demonstrating that a single model failure, compounded by governance breakdowns, can produce losses exceeding what most banks reserve for their entire operational risk capital charge.

Key Takeaways
SR 11-7 and OCC Bulletin 2011-12 remain the definitive US regulatory guidance for model risk management, requiring three pillars: model development, model validation, and model governance.
AI/ML models now account for roughly half of the average large bank’s model inventory, yet only 26.4% of financial institutions express confidence in their AI compliance readiness.
58.8% of banks cite the need for clearer regulatory guidance as the single biggest barrier to advancing their AI and model risk management strategy.
A risk-tiered model inventory (Tier 1-4) determines validation frequency, documentation depth, and governance oversight, ensuring resources focus on the models that matter most.
Model validation under SR 11-7 requires three independent activities: conceptual soundness evaluation, outcomes analysis (backtesting), and ongoing monitoring.
The OCC and FDIC have proposed limiting MRA issuance to material financial risks, signaling a supervisory shift that model risk management teams should prepare for.
A 90-day roadmap can establish a defensible model risk management program with inventory, tiering, validation scheduling, and board-ready reporting.

Model risk management under SR 11-7 has never been more complex or more consequential. The Federal Reserve and OCC’s joint supervisory guidance, issued in April 2011, established the three-pillar framework (development, validation, governance) that still defines regulatory expectations.

But the model landscape has transformed beyond recognition since 2011. AI and machine learning models now account for roughly half of the average large bank’s inventory, yet only 26.4% of financial institutions express confidence in their AI compliance readiness (Wolters Kluwer Q1 2026 Survey).

Meanwhile, 58.8% of banks say clearer regulatory guidance is the single biggest barrier to advancing their model risk management and AI strategy. The discipline of operational risk management in banking must now encompass models that learn, adapt, and sometimes defy traditional validation techniques.

This guide provides model risk managers, chief risk officers, validators, and audit professionals with a practitioner-focused framework for SR 11-7 compliance that addresses both traditional models and the AI/ML frontier.

You will find the complete SR 11-7 taxonomy, validation methodologies, risk tiering approaches, AI governance extensions, KRI frameworks, and a 90-day implementation roadmap calibrated for US banking institutions.

What Is Model Risk Management Under SR 11-7?

SR 11-7, formally titled “Supervisory Guidance on Model Risk Management,” was issued jointly by the Federal Reserve Board and the OCC on April 4, 2011 (with the OCC’s companion Bulletin 2011-12). The FDIC adopted the same framework in 2017, making it the universal US banking standard.

SR 11-7 defines a model as “a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates.”

This definition is deliberately broad, and model risk management teams must apply it to everything from simple spreadsheet calculations to complex neural networks.

Model risk, per SR 11-7, arises from two sources: errors in the model itself (flawed assumptions, incorrect coding, inappropriate use of data) and misuse of model outputs (applying a model outside its intended scope, overriding model results without documentation, or failing to understand model limitations).

The guidance requires banks to manage model risk through three interconnected pillars:

The Three Pillars of SR 11-7 Model Risk Management

| Pillar | SR 11-7 Requirement | Key Activities | Evidence for Examiners |
|---|---|---|---|
| 1. Model Development & Implementation | Sound model development with documented assumptions, theory, methodology, and limitations | Model specification documentation; development testing; implementation verification; user acceptance testing; data quality assessment | Model development document; testing logs; UAT sign-off; data lineage documentation |
| 2. Model Validation | Independent challenge of model soundness through evaluation, testing, and ongoing monitoring | Conceptual soundness review; outcomes analysis (backtesting); sensitivity analysis; benchmarking; ongoing performance monitoring | Validation report with findings; backtesting results; monitoring dashboards; remediation tracking |
| 3. Model Governance | Policies, controls, and oversight structure to manage model risk at enterprise level | Model inventory management; risk tiering; policies and procedures; board and senior management reporting; internal audit coverage | Model inventory register; tiering methodology; MRM policy; board reports; audit findings |

The three pillars are interdependent. Development quality determines how much validation effort is needed. Validation findings drive remediation within development teams. Governance ensures that both development and validation operate with appropriate independence, resources, and escalation paths.

A weakness in any pillar creates exposure across all three. The risk management process for models mirrors the broader ERM lifecycle (identify, analyze, evaluate, treat, monitor) but applies it specifically to the model population.

Model Inventory Growth: The AI/ML Challenge


Figure 1: AI/ML models now represent approximately half of large bank model inventories, doubling model risk management complexity (Source: Author analysis based on Wolters Kluwer, ValidMind, 2025-2026)

Model Risk Tiering: Prioritizing SR 11-7 Model Risk Management Resources

SR 11-7 does not prescribe a specific tiering methodology, but examiners expect banks to allocate model risk management resources proportionate to risk.

A model that drives credit provisioning for a $50 billion loan portfolio demands fundamentally different validation rigor than a departmental forecasting spreadsheet.

Risk tiering operationalizes this proportionality principle by assigning each model to a tier based on materiality, complexity, and usage criticality.

The risk assessment process for model tiering should be documented in your MRM policy and reviewed annually.

Four-Tier Model Risk Classification

| Tier | Criteria | Validation Frequency | Documentation Standard | Governance Oversight |
|---|---|---|---|---|
| Tier 1: Critical | Drives regulatory capital, pricing, or decisions affecting > 5% of assets; complex AI/ML; externally reported outputs | Annual full validation; quarterly monitoring | Full model development document; complete validation report; quarterly monitoring memos | Board-level reporting; MRC review of all findings; CRO sign-off on risk acceptance |
| Tier 2: High | Material financial impact (1-5% of assets); moderate complexity; used in regulatory reporting | Annual validation; semi-annual monitoring | Comprehensive development and validation documentation; semi-annual monitoring reports | MRC quarterly review; senior management sign-off on findings |
| Tier 3: Medium | Moderate financial impact (< 1% of assets); well-established methodology; limited regulatory exposure | Validation every 18-24 months; annual monitoring | Standard documentation template; annual monitoring summary | MRM team tracks findings; escalation to MRC for material issues only |
| Tier 4: Low | Minimal financial impact; simple calculations; internal management use only | Validation every 3 years; annual self-assessment | Abbreviated documentation; self-assessment checklist | MRM team oversight; included in inventory reporting only |
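
The tier criteria above can be collapsed into a simple assignment routine. The sketch below is illustrative only: the field names, numeric thresholds, and decision order are assumptions, and a real tiering methodology would weigh materiality, complexity, and usage criticality together rather than short-circuit on single flags.

```python
# Illustrative tier-assignment sketch based on the criteria in the table above.
# Field names and thresholds are assumptions for demonstration, not a standard.

def assign_tier(asset_impact_pct: float, is_complex_ml: bool,
                externally_reported: bool, regulatory_use: bool) -> int:
    """Return a risk tier (1 = Critical ... 4 = Low) for a model."""
    if asset_impact_pct > 5.0 or is_complex_ml or externally_reported:
        return 1  # Critical: large asset impact, complex AI/ML, or external outputs
    if asset_impact_pct >= 1.0 or regulatory_use:
        return 2  # High: material impact or regulatory reporting use
    if asset_impact_pct > 0.0:
        return 3  # Medium: moderate impact, established methodology
    return 4      # Low: minimal impact, internal management use only

# Example: a provisioning model covering 8% of assets lands in Tier 1
print(assign_tier(8.0, False, True, True))   # 1
print(assign_tier(0.5, False, False, False)) # 3
```

Documenting the tier logic as code (or an equally explicit decision table) also satisfies the expectation that tiering criteria be objective and repeatable rather than judgment calls made model by model.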

Model Risk Tiering Distribution vs. Validation Coverage


Figure 2: 14% of models remain untiered or in shadow IT, creating blind spots in model risk management programs (Source: Author analysis based on industry surveys, 2025)

The untiered/shadow IT category represents the most dangerous gap in model risk management.

These are spreadsheets, end-user computing (EUC) tools, and ad hoc analytical applications that meet SR 11-7’s broad model definition but operate outside the formal inventory.

An RCSA process specifically designed for model risk can surface these shadow models by asking business units to identify any quantitative tool that produces outputs used in decision-making, reporting, or risk measurement.

Once identified, each shadow model enters the tiering process and receives appropriate governance based on its risk classification.

Model Validation Framework Under SR 11-7

Model validation is the core control activity in model risk management. SR 11-7 defines validation as “the set of processes and activities intended to verify that models are performing as expected, in line with their design objectives and business uses.”

Critically, validation must be performed by parties with sufficient independence from model development and model usage.

This independence requirement is non-negotiable: examiners specifically look for organizational separation, reporting line independence, and competency in the validation function.

SR 11-7 Validation Activities

| Validation Activity | SR 11-7 Requirement | Practical Implementation | Common Findings (MRA/MRIA) |
|---|---|---|---|
| Conceptual Soundness | Evaluate theoretical basis, assumptions, and limitations; assess appropriateness for intended use | Literature review of methodology; assumption challenge sessions with developers; limitation mapping to use cases; comparison with alternative approaches | Assumptions not documented or stale; methodology not appropriate for data characteristics; limitations not communicated to users |
| Outcomes Analysis | Compare model outputs against actual outcomes (backtesting); assess predictive accuracy over time | Backtesting against realized outcomes; statistical tests (Kupiec, Christoffersen, binomial); performance metrics (Gini, KS, AUC for scoring models) | Backtesting periods too short; statistical thresholds not defined; degraded performance not triggering remediation |
| Ongoing Monitoring | Continuous assessment of model performance, stability, and relevance between full validations | Automated monitoring dashboards; population stability index (PSI); characteristic stability index (CSI); exception tracking; trigger-based revalidation | Monitoring not automated; triggers not defined; monitoring results not escalated to governance |
| Sensitivity Analysis | Test model behavior under stressed inputs and boundary conditions | Perturbation testing of key variables; stress scenario overlays; boundary case evaluation; Monte Carlo sensitivity sweeps | Sensitivity to key assumptions not quantified; stress scenarios not aligned with macroeconomic outlook |
| Benchmarking | Compare model to alternative approaches, challenger models, or industry benchmarks | Challenger model development; vendor model comparison; peer benchmarking where available; simple vs. complex model trade-off analysis | No challenger model available; benchmarking limited to single alternative; no documentation of model selection rationale |
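
The Kupiec proportion-of-failures test cited in the outcomes-analysis row fits in a few lines. A minimal sketch, assuming a 95% confidence threshold (3.841 is the chi-square critical value with one degree of freedom); a production backtesting suite would add the Christoffersen independence test and traffic-light zone reporting.

```python
import math

def kupiec_pof(n: int, exceptions: int, p: float = 0.01):
    """Kupiec proportion-of-failures likelihood-ratio test for VaR backtesting.

    n: number of observations; exceptions: number of VaR breaches;
    p: VaR coverage level (0.01 for 99% VaR).
    Returns (LR statistic, passed) against the chi-square(1 df) 95% critical value.
    """
    x = exceptions
    pi = x / n  # observed breach rate

    def loglik(q):
        # 0*log(0) = 0 convention for the degenerate cases q=0 (x=0) or q=1 (x=n)
        if q == 0.0:
            return 0.0 if x == 0 else float("-inf")
        if q == 1.0:
            return 0.0 if x == n else float("-inf")
        return (n - x) * math.log(1 - q) + x * math.log(q)

    lr = -2.0 * (loglik(p) - loglik(pi))
    return lr, lr < 3.841  # 3.841: chi-square(1 df) critical value at 5%

# 250 trading days at 99% VaR: 10 breaches is far above the expected 2.5
lr, passed = kupiec_pof(250, 10, 0.01)
print(round(lr, 2), passed)  # 12.96 False
```

The Kupiec test only checks the breach count; the Christoffersen test additionally checks whether breaches cluster in time, and both results belong in the validation report's outcomes-analysis section.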

Each validation produces a report that rates model risk (typically green/amber/red or a numeric scale), documents findings, and assigns remediation actions with owners and deadlines.

Findings are categorized as high, medium, or low severity, with high-severity findings potentially triggering model use restrictions until remediated.

The validation report is a primary artifact that examiners review, so its quality directly reflects the maturity of your model risk management program. The risk register should capture each model’s validation status, outstanding findings, and remediation timeline.

SR 11-7 Compliance Rates by Sub-Area


Figure 3: Board reporting (60%) and outcomes analysis (65%) remain the weakest SR 11-7 compliance areas for US banks (Source: Author analysis based on OCC exam trends, 2024-2025)

AI and Machine Learning Model Risk Management Under SR 11-7

SR 11-7 was written before the explosion of AI/ML in banking, but its principles apply. The challenge is that AI/ML models violate several assumptions that traditional validation relies upon: they may lack explicit, interpretable functional forms; their behavior can change as they retrain on new data; and their complexity makes conceptual soundness evaluation fundamentally different from reviewing a regression equation.

Model risk management teams must extend their SR 11-7 framework to address these AI-specific challenges without abandoning the three-pillar structure.

AI/ML Model Risk Management Extensions to SR 11-7

| SR 11-7 Pillar | Traditional Model Approach | AI/ML Extension Required | Regulatory Signal |
|---|---|---|---|
| Development: Documentation | Mathematical specification; assumption list; variable selection rationale | Feature engineering pipeline; training/test split methodology; hyperparameter tuning rationale; model architecture decisions | SR 21-8 (BSA/AML models) explicitly extends SR 11-7 to ML applications |
| Development: Data Quality | Input data profiling; missing value treatment; outlier handling | Training data representativeness; label quality; data drift detection; bias auditing in training data | OCC examiners increasingly asking for bias testing evidence |
| Validation: Conceptual Soundness | Review mathematical derivation; challenge assumptions | Explainability analysis (SHAP, LIME); fairness metrics; model interpretability assessment; alternative architecture comparison | Fed guidance signals explainability as emerging MRA category |
| Validation: Outcomes Analysis | Backtesting against actuals; statistical significance tests | Concept drift monitoring; retrain-trigger protocols; A/B testing against champion models; production vs. development performance comparison | Automated retraining without validation = high MRA risk |
| Validation: Ongoing Monitoring | Periodic performance reporting; PSI/CSI monitoring | Real-time feature drift detection; prediction confidence monitoring; model degradation alerts; automated revalidation triggers | Continuous monitoring expected for Tier 1 AI/ML models |
| Governance: Inventory | Model inventory with metadata and tiering | Extended metadata: training data sources; retraining schedule; explainability method; bias metrics; data lineage to source systems | Examiners expect AI/ML models explicitly flagged in inventory |
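
The population stability index referenced in the monitoring rows is straightforward to compute. A minimal sketch; the 0.10/0.25 thresholds are an industry convention rather than a regulatory requirement, and the equal-width binning on the baseline range is an assumption (quantile bins are equally common in practice).

```python
import math

def psi(expected: list, actual: list, bins: int = 10) -> float:
    """Population Stability Index between a baseline (development) distribution
    and current production data. Rule of thumb (industry convention, not an
    SR 11-7 requirement): < 0.10 stable, 0.10-0.25 monitor, > 0.25 drift.
    """
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def share(data):
        counts = [0] * bins
        for v in data:
            counts[sum(v > e for e in edges)] += 1  # bin index from baseline edges
        # Floor tiny shares to avoid log(0) in empty bins
        return [max(c / len(data), 1e-6) for c in counts]

    e, a = share(expected), share(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]                   # development scores
shifted  = [min(i / 100 + 0.2, 0.99) for i in range(100)]  # drifted production scores
print(round(psi(baseline, shifted), 3))  # well above the 0.25 drift threshold
```

Wiring a computation like this into an automated dashboard, with defined amber/red triggers, addresses two of the most common monitoring findings in the validation table above.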

AI/ML Model Risk: Regulatory Challenges


Figure 4: 58.8% of banks say clearer regulatory guidance is their top barrier to advancing model risk management for AI (Source: Wolters Kluwer Q1 2026 Survey)

The Colorado AI Act (effective June 30, 2026) and the Texas Responsible AI Governance Act (effective January 1, 2026) add state-level compliance requirements for “high-risk” AI systems used in consequential decisions such as credit underwriting and insurance pricing.

Model risk management teams at banks operating in these states must map their AI/ML inventory against these definitions and ensure that their compliance risk assessment covers the new requirements for impact assessments, transparency notices, and self-reporting of algorithmic discrimination.

The intersection of federal SR 11-7 expectations and emerging state AI regulations creates a complex regulatory compliance landscape that demands proactive governance.
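
One concrete screening metric many teams compute when testing for algorithmic discrimination is an adverse impact ratio across demographic groups. The sketch below uses the four-fifths screening convention and wholly hypothetical counts; neither the metric nor the 0.8 threshold is mandated by the state acts, and a defensible fairness analysis requires statistical testing well beyond a single ratio.

```python
def adverse_impact_ratio(approvals_protected: int, n_protected: int,
                         approvals_reference: int, n_reference: int) -> float:
    """Ratio of approval rates between a protected group and a reference group.
    The 'four-fifths rule' (an EEOC screening convention, not a legal threshold
    under the state AI acts) flags ratios below 0.8 for further review.
    """
    rate_protected = approvals_protected / n_protected
    rate_reference = approvals_reference / n_reference
    return rate_protected / rate_reference

# Hypothetical credit-decision counts, for illustration only
air = adverse_impact_ratio(120, 400, 300, 600)  # 30% vs. 50% approval rate
print(round(air, 2), "flag for review" if air < 0.8 else "within screen")  # 0.6
```

A flagged ratio is a trigger for the impact-assessment and documentation steps the state acts require, not in itself evidence of discrimination.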

Model Risk Management Governance and Examination Readiness

SR 11-7 requires that model risk management governance include “effective policies and procedures, proper allocation of resources, appropriate incentive structures, and clear roles and responsibilities.”

Examiners evaluate governance through the lens of the three-lines model: first-line model developers and users, second-line model risk management (validation and oversight), and third-line internal audit providing independent assurance.

The risk management policy for models must be a standalone document (or a clearly delineated section of the enterprise risk policy) that addresses every element of SR 11-7.

MRM Policy Required Elements

| Policy Element | SR 11-7 Expectation | Common Exam Gap |
|---|---|---|
| Model Definition | Broad, consistent definition applied enterprise-wide; includes EUC/spreadsheet models meeting criteria | Definition too narrow; excludes spreadsheets and EUC tools that qualify as models |
| Risk Tiering Methodology | Documented criteria for tier assignment; annual review of tier assignments | Tiering criteria subjective or undocumented; no process for annual reclassification |
| Inventory Management | Complete, accurate, up-to-date inventory of all models with key metadata | Inventory incomplete; shadow models not captured; metadata fields inconsistent |
| Validation Standards | Defined scope, frequency, and independence requirements for each validation activity | Validation frequency not linked to risk tier; independence compromised by reporting structure |
| Findings Management | Classification scheme (high/medium/low); remediation timelines; escalation triggers | No standard classification; remediation timelines not enforced; findings aging without escalation |
| Board and Senior Management Reporting | Regular reporting on model risk profile, validation results, and outstanding findings | Board reporting infrequent or superficial; no aggregate model risk metrics; findings not presented in business context |
| Exception and Override Process | Documented process for model use outside intended scope or overriding model outputs | Overrides not tracked; no documentation of rationale; override frequency not monitored as KRI |
| Internal Audit Coverage | Periodic audit of MRM function effectiveness, including independence and resource adequacy | Audit coverage limited to sample validation reviews; no assessment of overall MRM program effectiveness |

The OCC and FDIC’s October 2025 proposed rulemaking to define “unsafe or unsound practice” and limit MRA issuance signals a meaningful shift in supervisory approach.

Model risk management teams should anticipate that examiners will focus MRAs on findings with clear links to material financial risk rather than peripheral documentation gaps. However, this does not reduce the need for comprehensive governance.

The shift means that when an MRA is issued, it will carry greater weight, and the bank compliance assessment process must be prepared to respond swiftly. Building a risk monitoring function that tracks finding severity, aging, and remediation status is essential for exam readiness.

Key Risk Indicators for Model Risk Management

Model risk management requires KRIs that measure both the health of the model population and the effectiveness of the MRM program itself.

The following indicators should feed into your KRI dashboard and aggregate into the enterprise operational risk reporting framework.

| KRI | Measurement | Green | Amber | Red |
|---|---|---|---|---|
| Model Inventory Completeness | % of known models captured in inventory vs. EUC discovery scan | > 95% | 85-95% | < 85% |
| Validation Overdue Rate | % of models past scheduled validation date | < 5% | 5-15% | > 15% |
| High-Severity Findings Aging | Average days open for high-severity validation findings | < 60 days | 60-120 days | > 120 days |
| Model Override Frequency | % of model outputs overridden by users without documented rationale | < 5% | 5-10% | > 10% |
| Backtesting Pass Rate | % of models passing backtesting thresholds | > 90% | 75-90% | < 75% |
| AI/ML Concept Drift | % of AI/ML models showing statistically significant drift from training distribution | < 10% | 10-25% | > 25% |
| MRM Staffing Adequacy | Models per validator (full-time equivalent) | < 20:1 | 20-30:1 | > 30:1 |
| Board Reporting Timeliness | Days between quarter end and MRM board report delivery | < 30 days | 30-45 days | > 45 days |
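
A traffic-light evaluator for these KRIs can be sketched directly from the thresholds above. The threshold values come from the table; the data structure and KRI keys are assumptions for illustration.

```python
# Traffic-light (RAG) evaluation for selected KRIs from the table above.
# Format: kri_key -> (green_limit, red_limit, higher_is_better)
THRESHOLDS = {
    "inventory_complete_pct":  (95.0, 85.0, True),   # > 95% green, < 85% red
    "validation_overdue_pct":  (5.0, 15.0, False),   # < 5% green, > 15% red
    "override_pct":            (5.0, 10.0, False),
    "backtesting_pass_pct":    (90.0, 75.0, True),
}

def rag_status(kri: str, value: float) -> str:
    """Classify a KRI reading as green, amber, or red against its thresholds."""
    green, red, higher_is_better = THRESHOLDS[kri]
    if higher_is_better:
        if value > green: return "green"
        if value < red:   return "red"
    else:
        if value < green: return "green"
        if value > red:   return "red"
    return "amber"

print(rag_status("validation_overdue_pct", 12.0))  # amber
print(rag_status("backtesting_pass_pct", 70.0))    # red
```

Feeding these statuses into the enterprise operational risk dashboard, rather than a standalone MRM spreadsheet, keeps model risk visible alongside the other risk categories the board already reviews.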

90-Day Model Risk Management Implementation Roadmap

This roadmap is designed for banking institutions that have a basic model inventory but lack a structured, SR 11-7-compliant model risk management program. Institutions starting from scratch may need 120-150 days. Adapt to your model population size and available resources.

| Phase | Actions | Deliverables | Success Metrics |
|---|---|---|---|
| Days 1-30: Inventory & Tier | Appoint MRM program owner (ideally reporting to CRO). Conduct comprehensive model discovery including EUC/spreadsheet scan. Build or update model inventory with required metadata. Develop and apply risk tiering methodology. Draft MRM policy aligned to SR 11-7 three pillars. | MRM program charter and RACI. Complete model inventory with metadata. Risk tiering methodology document. Tier assignment for all models. Draft MRM policy. | MRM owner appointed with clear mandate. Inventory captures > 95% of models (verified via EUC scan). All models assigned to Tier 1-4. Policy reviewed by CRO. |
| Days 31-60: Validation & Monitoring | Schedule validations based on tier and last validation date. Complete or refresh validations for all Tier 1 models. Build ongoing monitoring framework with automated dashboards. Establish findings management process with severity classification and remediation tracking. Pilot AI/ML validation extensions for highest-risk ML models. | Validation schedule for next 12 months. Completed Tier 1 validation reports. Monitoring dashboard (beta) with PSI/CSI alerts. Findings management process document. AI/ML validation pilot report. | All Tier 1 models validated or revalidated. Monitoring dashboard operational for Tier 1-2. Findings management process producing severity-classified outputs. AI/ML pilot identifies extension gaps. |
| Days 61-90: Governance & Report | Finalize and approve MRM policy. Deliver first board-level MRM report with aggregate model risk profile. Integrate model risk KRIs into ERM dashboard. Conduct internal audit assessment of MRM program effectiveness. Establish quarterly MRC meeting cadence. | Board-approved MRM policy. First MRM board report. KRI dashboard with model risk indicators. Internal audit assessment report. MRC calendar and charter. | Policy approved by board or delegated committee. Board report includes tier distribution, validation status, and finding summary. KRIs integrated into ERM reporting. Audit assessment identifies no critical gaps. MRC convened with standing agenda. |

Common Pitfalls in Model Risk Management

| Pitfall | Root Cause | Remedy |
|---|---|---|
| Incomplete model inventory | Narrow model definition excludes spreadsheets, EUC tools, and vendor models from scope | Apply SR 11-7 broad definition; conduct annual EUC discovery scan; include vendor models in inventory |
| Validation as compliance exercise | Validation focuses on producing a report rather than genuinely challenging model soundness | Train validators in effective challenge techniques; require challenger model development for Tier 1-2 |
| Independence theater | Validators report to same leadership as developers; validation is non-adversarial by design | Ensure organizational separation; validators report to CRO or Chief Model Risk Officer independently |
| Ignoring outcomes analysis | Backtesting treated as optional or deferred when data is limited | Make backtesting mandatory for all Tier 1-2; define minimum data periods; use out-of-sample testing when historicals are insufficient |
| AI/ML validation gap | Traditional validation frameworks not extended for AI/ML-specific risks (drift, bias, explainability) | Develop AI/ML validation supplement to MRM policy; require explainability and bias metrics for all ML models |
| Findings aging without remediation | Findings documented but not tracked to closure; no escalation for overdue items | Implement findings management system with automated aging alerts; escalate > 90-day high findings to MRC |
| Shadow model proliferation | Business units develop decision-support tools outside MRM governance | Annual EUC attestation process; data governance controls that flag new analytical tools; model definition training for business units |
| Board reporting as data dump | MRM reports present raw data without context, trends, or decision-relevant framing | Design board report with executive summary; traffic-light model risk profile; trend arrows; decision asks; limit to 3-5 pages |

The Future of Model Risk Management

Three converging trends will reshape model risk management over the next 24 months, requiring MRM teams to evolve their capabilities, tooling, and governance structures.

First, generative AI is entering the model inventory. Large language models (LLMs) are being deployed in banking for credit memo generation, customer interaction scoring, regulatory text analysis, and fraud narrative detection.

These models challenge every element of SR 11-7: they are opaque by design, they can hallucinate outputs, their behavior changes with prompt engineering rather than code changes, and their training data may include biased or copyrighted material.

Model risk management frameworks must develop validation approaches for generative AI that test output reliability, bias, consistency, and factual accuracy.

The risk assessment process for LLMs will need to include prompt injection testing, output variance measurement, and human-in-the-loop guardrail validation.
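
Output variance measurement can start as simply as sampling the same prompt repeatedly and scoring pairwise similarity. The sketch below is illustrative only: `generate` is a hypothetical stand-in for the LLM call under review, and token-set Jaccard similarity is a deliberately crude metric (embedding-based similarity would be more robust in practice).

```python
# Sketch of output-variance measurement for a generative model, assuming a
# `generate(prompt)` callable wrapping the LLM under review (stubbed below).

def jaccard(a: str, b: str) -> float:
    """Token-set overlap between two responses (1.0 = identical vocabulary)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def output_consistency(generate, prompt: str, n: int = 5) -> float:
    """Mean pairwise similarity across n samples of the same prompt.
    Low scores indicate unstable outputs needing human-in-the-loop review."""
    samples = [generate(prompt) for _ in range(n)]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(jaccard(samples[i], samples[j]) for i, j in pairs) / len(pairs)

# Stub standing in for a real model call, for illustration only
responses = iter(["the loan is approved", "the loan is approved",
                  "loan approved per policy", "the loan is approved",
                  "approved"])
score = output_consistency(lambda p: next(responses), "Summarize decision", n=5)
print(round(score, 2))  # close to 0.5 for this stub: flag for review
```

Whatever metric is chosen, the validation standard should fix the sample size, the similarity measure, and the threshold in advance, so that consistency testing is repeatable across revalidations.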

Second, regulatory expectations are converging globally. The Federal Reserve’s SR 21-8 (2021) extended SR 11-7 to BSA/AML models, and subsequent supervisory messaging signals further extension to AI/ML applications.

BaFin’s December 2025 guidance on AI within ICT risk management under DORA creates European parallel requirements. The ECB has issued model risk expectations that align conceptually with SR 11-7 while adding European-specific requirements.

Banks operating across jurisdictions need a compliance risk assessment framework that maps their model risk management controls to multiple regulatory regimes simultaneously, rather than maintaining parallel programs that inevitably diverge.

Third, model risk management technology is maturing rapidly. Platforms like ValidMind, ModelOp, and CIMCON are automating model inventory management, validation workflow orchestration, monitoring dashboards, and regulatory reporting.

AI-assisted validation is emerging: tools that automatically run backtesting suites, detect data drift, generate bias metrics, and flag models requiring revalidation.

The ERM technology landscape for model risk is moving from spreadsheet-based tracking to integrated platforms that provide continuous model risk visibility.

Banks that invest in these platforms now will find it significantly easier to scale their model risk management program as AI/ML model populations continue to grow at 30-40% annually.

The alternative, applying manual processes to an exponentially growing model inventory, is unsustainable and represents a model risk management failure in its own right.

Ready to build or strengthen your model risk management program under SR 11-7? Our risk management consultants specialize in MRM framework design, validation methodology, and AI/ML governance for banking institutions. Explore our services or contact us directly to schedule a discovery call.

References

1. Federal Reserve Board (2011). “SR 11-7: Supervisory Guidance on Model Risk Management.”

2. OCC (2011). “Bulletin 2011-12: Sound Practices for Model Risk Management.”

3. OCC (2011). “OCC 2011-12 Attachment: Supervisory Guidance on Model Risk Management.”

4. Federal Reserve Board (2021). “SR 21-8: Interagency Statement on Model Risk Management for BSA/AML.”

5. Wolters Kluwer (2026). “Q1 2026 Banking Compliance AI Trend Report.”

6. Wolters Kluwer (2025). “Banking on AI: Risk, Readiness, and the Next Frontier.”

7. ValidMind (2025). “5 Predictions for Model Risk Management and AI Risk in 2025.”

8. Moody’s (2025). “From Compliance to Resilience: Regulators Drive New Standards for AI Model Risk Management.”

9. GAO (2025). “Artificial Intelligence: Use and Oversight in Financial Services.”

10. Sullivan & Cromwell (2025). “FDIC and OCC Issue Proposal to Define Unsafe or Unsound Practice and Limit MRAs.”

11. Simpson Thacher (2025). “Fed Issues New Principles to Significantly Shift Bank Supervisory Priorities.”

12. CIMCON Software (2025). “What is SR 11-7 Guidance on Model Risk Management?”

13. ModelOp (2025). “SR 11-7 Model Risk Management: Compliance, Validation & Governance.”

14. ProSight Financial Association (2025). “AI in Risk Management: From Pilots to Production.”

15. BCLP (2025). “AI Regulation in Financial Services: Turning Principles into Practice.”