This repository is the expected loss integration layer in the commercial credit-risk stack. It combines upstream PD, LGD, and EAD outputs with optional industry reference data to produce loan-level expected loss measures, portfolio summaries, IFRS 9-style tables, and downstream pricing or stress inputs. The project is positioned as the bridge between component risk models and downstream portfolio analytics for both bank-aligned frameworks and practical lending environments.
This project demonstrates how separate commercial credit-risk components can be brought together into one practical expected loss workflow. It is built for portfolio review and recruiter inspection, so the data is synthetic, the logic is transparent, and the outputs are shaped for downstream reuse in both structured risk assessment and lending decision support.
Upstream inputs:
- `PD-and-scorecard-commercial`
- `LGD-commercial`
- `EAD-CCF-commercial`
- optional reference context from `industry-analysis`
Downstream consumers:
- `stress-testing-commercial`
- `RAROC-pricing-and-return-hurdle`
- `portfolio-monitor-commercial`
- `RWA-capital-commercial`
This project can be applied in:
- Expected loss estimation for portfolio risk assessment, impairment-style views, and stress inputs
- Loan-level and segment-level loss measurement for structured risk review
- Reusable EL contract for monitoring, pricing, and capital workflows
- Loss estimation to support origination strategy, pricing, and risk-adjusted decisioning
- Customer and facility segmentation using expected loss and concentration views
- Portfolio performance tracking with a consistent loss metric across products or cohorts
Input data:

- `data/input/portfolio_input.csv`: facility/borrower portfolio base used to join components
- `data/input/facility_pd_final_combined.csv`: facility PD final-layer contract
- `data/input/lgd_final.csv`: LGD contract used to build expected loss
- `data/input/downturn_overlays.csv`: simple overlay table used for scenario-weighted ECL examples
- `data/raw/demo_portfolio.csv`: lightweight demo extract for quick reviewer context
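The scenario-weighted ECL idea behind the downturn overlay table can be sketched as follows. This is a minimal illustration, not the repo's implementation: the column names (`scenario`, `weight`, `pd_multiplier`, `lgd_addon`) are assumptions, and the real schema lives in `data/input/downturn_overlays.csv`.

```python
# Sketch of scenario-weighted ECL with an assumed overlay schema:
# each scenario stresses PD and LGD, and the per-scenario ECLs are
# combined using scenario weights.
import pandas as pd

facilities = pd.DataFrame({
    "facility_id": ["F1", "F2"],
    "pd": [0.02, 0.05],
    "lgd": [0.40, 0.35],
    "ead": [100_000.0, 250_000.0],
})
overlays = pd.DataFrame({
    "scenario": ["base", "downturn"],
    "weight": [0.7, 0.3],
    "pd_multiplier": [1.0, 1.5],   # hypothetical stress on PD
    "lgd_addon": [0.0, 0.10],      # hypothetical additive LGD stress
})

# Cross-join facilities with scenarios, stress the components, then weight.
scen = facilities.merge(overlays, how="cross")
scen["ecl_scenario"] = (
    (scen["pd"] * scen["pd_multiplier"]).clip(upper=1.0)
    * (scen["lgd"] + scen["lgd_addon"]).clip(upper=1.0)
    * scen["ead"]
)
weighted = (
    scen.assign(weighted_ecl=scen["weight"] * scen["ecl_scenario"])
        .groupby("facility_id", as_index=False)["weighted_ecl"].sum()
)
print(weighted)
```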
Key outputs:

- `outputs/reports/pipeline_summary.md`: run summary and file index
- `outputs/tables/loan_level_el.csv`: facility-level expected loss output contract
- `outputs/tables/segment_expected_loss_summary.csv`: segment aggregation for portfolio packs
- `outputs/tables/ifrs9_ecl_by_facility.csv`: IFRS 9-style ECL view (simplified)
- `outputs/tables/input_source_report.csv`: traceability report showing which inputs drove each output
- `outputs/tables/pipeline_validation_report.csv`: reviewer-friendly validation checks
- `outputs/charts/el_waterfall.png`: quick visual showing which components drive EL movement
A portfolio team has borrower PD, LGD assumptions, and exposure measures but needs one consistent “loan-level expected loss” dataset to drive stress tests, pricing packs, monitoring, and capital discussion. This repo produces that EL contract and the portfolio summaries that downstream repos can reuse without re-implementing component logic.
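The core join described above can be sketched in a few lines. Column names here are illustrative assumptions; the actual contracts are defined by the CSVs under `data/input/`.

```python
# Minimal sketch of the loan-level EL contract: join PD, LGD, and exposure
# layers on facility_id, then compute EL = PD x LGD x EAD per facility.
import pandas as pd

portfolio = pd.DataFrame({
    "facility_id": ["F1", "F2", "F3"],
    "segment": ["SME", "SME", "CRE"],
    "ead": [100_000.0, 250_000.0, 500_000.0],
})
pd_layer = pd.DataFrame({"facility_id": ["F1", "F2", "F3"], "pd": [0.02, 0.05, 0.01]})
lgd_layer = pd.DataFrame({"facility_id": ["F1", "F2", "F3"], "lgd": [0.40, 0.35, 0.25]})

# validate="one_to_one" guards against accidental fan-out in the joins.
loan_level = (
    portfolio.merge(pd_layer, on="facility_id", validate="one_to_one")
             .merge(lgd_layer, on="facility_id", validate="one_to_one")
)
loan_level["expected_loss"] = loan_level["pd"] * loan_level["lgd"] * loan_level["ead"]
print(loan_level[["facility_id", "segment", "expected_loss"]])
```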
- `stress-testing-commercial`: uses `outputs/tables/loan_level_el.csv` (and segment summaries) as the base expected loss view before applying scenario assumptions.
- `RAROC-pricing-and-return-hurdle`: uses `outputs/tables/pricing_table.csv` and `outputs/tables/loan_level_el.csv` to translate EL into cost-of-risk inputs for hurdle pricing.
- `portfolio-monitor-commercial`: uses `outputs/tables/ifrs9_ecl_by_facility.csv` and the scenario-weighted tables as the impairment / staging foundation.
- `RWA-capital-commercial`: uses expected loss and stress outputs as context for capital overlays and reporting.
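As one example of downstream reuse, a pricing repo can turn loan-level EL into a cost-of-risk input expressed in basis points of exposure. This is a hedged sketch, not the pricing repo's actual logic, and the column names are assumptions.

```python
# Illustrative translation of expected loss into a cost-of-risk input:
# EL as a fraction of EAD, scaled to basis points for hurdle pricing.
import pandas as pd

loan_level_el = pd.DataFrame({
    "facility_id": ["F1", "F2"],
    "ead": [100_000.0, 250_000.0],
    "expected_loss": [800.0, 4375.0],
})
loan_level_el["cost_of_risk_bps"] = (
    loan_level_el["expected_loss"] / loan_level_el["ead"] * 10_000
)
print(loan_level_el)
```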
- `outputs/tables/loan_level_el.csv`
- `outputs/tables/segment_expected_loss_summary.csv`
- `outputs/tables/portfolio_summary.csv`
- `outputs/tables/pricing_table.csv`
- `outputs/tables/stress_test_results.csv`
- `outputs/tables/ifrs9_ecl_by_facility.csv`
- `outputs/tables/concentration_summary.csv`
- `outputs/tables/input_source_report.csv`
- `outputs/tables/pipeline_validation_report.csv`
- `data/`: staged input bundles, processed working tables, and external reference files
- `src/`: reusable expected loss, pricing, staging, and pipeline logic
- `scripts/`: wrapper scripts for pipeline execution
- `docs/`: methodology, assumptions, pricing logic, stress notes, and validation material
- `notebooks/`: reviewer-facing walkthrough notebooks
- `outputs/`: exported tables, charts, reports, and sample artifacts
- `tests/`: validation and regression checks
Quick start:
```bash
pip install -r requirements.txt
python -m src.pipeline
```

After the run, start with:

- `outputs/reports/pipeline_summary.md`
- `outputs/tables/portfolio_summary.csv`
- `outputs/tables/pipeline_validation_report.csv`
Run validation tests:
```bash
python -m pytest
```

Run the full pipeline:

```bash
python -m src.pipeline
```

Alternative entry point:

```bash
python -m src.run_pipeline
```

- `tests/test_pipeline.py` runs the pipeline end-to-end and checks that core output files are written.
- `outputs/tables/pipeline_validation_report.csv` captures reconciliations and sanity checks in a reviewer-friendly table.
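The shape of the end-to-end check is simple: after a pipeline run, assert that the core output contracts exist on disk. The helper below is an illustrative sketch, not the contents of `tests/test_pipeline.py`; the file list is taken from the outputs named above.

```python
# Sketch of an output-contract check: list the files the pipeline is
# expected to write, and report any that are missing under a given root.
from pathlib import Path

EXPECTED_OUTPUTS = [
    "outputs/tables/loan_level_el.csv",
    "outputs/tables/portfolio_summary.csv",
    "outputs/tables/pipeline_validation_report.csv",
]

def check_outputs(root: str = ".") -> list[str]:
    """Return the expected output files that are missing under root."""
    return [p for p in EXPECTED_OUTPUTS if not (Path(root) / p).exists()]
```

A pytest test would run the pipeline first and then assert `check_outputs() == []`.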
- All inputs are synthetic or public-style demo data.
- The integration logic is designed to be explainable and reusable rather than to mirror a production impairment engine exactly.
- Downstream tables are intentionally flat and portable so the repo can demonstrate stack integration without relying on workspace-specific conventions.