Commit 0ea5aa1
Data analysis skills specifically designed for the financial risk control field (#823)
* upload skill datanalysis-credit-risk
* update skill datanalysis-credit-risk
* fix plugin problem
* re-run npm start
* re-run npm start
* change codes description skill.md to english and remove personal path
* try to update readme.md
* Updating readme

Co-authored-by: Aaron Powell <me@aaron-powell.com>
1 parent 4cf83b0 commit 0ea5aa1

5 files changed

Lines changed: 1956 additions & 0 deletions

File tree

docs/README.skills.md

Lines changed: 1 addition & 0 deletions
@@ -89,6 +89,7 @@ See [CONTRIBUTING.md](../CONTRIBUTING.md#adding-skills) for guidelines on how to
 | [csharp-nunit](../skills/csharp-nunit/SKILL.md) | Get best practices for NUnit unit testing, including data-driven tests | None |
 | [csharp-tunit](../skills/csharp-tunit/SKILL.md) | Get best practices for TUnit unit testing, including data-driven tests | None |
 | [csharp-xunit](../skills/csharp-xunit/SKILL.md) | Get best practices for XUnit unit testing, including data-driven tests | None |
+| [datanalysis-credit-risk](../skills/datanalysis-credit-risk/SKILL.md) | Credit risk data cleaning and variable screening pipeline for pre-loan modeling. Use when working with raw credit data that needs quality assessment, missing value analysis, or variable selection before modeling. It covers data loading and formatting, abnormal period filtering, missing rate calculation, high-missing variable removal, low-IV variable filtering, high-PSI variable removal, Null Importance denoising, high-correlation variable removal, and cleaning report generation. Applicable scenarios are credit risk data cleaning, variable screening, and pre-loan modeling preprocessing. | `references/analysis.py`<br />`references/func.py`<br />`scripts/example.py` |
 | [dataverse-python-advanced-patterns](../skills/dataverse-python-advanced-patterns/SKILL.md) | Generate production code for Dataverse SDK using advanced patterns, error handling, and optimization techniques. | None |
 | [dataverse-python-production-code](../skills/dataverse-python-production-code/SKILL.md) | Generate production-ready Python code using Dataverse SDK with error handling, optimization, and best practices | None |
 | [dataverse-python-quickstart](../skills/dataverse-python-quickstart/SKILL.md) | Generate Python SDK setup + CRUD + bulk + paging snippets using official patterns. | None |
Lines changed: 113 additions & 0 deletions
@@ -0,0 +1,113 @@
---
name: datanalysis-credit-risk
description: Credit risk data cleaning and variable screening pipeline for pre-loan modeling. Use when working with raw credit data that needs quality assessment, missing value analysis, or variable selection before modeling. It covers data loading and formatting, abnormal period filtering, missing rate calculation, high-missing variable removal, low-IV variable filtering, high-PSI variable removal, Null Importance denoising, high-correlation variable removal, and cleaning report generation. Applicable scenarios are credit risk data cleaning, variable screening, and pre-loan modeling preprocessing.
---

# Data Cleaning and Variable Screening

## Quick Start

```bash
# Run the complete data cleaning pipeline
python ".github/skills/datanalysis-credit-risk/scripts/example.py"
```

## Complete Process Description

The data cleaning pipeline consists of the following 11 steps; each runs independently without deleting the original data:

1. **Get Data** - Load and format raw data
2. **Organization Sample Analysis** - Compute sample count and bad-sample rate for each organization
3. **Separate OOS Data** - Separate out-of-sample (OOS) samples from modeling samples
4. **Filter Abnormal Months** - Remove months with insufficient bad-sample count or total sample count
5. **Calculate Missing Rate** - Calculate overall and organization-level missing rates for each feature
6. **Drop High Missing Rate Features** - Remove features whose overall missing rate exceeds the threshold
7. **Drop Low IV Features** - Remove features whose overall IV is too low, or whose IV is too low in too many organizations
8. **Drop High PSI Features** - Remove features with unstable PSI
9. **Null Importance Denoising** - Remove noise features using the label permutation method
10. **Drop High Correlation Features** - Remove highly correlated features based on original gain
11. **Export Report** - Generate an Excel report containing details and statistics for all steps
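The steps above can be sketched as a simple driver loop: each step reads the full original dataset plus the list of surviving columns and returns only the columns it would drop, so the raw data is never mutated. The step functions below are illustrative stand-ins, not the real ones in `references.analysis`.

```python
# Hypothetical pipeline shape: steps read data, never edit it.
def run_pipeline(data, columns, steps):
    surviving, report = list(columns), []
    for name, step in steps:
        dropped = step(data, surviving)         # each step returns columns to drop
        surviving = [c for c in surviving if c not in dropped]
        report.append((name, sorted(dropped)))  # audit trail for the final report
    return surviving, report

# Toy data: column "b" is entirely missing.
data = {"a": [1, None, 3], "b": [None, None, None], "c": [1, 2, 3]}
steps = [
    ("drop_highmiss", lambda d, cols: {c for c in cols
                                       if sum(v is None for v in d[c]) / len(d[c]) > 0.6}),
    ("drop_constant", lambda d, cols: {c for c in cols if len(set(d[c])) == 1}),
]
print(run_pipeline(data, ["a", "b", "c"], steps))
# → (['a', 'c'], [('drop_highmiss', ['b']), ('drop_constant', [])])
```

Because each step only records what it would remove, any step's decision can be compared against the untouched raw data afterward.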
## Core Functions

| Function | Purpose | Module |
|----------|---------|--------|
| `get_dataset()` | Load and format data | references.func |
| `org_analysis()` | Organization sample analysis | references.func |
| `missing_check()` | Calculate missing rate | references.func |
| `drop_abnormal_ym()` | Filter abnormal months | references.analysis |
| `drop_highmiss_features()` | Drop high missing rate features | references.analysis |
| `drop_lowiv_features()` | Drop low IV features | references.analysis |
| `drop_highpsi_features()` | Drop high PSI features | references.analysis |
| `drop_highnoise_features()` | Null Importance denoising | references.analysis |
| `drop_highcorr_features()` | Drop high correlation features | references.analysis |
| `iv_distribution_by_org()` | IV distribution statistics | references.analysis |
| `psi_distribution_by_org()` | PSI distribution statistics | references.analysis |
| `value_ratio_distribution_by_org()` | Value ratio distribution statistics | references.analysis |
| `export_cleaning_report()` | Export cleaning report | references.analysis |
## Parameter Description

### Data Loading Parameters

- `DATA_PATH`: Data file path (Parquet format recommended)
- `DATE_COL`: Date column name
- `Y_COL`: Label column name
- `ORG_COL`: Organization column name
- `KEY_COLS`: List of primary key column names

### OOS Organization Configuration

- `OOS_ORGS`: Out-of-sample organization list
### Abnormal Month Filtering Parameters

- `min_ym_bad_sample`: Minimum bad sample count per month (default 10)
- `min_ym_sample`: Minimum total sample count per month (default 500)

### Missing Rate Parameters

- `missing_ratio`: Overall missing rate threshold (default 0.6)

### IV Parameters

- `overall_iv_threshold`: Overall IV threshold (default 0.1)
- `org_iv_threshold`: Single-organization IV threshold (default 0.1)
- `max_org_threshold`: Maximum tolerated count of low-IV organizations (default 2)

### PSI Parameters

- `psi_threshold`: PSI threshold (default 0.1)
- `max_months_ratio`: Maximum unstable month ratio (default 1/3)
- `max_orgs`: Maximum unstable organization count (default 6)

### Null Importance Parameters

- `n_estimators`: Number of trees (default 100)
- `max_depth`: Maximum tree depth (default 5)
- `gain_threshold`: Gain difference threshold (default 50)

### High Correlation Parameters

- `max_corr`: Correlation threshold (default 0.9)
- `top_n_keep`: Keep the top N features by original gain ranking (default 20)

## Output Report

The generated Excel report contains the following sheets (the pipeline writes sheet names in Chinese; English glosses in parentheses):

1. **汇总** (Summary) - Summary of all steps, including operation results and conditions
2. **机构样本统计** (Organization Sample Statistics) - Sample count and bad-sample rate for each organization
3. **分离OOS数据** (OOS Split) - OOS sample and modeling sample counts
4. **Step4-异常月份处理** (Abnormal Month Handling) - Abnormal months that were removed
5. **缺失率明细** (Missing Rate Detail) - Overall and organization-level missing rates for each feature
6. **Step5-有值率分布统计** (Value Ratio Distribution) - Distribution of features across value-ratio ranges
7. **Step6-高缺失率处理** (High Missing Rate Handling) - High-missing-rate features that were removed
8. **Step7-IV明细** (IV Detail) - IV values of each feature per organization and overall
9. **Step7-IV处理** (IV Handling) - Features failing the IV conditions and their low-IV organizations
10. **Step7-IV分布统计** (IV Distribution) - Distribution of features across IV ranges
11. **Step8-PSI明细** (PSI Detail) - PSI values of each feature per organization per month
12. **Step8-PSI处理** (PSI Handling) - Features failing the PSI conditions and their unstable organizations
13. **Step8-PSI分布统计** (PSI Distribution) - Distribution of features across PSI ranges
14. **Step9-null importance处理** (Null Importance Handling) - Noise features that were removed
15. **Step10-高相关性剔除** (High Correlation Removal) - High-correlation features that were removed

## Features

- **Interactive Input**: Parameters can be entered before each step runs, with default values supported
- **Independent Execution**: Each step runs independently without deleting the original data, making comparative analysis easy
- **Complete Report**: Generates a complete Excel report with details, statistics, and distributions
- **Multi-process Support**: IV and PSI calculations support multi-process acceleration
- **Organization-level Analysis**: Supports organization-level statistics and the modeling/OOS distinction
