Decoupling Decision-Making in Fraud Prevention through Classifier Calibration for Business Logic Action
Emanuele Luzio, Moacir Antonelli Ponti, Christian Ramirez Arevalo, Luis Argerich

Abstract
Machine learning models typically focus on specific targets, such as creating classifiers, often based on known population feature distributions in a business context. However, models that score individual cases adapt over time to improve precision, which motivates the concept of decoupling: shifting from point evaluation to data distribution. We use calibration as a strategy for decoupling machine learning (ML) classifiers from score-based actions within business logic frameworks. To evaluate these strategies, we perform a comparative analysis using a real-world business scenario and multiple ML models. Our findings highlight the trade-offs and performance implications of the approach, offering valuable insights for practitioners seeking to optimize their decoupling efforts. In particular, the Isotonic and Beta calibration methods stand out in scenarios where there is a shift between training and testing data.
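The sketch below illustrates the kind of decoupling the abstract describes: the classifier's raw scores are mapped to calibrated probabilities (here via isotonic regression in scikit-learn), so a fixed business threshold keeps a stable meaning even when the model is retrained. The dataset, model, and threshold are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch, assuming a scikit-learn workflow: calibrate a fraud
# classifier so a fixed business rule ("decline if P(fraud) > 0.30") stays
# decoupled from the underlying model's raw score distribution.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced stand-in for a fraud dataset (not the BAF data).
X, y = make_classification(n_samples=20_000, n_features=20,
                           weights=[0.98, 0.02], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Isotonic calibration wraps the raw model and maps its scores to
# probabilities, so downstream thresholds are defined on a stable scale.
model = CalibratedClassifierCV(GradientBoostingClassifier(random_state=0),
                               method="isotonic", cv=5)
model.fit(X_train, y_train)

BUSINESS_THRESHOLD = 0.30  # hypothetical value, fixed by business logic
p_fraud = model.predict_proba(X_test)[:, 1]
decline = p_fraud > BUSINESS_THRESHOLD
print(f"Decline rate: {decline.mean():.2%}")
```

Because the business rule is expressed on calibrated probabilities rather than raw scores, the model can be retrained or swapped (e.g. for CatBoost or LightGBM) without revisiting the threshold.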
Benchmarks
| Benchmark | Model | Recall @ 5% FPR |
|---|---|---|
| fraud-detection-on-baf-base | MLP-NN | 49.6% |
| fraud-detection-on-baf-base | CatBoost | 52.4% |
| fraud-detection-on-baf-base | LightGBM | 54.3% |
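For reference, a "Recall @ 5% FPR" figure like those in the table can be computed from model scores with a ROC curve, as in the sketch below. The labels and scores here are synthetic placeholders, not results from the paper's models or the BAF benchmark.

```python
# Sketch, assuming scikit-learn: recall (TPR) at the largest threshold
# whose false-positive rate does not exceed 5%.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=5_000)                   # hypothetical labels
y_score = y_true * 0.3 + rng.normal(0.4, 0.2, size=5_000)  # hypothetical scores

fpr, tpr, _ = roc_curve(y_true, y_score)
# Last operating point with FPR <= 0.05 (fpr is sorted ascending).
idx = np.searchsorted(fpr, 0.05, side="right") - 1
print(f"Recall @ 5% FPR: {tpr[idx]:.1%}")
```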