FedGuard: A Robust Federated AI Framework for Privacy-Conscious Collaborative AML, Inspired by DARPA GARD Principles

Authors

  • Nik Sultan, Illinois Institute of Technology, USA
  • Neal Patwar, University of Utah, USA
  • Xianggang Wei, Xi'an University of Architecture and Technology, Shaanxi, China
  • JiaJia Chew, Accounting, Universiti Sains Malaysia, Malaysia
  • Jingwei Liu, New York University, USA
  • Rui Du, King's College London, United Kingdom

DOI:

https://doi.org/10.5281/zenodo.18253151

Keywords:

Federated Learning, Anti-Money Laundering (AML), Privacy-Preserving AI, Model Poisoning, Membership Inference, Robust Aggregation, Differential Privacy, DARPA GARD, Financial Security

Abstract

The fight against money laundering requires collaborative analysis of financial data across institutions, yet privacy regulations and security concerns create debilitating data silos. While federated learning (FL) offers a privacy-preserving framework for decentralized model training, its application to Anti-Money Laundering (AML) is acutely vulnerable to specialized AI security threats, such as model poisoning and privacy inference attacks. To address this, we introduce FedGuard, a robust FL framework for collaborative AML, inspired by the security-first principles of the DARPA GARD program. FedGuard integrates a dual defense mechanism. First, a Dynamic Contribution-Aware Robust Aggregation module counters model poisoning by evaluating client updates via reputation scoring and statistical filtering, preserving the global model's integrity. Second, a calibrated Differential Privacy scheme is applied to local updates, providing a mathematical guarantee against membership inference and data reconstruction attacks. This design operationalizes the GARD tenets of "evaluable robustness" and "defense-in-depth" within a practical FL system. Our comprehensive evaluation on financial transaction datasets demonstrates that FedGuard maintains AML detection performance (AUC-ROC, F1-score) comparable to standard FL in benign settings. Under attack, it demonstrates superior robustness, reducing model poisoning success rates by over 70% compared to vulnerable baselines, while simultaneously preserving privacy by lowering inference attack accuracy to near-random levels at a manageable utility cost. FedGuard provides a deployable solution that enables secure, cross-institutional collaboration, directly supporting national financial security initiatives and regulatory goals for safer data sharing.
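The abstract describes FedGuard's two defenses only at a high level. The sketch below illustrates one way such a round could look under common assumptions, not the authors' implementation: client-side clipping plus Gaussian noise on updates (a standard differential-privacy mechanism) and server-side median-based outlier filtering with reputation-weighted averaging. All names and constants (dp_sanitize, robust_aggregate, clip_norm, noise_multiplier, the modified z-score cutoff, the reputation decay) are illustrative assumptions; the paper's actual algorithm and calibration may differ.

# Minimal sketch (assumed, not the paper's code): one FedGuard-style round
# combining the two defenses named in the abstract.
import numpy as np

def dp_sanitize(update, clip_norm=1.0, noise_multiplier=0.05, rng=None):
    """Client side: clip the update to a fixed L2 norm and add Gaussian noise.
    A real deployment calibrates noise_multiplier to a target (epsilon, delta)."""
    rng = rng or np.random.default_rng()
    clipped = update * min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    return clipped + rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)

def robust_aggregate(updates, reputations, z_cutoff=3.5, decay=0.9):
    """Server side: median-based outlier filtering plus reputation-weighted averaging."""
    updates = np.stack(updates)                       # shape: (n_clients, dim)
    dists = np.linalg.norm(updates - np.median(updates, axis=0), axis=1)
    mad = np.median(np.abs(dists - np.median(dists))) + 1e-12
    robust_z = (dists - np.median(dists)) / (1.4826 * mad)
    accepted = robust_z < z_cutoff                    # statistical filtering of outliers

    # Reputation update: reward accepted clients, decay rejected ones.
    reputations = np.where(accepted, decay * reputations + (1 - decay),
                           decay * reputations)
    weights = reputations * accepted                  # rejected clients get zero weight
    weights = weights / (weights.sum() + 1e-12)
    return weights @ updates, reputations

# Toy round: 8 benign clients and 2 clients pushing a constant-bias update.
rng = np.random.default_rng(0)
dim = 32
benign = [rng.normal(0.0, 0.1, dim) for _ in range(8)]
poisoned = [np.full(dim, 0.5) for _ in range(2)]
noisy_updates = [dp_sanitize(u, rng=rng) for u in benign + poisoned]

reputations = np.ones(10)
global_update, reputations = robust_aggregate(noisy_updates, reputations)
print("reputations after one round:", np.round(reputations, 2))

In this toy round the two biased clients should receive near-zero aggregation weight and decayed reputation scores, while noise_multiplier is the knob that trades the differential-privacy budget against model utility, mirroring the utility cost discussed in the abstract.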

Published

2026-01-15

How to Cite

Sultan, N., Patwar, N., Wei, X., Chew, J., Liu, J., & Du, R. (2026). FedGuard: A Robust Federated AI Framework for Privacy-Conscious Collaborative AML, Inspired by DARPA GARD Principles. International Academic Journal of Social Science, 2, 1–16. https://doi.org/10.5281/zenodo.18253151

Section

Articles