Posted on 2025-09-27, 15:33. Authored by Jia Yi Chan, Huaqun Guo, Timothy Liu, Aik Beng Ng, Simon See
Advances in real-world adversarial threats have heightened concerns over the privacy and security of AI systems. One key threat is data poisoning, where malicious data is injected into training sets or model inputs to compromise model behavior. While much of the current research focuses on deep learning, particularly Foundation Models such as Large Language Models, there is limited understanding of how data poisoning affects different model eras. This survey presents a comparative analysis of data poisoning vulnerabilities across three model eras: statistical machine learning models, smaller-scale "classical" deep learning models, and foundation models. Our goal is to inform practitioners of the trade-offs between robustness and performance when choosing models for different applications.
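As a concrete illustration of the threat the abstract describes, the sketch below shows one classic poisoning attack, label flipping, against a statistical machine learning victim. This is a minimal sketch, not the survey's method: the synthetic dataset, the flip rates, and the logistic-regression victim model are all illustrative assumptions.

```python
# Minimal label-flipping poisoning sketch (illustrative assumptions only:
# synthetic data, logistic-regression victim, arbitrary flip rates).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean binary classification data, split into train and held-out test sets.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

def poison_labels(y, flip_rate, rng):
    """Flip the labels of a random fraction of training points."""
    y_poisoned = y.copy()
    n_flip = int(flip_rate * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # binary labels: 0 <-> 1
    return y_poisoned

# Train on increasingly poisoned labels; evaluate on clean test data.
for flip_rate in [0.0, 0.1, 0.3]:
    y_poisoned = poison_labels(y_train, flip_rate, rng)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    acc = model.score(X_test, y_test)
    print(f"flip rate {flip_rate:.0%}: test accuracy {acc:.3f}")
```

Printing clean-test accuracy at each flip rate makes the attack's effect directly observable, which is the kind of robustness-versus-performance behavior the survey compares across model eras.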
Funding
Funded by the Research Scholarship Funding (RSF) from the Ministry of Education, Singapore
Journal/Conference/Book title
The 19th IEEE International Conference on Service Operations and Logistics, and Informatics (IEEE SOLI 2025)