Hugging Face compromised with malicious AI models
Widely used machine learning and data science platform Hugging Face was covertly seeded with at least two artificial intelligence models containing malicious code via a new attack technique dubbed nullifAI, which abuses the Pickle files commonly used to serialize and deserialize ML model data, Cybernews reports. Both malicious packages, which resembled proof-of-concept models and have since been deactivated by Hugging Face, evaded Hugging Face's Picklescan security tool for two reasons, according to a report from ReversingLabs: the models were compressed in a format other than PyTorch's default ZIP, and a flaw in Picklescan prevented it from properly scanning broken Pickle files whose malicious payloads can still execute during deserialization.

"Threats lurking in Pickle files are not new. In fact, there are warnings popping out all over the documentation and there has been a lot of research on this topic," said the researchers, who cautioned that other, yet-to-be-identified detection-evasion techniques exploiting Pickle files likely exist.
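The underlying hazard is a well-documented property of Python's Pickle format: deserialization can invoke arbitrary callables. The following minimal sketch illustrates the general mechanism, not the actual nullifAI payload, which ReversingLabs has not published in full. The `MaliciousPayload` class and the harmless `echo` command are stand-ins for illustration only.

```python
import os
import pickle

# Illustrative only: shows why deserializing untrusted Pickle data is unsafe.
# A pickled object can name an arbitrary callable via __reduce__; the
# unpickler invokes that callable during loading, before any model
# weights are even touched.

class MaliciousPayload:
    def __reduce__(self):
        # On unpickling, Python calls os.system(...) with this argument.
        # A harmless echo here; an attacker embeds whatever they want.
        return (os.system, ('echo "code executed during unpickling"',))

tainted_bytes = pickle.dumps(MaliciousPayload())

# Merely loading the "model" runs the attacker's callable immediately.
pickle.loads(tainted_bytes)
```

This is why researchers recommend formats that store only tensor data, such as Safetensors, or restricted loaders like `torch.load(..., weights_only=True)`, which refuse to execute arbitrary callables when loading model files from untrusted sources.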