AI updates
2024-12-23 01:41:23 Pacific

Hugging Face Model Loading Vulnerabilities Enable Code Execution - 23d
Read more: huggingface.co

Hugging Face, a popular open-source platform hosting generative AI models, has been found to have vulnerabilities in its model-loading libraries. These vulnerabilities, detailed in a Malware.news analysis and Checkmarx blog posts, allow attackers to upload malicious models that trigger arbitrary code execution when loaded. The flaws stem from the use of deprecated methods and insufficient validation for legacy model formats across multiple machine learning frameworks integrated with Hugging Face, such as TensorFlow and PyTorch. Attackers can exploit these weaknesses to inject and execute harmful code, potentially compromising users' systems.

The vulnerabilities affect various libraries used to interact with Hugging Face, including the huggingface_hub library. Once a malicious model is uploaded, unsuspecting users can load it through methods such as huggingface_hub.from_pretrained_keras and huggingface_hub.KerasModelHubMixin.from_pretrained. Even loading snippets copied from the example README files of reputable repositories have been found to be vulnerable. Although Hugging Face was notified of these issues in August 2024, its early-September response was deemed inadequate, with the vulnerable methods still not officially deprecated in the documentation or code.
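For context, a typical use of the flagged loading path looks like the sketch below, based on the method names cited above. The call is wrapped in a function and never executed here, and `repo_id` is a hypothetical placeholder; the point is that a single one-line call fetches and deserializes whatever the repository contains.

```python
# Sketch of the loading pattern the researchers flagged. The function is
# defined but not called, and `repo_id` is a hypothetical placeholder.
def load_remote_keras_model(repo_id: str):
    # Deferred import: huggingface_hub is only needed if this is called.
    from huggingface_hub import from_pretrained_keras

    # This downloads the repo and deserializes its contents; per the
    # report, legacy-format archives are not validated as benign before
    # being loaded, so a malicious repo can execute code at this point.
    return from_pretrained_keras(repo_id)
```

This is exactly the kind of call that appears in example README files, which is why copying such snippets for an untrusted repository is risky.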

Researchers have demonstrated proof-of-concept exploits leveraging these vulnerabilities. The use of older, less secure model formats and serialization methods, such as pickle in PyTorch, contributes significantly to the risk. The findings highlight the need for stronger security practices in Hugging Face's libraries: robust validation, formal deprecation of insecure methods, and documentation warnings about their use. Until the vulnerabilities are fully addressed, users are advised to exercise caution and load models only from trusted sources.
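The pickle risk mentioned above can be illustrated with a minimal, self-contained sketch: pickle's `__reduce__` protocol lets a crafted object name any importable callable to be invoked at load time, which is why deserializing an untrusted model file is equivalent to running untrusted code. Here the payload harmlessly calls `print`; an attacker could point it at something like `os.system` instead.

```python
import pickle

# Minimal illustration of pickle's code-execution hazard. This is a
# generic demonstration, not the researchers' actual proof of concept.
class MaliciousPayload:
    def __reduce__(self):
        # (callable, args): pickle.loads will call print(*args) and
        # return its result instead of reconstructing this object.
        return (print, ("side effect: code ran during unpickling",))

blob = pickle.dumps(MaliciousPayload())
result = pickle.loads(blob)  # executes print before "loading" completes
```

Because the executed callable is chosen by whoever produced the file, no amount of inspection of the loading code protects against a hostile model saved in a pickle-based format; only validating the file's provenance or using a non-executable format does.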