Trojans in AI models
Hidden logic, data poisoning, and other targeted attack methods via AI systems.
 

Over the coming decades, security risks associated with AI systems will be a major focus of researchers’ efforts. One of the least explored risks today is the possibility of trojanizing an AI model: embedding hidden functionality or intentional errors into a machine learning system that appears to work correctly at first glance. There are various methods of creating such a Trojan horse, differing in complexity and scope, and all of them need to be defended against.

Malicious code in the model

Certain ML model storage formats can contain executable code. For example, arbitrary code can be executed while loading a file in the pickle format, the standard Python format for data serialization (converting data into a form convenient for storage and transfer). In particular, this format is used by the deep learning library PyTorch. In another popular machine learning library, TensorFlow, models in the .keras and HDF5 formats support a “lambda layer”, which also executes arbitrary Python commands. This code can easily conceal malicious functionality.
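To see why loading a pickle is dangerous, consider a minimal (and deliberately benign) sketch: the `__reduce__` hook lets a pickled object name an arbitrary callable that runs at load time. Here we call `eval` on a harmless expression; a real attack would call `os.system` or open a reverse shell instead. The `Payload` class name is ours, purely for illustration.

```python
import pickle

# Benign stand-in for a trojanized model file: __reduce__ tells pickle
# which callable to invoke during deserialization.
class Payload:
    def __reduce__(self):
        # A real attack would return (os.system, ("malicious command",)).
        return (eval, ("6 * 7",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # eval("6 * 7") runs here, during loading
print(result)                # 42 -- note that no Payload object is ever created
```

Merely calling `pickle.loads()` (or `torch.load()` on such a file) is enough to run the attacker's code; the victim never has to use the "model" at all.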

TensorFlow’s documentation includes a warning that a TensorFlow model can read and write files, send and receive network data, and even launch child processes. In other words, it’s essentially a full-fledged program.

Malicious code can activate as soon as an ML model is loaded. In February 2024, approximately 100 models with malicious functionality were discovered in the popular repository of public models, Hugging Face. Of these, 20% created a reverse shell on the infected device, and 10% launched additional software.
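One pragmatic defense is to inspect a pickle's opcode stream before ever loading it, flagging opcodes that import or invoke callables. Below is a minimal sketch using Python's standard pickletools module; the particular set of "suspicious" opcodes is our simplification (production scanners typically use allowlists of known-safe globals instead).

```python
import pickle
import pickletools

# Opcodes that resolve or call arbitrary objects during unpickling.
SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def flag_suspicious(raw: bytes) -> list:
    """List suspicious opcode names in a pickle stream, without loading it."""
    return sorted({op.name for op, _arg, _pos in pickletools.genops(raw)
                   if op.name in SUSPICIOUS})

# A crafted payload that would run code on load (hypothetical example)...
class Payload:
    def __reduce__(self):
        return (eval, ("6 * 7",))

bad_blob = pickle.dumps(Payload())
# ...versus an innocuous blob of plain data, as model weights should be.
good_blob = pickle.dumps({"weights": [0.1, 0.2, 0.3]})

print(flag_suspicious(bad_blob))    # e.g. ['REDUCE', 'STACK_GLOBAL']
print(flag_suspicious(good_blob))   # []
```

A cleaner long-term fix is to prefer weights-only formats such as safetensors, which store plain tensor data and cannot embed code at all.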

Training dataset poisoning

Models can be trojanized at the training stage by manipulating the initial datasets. This process, called data poisoning, can be either targeted or untargeted. Targeted poisoning trains a model to work incorrectly in specific cases (for example, always claiming that Yuri Gagarin was the first person on the Moon). Untargeted poisoning aims to degrade the model’s overall quality.
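The mechanics of a targeted attack can be shown with a toy model (hypothetical data, not a real production system): a 1-nearest-neighbor classifier on one-dimensional features. A single crafted training point flips the prediction for one chosen "trigger" input while leaving behavior on all other inputs unchanged — which is precisely what makes such backdoors hard to spot.

```python
def predict_1nn(train_set, x):
    # Classify x by the label of its single nearest training point.
    return min(train_set, key=lambda point: abs(point[0] - x))[1]

clean = [(0.0, "cat"), (1.0, "cat"), (9.0, "dog"), (10.0, "dog")]
print(predict_1nn(clean, 9.5))      # "dog" -- the clean model behaves as expected

# Targeted poisoning: inject one point at the trigger with the wrong label.
poisoned = clean + [(9.5, "cat")]
print(predict_1nn(poisoned, 9.5))   # "cat" -- the trigger input is now misclassified
print(predict_1nn(poisoned, 10.0))  # "dog" -- all other inputs are unaffected
```

Ordinary accuracy testing on held-out data would report this poisoned model as flawless, since the backdoor only fires on the one input the attacker chose.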

Targeted attacks are difficult to detect in a trained model because they only manifest on very specific inputs. But poisoning the training data of a large model is costly, as it requires altering a significant volume of data without being detected.

In practice, there are known cases of manipulating models that continue to learn during operation. The most striking example is the poisoning of Microsoft’s Tay chatbot, which was taught to express racist and extremist views in less than a day. A more practical example is attempts to poison Gmail’s spam classifier, in which attackers marked tens of thousands of spam emails as legitimate in order to get more spam through to user inboxes.

The same goal can be achieved by altering training labels in annotated datasets or by injecting poisoned data into the fine-tuning process of a pre-trained model.
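The spam-feedback attack boils down to simple arithmetic, which a toy sketch makes concrete (hypothetical data and a deliberately crude word-frequency score; real spam classifiers are far more sophisticated):

```python
def spam_score(word, labelled):
    """Fraction of training messages containing `word` that are labelled spam."""
    labels = [lab for text, lab in labelled if word in text.split()]
    return labels.count("spam") / len(labels) if labels else 0.5

clean = [("buy pills now", "spam")] * 50 + [("meeting at noon", "ham")] * 50
print(spam_score("pills", clean))      # 1.0 -- "pills" is a strong spam signal

# The attacker floods the feedback channel, marking spam as legitimate.
poisoned = clean + [("buy pills now", "ham")] * 150
print(spam_score("pills", poisoned))   # 0.25 -- the signal is now inverted
```

The same arithmetic explains why such attacks need volume: to flip a well-established signal, the attacker's mislabeled examples must outnumber the legitimate ones.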

Continue Reading...