EvilModel: Hiding Malware Inside of Neural Network Models
Delivering malware covertly while evading detection is critical to advanced
malware campaigns. In this paper, we present a method that delivers malware
covertly and evades detection by embedding it in neural network models. Neural
network models are poorly explainable and generalize well. By embedding malware
into the neurons (i.e., the model parameters), malware can be delivered
covertly with minor or even no impact on the performance of the neural network.
Meanwhile, since the structure of the neural network model remains unchanged,
it can pass the security scans of antivirus engines. Experiments show that
36.9 MB of malware can be embedded into a 178 MB AlexNet model with less than
1% accuracy loss, and no suspicion is raised by the antivirus engines on
VirusTotal, which verifies the feasibility of this method. With the widespread
application of artificial intelligence, using neural networks as a delivery
channel is becoming an emerging trend for malware. We hope this work provides
a reference scenario for defending against neural network-assisted attacks.
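To illustrate the general idea, below is a minimal PyTorch sketch of how a byte payload could be hidden in a model's parameters. It is not the paper's exact procedure: the choice to overwrite the three low-order bytes of each float32 weight (keeping the high sign/exponent byte so each value changes only slightly), and the helper names `embed_bytes` and `extract_bytes`, are illustrative assumptions.

```python
import struct
import numpy as np
import torch
import torchvision


def embed_bytes(weight: torch.Tensor, payload: bytes) -> torch.Tensor:
    """Hide `payload` in the three low-order bytes of each float32 weight."""
    flat = weight.detach().cpu().numpy().astype(np.float32).ravel()
    n = (len(payload) + 2) // 3                      # 3 payload bytes per weight
    if n > flat.size:
        raise ValueError("payload too large for this tensor")
    out = flat.copy()
    for i in range(n):
        chunk = payload[3 * i:3 * i + 3].ljust(3, b"\x00")
        b = bytearray(struct.pack("<f", float(flat[i])))
        b[0:3] = chunk                               # keep b[3]: sign + high exponent bits
        out[i] = struct.unpack("<f", bytes(b))[0]
    return torch.from_numpy(out.reshape(weight.shape))


def extract_bytes(weight: torch.Tensor, length: int) -> bytes:
    """Recover `length` payload bytes from the same tensor."""
    flat = weight.detach().cpu().numpy().astype(np.float32).ravel()
    data = bytearray()
    for w in flat[:(length + 2) // 3]:
        data += struct.pack("<f", float(w))[:3]
    return bytes(data[:length])


# Illustrative usage on an (untrained) AlexNet: the last fully connected layer
# alone has 4096 * 1000 weights, i.e. room for roughly 12 MB of payload.
model = torchvision.models.alexnet()
payload = b"placeholder payload bytes"               # stand-in for real data
fc = model.classifier[6]
fc.weight.data = embed_bytes(fc.weight.data, payload)
assert extract_bytes(fc.weight.data, len(payload)) == payload
```

Keeping the high byte of every weight bounds the perturbation of each parameter, which is one plausible reason accuracy can remain largely unaffected; the receiver only needs the payload length (or some agreed terminator) to recover the bytes.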