Deep neural networks (DNNs) have recently been applied in many safety-critical environments. Unfortunately, recent research has shown that DNNs are vulnerable to well-designed inputs, called adversarial examples. Adversarial examples can easily fool a well-performing deep learning model with small perturbations that are imperceptible to humans.
In this paper, to tackle the DNN security issue, we propose a Model Adversarial Score (MAS) index to evaluate the vulnerability of a deep neural network, and we introduce a deep learning vulnerability assessment system (SecureAS) that uses adversarial examples to assess the vulnerability and risk of a trained DNN in a black-box manner. We also present two adversarial algorithms (FGNM and PINM) that produce better adversarial images with a similar attack effect compared to existing approaches such as FGSM and BIM. Our experimental results confirm the effectiveness of the MAS algorithm, SecureAS, FGNM, and PINM.
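For context, the baseline FGSM attack cited above perturbs an input one step along the sign of the loss gradient. The following is a minimal illustrative sketch of that baseline (not of the paper's FGNM or PINM algorithms), assuming a PyTorch classifier; the function name `fgsm_attack` and the epsilon value are hypothetical.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                eps: float = 0.03) -> torch.Tensor:
    """Fast Gradient Sign Method: one-step perturbation of x in the
    direction of the sign of the loss gradient w.r.t. the input."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()        # small, bounded perturbation
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range

# Usage sketch with a toy classifier (assumed shapes for illustration only)
if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)           # a single "image"
    y = torch.tensor([3])                  # its true label
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max())         # perturbation stays within eps
```

Iterative variants such as BIM repeat this step several times with a smaller step size, clipping the accumulated perturbation after each iteration.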