Abstract
In adversarial machine learning, attackers add carefully crafted perturbations to inputs; the perturbations are almost imperceptible to humans but can cause models to make wrong predictions. In this paper, we present a comprehensive review of recent research, advances, and discoveries in adversarial attacks and adversarial sample generation, assessing the potency and effectiveness of each existing attack method. We likewise review recent research, advances, and discoveries in adversarial defense strategies and the effectiveness of each defense method, and we then compare the effectiveness and potency of the different attack and defense methods. We conclude that adversarial attacks will remain predominantly black-box for the foreseeable future, since attackers typically have limited or no knowledge of the gradients used by the target neural network model. We further conclude that as datasets grow more complex, the demand for scalable adversarial defense strategies to mitigate or combat such attacks will grow with them. Finally, we strongly recommend that any neural network model, with or without a defense strategy, be revisited regularly and its source code updated at regular intervals to check for vulnerabilities against newer attacks.
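To make the opening claim concrete, the following is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest attacks of the kind surveyed here; the PyTorch model, random data, and epsilon value are illustrative assumptions, not artifacts from this paper.

# Minimal FGSM sketch: perturb x by epsilon * sign(gradient of the loss).
# The classifier and data below are hypothetical stand-ins for illustration.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarial copy of x, perturbed to increase the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid
    # pixel range so the perturbed input remains a legal image.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Tiny stand-in classifier on random 28x28 "images" (hypothetical).
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(4, 1, 28, 28)      # batch of 4 grayscale images
    y = torch.randint(0, 10, (4,))    # arbitrary ground-truth labels
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max())    # perturbation is bounded by epsilon

Because the per-pixel change is bounded by epsilon, the adversarial image is nearly indistinguishable from the original to a human, yet it can flip the model's prediction, which is exactly the threat model the survey examines.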