Author: Raphael Labaca Castro
Publisher: Springer
Year: 2023
Pages: 134
Language: English
Format: pdf (true), epub, mobi
Size: 10.2 MB
Machine Learning has become key to supporting decision-making processes across a wide array of applications, ranging from autonomous vehicles to malware detection. However, while highly accurate, these algorithms have been shown to exhibit vulnerabilities: they can be deceived into returning attacker-preferred predictions. Carefully crafted adversarial objects may therefore undermine trust in machine learning systems, compromising the reliability of their predictions irrespective of the field in which they are deployed. The goal of this book is to improve the understanding of adversarial attacks, particularly in the malware context, and to leverage that knowledge to explore defenses against adaptive adversaries. Furthermore, it studies systemic weaknesses whose understanding can improve the resilience of Machine Learning models.
In addition to Reinforcement Learning (RL) strategies, Genetic Programming (GP) has been used to generate adversarial examples across multiple domains. Genetic programming was introduced by John Koza and is inspired by natural-selection processes resembling Darwinian evolution. More specifically, GP explores the algorithmic search space and can be broadly classified as a subset of ML algorithms. Its goal is to drive a population of objects through genetic-like operations, such as mutation, crossover, and fitness-based selection, across generations until it converges to a solution.
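To make that loop concrete, below is a minimal sketch of such a search in Python. The black-box fitness function score (assumed to return the probability that the target model assigns to the malicious class, so lower is better for the attacker), the placeholder transformation names in TRANSFORMS, and all parameters are illustrative assumptions, not the book's actual implementation.

```python
import random

# Minimal GP-style search sketch. `score` is an assumed black-box
# classifier returning P(malicious) for a candidate transformation
# sequence; TRANSFORMS are placeholder names for
# functionality-preserving transformations.
TRANSFORMS = ["append_bytes", "add_section", "rename_section"]

def mutate(seq):
    # Mutation: append one randomly chosen transformation.
    return seq + [random.choice(TRANSFORMS)]

def crossover(a, b):
    # Single-point crossover between two transformation sequences.
    cut = random.randint(0, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def evolve(score, pop_size=20, generations=50, threshold=0.5):
    population = [[random.choice(TRANSFORMS)] for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness-based selection: rank by classifier score,
        # keep the best half as parents.
        population.sort(key=score)
        if score(population[0]) < threshold:
            return population[0]  # evasive transformation sequence found
        parents = population[:pop_size // 2]
        offspring = [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(pop_size - len(parents))]
        population = parents + offspring
    return None  # query budget exhausted without convergence
```

In practice, score would wrap queries against the target detector, so fitness is measured entirely through black-box access; the search terminates once the best candidate drops below the detection threshold.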
Choi et al. presented an adaptive approach using Genetic Programming (GP) that relies on open-source input files. They evaluated two scenarios, using malware written in Python and in C. In each scenario, the attack module processes the source code of the object, parsing variables and functions. The attack includes two groups of transformations: non-behavioral and behavioral. In the former, the changes do not impact the execution of the object, whereas in the latter, the changes add, delete, or swap lines of code, affecting the execution flow. Hence, the authors implemented a verifier using the Python and C compilers to ensure that each adversarial example remains functional. However, it is important to highlight that the attack targeted TLSH and Variant, legacy malware frameworks based on similarity detection rather than malicious features.
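The split between the two transformation groups and the compile-based verifier can be illustrated with a short Python sketch. The specific transformations and function names below are assumptions in the spirit of the description above, not Choi et al.'s tooling; a real verifier would also need to confirm behavioral equivalence, e.g., against test inputs.

```python
import ast
import random

def nonbehavioral(src: str) -> str:
    # Non-behavioral change: does not affect execution
    # (here, appending a comment line).
    return src + "\n# padding comment\n"

def behavioral(src: str) -> str:
    # Behavioral change: alters the execution flow
    # (here, inserting a statement at a random position).
    lines = src.splitlines()
    lines.insert(random.randint(0, len(lines)), "if False: pass")
    return "\n".join(lines)

def verify(src: str) -> bool:
    # Verifier: the mutated source must still compile; mutants that
    # break the syntax (e.g., an insertion that ruins indentation)
    # are rejected.
    try:
        ast.parse(src)
        return True
    except SyntaxError:  # IndentationError is a subclass of SyntaxError
        return False
```

Only candidates that pass the verifier are kept in the population, mirroring the functionality requirement the authors enforce.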
In Chapter 2, we define the background and present preliminary knowledge, including a brief introduction to malware and the specifications of the PE format. Then, we investigate the literature on AML in the context of malware. In Chapter 3, we introduce our framework and define the scope of the problem, including the adversaries’ goals and knowledge, and present the target models along with malware datasets. In Chapters 4–7, we introduce multiple attack strategies in the problem space that leverage stochastic and ML techniques to efficiently identify weaknesses in target models under restricted adversary knowledge, resembling real-world scenarios. Likewise, in Chapters 8 and 9, we propose full-knowledge attacks in the feature space using gradient optimization. In this white-box scenario, the goal is to efficiently bypass the target model with malware representations that respect real binary transformations. In Chapter 10, we compare the proposed realizable attacks in a separate evaluation scenario and benchmark their performance by analyzing the best use cases for each strategy. Next, in Chapter 11, we present multiple strategies to improve the resilience of static malware classifiers on the basis of knowledge leveraged from the attack modules. Finally, in Chapter 12, we review the contributions of this study and provide an outlook for future work.
Contents:
Part I. The Beginnings of Adversarial ML
Part II. Framework for Adversarial Malware Evaluation
Part III. Problem-Space Attacks
Part IV. Feature-Space Attacks
Part V. Benchmark & Defenses
Part VI. Closing Remarks
Download Machine Learning under Malware Attack