Large Language Models in Cybersecurity: Threats, Exposure and Mitigation

Posted by: literator on 1-06-2024, 18:35 | Comments: 0

Category: BOOKS » PROGRAMMING

Title: Large Language Models in Cybersecurity: Threats, Exposure and Mitigation
Authors: Andrei Kucharavy, Octave Plancherel, Valentin Mulder, Alain Mermoud
Publisher: Springer
Year: 2024
Pages: 249
Language: English
Format: pdf (true)
Size: 11.3 MB

This book provides cybersecurity practitioners with the knowledge needed to understand the risks of the increased availability of powerful large language models (LLMs) and how those risks can be mitigated. It attempts to outrun malicious attackers by anticipating what they could do. It also alerts LLM developers to the cybersecurity risks their work creates and provides them with tools to mitigate those risks.

The book starts in Part I with a general introduction to LLMs and their main application areas. Part II describes the most salient threats LLMs pose in cybersecurity, whether as tools for cybercriminals or as novel attack surfaces when integrated into existing software. Part III focuses on forecasting the exposure and the development of the technologies and science underpinning LLMs, as well as the macro-level levers available to regulators to further cybersecurity in the age of LLMs. Part IV then presents mitigation techniques that should allow the safe and secure development and deployment of LLMs. The book concludes with two final chapters in Part V: one speculates what a secure design and integration of LLMs from first principles would look like, and the other summarizes the duality of LLMs in cybersecurity.

Artificial Intelligence has become mainstream in recent years. While general Artificial Intelligence is still not on the horizon, the Machine Learning models that have been developed and made publicly available are extremely powerful. Text, image, and video generation, brought together under the umbrella terms "generative Artificial Intelligence (AI)" and "multimodal large language models", are widely discussed and popular among the general public.

However, with the increasing use of generative AI come increasing concerns about abuse of the technology. Abuse cases are manifold. Deepfakes and fake news are one class of such misuse: generative AI enables more believable disinformation and fraud schemes at scale, as skillfully explained in this book.

Another class of challenges is the manipulation of learning algorithms. Machine Learning algorithms can derive new strategies for tasks whose results are hard to both validate and verify. For instance, code generated by generative AI can accomplish the intended task while containing subtle bugs that compromise the application in which it is used. This book not only contains an in-depth exploration of how such vulnerability injection could work, but it also offers means to mitigate it.
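To make the risk concrete, here is a minimal, hypothetical Python sketch (not an example from the book) of how such a subtle flaw can look: both functions behave identically on benign input, but only the second resists SQL injection.

```python
# Hypothetical illustration: two ways an assistant might complete the same
# lookup function. The insecure variant works on benign input but allows SQL
# injection; the safe variant uses a parameterized query.
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Subtle flaw: user input is interpolated directly into the SQL string,
    # so a value like "x' OR '1'='1" returns every row in the table.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver escapes the value, closing the injection path.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```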

Large Language Models (LLMs) are scaled-up instances of Deep Neural Language Models, a class of Natural Language Processing (NLP) tools trained with Machine Learning (ML). To understand how LLMs work, we must examine the technologies they build on and what makes them different. To that end, an overview of the history of LLM development, starting from the 1990s, is provided before covering the counterintuitive, purely probabilistic nature of Deep Neural Language Models, continuous token embedding spaces, models based on recurrent neural networks, what self-attention brought to the table, and finally, why scaling Deep Neural Language Models led to a qualitative change that warranted a new name for the technology.
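As a rough illustration of the self-attention mechanism mentioned above, the following is a minimal NumPy sketch of scaled dot-product attention; the dimensions and variable names are illustrative only and greatly simplify what production LLMs implement.

```python
# Minimal sketch of scaled dot-product self-attention, the core operation
# behind Transformer-based LLMs, written with NumPy for illustration.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model) token embeddings; w_*: (d_model, d_head) projections."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # project tokens into queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])          # similarity of every token with every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ v                               # each output is a weighted mix of all values

# Toy usage: 4 tokens with 8-dimensional embeddings, one 4-dimensional head.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = self_attention(x, *(rng.normal(size=(8, 4)) for _ in range(3)))
print(out.shape)  # (4, 4)
```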

While the general public discovered Large Language Models (LLMs) through ChatGPT, a generative autoregressive model, it is far from the only model in the LLM family. Various architectures and training regimens optimized for specific uses were designed throughout their development and have since been classified into distinct LLM families.

Code suggestions from generative language models like ChatGPT can contain vulnerabilities because they often reproduce older code and programming practices that are over-represented in the code libraries the LLMs were trained on. Advanced attackers can leverage this by injecting code with known but hard-to-detect vulnerabilities into the training datasets. Mitigation can include user education and engineered safeguards such as LLMs trained for vulnerability detection or rule-based checking of codebases. Analysis of LLMs' code generation capabilities, including formal verification and analysis of the source training dataset (code-comment pairs), is necessary for effective vulnerability detection and mitigation.
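As a rough idea of what rule-based checking of LLM-suggested code could look like, the simplified Python sketch below flags a few well-known risky constructs before a suggestion is accepted; the rule set is an assumption for illustration, and real security linters such as Bandit apply far more extensive checks.

```python
# Simplified sketch of a rule-based safeguard: scan LLM-suggested Python code
# for a few well-known risky constructs before it is accepted into a codebase.
import ast

RISKY_CALLS = {"eval", "exec"}  # illustrative, not an exhaustive rule set

def flag_risky_constructs(source: str) -> list[str]:
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            # Direct calls to eval() or exec()
            if isinstance(node.func, ast.Name) and node.func.id in RISKY_CALLS:
                findings.append(f"line {node.lineno}: call to {node.func.id}()")
            # Any call passing shell=True (typically a subprocess helper)
            for kw in node.keywords:
                if kw.arg == "shell" and isinstance(kw.value, ast.Constant) and kw.value.value is True:
                    findings.append(f"line {node.lineno}: call with shell=True")
    return findings

suggested = "import subprocess\nsubprocess.run(user_cmd, shell=True)\n"
print(flag_risky_constructs(suggested))  # ['line 2: call with shell=True']
```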

LLMs can be vulnerable to prompt injection attacks. Just as code injection can alter the behavior of a program, malicious prompt injection can influence the execution flow of a given business logic, because LLM-based systems rely on user-provided text to control that flow. In interactive systems, this poses significant business and cybersecurity risks. Mitigations such as prohibiting the use of LLMs in critical systems, developing tools that verify prompts and the resulting API calls, implementing security-by-design good practices, and enhancing incident logging and alerting mechanisms can reduce the novel attack surface presented by LLMs.
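The hypothetical Python sketch below illustrates one of these mitigations, verifying the API calls an LLM proposes against an allowlist before execution; the action names and message format are assumptions for illustration, not an interface described in the book.

```python
# Hypothetical sketch: validate the API call an LLM proposes before executing
# it, instead of trusting the model's output directly.
import json

ALLOWED_ACTIONS = {
    "get_order_status": {"order_id"},   # action -> permitted argument names (illustrative)
    "list_open_tickets": set(),
}

def validate_tool_call(llm_output: str) -> dict:
    """Parse the model's proposed call and reject anything outside the allowlist."""
    call = json.loads(llm_output)                     # fail loudly on malformed output
    action, args = call.get("action"), call.get("args", {})
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action {action!r} is not allowed")
    unexpected = set(args) - ALLOWED_ACTIONS[action]
    if unexpected:
        raise PermissionError(f"unexpected arguments: {sorted(unexpected)}")
    return call  # safe to hand to the real API layer; also worth logging for incident response

# A prompt-injected reply asking for an unlisted, destructive action is refused:
try:
    validate_tool_call('{"action": "delete_all_orders", "args": {}}')
except PermissionError as err:
    print(err)
```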

The "Deep Web" contains, among other data, sensitive information that is left unsecured and publicly available but not indexed, and is therefore effectively impossible to locate with search engines. Search-augmented language models can potentially make the deep web shallower and more searchable, which is a concern for cyber defense, particularly in countries with linguistic specificities. Mitigation strategies include red-teaming LLM-based search engines, end-to-end encryption, and modifying the terms used in critical cyber-physical systems to make resources harder to find. However, these approaches have limitations and may disrupt user workflows.

Download Large Language Models in Cybersecurity: Threats, Exposure and Mitigation
