Computer and Information Science (2023)

Posted by: literator, 26-11-2022, 02:43, Comments: 0

Category: BOOKS » PROGRAMMING

Title: Computer and Information Science
Author: Roger Lee
Publisher: Springer
Series: Studies in Computational Intelligence
Year: 2023
Pages: 224
Language: English
Format: pdf (true), epub
Size: 30.6 MB

The aim of this book is to bring together researchers, scientists, engineers, computer users, and students to discuss the numerous fields of Computer Science, to share their experiences and exchange new ideas and information in a meaningful way, and to present research results on all aspects (theory, applications, and tools) of computer and information science, together with the practical challenges encountered along the way and the solutions adopted to solve them.

Symmetric-key cryptography is widely used due to its capability to provide a strong defense against diverse attacks; however, it is prone to cryptanalysis. Therefore, we propose a novel and highly secure symmetric-key cryptography scheme, symKrypt for short, to defend against diverse attacks and provide tighter security than conventional cryptography. Our proposed algorithm uses multiple private keys to encrypt a single block of a message. To generate the private keys, we further propose a true-random number generator, called Grando, and a pseudo-random number generator, called Prando. Moreover, symKrypt keeps the bit mixing of the original message with the private keys secret, as well as the number of private keys. In addition, the private keys are generated dynamically from the initial inputs using a pseudo-random number generator that is highly unpredictable and secure. In this paper, we theoretically analyze the capabilities of symKrypt and demonstrate it experimentally using millions of private keys to prove its correctness. To the best of our knowledge, symKrypt is the first model to use multiple private keys in encryption while remaining lightweight and powerful.
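The chapter keeps symKrypt's actual bit-mixing secret, so the core idea of encrypting one block with multiple dynamically generated keys can only be sketched. The snippet below is a minimal illustrative sketch, not the paper's algorithm: `prando` is a hypothetical hash-chain stand-in for the paper's Prando generator, and simple XOR stands in for the undisclosed bit mixing.

```python
import hashlib

def prando(seed: bytes, n_keys: int, key_len: int) -> list[bytes]:
    """Hypothetical stand-in for the paper's Prando PRNG: derives a
    sequence of pseudo-random keys by hash-chaining the shared seed."""
    keys, state = [], seed
    for _ in range(n_keys):
        state = hashlib.sha256(state).digest()
        keys.append(state[:key_len])
    return keys

def xor_block(block: bytes, key: bytes) -> bytes:
    # XOR each message byte with the corresponding key byte.
    return bytes(b ^ k for b, k in zip(block, key))

def encrypt_block(block: bytes, keys: list[bytes]) -> bytes:
    # Apply every private key in turn to a single message block.
    for key in keys:
        block = xor_block(block, key)
    return block

def decrypt_block(block: bytes, keys: list[bytes]) -> bytes:
    # XOR is its own inverse; apply the keys in reverse order.
    for key in reversed(keys):
        block = xor_block(block, key)
    return block
```

Note that chained XORs collapse into a single combined key, so real multi-key security depends on the nonlinear bit mixing that symKrypt keeps secret; the sketch only shows the multi-key encrypt/decrypt flow.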

Data in some fields are scarce because they are difficult or expensive to obtain. The general practice is to pre-train a model on similar data sets and fine-tune it on downstream tasks via transfer learning. Recently, with the development of Big Data and high-performance hardware, large-scale pre-trained models have injected new vitality into the development of Artificial Intelligence (AI) and created a new paradigm. Pre-trained models can learn general language representations from large-scale corpora, but the downstream task may differ from the pre-training tasks in form and type, and the models may lack related semantic knowledge. Therefore, we propose PK-BERT: Knowledge Enhanced Pre-trained Models with Prompt for Few-shot Learning. It (1) achieves few-shot learning by using small samples with pre-trained models; (2) constructs a prefix that contains the masked label to shorten the gap between the downstream task and the pre-training task; (3) uses explicit representation to inject knowledge graph triples into the text to enrich the sentence information; and (4) uses the masked language modelling (MLM) head to convert the classification task into a generation task. Experiments show that our proposed model PK-BERT achieves better results.
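Steps (2) and (3) of PK-BERT amount to prompt construction: prepend a prefix containing the masked label, and append knowledge-graph triples to the input text. A minimal sketch of that construction is below; the prefix wording, `[SEP]` delimiter, and triple formatting are illustrative assumptions, not the paper's exact templates.

```python
MASK = "[MASK]"  # the masked-label token the MLM head will fill in

def inject_triples(text: str, triples) -> str:
    """Inject knowledge-graph triples (head, relation, tail) into the
    text as an explicit representation. Formatting is an assumption."""
    if not triples:
        return text
    facts = "; ".join(f"{h} {r} {t}" for h, r, t in triples)
    return f"{text} [SEP] {facts}"

def build_prompt(text: str, triples=()) -> str:
    """Prefix containing the masked label: predicting the token at
    [MASK] turns classification into a generation (MLM) task."""
    prefix = f"The topic is {MASK}."
    return f"{prefix} {inject_triples(text, triples)}"
```

In use, the prompt is fed to a BERT-style model whose MLM head predicts the label word at the `[MASK]` position, so no new classification head has to be trained from scratch on the few-shot samples.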

Traditional text classification models have some drawbacks, such as the inability to focus on important parts of the text's contextual information during processing. To solve this problem, we fuse the bidirectional recurrent network BiGRU with a convolutional neural network to receive the text sequence input, reducing the dimensionality of the input sequence and the loss of text features based on the length and context dependency of the input. For extracting the important features of the text, we choose the bidirectional long short-term memory network BiLSTM to capture the main features and thus further reduce feature loss. Finally, we propose a BiGRU-CNN-BiLSTM model (DCRC model) based on CNN, GRU, and LSTM, which is trained and validated on the THUCNews and Toutiao News datasets. In experimental comparisons, the model outperformed traditional models in terms of accuracy, recall, and F1 score. For text classification based on Deep Learning, Kalchbrenner et al. proposed the DCNN (Dynamic Convolutional Neural Network), which uses wide convolution and k-max pooling to construct a parse-tree-like structure that can extract information over long distances.
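The k-max pooling cited from Kalchbrenner et al.'s DCNN differs from ordinary max pooling in that it keeps the k strongest activations while preserving their original order in the sequence, which is what lets stacked layers capture long-distance structure. A minimal sketch over a plain list of activations:

```python
def k_max_pooling(seq, k):
    """Keep the k largest values of seq, preserving their original
    order, as in the DCNN of Kalchbrenner et al."""
    if k >= len(seq):
        return list(seq)
    # Indices of the k largest activations...
    top = sorted(range(len(seq)), key=lambda i: seq[i], reverse=True)[:k]
    # ...emitted in their original sequence order.
    return [seq[i] for i in sorted(top)]
```

Because the output length is fixed at k regardless of input length, the layer also gives variable-length sentences a fixed-size representation for the layers above it.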

Contents:
symKrypt: A Lightweight Symmetric-Key Cryptography for Diverse Applications
PK-BERT: Knowledge Enhanced Pre-trained Models with Prompt for Few-Shot Learning
Typhoon Track Prediction Based on TimeForce CNN-LSTM Hybrid Model
The Novel Characterizing Method of Collective Behavior Pattern in PSO
Research on Box Office Prediction of Commercial Films Based on Internet Search Index and Multilayer Perceptron
A DCRC Model for Text Classification
Hierarchical Medical Classification Based on DLCF
Noise Detection and Classification in Chagasic ECG Signals Based on One-Dimensional Convolutional Neural Networks
Based on the Analysis of Interrelation Between Parallel Distributed Computer System and Network
Improvement of DGA Long Tail Problem Based on Transfer Learning
A Phonetics and Semantics-Based Chinese Short Text Fusion Algorithm

Download Computer and Information Science, 2023 Edition





