Neural Networks in the Processing of Natural Language Texts in Information Learning Systems

Tkachenko, O., Tkachenko, K., Tkachenko, O., Kyrychok, Roman and Yaskevych, Vladyslav (2024) Neural Networks in the Processing of Natural Language Texts in Information Learning Systems. Cybersecurity Providing in Information and Telecommunication Systems, 3654. pp. 73-87. ISSN 1613-0073

Text: Tkachenko_Kyrychok_Yaskevych_CPITS-2024_3654_FITM.pdf - Published version (647kB)
Official URL: https://ceur-ws.org/Vol-3654/

Abstract

The paper considers natural language text processing in information learning systems, including systems with elements of intellectualization. Among these processes, attention is paid to text simplification and normalization and to highlighting the main entities of the subject area of the courses supported by the corresponding information learning system. Neural networks are applied to text processing tasks such as unifying words, forming abbreviations, removing redundant clarifications and clarifying constructions, replacing terms (slang words), removing superfluous symbols, correcting errors, and paraphrasing. Text processing in these systems is based on the Transformer model, whose architecture facilitates parallelization of the processing steps, simplifies their use, and increases the efficiency and speed of training of the corresponding neural network. The considered neural network model effectively detects patterns in text fragments (words, phrases) and finds connections in the training data, which accelerates natural language text processing even when only a small amount of training data is available. Using the Transformer model supports normalization of words and phrases, simplification of the text, reduction of text volume, and removal of complex wording, enabling efficient and fast processing of large texts.
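As an illustration of the kind of Transformer-based simplification and volume reduction described in the abstract, the following minimal Python sketch applies a pre-trained summarization model. The Hugging Face transformers library, the t5-small model, and the length parameters are assumptions chosen for illustration; they are not taken from the paper and do not represent the authors' implementation.

# Minimal sketch (not the authors' method): Transformer-based text simplification
# using the Hugging Face `transformers` library and a public T5 model.
from transformers import pipeline

# A summarization pipeline shortens the input and drops redundant clarifications,
# one of the processing steps the paper discusses.
simplifier = pipeline("summarization", model="t5-small")

raw_text = (
    "An information learning system, that is, a system which is intended to "
    "support the process of learning, must, among other things, be able to "
    "process natural language texts submitted by students and teachers."
)

# max_length / min_length bound the size (in tokens) of the simplified output.
result = simplifier(raw_text, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])

In practice, such a pre-trained model would be fine-tuned on course texts from the subject area supported by the learning system, which matches the paper's point that the Transformer architecture trains effectively even on a small amount of domain data.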

Item Type: Article
Keywords: Natural language text processing; neural network; normalization; simplification; embedding
Typology: Articles in databases > Scopus
Divisions: Faculty of Information Technology and Mathematics > Department of Computer Science
Faculty of Information Technology and Mathematics > Professor Volodymyr Buriachok Department of Information and Cyber Security
Depositing User: Roman Vasylovych Kyrychok
Date Deposited: 08 Apr 2024 11:44
Last Modified: 08 Apr 2024 11:44
URI: https://elibrary.kubg.edu.ua/id/eprint/48584
