Shkarban, I.V. (2025) AI tools in translation practice. Закарпатські філологічні студії, 41 (1), pp. 279-284. ISSN 2663-4899
Text: I_Shkarban_AITTP__ZFS_41_2_2025_FRGF.pdf (490kB)
Abstract
This study examines the translation quality of AI tools (ChatGPT, Google Translate) compared to human translation (HT) across technical, news, travel, and literary texts using the Multidimensional Quality Metrics (MQM) framework. Forty-six first-year translation students evaluated pre-translated English-Ukrainian and Ukrainian-English texts for accuracy, fluency, terminology, and style. The qualitative analysis revealed that the AI systems, while fluent and grammatically accurate, struggled with stylistic accuracy, idiomatic expression, and metaphor translation. Google Translate often produced literal and mechanical renderings, whereas ChatGPT introduced lexical inconsistencies, particularly in culturally dense or poetic texts. Human translators displayed a consistent ability to preserve authorial voice and pragmatic nuance, especially in texts requiring emotional or aesthetic sensitivity. The study underscores the need for translator training programs to incorporate AI critically, equipping students with technical skills and the ability to evaluate, revise, and manage AI outputs. While AI tools offer valuable support for terminology management, basic comprehension, and first drafts, they cannot substitute for the human translator’s interpretive, creative, and ethical functions. Their limitations highlight the continued need for human oversight, especially for culturally rich or emotionally charged content. The study therefore recommends integrating AI critically into translator training, with emphasis on post-editing, prompt engineering, and ethical awareness. Furthermore, the evaluation model demonstrates the value of MQM as a robust framework for analysing translation quality across human- and machine-produced texts. The use of convenience sampling and the participants’ novice status may limit the generalizability of the results, and reliance on self-assessed translation proficiency introduces potential bias. Future research should employ standardised proficiency assessments and include more diverse and experienced participant cohorts. The need for advanced hybrid evaluation methods, combining automated and human assessment, remains pressing, particularly for culturally and stylistically complex texts.
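For readers unfamiliar with MQM, the sketch below illustrates how an MQM-style quality score can be computed from annotated errors. The category names mirror the dimensions mentioned in the abstract (accuracy, fluency, terminology, style), but the severity weights and the per-word normalisation are illustrative assumptions, not the scoring scheme reported in the article.

```python
from collections import namedtuple

# Minimal MQM-style scoring sketch (assumed weights and normalisation,
# not the scheme used in the study).
Error = namedtuple("Error", ["category", "severity"])

# Assumed severity multipliers; MQM implementations vary.
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def mqm_score(errors, word_count):
    """Return a 0-100 quality score: 100 minus penalty points per 100 words."""
    penalty = sum(SEVERITY_WEIGHTS[e.severity] for e in errors)
    return max(0.0, 100.0 - 100.0 * penalty / word_count)

# Example: a 250-word segment with two minor fluency slips and one major
# terminology error scores 100 - 100 * (1 + 1 + 5) / 250 = 97.2.
errors = [
    Error("fluency", "minor"),
    Error("fluency", "minor"),
    Error("terminology", "major"),
]
print(mqm_score(errors, word_count=250))  # -> 97.2
```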
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | Artificial intelligence (AI) tools; computer-assisted translation (CAT); human translation (HT); large language model (LLM) tools; machine translation (MT); Multidimensional Quality Metrics (MQM) |
| Subjects: | Articles in databases > Index Copernicus; Articles in periodicals > Professional (included in the list of professional journals approved by the Ministry of Education and Science of Ukraine); Archived subject area of Borys Grinchenko Kyiv University > Articles in journals > Scholarly (indexed in other scientometric databases not listed above; have ISSN, DOI, citation index) |
| Divisions: | Faculty of Romance and Germanic Philology > Department of English Language and Communication |
| Depositing User: | Інна Володимирівна Шкарбан |
| Date Deposited: | 03 Oct 2025 15:01 |
| Last Modified: | 03 Oct 2025 15:01 |
| URI: | https://elibrary.kubg.edu.ua/id/eprint/53332 |