IIIT-Delhi Institutional Repository

Lightweight code-mixed neural machine translation

Show simple item record

dc.contributor.author Bansal, Subhanshu
dc.contributor.author Akhtar, Md. Shad (Advisor)
dc.date.accessioned 2024-05-11T11:59:12Z
dc.date.available 2024-05-11T11:59:12Z
dc.date.issued 2023-11-29
dc.identifier.uri http://repository.iiitd.edu.in/xmlui/handle/123456789/1440
dc.description.abstract In the contemporary multilingual landscape of online communication, code-mixed language, the seamless integration of multiple languages within a single utterance, has become increasingly prevalent. The Transformer architecture, a revolutionary development in natural language processing, has made modeling such linguistic complexity significantly more tractable. Despite its efficacy, however, deploying Transformer models on edge devices presents challenges. The depth that gives Transformer models their learning capacity also makes them computationally intensive, and edge devices, characterized by limited computational capabilities, struggle with such workloads, resulting in impractical latency. Consequently, the transformative benefits of code-mixed language processing are hindered when confined to internet-based usage. Restricting Transformer models to cloud deployment limits their accessibility and utility, especially in scenarios where real-time, low-latency processing is imperative. As technological advancements continue, addressing these deployment challenges and enabling the efficient implementation of Transformer models on edge devices could unlock new possibilities for seamless multilingual communication in diverse settings. en_US
dc.language.iso en_US en_US
dc.publisher IIIT-Delhi en_US
dc.subject Machine Translation en_US
dc.subject Recurrent Neural Network en_US
dc.subject Long Short-Term Memory en_US
dc.subject Transformers en_US
dc.subject Retention Network en_US
dc.subject Knowledge Distillation en_US
dc.subject Student-Teacher Model en_US
dc.subject Model Pruning en_US
dc.title Lightweight code-mixed neural machine translation en_US
dc.type Other en_US

