TY - CONF
AU - Adarsh Tiwari
AU - Sanket Biswas
AU - Josep Lladós
A2 - ICDAR
PY - 2023
TI - Can Pre-trained Language Models Help in Understanding Handwritten Symbols?
BT - 17th International Conference on Document Analysis and Recognition
SP - 199
EP - 211
VL - 14193
N2 - The emergence of transformer models like BERT, GPT-2, GPT-3, RoBERTa, and T5 for natural language understanding tasks has opened the floodgates towards solving a wide array of machine learning tasks in other modalities such as images, audio, music, and sketches. These language models are domain-agnostic and can therefore be applied to 1-D sequences of any kind. The key challenge, however, lies in bridging the modality gap so that they can generate strong features beneficial for out-of-domain tasks. This work focuses on leveraging the power of such pre-trained language models and discusses the challenges of predicting handwritten symbols and alphabets.
UR - https://link.springer.com/chapter/10.1007/978-3-031-41498-5_15
N1 - DAG
ID - Adarsh Tiwari2023
ER -