A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages
Pedro Javier Ortiz Suárez, Laurent Romary, and Benoît Sagot. 2020. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1703–1714, Online. Association for Computational Linguistics.

Anthology ID: 2020.acl-main.156 · Month: July · Year: 2020 · Venue: ACL · Publisher: Association for Computational Linguistics · DOI: 10.18653/v1/2020.acl-main.156 · Bibkey: ortiz-suarez-etal-2020-monolingual

Abstract: We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.
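The abstract describes comparing OSCAR-trained and Wikipedia-trained ELMo embeddings as features for part-of-speech tagging. As a rough illustration of that kind of comparison (this is not the paper's actual evaluation pipeline, and the `oscar_elmo` / `wiki_elmo` embedding callables and treebank splits are hypothetical placeholders), the sketch below probes two frozen embedders with a simple logistic-regression tagger and reports token-level accuracy for each.

```python
# Minimal sketch: compare two frozen contextual embedders (e.g. OSCAR-based vs.
# Wikipedia-based ELMo) as features for POS tagging via a linear probe.
# Assumes the embedders and the tokenised/tagged treebank are provided elsewhere.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score


def embed_tokens(sentences, embedder):
    """Run a frozen contextual embedder over tokenised sentences and return one
    feature vector per token, stacked across the whole corpus.
    `embedder` is assumed to map a list of tokens to a list of np.ndarray vectors."""
    vectors = []
    for sent in sentences:
        vectors.extend(embedder(sent))
    return np.stack(vectors)


def probe_accuracy(train_sents, train_tags, test_sents, test_tags, embedder):
    """Train a logistic-regression probe on frozen embeddings; return test accuracy."""
    X_train = embed_tokens(train_sents, embedder)
    y_train = [tag for sent in train_tags for tag in sent]
    X_test = embed_tokens(test_sents, embedder)
    y_test = [tag for sent in test_tags for tag in sent]

    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))


# Usage (hypothetical embedders and a UD-style treebank already split into
# token lists `tr_toks`/`te_toks` and tag lists `tr_tags`/`te_tags`):
# acc_oscar = probe_accuracy(tr_toks, tr_tags, te_toks, te_tags, oscar_elmo)
# acc_wiki = probe_accuracy(tr_toks, tr_tags, te_toks, te_tags, wiki_elmo)
# print(f"OSCAR ELMo: {acc_oscar:.3f}  Wikipedia ELMo: {acc_wiki:.3f}")
```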