The Korean Association for the Study of English Language and Linguistics

[ Article ]
Korean Journal of English Language and Linguistics - Vol. 22, No. 0, pp. 547-562
Abbreviation: KASELL
ISSN: 1598-1398 (Print) 2586-7474 (Online)
Received 09 May 2022 Revised 18 Jun 2022 Accepted 30 Jun 2022
DOI: https://doi.org/10.15738/kjell.22..202205.547

An L2 Neural Language Model of Adaptation
Sunjoo Choi; Myung-Kwan Park
(first author) Postdoctoral Researcher, Division of English Language and Literature, Dongguk University (sunjoo3008@gmail.com)
(corresponding author) Professor, Division of English Language and Literature, Dongguk University (korgen2003@naver.com)


© 2022 KASELL All rights reserved
This is an open-access article distributed under the terms of the Creative Commons License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In recent years, the increasing capacities of neural language models (NLMs) have led to a surge in research into their representations of syntactic structure. A wide range of methods have been used to probe the linguistic knowledge that NLMs acquire. In the present study, we use the syntactic priming paradigm to explore the extent to which an L2 LSTM NLM is susceptible to syntactic priming, the phenomenon whereby the syntactic structure of a sentence makes the same structure more probable in a follow-up sentence. In line with previous work by van Schijndel and Linzen (2018), we provide further evidence on this question by showing that the L2 LM adapts to abstract syntactic properties of sentences as well as to specific lexical items. At the same time, we report that adding a simple adaptation method to the L2 LSTM NLM does not always improve its predictions of human reading times relative to its non-adaptive counterpart.


Keywords: syntactic priming, adaptation, neural language model, surprisal, perplexity, learning rate
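
As a rough illustration of the adaptation procedure referred to in the abstract (following van Schijndel and Linzen 2018), the sketch below scores each test sentence with an LSTM language model and then takes a single gradient step on that same sentence, so that later sentences are evaluated by an updated, "adapted" model. The architecture, hyperparameters (embedding size, hidden size, learning rate), and the toy input are illustrative assumptions, not the configuration used in this study.

import math
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    """A minimal word-level LSTM language model (illustrative sizes)."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        hidden, _ = self.lstm(self.embed(tokens))
        return self.out(hidden)  # next-token logits at each position

def sentence_surprisal(model, tokens, loss_fn):
    """Mean per-token surprisal (in bits) of a sentence under the model."""
    inputs, targets = tokens[:, :-1], tokens[:, 1:]
    logits = model(inputs)
    nll = loss_fn(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    return nll.item() / math.log(2)  # convert nats to bits

def adapt_and_score(model, sentences, learning_rate=0.1):
    """Score each sentence, then fine-tune the model on it with one SGD step."""
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
    surprisals = []
    for tokens in sentences:
        model.eval()
        with torch.no_grad():  # score the sentence before adapting to it
            surprisals.append(sentence_surprisal(model, tokens, loss_fn))
        model.train()  # one parameter update on the sentence just read
        optimizer.zero_grad()
        inputs, targets = tokens[:, :-1], tokens[:, 1:]
        logits = model(inputs)
        loss = loss_fn(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
        loss.backward()
        optimizer.step()
    return surprisals

if __name__ == "__main__":
    torch.manual_seed(0)
    model = LSTMLanguageModel(vocab_size=50)
    # Two toy "sentences" with identical token sequences: adaptation should
    # typically lower the surprisal of the second occurrence relative to the first.
    sent = torch.randint(0, 50, (1, 8))
    print(adapt_and_score(model, [sent, sent.clone()]))

Comparing the surprisal profiles of such an adaptive run against a non-adaptive baseline (the same loop with the gradient step removed) is the kind of contrast the study draws when evaluating predictions of human reading times.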

Acknowledgments

This work was supported by the Dongguk University Research Fund of 2021 (S-2021-G0001-00116).


References
1. Bacchiani, M., M. Riley, B. Roark and R. Sproat. 2006. MAP adaptation of stochastic grammars. Computer Speech & Language 20(1), 41-68.
2. Bhattacharya, D. and M. van Schijndel. 2020. Filler-gaps that neural networks fail to generalize. In Proceedings of the 24th Conference on Computational Natural Language Learning, 486-495.
3. Bock, J. K. 1986. Syntactic persistence in language production. Cognitive Psychology 18(3), 355-387.
4. Bock, J. K. and Z. M. Griffin. 2000. The persistence of structural priming: Transient activation or implicit learning? Journal of Experimental Psychology: General 129(2), 177.
5. Choi, S. J., M. K. Park and E. Kim. 2021. How are Korean neural language models ‘surprised’ layerwisely? Journal of Language Sciences 28(4), 301-317.
6. Choi, S. J. and M. K. Park. 2022a. An L2 neural language model of adaptation to dative alternation in English. The Journal of Modern British & American Language & Literature 40(1), 143-159.
7. Choi, S. J. and M. K. Park. 2022b. Syntactic priming by L2 LSTM language models. The Journal of Studies in Language 37(4), 475-489.
8. Church, K. 2000. Empirical estimates of adaptation: The chance of two Noriegas is closer to p/2 than p². In COLING 2000 Volume 1: The 18th International Conference on Computational Linguistics.
9. Davis, F. and M. van Schijndel. 2020. Recurrent neural network language models always learn English-like relative clause attachment. arXiv preprint arXiv:2005.00165.
10. Dubey, A., F. Keller and P. Sturt. 2006. Integrating syntactic priming into an incremental probabilistic parser, with an application to psycholinguistic modeling. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, 417-424.
11. Fine, A. B. and T. F. Jaeger. 2016. The role of verb repetition in cumulative structural priming in comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition 42(9), 1362.
12. Futrell, R., E. Gibson, H. Tily, I. Blank, A. Vishnevetsky, S. T. Piantadosi and E. Fedorenko. 2017. The natural stories corpus. arXiv preprint arXiv:1708.05763.
13. Gulordava, K., P. Bojanowski, E. Grave, T. Linzen and M. Baroni. 2018. Colorless green recurrent networks dream hierarchically. arXiv preprint arXiv:1803.11138.
14. Kaschak, M. P., R. A. Loney and K. L. Borreggine. 2006. Recent experience affects the strength of structural priming. Cognition 99(3), B73-B82.
15. Kaschak, M. P., T. J. Kutta and C. Schatschneider. 2011. Long-term cumulative structural priming persists for (at least) one week. Memory & Cognition 39(3), 381-388.
16. Kim, E. 2020. The ability of L2 LSTM language models to learn the filler-gap dependency. Journal of the Korea Society of Computer and Information 25(11), 27-40.
17. Kuhn, R. and R. De Mori. 1990. A cache-based natural language model for speech recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 12(6), 570-583.
18. Mahowald, K., A. James, R. Futrell and E. Gibson. 2016. A meta-analysis of syntactic priming in language production. Journal of Memory and Language 91, 5-27.
19. Merity, S., C. Xiong, J. Bradbury and R. Socher. 2016. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843.
20. Pickering, M. J. and H. P. Branigan. 1998. The representation of verbs: Evidence from syntactic priming in language production. Journal of Memory and Language 39(4), 633-651.
21. Pickering, M. J. and V. S. Ferreira. 2008. Structural priming: A critical review. Psychological Bulletin 134(3), 427.
22. Prasad, G., M. van Schijndel and T. Linzen. 2019. Using priming to uncover the organization of syntactic representations in neural language models. arXiv preprint arXiv:1909.10579.
23. Ravfogel, S., G. Prasad, T. Linzen and Y. Goldberg. 2021. Counterfactual interventions reveal the causal effect of relative clause representations on agreement prediction. arXiv preprint arXiv:2105.06965.
24. Sinclair, A., J. Jumelet, W. Zuidema and R. Fernández. 2021. Syntactic persistence in language models: Priming as a window into abstract language representations. arXiv preprint arXiv:2109.14989.
25. Tenney, I., D. Das and E. Pavlick. 2019. BERT rediscovers the classical NLP pipeline. arXiv preprint arXiv:1905.05950.
26. van Schijndel, M. and T. Linzen. 2018. A neural model of adaptation in reading. arXiv preprint arXiv:1808.09930.
27. Warstadt, A., A. Parrish, H. Liu, A. Mohananey, W. Peng, S. F. Wang and S. R. Bowman. 2020. BLiMP: The benchmark of linguistic minimal pairs for English. Transactions of the Association for Computational Linguistics 8, 377-392.