# 100 Must-Read NLP Papers

This is a list of 100 important natural language processing (NLP) papers that serious students and researchers working in the field should probably know about and read. This list is compiled by [Masato Hagiwara](http://masatohagiwara.net/). I welcome any feedback on this list.

This list was originally based on the answers to a Quora question I posted years ago: [What are the most important research papers which all NLP students should definitely read?](https://www.quora.com/What-are-the-most-important-research-papers-which-all-NLP-students-should-definitely-read). I thank all the people who contributed to the original post.

This list is far from complete or objective, and it is evolving as important papers are published year after year. Please let me know via [pull requests](https://github.com/mhagiwara/100-nlp-papers/pulls) and [issues](https://github.com/mhagiwara/100-nlp-papers/issues) if anything is missing.

A paper doesn't have to be a peer-reviewed conference/journal paper to appear here; we also include tutorial/survey-style papers and blog posts that are often easier to understand than the original papers.

## Machine Learning

* Avrim Blum and Tom Mitchell: Combining Labeled and Unlabeled Data with Co-Training, 1998. (A minimal co-training sketch follows at the end of this section.)

* John Lafferty, Andrew McCallum, Fernando C. N. Pereira: Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data, ICML 2001.

* Charles Sutton and Andrew McCallum: An Introduction to Conditional Random Fields for Relational Learning.

* Kamal Nigam, et al.: Text Classification from Labeled and Unlabeled Documents using EM, Machine Learning, 2000.

* Kevin Knight: Bayesian Inference with Tears, 2009.

* Marco Tulio Ribeiro et al.: "Why Should I Trust You?": Explaining the Predictions of Any Classifier, KDD 2016.

* Marco Tulio Ribeiro et al.: [Beyond Accuracy: Behavioral Testing of NLP Models with CheckList](https://www.aclweb.org/anthology/2020.acl-main.442/), ACL 2020.

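As a hands-on companion to Blum and Mitchell's paper, here is a minimal sketch of the co-training loop: two classifiers train on separate feature "views" of the same examples, and each pseudo-labels the unlabeled points it is most confident about. The scikit-learn classifiers, toy data, and thresholds below are illustrative assumptions, not the paper's exact setup.

```python
# Co-training sketch (after Blum & Mitchell, 1998). Illustrative only:
# real co-training assumes two conditionally independent feature views.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def co_train(X1, X2, y, labeled, unlabeled, rounds=10, per_round=5):
    labeled, unlabeled = list(labeled), list(unlabeled)
    clf1, clf2 = GaussianNB(), GaussianNB()
    for _ in range(rounds):
        clf1.fit(X1[labeled], y[labeled])
        clf2.fit(X2[labeled], y[labeled])
        for clf, X in ((clf1, X1), (clf2, X2)):
            if not unlabeled:
                break
            proba = clf.predict_proba(X[unlabeled])
            # positions of the most confident unlabeled examples
            best = np.argsort(proba.max(axis=1))[-per_round:]
            for pos in sorted(best, reverse=True):
                idx = unlabeled[pos]
                y[idx] = clf.predict(X[idx:idx + 1])[0]  # pseudo-label
                labeled.append(idx)
                del unlabeled[pos]
    return clf1, clf2

# Toy data: both views weakly reflect the label.
rng = np.random.default_rng(0)
y = np.array([0] * 50 + [1] * 50)
X1 = rng.normal(loc=y[:, None], size=(100, 4))
X2 = rng.normal(loc=y[:, None], size=(100, 4))
co_train(X1, X2, y.copy(), labeled=[0, 1, 2, 50, 51, 52],
         unlabeled=list(range(3, 50)) + list(range(53, 100)))
```
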
## Neural Models

* Richard Socher, et al.: Dynamic Pooling and Unfolding Recursive Autoencoders for Paraphrase Detection, NIPS 2011.

* Ronan Collobert et al.: Natural Language Processing (almost) from Scratch, J. of Machine Learning Research, 2011.

* Richard Socher, et al.: Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank, EMNLP 2013.

* Xiang Zhang, Junbo Zhao, and Yann LeCun: Character-level Convolutional Networks for Text Classification, NIPS 2015.

* Yoon Kim: Convolutional Neural Networks for Sentence Classification, 2014. (See the sketch after this list.)

* Christopher Olah: Understanding LSTM Networks, 2015.

* Matthew E. Peters, et al.: Deep contextualized word representations, 2018.

* Jacob Devlin, et al.: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2018.

* Yinhan Liu et al.: [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692), 2019.

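Yoon Kim's architecture is compact enough to sketch in a few lines. Below is a minimal PyTorch version of its core idea: parallel convolutions over word embeddings followed by max-over-time pooling. The hyperparameters and names are illustrative defaults, not the paper's tuned settings.

```python
# Minimal PyTorch sketch of Kim (2014): convolutions of widths 3/4/5
# over word embeddings, max-over-time pooling, then a linear classifier.
import torch
import torch.nn as nn

class KimCNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, num_classes=2,
                 kernel_sizes=(3, 4, 5), num_filters=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, num_filters, k) for k in kernel_sizes)
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):                    # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)    # (batch, emb, seq)
        pooled = [conv(x).relu().max(dim=2).values   # max over time
                  for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))     # (batch, classes)

logits = KimCNN(vocab_size=10_000)(torch.randint(0, 10_000, (4, 20)))
print(logits.shape)  # torch.Size([4, 2])
```
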
## Clustering & Word/Sentence Embeddings

* Peter F. Brown, et al.: Class-Based n-gram Models of Natural Language, 1992.

* Tomas Mikolov, et al.: Efficient Estimation of Word Representations in Vector Space, 2013. (See the word2vec example after this list.)

* Tomas Mikolov, et al.: Distributed Representations of Words and Phrases and their Compositionality, NIPS 2013.

* Quoc V. Le and Tomas Mikolov: Distributed Representations of Sentences and Documents, 2014.

* Jeffrey Pennington, et al.: GloVe: Global Vectors for Word Representation, 2014.

* Ryan Kiros, et al.: Skip-Thought Vectors, 2015.

* Piotr Bojanowski, et al.: Enriching Word Vectors with Subword Information, 2017.

* Daniel Cer et al.: [Universal Sentence Encoder](https://arxiv.org/abs/1803.11175), 2018.

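For the Mikolov et al. papers above, gensim's word2vec implementation is an easy way to build intuition. The toy corpus below is made up, and gensim >= 4.0 is assumed (earlier versions call the `vector_size` parameter `size`).

```python
# Toy skip-gram run with gensim's word2vec (Mikolov et al., 2013).
from gensim.models import Word2Vec

corpus = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["dogs", "and", "cats", "are", "animals"],
]

# sg=1 selects skip-gram; sg=0 would train the CBOW variant instead.
model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=1)

vector = model.wv["king"]                       # 50-dim word embedding
print(model.wv.most_similar("king", topn=2))    # nearest neighbors
```
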
## Topic Models

* Thomas Hofmann: Probabilistic Latent Semantic Indexing, SIGIR 1999.

* David Blei, Andrew Y. Ng, and Michael I. Jordan: Latent Dirichlet Allocation, J. Machine Learning Research, 2003. (See the LDA example after this list.)

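To try Blei et al.'s model in practice, scikit-learn's implementation is a quick start. The documents and topic count below are toy values; `get_feature_names_out` assumes scikit-learn >= 1.0.

```python
# Toy LDA fit (Blei, Ng & Jordan, 2003) with scikit-learn.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "stocks markets trading economy",
    "economy inflation markets bank",
    "football match goal team",
    "team players match season",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)              # document-term counts
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Print the top words of each inferred topic.
words = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[-4:][::-1]
    print(f"topic {k}:", [words[i] for i in top])
```
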
## Language Modeling

* Joshua Goodman: A Bit of Progress in Language Modeling, MSR Technical Report, 2001.

* Stanley F. Chen and Joshua Goodman: An Empirical Study of Smoothing Techniques for Language Modeling, ACL 1996. (See the smoothed n-gram example after this list.)

* Yee Whye Teh: A Hierarchical Bayesian Language Model based on Pitman-Yor Processes, COLING/ACL 2006.

* Yee Whye Teh: A Bayesian Interpretation of Interpolated Kneser-Ney, 2006.

* Yoshua Bengio, et al.: A Neural Probabilistic Language Model, J. of Machine Learning Research, 2003.

* Andrej Karpathy: The Unreasonable Effectiveness of Recurrent Neural Networks, 2015.

* Yoon Kim, et al.: Character-Aware Neural Language Models, 2015.

* Alec Radford, et al.: [Language Models are Unsupervised Multitask Learners](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf), 2019.

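The smoothing papers above can be explored with NLTK's `nltk.lm` module, which includes an interpolated Kneser-Ney model. The three-sentence corpus is a toy, and a recent NLTK (>= 3.4, where `nltk.lm` exists) is assumed.

```python
# Interpolated Kneser-Ney on a toy corpus with NLTK's nltk.lm module
# (cf. Chen & Goodman's smoothing study).
from nltk.lm import KneserNeyInterpolated
from nltk.lm.preprocessing import padded_everygram_pipeline

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "cat", "ran"]]

order = 2  # bigram model
train, vocab = padded_everygram_pipeline(order, corpus)
lm = KneserNeyInterpolated(order)
lm.fit(train, vocab)

print(lm.score("cat", ["the"]))                        # P(cat | the)
print(lm.perplexity([("the", "cat"), ("cat", "sat")]))
```
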
## Segmentation, Tagging, Parsing

* Donald Hindle and Mats Rooth: Structural Ambiguity and Lexical Relations, Computational Linguistics, 1993.

* Adwait Ratnaparkhi: A Maximum Entropy Model for Part-Of-Speech Tagging, EMNLP 1996.

* Eugene Charniak: A Maximum-Entropy-Inspired Parser, NAACL 2000.

* Michael Collins: Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms, EMNLP 2002. (See the perceptron tagger sketch after this list.)

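Collins's perceptron paper reduces to a strikingly small update rule: decode with the current weights, promote the gold features, demote the predicted ones. The sketch below keeps only that core; for brevity it uses greedy per-token decoding with two local features, whereas Collins uses Viterbi decoding over sequence features, so treat it as a toy.

```python
# Core update of the structured perceptron (Collins, 2002), simplified.
from collections import defaultdict

def features(words, i, tag):
    return [f"word={words[i]}|tag={tag}",
            f"suffix={words[i][-2:]}|tag={tag}"]

def decode(words, weights, tagset):
    # Greedy stand-in for Collins's Viterbi decoding.
    return [max(tagset, key=lambda t: sum(weights[f]
                for f in features(words, i, t)))
            for i in range(len(words))]

def perceptron_epoch(data, weights, tagset):
    for words, gold in data:
        pred = decode(words, weights, tagset)
        for i in range(len(words)):
            if pred[i] != gold[i]:
                # Promote gold features, demote predicted ones.
                for f in features(words, i, gold[i]):
                    weights[f] += 1.0
                for f in features(words, i, pred[i]):
                    weights[f] -= 1.0

tagset = {"DET", "NOUN", "VERB"}
data = [(["the", "dog", "runs"], ["DET", "NOUN", "VERB"])]
weights = defaultdict(float)
for _ in range(5):
    perceptron_epoch(data, weights, tagset)
print(decode(["the", "dog", "runs"], weights, tagset))
```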