![](/content/images/1703.00848v1.pdf-12-1.jpg)

Figure 10: Sensitivity to the weight on the image likelihood term. The figure plots the translation error as a function of the training iteration with $\lambda_2 = 0.0001$ and $\lambda_1$ varying from 0.001 to 10000...
{ "id": "1703.00848", "title": "Unsupervised Image-to-Image Translation Networks", "categories": [ "cs.CV", "cs.AI" ] }
{ "Header 1": null, "Header 2": "**Unsupervised Image-to-Image Translation Networks**", "Header 3": "**B. Quantitative Analysis**", "Header 4": null }
{ "chunk_type": "body" }
### **C. Unsupervised Domain Adaptation** We applied the UNIT framework to the unsupervised domain adaptation (UDA) problem, which concerns adapting a classifier trained on labeled samples in one domain (the source domain) to classify samples in a new domain (the target domain), where labeled sampl...
{ "id": "1703.00848", "title": "Unsupervised Image-to-Image Translation Networks", "categories": [ "cs.CV", "cs.AI" ] }
{ "Header 1": null, "Header 2": "**Unsupervised Image-to-Image Translation Networks**", "Header 3": "**C. Unsupervised Domain Adaptation**", "Header 4": null }
{ "chunk_type": "body" }
Translation from the second domain to the first domain. For each image pair, the image on the left-hand side is the input and the image on the right-hand side is the translated result. power from the source domain adversarial discriminator. We did not use a separately trained source-domain classifier to classify ...
{ "id": "1703.00848", "title": "Unsupervised Image-to-Image Translation Networks", "categories": [ "cs.CV", "cs.AI" ] }
{ "Header 1": null, "Header 2": "**Unsupervised Image-to-Image Translation Networks**", "Header 3": "**C. Unsupervised Domain Adaptation**", "Header 4": null }
{ "chunk_type": "body" }
| Layer | Discriminators | Shared? |
|---|---|---|
| 1 | CONV-(N20,K5,S1), MaxPooling-(K2,S2) | No |
| 2 | CONV-(N50,K5,S1), MaxPooling-(K2,S2) | Yes |
| 3 | FC-(N500), ReLU, Dropout | Yes |
| 4a | FC-(N1), Sigmoid | Yes |
| 4b | FC-(N10), Softmax | Yes |

Table 7: Unsupervised Domain Adaptation Results

| Method | MNIST→USPS | USPS→MNIST |
|---|---|---|
| No adap. | | |
| CoGAN | | |
| UNIT (propose... | | |
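Read as code, the discriminator table above might look like the following PyTorch sketch. This is a minimal illustration, not the authors' implementation: the module names, the single-channel input size, and the exact wiring (layer 1 per-domain, layers 2-4 shared, head 4a scoring real/fake, head 4b classifying digits) are assumptions inferred from the "Shared?" column.

```python
import torch.nn as nn

class UDADiscriminator(nn.Module):
    """Hypothetical rendering of the weight-sharing discriminator pair."""
    def __init__(self):
        super().__init__()
        # Layer 1: CONV-(N20,K5,S1) + MaxPooling-(K2,S2), not shared (one per domain)
        self.conv1_src = nn.Sequential(nn.Conv2d(1, 20, 5, 1), nn.MaxPool2d(2, 2))
        self.conv1_tgt = nn.Sequential(nn.Conv2d(1, 20, 5, 1), nn.MaxPool2d(2, 2))
        # Layer 2: CONV-(N50,K5,S1) + MaxPooling-(K2,S2), shared between domains
        self.conv2 = nn.Sequential(nn.Conv2d(20, 50, 5, 1), nn.MaxPool2d(2, 2))
        # Layer 3: FC-(N500) + ReLU + Dropout, shared
        self.fc3 = nn.Sequential(nn.Flatten(), nn.LazyLinear(500), nn.ReLU(), nn.Dropout())
        # Layer 4a: FC-(N1) + Sigmoid (real/fake); Layer 4b: FC-(N10) + Softmax (digit class)
        self.head_adv = nn.Sequential(nn.Linear(500, 1), nn.Sigmoid())
        self.head_cls = nn.Sequential(nn.Linear(500, 10), nn.Softmax(dim=1))

    def forward(self, x, domain="src"):
        h = self.conv1_src(x) if domain == "src" else self.conv1_tgt(x)
        h = self.fc3(self.conv2(h))
        return self.head_adv(h), self.head_cls(h)
```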
{ "id": "1703.00848", "title": "Unsupervised Image-to-Image Translation Networks", "categories": [ "cs.CV", "cs.AI" ] }
{ "Header 1": null, "Header 2": "**Unsupervised Image-to-Image Translation Networks**", "Header 3": "**C. Unsupervised Domain Adaptation**", "Header 4": null }
{ "chunk_type": "body" }
Street View House Number (SVHN) dataset (Netzer et al., 2011) to the MNIST dataset. Specifically, we trained the UNIT network to learn to translate images between the SVHN and MNIST training sets as well as to classify the digit classes in the SVHN training images using the features extracted from the SVHN domain adversarial ...
{ "id": "1703.00848", "title": "Unsupervised Image-to-Image Translation Networks", "categories": [ "cs.CV", "cs.AI" ] }
{ "Header 1": null, "Header 2": "**Unsupervised Image-to-Image Translation Networks**", "Header 3": "**C. Unsupervised Domain Adaptation**", "Header 4": null }
{ "chunk_type": "body" }
### **D. Network Architecture** The UNIT network architecture used for translating natural images in the experiments section is shown in Table 10.
{ "id": "1703.00848", "title": "Unsupervised Image-to-Image Translation Networks", "categories": [ "cs.CV", "cs.AI" ] }
{ "Header 1": null, "Header 2": "**Unsupervised Image-to-Image Translation Networks**", "Header 3": "**D. Network Architecture**", "Header 4": null }
{ "chunk_type": "body" }
### **E. Additional Translation Results** Additional unsupervised image-to-image translation results are visualized in Figures 14, 15, 16, 17, 18, and 19. Table 10: The UNIT network architecture for translating natural images ...
{ "id": "1703.00848", "title": "Unsupervised Image-to-Image Translation Networks", "categories": [ "cs.CV", "cs.AI" ] }
{ "Header 1": null, "Header 2": "**Unsupervised Image-to-Image Translation Networks**", "Header 3": "**E. Additional Translation Results**", "Header 4": null }
{ "chunk_type": "body" }
| 4 | Output from Discriminator Layer 3 | CONV-(N512,K5,S2), BN, LeakyReLU | Yes |
| 5 | Output from Discriminator Layer 4 | CONV-(N1024,K5,S2), BN, LeakyReLU | Yes |
| 6 | Output from Discriminator Layer 5 | CONV-(N2048,K5,S2), Sigmoid | Yes |

Input ...
{ "id": "1703.00848", "title": "Unsupervised Image-to-Image Translation Networks", "categories": [ "cs.CV", "cs.AI" ] }
{ "Header 1": null, "Header 2": "**Unsupervised Image-to-Image Translation Networks**", "Header 3": "**E. Additional Translation Results**", "Header 4": null }
{ "chunk_type": "body" }
![](/content/images/1703.00848v1.pdf-17-4.jpg) ![](/content/images/1703.00848v1.pdf-17-5.jpg) ![](/content/images/1703.00848v1.pdf-17-6.jpg) ![](/content/images/1703.00848v1.pdf-17-7.jpg) ![](/content/images/1703.00848v1.pdf-17-8.jpg) ![](/content/images/1703.00848v1.pdf-17-9.jpg) ![](/content/images/1703.0...
{ "id": "1703.00848", "title": "Unsupervised Image-to-Image Translation Networks", "categories": [ "cs.CV", "cs.AI" ] }
{ "Header 1": null, "Header 2": "**Unsupervised Image-to-Image Translation Networks**", "Header 3": "**E. Additional Translation Results**", "Header 4": null }
{ "chunk_type": "body" }
![](/content/images/1703.00848v1.pdf-18-13.jpg) ![](/content/images/1703.00848v1.pdf-18-14.jpg) ![](/content/images/1703.00848v1.pdf-18-15.jpg) ![](/content/images/1703.00848v1.pdf-18-16.jpg) ![](/content/images/1703.00848v1.pdf-18-17.jpg) ![](/content/images/1703.00848v1.pdf-18-18.jpg) ![](/content/images/...
{ "id": "1703.00848", "title": "Unsupervised Image-to-Image Translation Networks", "categories": [ "cs.CV", "cs.AI" ] }
{ "Header 1": null, "Header 2": "**Unsupervised Image-to-Image Translation Networks**", "Header 3": "**E. Additional Translation Results**", "Header 4": null }
{ "chunk_type": "body" }
## **An Analysis of Deep Neural Network Models for Practical Applications**

**Alfredo Canziani & Eugenio Culurciello**
Weldon School of Biomedical Engineering
Purdue University
`{canziani,euge}@purdue.edu`

**Adam Paszke**
Faculty of Mathematics, Informatics and Mechanics
University of Warsaw
Warsaw, Po...
{ "id": "1605.07678", "title": "An Analysis of Deep Neural Network Models for Practical Applications", "categories": [ "cs.CV" ] }
{ "Header 1": null, "Header 2": "**An Analysis of** **Deep Neural Network Models** **for Practical Applications**", "Header 3": null, "Header 4": null }
{ "chunk_type": "title" }
### **1 Introduction** Since the breakthrough in the 2012 ImageNet competition [8] achieved by AlexNet [4] — the first entry that used a Deep Neural Network (DNN) — several increasingly complex DNNs have been submitted to the challenge in order to achieve better performance. In the ImageNet classification cha...
{ "id": "1605.07678", "title": "An Analysis of Deep Neural Network Models for Practical Applications", "categories": [ "cs.CV" ] }
{ "Header 1": null, "Header 2": "**An Analysis of** **Deep Neural Network Models** **for Practical Applications**", "Header 3": "**1 Introduction**", "Header 4": null }
{ "chunk_type": "body" }
### **2 Methods** In order to compare the quality of different models, we collected and analysed the accuracy values reported in the literature. We immediately found that different sampling techniques do not allow for a direct comparison of resource utilization. For example, central-crop (top-5 validation) errors...
{ "id": "1605.07678", "title": "An Analysis of Deep Neural Network Models for Practical Applications", "categories": [ "cs.CV" ] }
{ "Header 1": null, "Header 2": "**An Analysis of** **Deep Neural Network Models** **for Practical Applications**", "Header 3": "**2 Methods**", "Header 4": null }
{ "chunk_type": "body" }
### **3 Results** In this section we report our results and comparisons. We analysed the following DNNs: AlexNet [4], batch-normalised AlexNet [12], batch-normalised Network In Network (NIN) [5], GoogLeNet [10], VGG-16 and -19 [9], ResNet-18, -34, -50 and -101 [3] and Inception-v3 [11], since they obtai...
{ "id": "1605.07678", "title": "An Analysis of Deep Neural Network Models for Practical Applications", "categories": [ "cs.CV" ] }
{ "Header 1": null, "Header 2": "**An Analysis of** **Deep Neural Network Models** **for Practical Applications**", "Header 3": "**3 Results**", "Header 4": null }
{ "chunk_type": "body" }
*(Figure residue: chart legend listing the compared models: BN-AlexNet, AlexNet, BN-NIN, GoogLeNet, Inception-v3, VGG-16, VGG-19, ResNet-18, ResNet-34, ResNet-50, and ResNet-101.)*
{ "id": "1605.07678", "title": "An Analysis of Deep Neural Network Models for Practical Applications", "categories": [ "cs.CV" ] }
{ "Header 1": null, "Header 2": "**An Analysis of** **Deep Neural Network Models** **for Practical Applications**", "Header 3": "**3 Results**", "Header 4": null }
{ "chunk_type": "body" }
*(Figure residue: two panels of forward time per image [ms] versus batch size 1–64, plotted for BN-NIN, GoogLeNet, Inception-v3, AlexNet, BN-AlexNet, VGG-16, VGG-19, ResNet-18, ResNet-34, ResNet-50, and ResNet-101.)*
{ "id": "1605.07678", "title": "An Analysis of Deep Neural Network Models for Practical Applications", "categories": [ "cs.CV" ] }
{ "Header 1": null, "Header 2": "**An Analysis of** **Deep Neural Network Models** **for Practical Applications**", "Header 3": "**3 Results**", "Header 4": null }
{ "chunk_type": "body" }
Figure 4: **Power** ***vs.*** **batch size.** Net power consumption (due only to the forward processing o...
{ "id": "1605.07678", "title": "An Analysis of Deep Neural Network Models for Practical Applications", "categories": [ "cs.CV" ] }
{ "Header 1": null, "Header 2": "**An Analysis of** **Deep Neural Network Models** **for Practical Applications**", "Header 3": "**3 Results**", "Header 4": null }
{ "chunk_type": "body" }
for larger batches, and thus only in this condition is it able to fully utilize all available hardware resources. By contrast, for architectures like ResNet and GoogLeNet, the fully connected layers are so small (accounting for only 1% of the total operation count) that they do not suffer from this issue. **3.4** *...
{ "id": "1605.07678", "title": "An Analysis of Deep Neural Network Models for Practical Applications", "categories": [ "cs.CV" ] }
{ "Header 1": null, "Header 2": "**An Analysis of** **Deep Neural Network Models** **for Practical Applications**", "Header 3": "**3 Results**", "Header 4": null }
{ "chunk_type": "body" }
size, in the case of custom implementations of neural network accelerators. In Figure 7, for a batch of 16 images, there is a linear relationship between operation count and inference time per image. Therefore, at design time, we can place a constraint on the number of operations to keep processing speed within a usable ran...
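As a rough sketch of that design-time reasoning (the throughput and overhead constants below are invented placeholders, standing in for a line fitted to measurements like those in Figure 7):

```python
# Hypothetical linear fit: inference_time_ms = ops / OPS_PER_MS + OVERHEAD_MS.
OPS_PER_MS = 1.5e9   # placeholder slope: operations processed per millisecond
OVERHEAD_MS = 2.0    # placeholder fixed per-image overhead in milliseconds

def max_ops_budget(target_ms_per_image: float) -> float:
    """Largest operation count that keeps inference under the target latency."""
    return (target_ms_per_image - OVERHEAD_MS) * OPS_PER_MS

# e.g. a 10 ms/image real-time budget caps the network at ~1.2e10 operations
print(f"{max_ops_budget(10.0):.2e}")
```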
{ "id": "1605.07678", "title": "An Analysis of Deep Neural Network Models for Practical Applications", "categories": [ "cs.CV" ] }
{ "Header 1": null, "Header 2": "**An Analysis of** **Deep Neural Network Models** **for Practical Applications**", "Header 3": "**3 Results**", "Header 4": null }
{ "chunk_type": "body" }
Figure 10: **Accuracy per parameter** ***vs.*** **network.** Information density (accuracy per parameter) is an efficiency me...
{ "id": "1605.07678", "title": "An Analysis of Deep Neural Network Models for Practical Applications", "categories": [ "cs.CV" ] }
{ "Header 1": null, "Header 2": "**An Analysis of** **Deep Neural Network Models** **for Practical Applications**", "Header 3": "**3 Results**", "Header 4": null }
{ "chunk_type": "body" }
figure 10 we clearly see that, although VGG has a better accuracy than AlexNet (as shown by figure 1), its information density is worse. This means that the degrees of freedom introduced in the VGG architecture bring a smaller improvement in terms of accuracy. Moreover, GoogLeNet achieves the hig...
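The information-density metric itself is a one-line computation. In the sketch below the parameter counts are the commonly cited totals and the top-1 accuracies are rough, illustrative values, not numbers quoted from this paper:

```python
# Information density: accuracy points per million parameters.
models = {                     # (rough illustrative top-1 %, params in millions)
    "AlexNet":   (54.5, 61),
    "VGG-16":    (70.5, 138),
    "GoogLeNet": (68.0, 7),
}
for name, (top1, params_m) in models.items():
    print(f"{name:10s} {top1 / params_m:6.2f} %/M-params")
# VGG's extra degrees of freedom buy relatively little accuracy, while
# GoogLeNet packs far more accuracy into each parameter.
```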
{ "id": "1605.07678", "title": "An Analysis of Deep Neural Network Models for Practical Applications", "categories": [ "cs.CV" ] }
{ "Header 1": null, "Header 2": "**An Analysis of** **Deep Neural Network Models** **for Practical Applications**", "Header 3": "**3 Results**", "Header 4": null }
{ "chunk_type": "body" }
### **4 Conclusions** In this paper we analysed multiple state-of-the-art deep neural networks submitted to the ImageNet challenge, in terms of accuracy, memory footprint, parameters, operations count, inference time and power consumption. Our goal is to provide insights into the design choices that can lead to effic...
{ "id": "1605.07678", "title": "An Analysis of Deep Neural Network Models for Practical Applications", "categories": [ "cs.CV" ] }
{ "Header 1": null, "Header 2": "**An Analysis of** **Deep Neural Network Models** **for Practical Applications**", "Header 3": "**4 Conclusions**", "Header 4": null }
{ "chunk_type": "body" }
### **Acknowledgment** This paper would not have looked so pretty without the *Python Software Foundation*, the `matplotlib` library and the communities of *Stack Overflow* and *TeX StackExchange*, which I ought to thank. This work is partly supported by the Office of Naval Research (ONR) grants N00014-12-1-0167, N0...
{ "id": "1605.07678", "title": "An Analysis of Deep Neural Network Models for Practical Applications", "categories": [ "cs.CV" ] }
{ "Header 1": null, "Header 2": "**An Analysis of** **Deep Neural Network Models** **for Practical Applications**", "Header 3": "**Acknowledgment**", "Header 4": null }
{ "chunk_type": "body" }
### **References**

[1] Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN: Efficient Primitives for Deep Learning. arXiv:1410.0759, 2014.

[2] Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A matlab-like environmen...
{ "id": "1605.07678", "title": "An Analysis of Deep Neural Network Models for Practical Applications", "categories": [ "cs.CV" ] }
{ "Header 1": null, "Header 2": "**An Analysis of** **Deep Neural Network Models** **for Practical Applications**", "Header 3": "**References**", "Header 4": null }
{ "chunk_type": "references" }
## **Convolutional Neural Networks for Sentence Classification**

### **Yoon Kim**
New York University
yhk255@nyu.edu
{ "id": "1408.5882", "title": "Convolutional Neural Networks for Sentence Classification", "categories": [ "cs.CL", "cs.NE" ] }
{ "Header 1": null, "Header 2": "**Convolutional Neural Networks for Sentence Classification**", "Header 3": "**Yoon Kim** New York University yhk255@nyu.edu", "Header 4": null }
{ "chunk_type": "title" }
### **Abstract** We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We first show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Le...
{ "id": "1408.5882", "title": "Convolutional Neural Networks for Sentence Classification", "categories": [ "cs.CL", "cs.NE" ] }
{ "Header 1": null, "Header 2": "**Convolutional Neural Networks for Sentence Classification**", "Header 3": "**Abstract**", "Header 4": null }
{ "chunk_type": "body" }
### **1 Introduction** Deep learning models have achieved remarkable results in computer vision (Krizhevsky et al., 2012) and speech recognition (Graves et al., 2013) in recent years. Within natural language processing, much of the work with deep learning methods has involved learning word vector representations thro...
{ "id": "1408.5882", "title": "Convolutional Neural Networks for Sentence Classification", "categories": [ "cs.CL", "cs.NE" ] }
{ "Header 1": null, "Header 2": "**Convolutional Neural Networks for Sentence Classification**", "Header 3": "**1 Introduction**", "Header 4": null }
{ "chunk_type": "body" }
finally describe a simple modification to the architecture to allow for the use of both pre-trained and task-specific vectors by having multiple channels. Our work is philosophically similar to Razavian et al. (2014), which showed that for image classification, feature extractors obtained from a pretrained deep learning...
{ "id": "1408.5882", "title": "Convolutional Neural Networks for Sentence Classification", "categories": [ "cs.CL", "cs.NE" ] }
{ "Header 1": null, "Header 2": "**Convolutional Neural Networks for Sentence Classification**", "Header 3": "**1 Introduction**", "Header 4": null }
{ "chunk_type": "body" }
### **2 Model** The model architecture, shown in figure 1, is a slight variant of the CNN architecture of Collobert et al. (2011). Let $\mathbf{x}_i \in \mathbb{R}^k$ be the $k$-dimensional word vector corresponding to the $i$-th word in the sentence. A sentence of length $n$ (zero-padded

[1] https://code.google.com/p/word2v...
{ "id": "1408.5882", "title": "Convolutional Neural Networks for Sentence Classification", "categories": [ "cs.CL", "cs.NE" ] }
{ "Header 1": null, "Header 2": "**Convolutional Neural Networks for Sentence Classification**", "Header 3": "**2 Model**", "Header 4": null }
{ "chunk_type": "body" }
to obtain multiple features. These features form the penultimate layer and are passed to a fully connected softmax layer whose output is the probability distribution over labels. In one of the model variants, we experiment with having two ‘channels’ of word vectors—one that is kept static throughout training and one ...
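A minimal PyTorch sketch of the model as described so far: parallel convolutions over the word-vector matrix, max-over-time pooling, dropout on the penultimate layer, and a softmax output. The filter widths, feature-map count, and dropout rate below are illustrative defaults rather than values quoted in this excerpt, and only the single-channel case is shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentenceCNN(nn.Module):
    def __init__(self, vocab_size, k=300, widths=(3, 4, 5), n_filters=100,
                 n_classes=2, static=True):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, k)       # rows = word vectors x_i
        self.embed.weight.requires_grad = not static   # the 'static' channel is frozen
        # one convolution per filter width, each producing n_filters feature maps
        self.convs = nn.ModuleList(nn.Conv1d(k, n_filters, w) for w in widths)
        self.fc = nn.Linear(n_filters * len(widths), n_classes)

    def forward(self, tokens):                   # tokens: (batch, n) word indices
        x = self.embed(tokens).transpose(1, 2)   # (batch, k, n) sentence matrix
        # max-over-time pooling collapses each feature map to a single feature
        feats = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        h = F.dropout(torch.cat(feats, dim=1), 0.5, self.training)
        return self.fc(h)   # logits; softmax is folded into the training loss
```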
{ "id": "1408.5882", "title": "Convolutional Neural Networks for Sentence Classification", "categories": [ "cs.CL", "cs.NE" ] }
{ "Header 1": null, "Header 2": "**Convolutional Neural Networks for Sentence Classification**", "Header 3": "**2 Model**", "Header 4": null }
{ "chunk_type": "body" }
$N$: Dataset size. $|V|$: Vocabulary size. $|V_{pre}|$: Number of words present in the set of pre-trained word vectors. *Test*: Test set size (CV means there was no standard train/test split and thus 10-fold CV was used).
{ "id": "1408.5882", "title": "Convolutional Neural Networks for Sentence Classification", "categories": [ "cs.CL", "cs.NE" ] }
{ "Header 1": null, "Header 2": "**Convolutional Neural Networks for Sentence Classification**", "Header 3": "**2 Model**", "Header 4": null }
{ "chunk_type": "body" }
### **3 Datasets and Experimental Setup** We test our model on various benchmarks. Summary statistics of the datasets are in table 1.

- **MR-a**: Movie reviews with one sentence per review. Classification involves detecting positive/negative reviews (Pang and Lee, 2005). [3]
- **MR-b**: Extension of MR-a b...
{ "id": "1408.5882", "title": "Convolutional Neural Networks for Sentence Classification", "categories": [ "cs.CL", "cs.NE" ] }
{ "Header 1": null, "Header 2": "**Convolutional Neural Networks for Sentence Classification**", "Header 3": "**3 Datasets and Experimental Setup**", "Header 4": null }
{ "chunk_type": "body" }
dev set. Training is done through stochastic gradient descent over shuffled mini-batches with the Adadelta update rule (Zeiler, 2012); a sketch of this loop appears below. **3.2** **Pre-trained Word Vectors** Initializing word vectors with those obtained from an unsupervised neural language model is a popular method to improve performance in the absen...
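A sketch of the training loop just described (shuffled mini-batches with Adadelta); `model` and `dataset` are assumed to exist, e.g. the `SentenceCNN` sketch above, and the epoch and batch-size values are arbitrary:

```python
import torch
import torch.nn.functional as F

def train(model, dataset, epochs=25, batch_size=50):
    # shuffled mini-batches, as described above
    loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True)
    opt = torch.optim.Adadelta(model.parameters())  # Adadelta update rule (Zeiler, 2012)
    for _ in range(epochs):
        for tokens, labels in loader:
            opt.zero_grad()
            F.cross_entropy(model(tokens), labels).backward()
            opt.step()
```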
{ "id": "1408.5882", "title": "Convolutional Neural Networks for Sentence Classification", "categories": [ "cs.CL", "cs.NE" ] }
{ "Header 1": null, "Header 2": "**Convolutional Neural Networks for Sentence Classification**", "Header 3": "**3 Datasets and Experimental Setup**", "Header 4": null }
{ "chunk_type": "body" }
Table 2: Results of our CNN models against other methods. **RAE** : Recursive Autoencoders with pre-trained word vectors from Wikipedia (Socher et al., 2011). **MV-RNN** : Matrix-Vector Recursive Neural Network with parse trees (Socher et al., 2012). **RNTN** : Recursive Neural Tensor Network with tensor-based feature ...
{ "id": "1408.5882", "title": "Convolutional Neural Networks for Sentence Classification", "categories": [ "cs.CL", "cs.NE" ] }
{ "Header 1": null, "Header 2": "**Convolutional Neural Networks for Sentence Classification**", "Header 3": "**3 Datasets and Experimental Setup**", "Header 4": null }
{ "chunk_type": "body" }
### **4 Results and Discussion** Results of our models against other methods are listed in table 2. Our baseline model with all randomly initialized words (CNN-rand) does not perform well. While we had expected performance gains through the use of pre-trained vectors, we were surprised at the magnitude of the gains....
{ "id": "1408.5882", "title": "Convolutional Neural Networks for Sentence Classification", "categories": [ "cs.CL", "cs.NE" ] }
{ "Header 1": null, "Header 2": "**Convolutional Neural Networks for Sentence Classification**", "Header 3": "**4 Results and Discussion**", "Header 4": null }
{ "chunk_type": "body" }
task-at-hand (as is the case with the single channel non-static model). For example, *good* is most similar to *bad* in word2vec, presumably because they are (almost) syntactically equivalent. But for vectors in the non-static channel that were finetuned on the MR-c dataset, this is no longer the case (table 3). Simila...
{ "id": "1408.5882", "title": "Convolutional Neural Networks for Sentence Classification", "categories": [ "cs.CL", "cs.NE" ] }
{ "Header 1": null, "Header 2": "**Convolutional Neural Networks for Sentence Classification**", "Header 3": "**4 Results and Discussion**", "Header 4": null }
{ "chunk_type": "body" }
### **5 Conclusion** In the present work we have described a series of experiments with convolutional neural networks built on top of word2vec. Despite little tuning of hyperparameters, a simple CNN with one layer of convolution performs remarkably well. Our results add to the well-established evidence that pretraini...
{ "id": "1408.5882", "title": "Convolutional Neural Networks for Sentence Classification", "categories": [ "cs.CL", "cs.NE" ] }
{ "Header 1": null, "Header 2": "**Convolutional Neural Networks for Sentence Classification**", "Header 3": "**5 Conclusion**", "Header 4": null }
{ "chunk_type": "body" }
### **References**

Y. Bengio, R. Ducharme, P. Vincent. 2003. A Neural Probabilistic Language Model. *Journal of Machine Learning Research* 3:1137–1155.

R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, P. Kuksa. 2011. Natural Language Processing (Almost) from Scratch. *Journal of Machine Learnin...
{ "id": "1408.5882", "title": "Convolutional Neural Networks for Sentence Classification", "categories": [ "cs.CL", "cs.NE" ] }
{ "Header 1": null, "Header 2": "**Convolutional Neural Networks for Sentence Classification**", "Header 3": "**References**", "Header 4": null }
{ "chunk_type": "references" }
CRFs with hidden variables. *In Proceedings of ACL 2010.*

B. Pang, L. Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. *In Proceedings of ACL 2004.*

B. Pang, L. Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with...
{ "id": "1408.5882", "title": "Convolutional Neural Networks for Sentence Classification", "categories": [ "cs.CL", "cs.NE" ] }
{ "Header 1": null, "Header 2": "**Convolutional Neural Networks for Sentence Classification**", "Header 3": "**References**", "Header 4": null }
{ "chunk_type": "references" }
### **Scaling Laws for Neural Language Models**

**Jared Kaplan**∗
Johns Hopkins University, OpenAI
`jaredk@jhu.edu`

**Sam McCandlish**∗
OpenAI
`sam@openai.com`

**Tom Henighan**
OpenAI
`henighan@openai.com`

**Scott Gray**
OpenAI
`scott@openai.com`

**Tom B. Brown** ...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": null }
{ "chunk_type": "title" }
#### **Abstract** We study empirical scaling laws for language model performance on the cross-entropy loss. The loss scales as a power-law with model size, dataset size, and the amount of compute used for training, with some trends spanning more than seven orders of magnitude. Other architectural details such as netw...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**Abstract**" }
{ "chunk_type": "body" }
#### **Contents**

1. Introduction
2. Background and Methods
3. Empirical Results and Basic Power Laws
4. Charting the Infinite Data Limit and Overfitting
5. Scaling Laws with Model Size and Training Time
6. Optimal Allocation of the Com...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**Contents**" }
{ "chunk_type": "body" }
#### **1 Introduction** Language provides a natural domain for the study of artificial intelligence, as the vast majority of reasoning tasks can be efficiently expressed and evaluated in language, and the world’s text provides a wealth of data for unsupervised learning via generative modeling. Deep learning has recen...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**1 Introduction**" }
{ "chunk_type": "body" }
performance depends very weakly on other architectural hyperparameters such as depth vs. width. (Section 3) **Smooth power laws:** Performance has a power-law relationship with each of the three scale factors *N, D, C* when not bottlenecked by the other two, with trends spanning more than six orders of magnitude (see...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**1 Introduction**" }
{ "chunk_type": "body" }
with data requirements growing very slowly as $D \sim C^{0.27}$ with training compute. (Section 6) **Optimal batch size:** The ideal batch size for training these models is roughly a power of the loss only, and continues to be determinable by measuring the gradient noise scale [MKAT18]; it is roughly 1-2 milli...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**1 Introduction**" }
{ "chunk_type": "body" }
or the optimally allocated compute budget $C_{\min}$ (see Figure 1):

1. For models with a limited number of parameters, trained to convergence on sufficiently large datasets:
$$L(N) = (N_c/N)^{\alpha_N}; \quad \alpha_N \sim 0.076, \quad N_c \sim 8.8 \times 10^{13} \text{ (non-embedding parameters)} \tag{1.1}$$
2. For lar...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**1 Introduction**" }
{ "chunk_type": "body" }
expected as we scale up $N$, $D$, or $C_{\min}$; for example, doubling the number of parameters yields a loss that is smaller by a factor $2^{-\alpha_N} = 0.95$. The precise numerical values of $N_c$, $C_c^{\min}$, and $D_c$ depend on the vocabulary size and tokenization and hence do not have a fundamental meanin...
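A quick numerical check of equation (1.1) and the doubling claim (a sketch only; the constants are the fitted values quoted above):

```python
# L(N) = (N_c / N) ** alpha_N, with the fitted constants from equation (1.1).
ALPHA_N = 0.076
N_C = 8.8e13   # non-embedding parameters

def loss_from_params(n: float) -> float:
    return (N_C / n) ** ALPHA_N

# Doubling the parameter count shrinks the loss by 2**(-alpha_N):
print(loss_from_params(2e8) / loss_from_params(1e8))   # ≈ 0.949, i.e. the 0.95 above
```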
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**1 Introduction**" }
{ "chunk_type": "body" }
computational budget *C* increases, it should be spent primarily on larger models, without dramatic increases in training time or dataset size (see Figure 3). This also implies that as models grow larger, they become increasingly sample efficient. In practice, researchers typically train smaller models for longer than ...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**1 Introduction**" }
{ "chunk_type": "body" }
#### **2 Background and Methods** We train language models on WebText2, an extended version of the WebText [RWC+19] dataset, tokenized using byte-pair encoding [SHB15] with a vocabulary size $n_{\text{vocab}} = 50257$. We optimize the autoregressive log-likelihood (i.e. cross-entropy loss) averaged over a 1024-token conte...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**2 Background and Methods**" }
{ "chunk_type": "body" }
| Operation | Parameters | FLOPs per Token |
|---|---|---|
| Feedforward | $n_{\text{layer}} \cdot 2 d_{\text{model}} d_{\text{ff}}$ | $2 n_{\text{layer}} \cdot 2 d_{\text{model}} d_{\text{ff}}$ |
| De-embed | — | $2 d_{\text{model}} n_{\text{vocab}}$ |
| Total (Non-Embedding) | $N = 2 d_{\text{model}} n_{\text{layer}} (2 d_{\text{attn}} + d_{\text{ff}})$ | $C_{\text{forward}} = 2N + 2 n_{\text{layer}} n_{\text{ctx}} d_{\text{attn}}$ |

**Table 1** Parameter counts and compute (forward pass) estimates for a Transformer model. Sub-leading terms such as nonlineari...
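Applying Table 1's counting rules to a hypothetical configuration (the shape values below are invented for illustration, not a model from the paper):

```python
# Hypothetical Transformer shape; d_attn is taken equal to d_model here.
n_layer, d_model, d_ff, d_attn, n_ctx = 12, 768, 3072, 768, 1024

# N = 2 * d_model * n_layer * (2 * d_attn + d_ff)  -- non-embedding parameters
N = 2 * d_model * n_layer * (2 * d_attn + d_ff)

# C_forward = 2 * N + 2 * n_layer * n_ctx * d_attn -- forward-pass compute per token
C_forward = 2 * N + 2 * n_layer * n_ctx * d_attn

print(f"N ≈ {N:.2e} parameters, C_forward ≈ {C_forward:.2e} FLOPs per token")
# The context-dependent term (≈1.9e7) is small next to 2N (≈1.7e8) here, which is
# why an estimate like C ≈ 6N per token of training compute is often reasonable.
```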
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**2 Background and Methods**" }
{ "chunk_type": "body" }
python library. In total, the dataset consists of 20.3M documents containing 96 GB of text and $1.62 \times 10^{10}$ words (as defined by `wc`). We then apply the reversible tokenizer described in [RWC+19], which yields $2.29 \times 10^{10}$ tokens. We reserve $6.6 \times 10^{8}$ of these tokens for use as a test set...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**2 Background and Methods**" }
{ "chunk_type": "body" }
#### **3 Empirical Results and Basic Power Laws** To characterize language model scaling we train a wide variety of models, varying a number of factors including:

- Model size (ranging from 768 to 1.5 billion non-embedding parameters)
- Dataset size (ranging from 22 million to 23 billion tokens)
- ...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**3 Empirical Results and Basic Power Laws**" }
{ "chunk_type": "body" }
In this section we will display data along with empirically-motivated fits, deferring theoretical analysis to later sections. **3.1** **Approximate Transformer Shape and Hyperparameter Independence** Transformer performance depends very weakly on the shape parameters $n_{\text{layer}}$, $n_{\text{heads}}$, and $d_{\text{ff}}$ when we hold t...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**3 Empirical Results and Basic Power Laws**" }
{ "chunk_type": "body" }
parameter count (including the embedding parameters) the trend is somewhat obscured (see Figure 6). This suggests that the embedding matrix can be made smaller without impacting performance, as has been seen in recent work [LCG+19]. Although these models have been trained on the WebText2 dataset, their test loss ...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**3 Empirical Results and Basic Power Laws**" }
{ "chunk_type": "body" }
dataset. We stopped training once the test loss ceased to decrease. We see that the resulting test losses can be fit with the simple power-law
$$L(D) \approx (D_c/D)^{\alpha_D} \tag{3.2}$$
in the dataset size. The data and fit appear in Figure 1. The total amount of non-embedding compute used during training can be e...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**3 Empirical Results and Basic Power Laws**" }
{ "chunk_type": "body" }
#### **4 Charting the Infinite Data Limit and Overfitting** In Section 3 we found a number of basic scaling laws for language modeling performance. Here we will study the performance of a model of size *N* trained on a dataset with *D* tokens while varying *N* and *D* simultaneously. We will empirically demonstrate t...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**4 Charting the Infinite Data Limit and Overfitting**" }
{ "chunk_type": "body" }
$\alpha_N$; see Figure 4.) **Right**: The extent of overfitting depends predominantly on the ratio $N^{\alpha_N/\alpha_D}/D$, as predicted in equation (4.3). The line is our fit to that equation. Since we stop training early when the test loss ceases to improve and optimize all models in the same way, we expect that larger mode...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**4 Charting the Infinite Data Limit and Overfitting**" }
{ "chunk_type": "body" }
We obtain an excellent fit, with the exception of the runs where the dataset has been reduced by a factor of 1024, to about $2 \times 10^{7}$ tokens. With such a small dataset, an epoch consists of only 40 parameter updates. Perhaps such a tiny dataset represents a different regime for language modeling, as overfitting happ...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**4 Charting the Infinite Data Limit and Overfitting**" }
{ "chunk_type": "body" }
avoid overfitting when training to within that threshold of convergence we require
$$D \gtrsim (5 \times 10^{3})\, N^{0.74} \tag{4.4}$$
With this relation, models smaller than $10^9$ parameters can be trained with minimal overfitting on the 22B token WebText2 dataset, but our largest models will encounter some mild overf...
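Equation (4.4) as a one-line function, checked against the 22B-token claim in the sentence above (a sketch only):

```python
# D ≳ (5e3) * N**0.74: tokens needed to keep overfitting minimal.
def min_tokens(n_params: float) -> float:
    return 5e3 * n_params ** 0.74

print(f"{min_tokens(1e9):.2e}")   # ≈ 2.3e10, i.e. roughly the 22B tokens of WebText2
```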
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**4 Charting the Infinite Data Limit and Overfitting**" }
{ "chunk_type": "body" }
#### **5 Scaling Laws with Model Size and Training Time** In this section we will demonstrate that a simple scaling law provides a good description for the loss as a function of model size $N$ and training time. First we will explain how to use the results of [MKAT18] to define a universal training step $S_{\min}$, whic...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**5 Scaling Laws with Model Size and Training Time**" }
{ "chunk_type": "body" }
critical batch size
$$B_{\text{crit}}(L) \equiv \frac{E_{\min}}{S_{\min}} \tag{5.2}$$
which is a function of the target value of the loss. Training at the critical batch size makes a roughly optimal time/compute tradeoff, requiring $2 S_{\min}$ training steps and processing $E = 2 E_{\min}$ data examples. In Figure 10 we have plotted t...
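A small numerical illustration of this tradeoff; the $E_{\min}$ and $S_{\min}$ values are invented placeholders, and only the relations stated above are used:

```python
E_min = 2.0e9   # fewest data examples that reach the target loss (small-batch limit)
S_min = 1.0e5   # fewest serial steps that reach it (large-batch limit)

B_crit = E_min / S_min          # critical batch size, eq. (5.2): 20,000 examples
steps = 2 * S_min               # training at B_crit takes twice the minimum steps...
examples = 2 * E_min            # ...and processes twice the minimum examples
print(B_crit, steps, examples)
```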
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**5 Scaling Laws with Model Size and Training Time**" }
{ "chunk_type": "body" }
runs using Equation (1.6), repeated here for convenience:
$$L(N, S_{\min}) = \left(\frac{N_c}{N}\right)^{\alpha_N} + \left(\frac{S_c}{S_{\min}}\right)^{\alpha_S} \tag{5.6}$$
for the loss. We include all training steps after the warmup period of the learning rate schedule, and find a fit to the data with the parameters:

[5] Although the critical batc...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**5 Scaling Laws with Model Size and Training Time**" }
{ "chunk_type": "body" }
the Hessian eigenvalue density is roughly independent of model size. **5.3** **Lower Bound on Early Stopping Step** The results for $L(N, S_{\min})$ can be used to derive a lower-bound (and rough estimate) of the step at which early stopping should occur when training is data limited. It is motivated by the idea ...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**5 Scaling Laws with Model Size and Training Time**" }
{ "chunk_type": "body" }
#### **6 Optimal Allocation of the Compute Budget** We displayed the *empirical* trend of performance as a function of the computation used during training in the top-right of Figure 1. However, this result involved training at a fixed batch size $B$, whereas we know

*(Figure legend fragment: "Models between 0.6x and 2.2x the o...")*
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**6 Optimal Allocation of the Compute Budget**" }
{ "chunk_type": "body" }
during training, namely $2 B_{\text{crit}} S_{\min}$. We will determine this allocation both empirically and theoretically, by using the equation for $L(N, S_{\min})$, and we will demonstrate that these methods agree. **6.1** **Optimal Performance and Allocations** Let us first study the loss as a function of the optimall...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**6 Optimal Allocation of the Compute Budget**" }
{ "chunk_type": "body" }
that the optimal number of steps will only grow very slowly with compute, as
$$S_{\min} \propto (C_{\min})^{0.03}, \tag{6.2}$$
matching the empirical results in Figure 14. In fact the measured exponent is sufficiently small that our results may even be consistent with an exponent of zero. Thus we conclude that as ...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**6 Optimal Allocation of the Compute Budget**" }
{ "chunk_type": "body" }
sensitive to the precise exponents from our power-law fits. **6.3** **Contradictions and a Conjecture** We observe no signs of deviation from straight power-law trends at large values of compute, data, or model size. Our trends must eventually level off, though, since natural language has non-zero entropy. Indeed...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**6 Optimal Allocation of the Compute Budget**" }
{ "chunk_type": "body" }
loss should scale as $L(D) \propto D^{-0.095}$. This implies that the loss would scale with compute as $L(D(C_{\min})) \propto C_{\min}^{-0.03}$ once we are data-limited. Once again, we have a contradiction, as this will eventually intersect with our prediction for $L(C_{\min})$ from Fi...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**6 Optimal Allocation of the Compute Budget**" }
{ "chunk_type": "body" }
#### **7 Related Work** Power laws can arise from a wide variety of sources [THK18]. Power-law scalings with model and dataset size in density estimation [Was06] and in random forest models [Bia12] may be connected with our results. These models suggest that power-law exponents may have a very rough interpretation as...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**7 Related Work**" }
{ "chunk_type": "body" }
many orders of magnitude beyond typical practice, and in particular does not use early stopping). We do not observe such a transition, and find that the necessary training data scales sublinearly in the model size. Expansions in the model size, particularly at large width [JGH18, LXS+19], may provide a useful frame...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**7 Related Work**" }
{ "chunk_type": "body" }
#### **8 Discussion** We have observed consistent scalings of language model log-likelihood loss with non-embedding parameter count $N$, dataset size $D$, and optimized training computation $C_{\min}$, as encapsulated in Equations (1.5) and (1.6). Conversely, we find very weak dependence on many architectural and optimi...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**8 Discussion**" }
{ "chunk_type": "body" }
loss translates into improvement on relevant language tasks. Smooth quantitative change can mask major qualitative improvements: “more is different”. For example, the smooth aggregate growth of the economy provides no indication of the specific technological developments that underwrite it. Similarly, the smooth improv...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**8 Discussion**" }
{ "chunk_type": "body" }
#### **Acknowledgements** We would like to thank Shan Carter, Paul Christiano, Jack Clark, Ajeya Cotra, Ethan Dyer, Jason Eisner, Danny Hernandez, Jacob Hilton, Brice Menard, Chris Olah, and Ilya Sutskever for discussions and for feedback on drafts of this work.
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": null, "Header 2": null, "Header 3": "**Scaling Laws for Neural Language Models**", "Header 4": "**Acknowledgements**" }
{ "chunk_type": "body" }
# **Appendices**

#### **A Summary of Power Laws** For easier reference, we provide a summary below of the key trends described throughout the paper.

| Parameters | Data | Compute | Batch Size | Equation |
|---|---|---|---|---|
| $N$ | $\infty$ | $\infty$ | Fixed | $L(N) = (N_c/N)^{\alpha_N}$ |
| $\infty$ | $D$ | Early Stop | Fixed | $L(D) = (D_c/D)^{\alpha_D}$ |
| Optimal | $\infty$ | $C$ | Fixed | $L($... |
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": "**Appendices**", "Header 2": null, "Header 3": null, "Header 4": "**A Summary of Power Laws**" }
{ "chunk_type": "body" }
#### **B Empirical Model of Compute-Efficient Frontier** Throughout this appendix all values of $C$, $S$, and $\alpha_C$ are adjusted for training at the critical batch size $B_{\text{crit}}$. We have left off the 'adj' label to avoid cluttering the notation. **B.1** **Defining Equations** The power-law fit to the learning cu...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": "**Appendices**", "Header 2": null, "Header 3": null, "Header 4": "**B Empirical Model of Compute-Efficient Frontier**" }
{ "chunk_type": "body" }
where we defined
$$\alpha_C = \frac{1}{1/\alpha_S + 1/\alpha_B + 1/\alpha_N} \approx 0.052 \tag{B.7}$$
$$C_c = 6 N_c B_* S_c \left(1 + \frac{\alpha_N}{\alpha_S}\right)^{1/\alpha_S + 1/\alpha_N} \left(\frac{\alpha_S}{\alpha_N}\right)^{1/\alpha_S}. \tag{B.8}$$
Similarly, we can eliminate $L$ to find $N(C)$, which scales as $C^{\alpha_C/\alpha_N}$ ...
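A numerical check of equation (B.7). Only the formula is from this excerpt; the component exponents $\alpha_S$ and $\alpha_B$ below are the rough fitted values reported elsewhere in the paper and should be read as assumptions here:

```python
alpha_N = 0.076   # from equation (1.1)
alpha_S = 0.76    # assumed fitted value for the step exponent
alpha_B = 0.21    # assumed fitted value for the critical-batch-size exponent

alpha_C = 1 / (1 / alpha_S + 1 / alpha_B + 1 / alpha_N)
print(round(alpha_C, 3))   # ≈ 0.052, matching (B.7)
```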
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": "**Appendices**", "Header 2": null, "Header 3": null, "Header 4": "**B Empirical Model of Compute-Efficient Frontier**" }
{ "chunk_type": "body" }
$$C(N, L) = 6 B_* S_c \frac{N}{L^{1/\alpha_B}} \left( L - \left(\frac{N_c}{N}\right)^{\alpha_N} \right)^{-1/\alpha_S}. \tag{B.15}$$
Using A.6 and A.9, we can eliminate $L$ in favor of $N_{\text{eff}}(L)$, the model size which reaches $L$ most efficiently. From there, we find an expression for the excess compute needed as a cons...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": "**Appendices**", "Header 2": null, "Header 3": null, "Header 4": "**B Empirical Model of Compute-Efficient Frontier**" }
{ "chunk_type": "body" }
#### **C Caveats** In this section we list some potential caveats to our analysis.

- At present we do not have a solid theoretical understanding for any of our proposed scaling laws. The scaling relations with model size and compute are especially mysterious. It may be possible to understand scaling at very large...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": "**Appendices**", "Header 2": null, "Header 3": null, "Header 4": "**C Caveats**" }
{ "chunk_type": "body" }
results, quantitatively or qualitatively.

- We used the estimated training compute $C \approx 6NBS$, which did not include contributions proportional to $n_{\text{ctx}}$ (see Section 2.1). So our scalings with compute may be confounded in practice in the regime of very large $n_{\text{ctx}}$, specifically where $n_{\text{ctx}} \gtrsim 12 d_{\text{model}}$....
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": "**Appendices**", "Header 2": null, "Header 3": null, "Header 4": "**C Caveats**" }
{ "chunk_type": "body" }
#### **D Supplemental Figures** **D.1** **Early Stopping and Test vs Train** In Section 5.3 we described the result shown in Figure 16, which provides a prediction for a lower bound on the early stopping step. We also show the train and test loss for a given model size when training on different-sized datasets. ...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": "**Appendices**", "Header 2": null, "Header 3": null, "Header 4": "**D Supplemental Figures**" }
{ "chunk_type": "body" }
*(Figure residue: left panel, minimum steps $S_{\min}$ versus non-embedding parameters; right panel, minimum examples $E_{\min}$ versus non-embedding parameters, each at fixed values of the loss.)* **Figure 19** The number of minimum serial steps needed to reach any fixed value of the...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": "**Appendices**", "Header 2": null, "Header 3": null, "Header 4": "**D Supplemental Figures**" }
{ "chunk_type": "body" }
LT16], or a more general feature of the model architecture and optimization. It provides some suggestion for the potential benefits (or lack thereof) from training on larger contexts. Not only do larger models converge to better performance at *T* = 1024, but they also improve more quickly at early tokens, suggesting t...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": "**Appendices**", "Header 2": null, "Header 3": null, "Header 4": "**D Supplemental Figures**" }
{ "chunk_type": "body" }
multiple runs is necessary to validate performance changes smaller than this level.

![](/content/images/2001.08361v1.pdf-25-0.jpg)

*(Axes: loss, 2–6, versus parameters (non-embedding), $10^4$–$10^9$.)* **Figure 23** The trend for performance as a function of parameter count, $L(N)$, is fi...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": "**Appendices**", "Header 2": null, "Header 3": null, "Header 4": "**D Supplemental Figures**" }
{ "chunk_type": "body" }
![](/content/images/2001.08361v1.pdf-26-1.jpg) ![](/content/images/2001.08361v1.pdf-26-2.jpg)

*(Axes: loss, 2.3–2.8, versus depth, $10^1$–$10^2$.)* **Figure 24** We show evaluations on a series of datasets for models with approximately 1.5 billion parameters. We observe no effect of depth on gene...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": "**Appendices**", "Header 2": null, "Header 3": null, "Header 4": "**D Supplemental Figures**" }
{ "chunk_type": "body" }
#### **List of Figures**

1. Summary of simple power laws.
2. Illustration of sample efficiency and compute efficiency.
3. How to scale up model size, batch size, and serial steps
...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": "**Appendices**", "Header 2": null, "Header 3": null, "Header 4": "**List of Figures**" }
{ "chunk_type": "body" }
#### **List of Tables**

1. Parameter and compute counts for Transformer
2. Fits to $L(N, D)$
3. Fits to $L(N, S)$ ...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": "**Appendices**", "Header 2": null, "Header 3": null, "Header 4": "**List of Tables**" }
{ "chunk_type": "body" }
#### **References**

[ACDE12] Eduardo G Altmann, Giampaolo Cristadoro, and Mirko Degli Esposti. On the origin of long-range correlations in texts. *Proceedings of the National Academy of Sciences*, 109(29):11582–11587, 2012.

[AS17] Madhu S. Advani and Andrew M. Saxe. High-dimensional dynamics of generalization er...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": "**Appendices**", "Header 2": null, "Header 3": null, "Header 4": "**References**" }
{ "chunk_type": "references" }
Giulio Biroli, Clément Hongler, and Matthieu Wyart. Scaling description of generalization with number of parameters in deep learning. arXiv:1901.01608, 2019.

[GKX19] Behrooz Ghorbani, Shankar Krishnan, and Ying Xiao. An investigation into neural net optimization via hessian e...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": "**Appendices**", "Header 2": null, "Header 3": null, "Header 4": "**References**" }
{ "chunk_type": "references" }
convolutional neural networks. In *Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1*, NIPS'12, pages 1097–1105, USA, 2012. Curran Associates Inc. URL http://dl.acm.org/citation.cfm?id=2999134.2999257.

[LCG+19] Zhenzhong Lan, Mingda Chen, Sebastian Goodma...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": "**Appendices**", "Header 2": null, "Header 3": null, "Header 4": "**References**" }
{ "chunk_type": "references" }
[RRBS19a] Jonathan S. Rosenfeld, Amir Rosenfeld, Yonatan Belinkov, and Nir Shavit. A constructive prediction of the generalization error across scales, 2019, arXiv:1909.12673.

[RRBS19b] Jonathan S. Rosenfeld, Amir Rosenfeld, Yonatan Belinkov, and Nir Shavit. A constructive prediction ...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": "**Appendices**", "Header 2": null, "Header 3": null, "Header 4": "**References**" }
{ "chunk_type": "references" }
[VSP+17] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, *Advances in Neural Information P...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": "**Appendices**", "Header 2": null, "Header 3": null, "Header 4": "**References**" }
{ "chunk_type": "references" }
*(ICCV)*, Dec 2015. doi:10.1109/iccv.2015.11.

[ZLN+19] Guodong Zhang, Lala Li, Zachary Nado, James Martens, Sushant Sachdeva, George E. Dahl, Christopher J. Shallue, and Roger B. Grosse. Which algorithmic choices matter at which batch sizes? insights from a noisy quadr...
{ "id": "2001.08361", "title": "Scaling Laws for Neural Language Models", "categories": [ "cs.LG", "stat.ML" ] }
{ "Header 1": "**Appendices**", "Header 2": null, "Header 3": null, "Header 4": "**References**" }
{ "chunk_type": "references" }