ojhfklsjhl committed on
Commit cd2ef46 · verified · 1 Parent(s): 90342dd

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -7,13 +7,13 @@ tags:
   - lookback
   - language
   ---
- # Model Card for NoLBert: A Time-Stamped Pre-Trained LLM
+ # NoLBert: A Time-Stamped Pre-Trained LLM

- NoLBERT (No Lookahead(back) bias bidirectional encoder representation from transformers) is a foundational transformer-based language model specifically trained to avoid both lookahead and lookback bias.
+ **NoLBERT** (No Lookahead(back) bias bidirectional encoder representation from transformers) is a foundational transformer-based language model specifically trained to avoid both lookahead and lookback bias.

- Lookahead bias is a fundamental challenge when researchers and practitioners use inferences from language models for forecasting. For example, when we ask a language model to infer the short-term return of a stock given a set of news articles, a concern is that the model may have been trained on data that include future information beyond the point in time when the news articles were released. As a result, the nature of the task changes from drawing return-related inference from text to retrieving the date of the news articles and the realized returns of the particular stock shortly after that date. Consequently, this approach becomes invalid in practice when using such models to predict stock returns beyond the training data's coverage period. To frame the task as one of natural language inference, we pre-train a new text encoder using data strictly from 1976 to 1995. Therefore, our model exhibits no lookahead bias when backtesting trading strategies using data from 1996 onward or when performing other time series forecasting tasks using text data.
+ **Lookahead bias** is a fundamental challenge when researchers and practitioners use inferences from language models for forecasting. For example, when we ask a language model to infer the short-term return of a stock given a set of news articles, a concern is that the model may have been trained on data that include future information beyond the point in time when the news articles were released. As a result, the nature of the task changes from drawing return-related inference from text to retrieving the date of the news articles and the realized returns of the particular stock shortly after that date. Consequently, this approach becomes invalid in practice when using such models to predict stock returns beyond the training data's coverage period. To frame the task as one of natural language inference, we pre-train a new text encoder using data strictly from 1976 to 1995. Therefore, our model exhibits no lookahead bias when backtesting trading strategies using data from 1996 onward or when performing other time series forecasting tasks using text data.

- Another key feature of our model is that it also avoids lookback bias. In particular, after pre-training, the numerical representation provided by any model reflects a snapshot in time (although the exact time may not be well-defined). For example, in the early 1900s, the sentence “She is running a program” likely meant that the person was organizing an event. By contrast, in the late 20th century, the same sentence likely refers to someone executing a computer code. Since a model is trained to learn from all of its training data to form text representations, if it is trained using data spanning a long time horizon, it becomes unclear which period the final encoded vector represents. In this example, if the model is trained on data from the entire 20th century, the resulting numerical representation may exhibit lookback bias when the intention is to analyze texts from more recent periods. To overcome this, we use a highly restricted time window: all of our model's training data are from 1976 to 1995, and our validation set is strictly from 1996.
+ Another key feature of our model is that it also avoids **lookback bias**. In particular, after pre-training, the numerical representation provided by any model reflects a snapshot in time (although the exact time may not be well-defined). For example, in the early 1900s, the sentence “She is running a program” likely meant that the person was organizing an event. By contrast, in the late 20th century, the same sentence likely refers to someone executing a computer code. Since a model is trained to learn from all of its training data to form text representations, if it is trained using data spanning a long time horizon, it becomes unclear which period the final encoded vector represents. In this example, if the model is trained on data from the entire 20th century, the resulting numerical representation may exhibit lookback bias when the intention is to analyze texts from more recent periods. To overcome this, we use a highly restricted time window: all of our model's training data are *from 1976 to 1995*, and our validation set is strictly from 1996.

  Our model is trained on 1 billion words (1-2 billion tokens) from Parliament Q&As, TV show conversations, music lyrics, patents, FOMC documents, public access books, newspapers, election campaign documents, and research papers. The model is based on the base-size DeBERTa model architecture and a custom ByteLevelBPETokenizer trained using the same training data.

@@ -23,7 +23,7 @@ Our model achieves state-of-the-art performance with less than 10% of training d
  |------------------------|:--------------:|:--------------------:|:-----------------:|:---------------:|:--------------------:|:-----------------:|:---------------:|
  | FinBERT | 30 | 110 | 0.29|0.89|0.87|0.79|0.86|
  | StoriesLM | 30 | 110 | **0.47**|0.90|0.87|0.80|0.87|
- | NolBERT | 30 | 109 | 0.43|**0.91**|**0.91**|**0.82**|**0.89**
+ | NoLBERT | 30 | 109 | 0.43|**0.91**|**0.91**|**0.82**|**0.89**

  ## Usage Examples
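The time-stamped split that the README describes (pre-training data strictly 1976–1995, validation strictly from 1996) amounts to a hard date cutoff on the corpus. A minimal sketch of such a lookahead-safe split, using made-up document dates for illustration:

```python
# Illustrative time-stamped corpus split: train strictly on 1976-1995,
# validate on 1996 onward, so no training document postdates the cutoff.
# The documents and years below are hypothetical examples.
documents = [
    ("patent filing", 1978),
    ("FOMC minutes", 1984),
    ("newspaper article", 1995),
    ("newspaper article", 1996),
]

TRAIN_START, TRAIN_END, VALID_START = 1976, 1995, 1996

train = [(text, year) for text, year in documents if TRAIN_START <= year <= TRAIN_END]
valid = [(text, year) for text, year in documents if year >= VALID_START]

print(len(train), len(valid))  # 3 1
```

The key property is that the filter is applied to document timestamps before any training occurs, so a backtest on post-1995 data cannot leak future information into the encoder.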