- [NOSIBLE Forward Looking v1.1 Base](https://huggingface.co/NOSIBLE/forward-looking-v1.1-base)
- [NOSIBLE Prediction v1.1 Base](https://huggingface.co/NOSIBLE/prediction-v1.1-base)
## What is it?

The NOSIBLE Financial Sentiment Dataset is an open collection of **100,000** cleaned and deduplicated financial news text samples. The data was extracted directly from the NOSIBLE search engine in response to real queries asked by real financial institutions. Each sample has been labeled as **positive**, **negative**, or **neutral** using a sophisticated labeling pipeline (more information coming soon; a brief explanation follows below). The models we trained on the NOSIBLE Financial Sentiment Dataset outperform models trained on the Financial PhraseBank out of sample.

## How to use it

Using the [HuggingFace datasets library](https://huggingface.co/docs/datasets/):
## Dataset creation

### Data source

The dataset was sampled from NOSIBLE datafeeds, which provide web-scale surveillance data to customers. Samples consist of top-ranked search results from the NOSIBLE search engine in response to safe, curated queries. All data is sourced exclusively from the public web.

### Relabeling algorithm

The dataset's `label` field was annotated by LLMs and refined using an active learning algorithm called relabeling.

The algorithm outline is as follows:

1. Hand label a candidate set of ~200 samples to use as a test bed for refining the prompt used by the LLM labelers to classify the text.
2. Label a set of 100k samples with LLM labelers:
   - `x-ai/grok-4-fast`
   - `x-ai/grok-4-fast:thinking`
   - `google/gemini-2.5-flash`
   - `openai/gpt-5-nano`
   - `openai/gpt-4.1-mini`
   - `openai/gpt-oss-120b`
   - `meta-llama/llama-4-maverick`
   - `qwen/qwen3-32b`
3. Train a linear model on the majority-vote labels of the LLM labelers.
4. Iteratively relabel (active learning steps) to improve label quality:
   - Evaluate the linear model's predictions over the samples.
   - Find disagreements: samples where the LLM labelers agree on a label, but the model has predicted a different label.
   - Consult a much larger LLM, the oracle, to evaluate the model's prediction, and relabel the sample if the oracle agrees with the LLM labels.
   - Drop the worst-performing LLM labelers from the ensemble.
   - Repeat the process with the remaining LLM labelers until the number of relabeled samples reaches zero.
   - Store the refined, relabeled dataset.
5. The result is the final dataset used for training the [NOSIBLE Financial Sentiment v1.1 Base](https://huggingface.co/NOSIBLE/financial-sentiment-v1.1-base) model, which is a finetune.
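The majority-vote and disagreement-finding steps above can be sketched in plain Python. This is a minimal illustration, not the production pipeline; the function names and the tie-handling rule are assumptions:

```python
from collections import Counter


def majority_vote(votes):
    """Return the majority label among LLM labeler votes, or None on a tie."""
    counts = Counter(votes).most_common(2)
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # tie: no clear majority
    return counts[0][0]


def find_disagreements(llm_votes, model_preds):
    """Indices where the LLM majority label differs from the model's prediction."""
    flagged = []
    for i, (votes, pred) in enumerate(zip(llm_votes, model_preds)):
        majority = majority_vote(votes)
        if majority is not None and majority != pred:
            flagged.append(i)  # candidate for oracle review
    return flagged


# Toy example: three samples, three labelers each.
votes = [
    ["positive", "positive", "neutral"],   # majority: positive
    ["negative", "negative", "negative"],  # majority: negative
    ["neutral", "positive", "negative"],   # three-way tie: skipped
]
preds = ["positive", "neutral", "neutral"]
print(find_disagreements(votes, preds))  # → [1]
```

In the full loop, each flagged index would be sent to the oracle before any relabeling happens, and the loop stops once no samples are flagged.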
## Additional information