---
license: mit
---

## The Verdict RCL LLM Dataset

### Overview

This dataset is prepared explicitly for training Large Language Models (LLMs) using Lumina AI's Random Contrast Learning (RCL) algorithm via the PrismRCL application. Unlike traditional classification datasets, LLM datasets require text to be formatted into input sequences and corresponding target tokens.

### Dataset Structure

For LLM training, the data structure differs significantly from standard classification datasets:

```
the-verdict-rcl-mm/
    train/
        [class_token_1]/
            values.txt
        [class_token_2]/
            values.txt
        ...
    test/
        [class_token_1]/
            values.txt
        [class_token_2]/
            values.txt
        ...
```

- **Class tokens:** Each folder name is the target token for the sequences it contains.
- **values.txt:** Each line in a `values.txt` file is an individual input sequence that maps to the target token of its enclosing folder.

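As a sketch of how this layout pairs inputs with targets, the snippet below walks one split and collects (input sequence, target token) tuples. The function name and return shape are illustrative assumptions, not part of PrismRCL itself.

```python
# Sketch: read an RCL LLM split laid out as <split>/<target_token>/values.txt.
# (Hypothetical helper; PrismRCL consumes the folders directly.)
from pathlib import Path

def load_rcl_llm_split(split_dir):
    """Return (input_sequence, target_token) pairs from one split directory."""
    pairs = []
    for class_dir in sorted(Path(split_dir).iterdir()):
        if not class_dir.is_dir():
            continue  # skip stray files at the split level
        values = class_dir / "values.txt"
        if not values.exists():
            continue
        for line in values.read_text(encoding="utf-8").splitlines():
            if line.strip():
                # The folder name itself is the target token for every line.
                pairs.append((line, class_dir.name))
    return pairs
```
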
### LLM Data Preparation

PrismRCL requires LLM datasets to follow specific formatting, distinct from classification tasks:

- Clean the raw text data (remove non-printable characters and overly long sequences).
- Create input sequences with a sliding-window method; for example, a 4-token input sequence predicts the 5th token.
- Store each input sequence as a single line in the class-specific `values.txt` files.

**Example:**

Original text: "The Project Gutenberg eBook of Les Misérables."

- Input: "The Project Gutenberg eBook" → Target: "of"
- Input: "Project Gutenberg eBook of" → Target: "Les"

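The sliding-window step above can be sketched in a few lines of Python; the function name and the whitespace tokenization are assumptions for illustration, not the tokenizer PrismRCL uses internally.

```python
# Sketch: build sliding-window (input, target) pairs from raw text.
# A window of 4 tokens predicts the 5th, matching the example above.
def sliding_window_pairs(text, window=4):
    tokens = text.split()  # naive whitespace tokenization for illustration
    return [
        (" ".join(tokens[i:i + window]), tokens[i + window])
        for i in range(len(tokens) - window)
    ]

# Each resulting pair would then be written as one line of values.txt
# inside the folder named after its target token.
```
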
### Usage (LLM-specific)

Use PrismRCL's `llm` parameter for LLM-specific training:

```
C:\PrismRCL\PrismRCL.exe llm naivebayes directional rclticks=67 readtextbyline ^
  data=C:\path\to\the-verdict-rcl-mm\train testdata=C:\path\to\the-verdict-rcl-mm\test ^
  savemodel=C:\path\to\models\verdict_llm.classify ^
  log=C:\path\to\log_files stopwhendone
```

### Explanation of Command

- **llm:** Specifies the dataset as an LLM training dataset.
- **naivebayes:** Evaluation method suitable for LLM data.
- **directional:** Maintains the order of tokens, critical for language modeling.
- **rclticks:** Sets the granularity for RCL discretization.
- **readtextbyline:** Treats each line in the text files as a separate data sample.
- **data & testdata:** Paths to the training and testing datasets, respectively.
- **savemodel:** Output path for the trained LLM model.
- **log:** Directory for log file storage.
- **stopwhendone:** Automatically ends the session after training completes.

### License

This dataset is licensed under the MIT License.

### Original Source

Prepared and structured explicitly by Lumina AI for RCL-based LLM training. Please credit Lumina AI when utilizing this dataset in research or applications.

### Additional Information

Refer to the PrismRCL Technical Documentation v2.6.2 for more detailed guidance on LLM data preparation and parameter specifications.