Update README.md

README.md CHANGED
```diff
@@ -12,10 +12,10 @@ The original dataset consisted of ~90K samples. Light filtering stripped that down
 Some effort was made to remove OOC, links, and other miscellaneous fluff, but more work still needs to be done. This isn't a "completed" dataset so much as a test to see whether the data gathered is conducive to training LLMs for roleplay purposes. If it proves useful, I will continue to scrape more data.
 
 In here are several files:
-* `discord_rp_with_token_counts.json` - The original dataset in all its unprocessed glory. ~90k items. Total average token length across all items: ~
-* `125_tokens_10_messages_discord_rp.json` (Strictest) - Original dataset filtered for an average token length of 125 tokens and a minimum conversation length of 10 messages. Mostly unprocessed.
-* `80_tokens_6_messages_discord_rp.json` (Stricter) - Original dataset filtered for an average token length of 80 tokens and a minimum conversation length of 6 messages. Mostly unprocessed. The looser filter contains the stricter one, so use one or the other, but not both.
-* `80_tokens_3_messages_discord_rp.json` (Light) - Original dataset filtered for an average token length of 80 tokens and a minimum conversation length of 3 messages. Mostly unprocessed. The looser filter contains the stricter ones, so use one or the other, but not both.
+* `discord_rp_with_token_counts.json` - The original dataset in all its unprocessed glory. ~90k items. Total average token length across all items: ~143 tokens.
+* `125_tokens_10_messages_discord_rp.json` (Strictest) - Original dataset filtered for an average token length of 125 tokens and a minimum conversation length of 10 messages. Mostly unprocessed. Average length: 205 tokens.
+* `80_tokens_6_messages_discord_rp.json` (Stricter) - Original dataset filtered for an average token length of 80 tokens and a minimum conversation length of 6 messages. Mostly unprocessed. Average length: 181 tokens. The looser filter contains the stricter one, so use one or the other, but not both.
+* `80_tokens_3_messages_discord_rp.json` (Light) - Original dataset filtered for an average token length of 80 tokens and a minimum conversation length of 3 messages. Mostly unprocessed. Average length: 202 tokens. The looser filter contains the stricter ones, so use one or the other, but not both.
 * `opencai_rp.json` - Original dataset filtered for an average token length of 125 tokens and a minimum conversation length of 10 messages, then processed. Contains descriptions of characters, a summary, a scene, and genre tags provided by `gpt-3.5-turbo-16k`.
 * `opencai_rp_metharme.json` - Original dataset filtered for an average token length of 125 tokens and a minimum conversation length of 10 messages, then processed, filtered to 1229 samples, and converted to metharme format.
 
```
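For reference, the three filtered splits can in principle be reproduced with a single threshold pass over the raw file. The sketch below shows one way such a filter might look; the field names (`messages`, `token_count`) are illustrative assumptions and may not match the actual schema of `discord_rp_with_token_counts.json`:

```python
import json

def keep(convo, min_avg_tokens, min_messages):
    """Keep a conversation if it is long enough and token-dense enough."""
    msgs = convo["messages"]  # assumed field name
    if len(msgs) < min_messages:
        return False
    avg = sum(m["token_count"] for m in msgs) / len(msgs)  # assumed field name
    return avg >= min_avg_tokens

with open("discord_rp_with_token_counts.json", encoding="utf-8") as f:
    data = json.load(f)

# The three filtered splits: (minimum average tokens, minimum messages)
splits = {
    "125_tokens_10_messages_discord_rp.json": (125, 10),  # Strictest
    "80_tokens_6_messages_discord_rp.json": (80, 6),      # Stricter
    "80_tokens_3_messages_discord_rp.json": (80, 3),      # Light
}
for path, (avg, n) in splits.items():
    kept = [c for c in data if keep(c, avg, n)]
    with open(path, "w", encoding="utf-8") as f:
        json.dump(kept, f, ensure_ascii=False, indent=2)
```

Because the two 80-token splits only relax the message threshold, each looser split is a superset of the stricter one, which is why the README warns against mixing them.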
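Metharme is the prompt format used by the Pygmalion "Metharme" models: each sample is flattened into a single string in which `<|system|>`, `<|user|>`, and `<|model|>` tokens mark the segments. A minimal sketch of such a conversion, assuming the processed items expose the GPT-generated summary plus author-tagged messages (all field names hypothetical, not the dataset's actual schema):

```python
def to_metharme(item, bot_name):
    """Flatten one processed conversation into a metharme-style string.

    `item["summary"]`, `msg["author"]`, and `msg["text"]` are assumed
    field names; the real opencai_rp.json schema may differ.
    """
    parts = [f"<|system|>Enter roleplay mode. You are {bot_name}.\n{item['summary']}"]
    for msg in item["messages"]:
        # Messages from the chosen character become model turns;
        # everything else is treated as user input.
        tag = "<|model|>" if msg["author"] == bot_name else "<|user|>"
        parts.append(tag + msg["text"])
    return "".join(parts)
```

The published `opencai_rp_metharme.json` was also filtered down to 1229 samples, so any reimplementation would prune before flattening.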