---
license: apache-2.0
---
An experimental small-ish dataset I whipped up in an afternoon.

What I did:

- Curated StackOverflow and other StackExchange subforums for good examples that were **not** GPT-generated, including writing advice and other real-world questions.
- Duplicated those examples with different prompt formatting, so the dataset generalizes to both the Alpaca and the official llama2-chat prompt layouts.
- Added two long-context outliers to make sure long context works: a TV episode script (~10k tokens) and the first few chapters of 1984 (~32k tokens).

This comes out to roughly 60k tokens total, give or take.
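
The duplication step above can be sketched roughly like this. The field names (`question`, `answer`) and the example pair are hypothetical, not from the actual dataset; the two templates follow the standard Alpaca and llama2-chat layouts.

```python
# Sketch of rendering one curated Q&A pair in both prompt layouts,
# so the same example appears twice in the dataset with different formatting.

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n{response}"
)

# Minimal llama2-chat layout (no system prompt for brevity).
LLAMA2_CHAT_TEMPLATE = "[INST] {instruction} [/INST] {response}"


def duplicate_example(example: dict) -> list[str]:
    """Return the same example rendered in both prompt layouts."""
    return [
        ALPACA_TEMPLATE.format(
            instruction=example["question"], response=example["answer"]
        ),
        LLAMA2_CHAT_TEMPLATE.format(
            instruction=example["question"], response=example["answer"]
        ),
    ]


# Hypothetical curated pair, for illustration only.
pair = {
    "question": "How do I exit vim?",
    "answer": "Press Esc, then type :q! and hit Enter.",
}
alpaca_text, llama2_text = duplicate_example(pair)
```

Both rendered strings carry the identical question/answer content, so a model trained on the dataset sees each example under both layouts.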