GenAIDevTOProd committed 4522d1b (verified; parent: 6dc9fe0)

Update README.md

Files changed (1): README.md (+41, -18)
---
license: cc-by-sa-3.0
pretty_name: dolly 15k enriched
language:
- en
tags:
- databricks
- dolly
- NLP
- semantic
- llm-evaluation
- fine-tuning
- text-statistics
- sentence-transformers
- embedding
- faiss-compatible
---
## Databricks Dolly 15k – Enriched Variant (Instruction-Tuned with Semantic and Complexity Augmentation)

## Overview

This dataset is a semantically enriched and complexity-aware extension of the original Databricks Dolly 15k, purpose-built for evaluating and training instruction-following models. Each sample is augmented with additional signals to enable more nuanced filtering, curriculum learning, and benchmark development across diverse NLP tasks.

## Dataset Format

Each sample includes the following fields:

instruction (str) – the prompt or task instruction
 
instruction_tokens (int) – number of tokens in the instruction

response_tokens (int) – number of tokens in the response
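
The field list above can be sanity-checked programmatically when loading the data. A minimal sketch covering only the fields visible in this excerpt; the example record and its values are hypothetical:

```python
from typing import Any, Dict

# Fields documented above (only those visible in this excerpt; the full
# dataset carries more, e.g. the response text and category labels).
SCHEMA = {
    "instruction": str,
    "instruction_tokens": int,
    "response_tokens": int,
}

def validate_sample(sample: Dict[str, Any]) -> bool:
    """True when a record carries the documented fields with the right types."""
    return all(
        field in sample and isinstance(sample[field], expected)
        for field, expected in SCHEMA.items()
    )

# Hypothetical record for illustration:
example = {
    "instruction": "Summarize the following paragraph.",
    "instruction_tokens": 6,
    "response_tokens": 42,
}
print(validate_sample(example))  # True
```
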
## Enrichment Details

### 1. Semantic Embeddings (384-D)

Model: all-MiniLM-L6-v2 from SentenceTransformers

Purpose:
 
Facilitates clustering or curriculum grouping based on semantic distance.

Use Case: RAG pipelines, hybrid retriever-generator evaluation, semantic data deduplication.
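
The deduplication use case reduces to a cosine-similarity threshold over the 384-D vectors. A minimal sketch; the actual SentenceTransformer call is left in comments because it downloads model weights, and the 0.95 threshold is an illustrative assumption:

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_near_duplicate(a: np.ndarray, b: np.ndarray, threshold: float = 0.95) -> bool:
    """Flag a pair as near-duplicates when their embeddings are highly aligned."""
    return cosine_sim(a, b) >= threshold

# Real embeddings would come from (requires `pip install sentence-transformers`):
#   from sentence_transformers import SentenceTransformer
#   model = SentenceTransformer("all-MiniLM-L6-v2")
#   embs = model.encode(["some instruction", "another instruction"])  # shape (2, 384)

# Toy 384-D vectors standing in for real embeddings:
rng = np.random.default_rng(0)
a = rng.standard_normal(384)
b = a + 0.01 * rng.standard_normal(384)  # tiny perturbation -> near-duplicate
print(is_near_duplicate(a, b))  # True
```
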
 
### 2. Multi-Label Category Enrichment

Method: LLM-based enrichment of the original category into multiple labels reflecting nuanced intent (e.g., closed_qa, classification, instruction_reformulation).

Purpose:
 
Enables few-shot sampling or balanced evaluation subsets.

Use Case: Model generalization studies, task disambiguation training, LLM taxonomy alignment.
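
Balanced sampling over the enriched labels can be sketched as below; the `labels` field name is an assumption, since this excerpt does not name the enriched column:

```python
import random
from collections import defaultdict

def balanced_subset(samples, per_label, seed=0):
    """Draw up to `per_label` records for each enriched label (multi-label aware)."""
    by_label = defaultdict(list)
    for s in samples:
        for label in s["labels"]:  # "labels" is an assumed field name
            by_label[label].append(s)
    rng = random.Random(seed)
    subset = []
    for _, group in sorted(by_label.items()):
        subset.extend(rng.sample(group, min(per_label, len(group))))
    return subset

data = [
    {"labels": ["closed_qa"]},
    {"labels": ["classification", "closed_qa"]},
    {"labels": ["instruction_reformulation"]},
]
print(len(balanced_subset(data, per_label=1)))  # 3 (one per distinct label)
```
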
 
### 3. Readability Scores

Metric: Flesch Reading Ease

Range: Typically from -10 (very complex/short text) to 100+ (very easy to read)
 
Measures linguistic complexity for curriculum learning.

Enables filtering of prompts based on difficulty level.
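
Difficulty-based filtering on the precomputed scores can be sketched as follows; the `flesch_reading_ease` field name is an assumption (adjust to the dataset's actual column name):

```python
def filter_by_readability(samples, min_score=None, max_score=None):
    """Keep samples whose Flesch Reading Ease falls within [min_score, max_score]."""
    keep = []
    for s in samples:
        score = s["flesch_reading_ease"]  # assumed field name
        if min_score is not None and score < min_score:
            continue
        if max_score is not None and score > max_score:
            continue
        keep.append(s)
    return keep

# Scores could also be recomputed with e.g. textstat.flesch_reading_ease(text)
# (`pip install textstat`); the dataset ships them precomputed.
data = [{"flesch_reading_ease": 72.3}, {"flesch_reading_ease": -4.1}]
print(len(filter_by_readability(data, min_score=0)))  # 1 (keeps the easy sample)
```
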
 
### 4. Token Lengths (Instruction/Response)

Method: tiktoken tokenizer for gpt-3.5-turbo vocabulary

Purpose:
 
Enables outlier detection for unusually long or short samples.

Use Case: Model length conditioning, latency profiling, instruction-tuning length analysis.
 
## Research Use Cases

Curriculum Learning: Use readability and token length to gradually train models from simple to complex examples.

Semantic Similarity Evaluation: Leverage embeddings for nearest-neighbor search, duplicate detection, or hybrid retriever training.
 
Task-Type Robustness: Train and evaluate models across enriched multi-label categories.

Prompt Engineering Validation: Analyze the impact of prompt complexity (via readability/tokens) on response quality.
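
The curriculum-learning recipe above (easy-to-hard ordering by readability and length) can be sketched as below; the `flesch_reading_ease` field name is an assumption, while `instruction_tokens` is documented:

```python
def curriculum_order(samples):
    """Order samples easy -> hard: higher readability first, shorter instructions first."""
    return sorted(
        samples,
        key=lambda s: (-s["flesch_reading_ease"], s["instruction_tokens"]),
    )

data = [
    {"flesch_reading_ease": 10.0, "instruction_tokens": 50},
    {"flesch_reading_ease": 80.0, "instruction_tokens": 8},
    {"flesch_reading_ease": 80.0, "instruction_tokens": 20},
]
ordered = curriculum_order(data)
print([s["instruction_tokens"] for s in ordered])  # [8, 20, 50]
```
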
 
## Citation (Original)

```bibtex
@online{DatabricksBlog2023DollyV2,
  author  = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
  title   = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
  year    = {2023},
  url     = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
  urldate = {2023-06-30}
}
```

## License

Same as the original Dolly 15k: Creative Commons Attribution-ShareAlike 3.0 (CC BY-SA 3.0).