---
language:
- en
---
First, I merged the instruction and context columns, because it's odd to have an instruction saying "summarize this" without the passage itself.
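A minimal sketch of that merge step, using plain dicts; the column names "instruction" and "context" are my assumption based on Dolly-style datasets, not taken from this card:

```python
rows = [
    {"instruction": "Summarize this passage.",
     "context": "The quick brown fox jumps over the lazy dog."},
    {"instruction": "What is 2+2?", "context": ""},
]

def merge_instruction_context(row):
    """Fold the context into the instruction so prompts like
    'summarize this' carry the passage they refer to."""
    if row["context"]:
        return {"instruction": row["instruction"] + "\n\n" + row["context"]}
    return {"instruction": row["instruction"]}

merged = [merge_instruction_context(r) for r in rows]
```

Rows with an empty context pass through unchanged, so only "summarize"-style pairs actually grow.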
Then I used Senku-70B Q2 GGUF to rate each example out of 10, with a custom prompt covering clarity, completeness, correctness, relevance, and formatting. Here are a few example pairs that scored below 5:
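The scoring step could look roughly like this; the judge prompt wording and the score-parsing helper are my own assumptions (the card's actual custom prompt is not shown), and the model call itself (e.g. via llama.cpp) is omitted:

```python
import re

# Hypothetical judge prompt; the real custom-made prompt is not in the card.
JUDGE_PROMPT = (
    "Rate the following instruction/response pair from 0 to 10 for clarity, "
    "completeness, correctness, relevance and formatting. "
    "Reply with a single number only.\n\n{pair}"
)

def parse_score(reply):
    """Pull the first number out of the judge's reply;
    return None if there is no usable numeric rating."""
    m = re.search(r"\d+(?:\.\d+)?", reply)
    return float(m.group()) if m else None

score = parse_score("7.5")
```

In practice `JUDGE_PROMPT.format(pair=...)` would be sent to the local model for each example and the reply fed through `parse_score`.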
Observations & Thoughts:
There are 1734 examples scoring below 6.5 and 562 scoring below 5. Around 10% of the dataset looks low quality and/or confusing.
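The threshold counts above come down to a simple filter over the judge scores; the score values here are made up for illustration:

```python
# Hypothetical per-example judge scores (the real list has ~15k entries).
scores = [8.0, 6.0, 4.5, 9.0, 5.5, 3.0]

# Count examples under each quality threshold used in the card.
below_65 = [s for s in scores if s < 6.5]
below_5 = [s for s in scores if s < 5]
low_quality_fraction = len(below_5) / len(scores)
```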
There may be room for improvement with a better meta prompt and less aggressively quantized models.
Processing the whole dataset took ~15 hours on an RTX 3090.
Senku, a fine-tune of Miqu, did a good job of returning only numeric answers in instruction mode. Ratings containing alphabetic characters were removed; there were fewer than a hundred of them, and most of the removed content was a bit confusing anyway.
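The "drop ratings with alphabetic characters" cleanup can be sketched as a small predicate; this exact check is my assumption, not the card's code:

```python
def is_numeric_rating(reply):
    """Keep only replies that are a bare number (optionally with one
    decimal point), dropping any rating with alphabetic characters."""
    return reply.strip().replace(".", "", 1).isdigit()

replies = ["8", "7.5", "Score: 8", "8/10"]
kept = [r for r in replies if is_numeric_rating(r)]
```

Only `"8"` and `"7.5"` survive this filter; anything with letters or extra punctuation is discarded along with its example.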
Leave a like if humans disappear like the dinosaurs. Also keep in mind that I actually have no idea what I am doing.
