---
license: mit
task_categories:
- text-classification
language:
- en
tags:
- llm training
- chemistry
- biology
- code
- finance
- math
pretty_name: Add Human Feedback to Your AI
size_categories:
- n<1K
---
## General Overview
This dataset is intended for LLM training. It is a small sample; visit https://www.cntxt.tech/ to learn more.
The dataset consists of 50 rows (excluding the header) and 8 columns. The columns capture various aspects of ranked responses to prompts:

- **Numeric_ID**: unique identifier (integer)
- **Prompt**: the question or task (text)
- **Answer_A / Answer_B**: the two response options (text)
- **Category**: type of task (categorical)
- **Best Answer**: preferred response (categorical)
- **Likert Score**: ranking score (numeric)
- **Comments**: reviewer comments (text)
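As a quick sanity check, the sample can be loaded and its schema inspected with pandas. This is a minimal sketch, not an official loader: the file name `cntxt_sample.csv` is a hypothetical placeholder for whatever CSV ships with the dataset, and the column names are taken from the list above.

```python
import pandas as pd

# Load the sample (file name is a placeholder; use the actual CSV from the repo)
df = pd.read_csv("cntxt_sample.csv")

# Expect 50 rows and the 8 columns listed above
print(df.shape)   # (50, 8)
print(df.dtypes)
```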
## Statistical Insights
### Category Distribution

- General Knowledge (30%)
- Instruction Following (26%)
- Safety (24%)
- Reasoning (20%)
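A one-line sketch to reproduce the breakdown above, assuming the same hypothetical file name and the `Category` column from the schema:

```python
import pandas as pd

df = pd.read_csv("cntxt_sample.csv")  # placeholder file name, as above

# Share of each task category, as a percentage
print(df["Category"].value_counts(normalize=True).mul(100).round(0))
```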
### Best Answer Trends

- "A" was chosen as the best answer 25 times (50%).
- "B" was chosen 24 times (48%).
- In one case, "A/B" was recorded, indicating both responses were acceptable.
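The same tallies can be verified directly (same hypothetical file name; the `Best Answer` column is assumed to hold the labels "A", "B", or "A/B"):

```python
import pandas as pd

df = pd.read_csv("cntxt_sample.csv")  # placeholder file name

# Raw counts and percentages for the preferred response
counts = df["Best Answer"].value_counts()
print(counts)                    # expected: A=25, B=24, A/B=1
print(counts / len(df) * 100)    # percentages
```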
### Likert Score Analysis

- Mode: 2
- Minimum: 2
- Maximum: 6

All scores fall within the 2-6 range, so there are no extreme values at either end of the scale.
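These summary statistics follow from a few pandas calls, under the same assumptions as the earlier sketches:

```python
import pandas as pd

df = pd.read_csv("cntxt_sample.csv")  # placeholder file name

scores = df["Likert Score"]
print(scores.mode().iloc[0])        # expected: 2
print(scores.min(), scores.max())   # expected: 2 6
```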
## Data Trends & Patterns

### Distribution of Likert Scores

- Most scores cluster around 2, 3, or 6, forming three distinct peaks.
- The dataset does not exhibit extreme outliers beyond the expected range.
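To see the three peaks, a frequency table of score values is enough (same assumed setup):

```python
import pandas as pd

df = pd.read_csv("cntxt_sample.csv")  # placeholder file name

# Frequency of each Likert value; peaks at 2, 3, and 6 should stand out
print(df["Likert Score"].value_counts().sort_index())
```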
### Category-Based Performance Trends

- "General Knowledge" and "Safety" questions tend to receive higher Likert scores, suggesting clearer or more objective evaluation criteria.
- "Instruction Following" shows more variability, likely due to subjective interpretation of the instructions.
### Best Answer vs. Likert Score Correlation

- There is no significant bias between "A" and "B" in terms of overall Likert scores.
- The choice of Best Answer is distributed fairly evenly between the two responses.
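Comparing mean scores by preferred response is a quick way to check for such bias (same assumed setup):

```python
import pandas as pd

df = pd.read_csv("cntxt_sample.csv")  # placeholder file name

# Similar means across the groups indicate no systematic bias toward A or B
print(df.groupby("Best Answer")["Likert Score"].mean())
```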
## Approach to Data Collection & Review

The dataset was compiled through a structured annotation process, likely involving the following steps:

1. **Prompt Selection**: Diverse questions were designed across multiple categories.
2. **Response Generation**: Two alternative answers (A & B) were provided for each prompt.
3. **Validation Process**: Annotators selected the preferred response and assigned a Likert score.
4. **Quality Assurance**: A second review pass checked the data for errors (a simple validation sketch follows below).
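A minimal validation sketch mirroring that QA step, assuming the placeholder file name and the column names from the schema; the allowed score range (2-6) is taken from the summary above:

```python
import pandas as pd

df = pd.read_csv("cntxt_sample.csv")  # placeholder file name

# Basic integrity checks over the fields described in the schema
assert df["Numeric_ID"].is_unique, "IDs must be unique"
assert df["Best Answer"].isin({"A", "B", "A/B"}).all(), "unexpected Best Answer label"
assert df["Likert Score"].between(2, 6).all(), "Likert score outside observed 2-6 range"
assert df["Prompt"].notna().all(), "missing prompt text"
print("All checks passed.")
```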
## Final Summary & Conclusion

- The dataset is well structured and provides a clear ranking system.
- Best Answer selections are nearly balanced between the two responses, suggesting an unbiased review process.
- Likert scores span the full 2-6 range without major outliers or anomalies.