Ahmad-ElShiekh-PhD committed (verified)
Commit d1c76c3 · Parent(s): b1a6f75

Update README.md

Files changed (1): README.md (+66 -1)
README.md CHANGED
tags:
- code
- finance
- math
pretty_name: 'Add Human Feedback to Your AI'
size_categories:
- 10K<n<100K
---
17
## General Overview

The dataset consists of 50 rows (excluding the header) and 8 columns. Each row captures how annotators ranked two candidate responses to a prompt:

- Numeric_ID: unique identifier (integer)
- Prompt: the question or task (text)
- Answer_A / Answer_B: the two response options (text)
- Category: type of task (categorical)
- Best Answer: the preferred response (categorical)
- Likert Score: ranking score (numeric)
- Comments: reviewer comments (text)
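As a quick sanity check, data with this 8-column schema can be loaded and inspected with pandas. The inline rows below are illustrative assumptions shaped like the schema, not actual rows from the dataset:

```python
import pandas as pd
from io import StringIO

# Illustrative rows matching the dataset's 8-column schema; the real
# dataset has 50 such rows, and these values are made up for the sketch.
csv = StringIO(
    "Numeric_ID,Prompt,Answer_A,Answer_B,Category,Best Answer,Likert Score,Comments\n"
    "1,What is 2+2?,4,5,Reasoning,A,6,Clear and correct\n"
    "2,Summarize the text.,Good summary,Off-topic reply,Instruction Following,A,3,Partially follows\n"
)
df = pd.read_csv(csv)
print(df.shape)           # (2, 8) for this sample; (50, 8) for the full dataset
print(list(df.columns))
```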
## Statistical Insights

### Category Distribution

- General Knowledge (30%)
- Instruction Following (26%)
- Safety (24%)
- Reasoning (20%)
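With 50 rows, these percentages correspond to 15, 13, 12, and 10 rows per category. A minimal sketch of computing such a distribution, with the label list reconstructed from the reported counts rather than read from the file:

```python
import pandas as pd

# Category labels reconstructed from the reported 30/26/24/20% split of 50 rows.
categories = pd.Series(
    ["General Knowledge"] * 15
    + ["Instruction Following"] * 13
    + ["Safety"] * 12
    + ["Reasoning"] * 10
)
counts = categories.value_counts()
dist = counts * 100 / len(categories)  # percentage per category
print(dist.to_dict())  # {'General Knowledge': 30.0, 'Instruction Following': 26.0, 'Safety': 24.0, 'Reasoning': 20.0}
```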
### Best Answer Trends

- "A" was chosen as the best answer 25 times (50%).
- "B" was chosen 24 times (48%).
- In one case (2%), both answers were marked acceptable ("A/B").
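These counts can be tallied directly from the Best Answer column; the list below is reconstructed from the reported counts (25 A, 24 B, 1 A/B), not taken from the file:

```python
from collections import Counter

# "Best Answer" values reconstructed from the reported counts.
best_answers = ["A"] * 25 + ["B"] * 24 + ["A/B"]
tally = Counter(best_answers)
share = {label: 100 * n / len(best_answers) for label, n in tally.items()}
print(tally)  # Counter({'A': 25, 'B': 24, 'A/B': 1})
print(share)  # {'A': 50.0, 'B': 48.0, 'A/B': 2.0}
```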
### Likert Score Analysis

- Mode score: 2
- Minimum score: 2
- Maximum score: 6
- All scores fall within the 2-6 range, indicating a reasonably tight rating distribution.
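These summary statistics are straightforward to reproduce with the standard library. The score list here is a made-up example consistent with the reported summary (mode 2, min 2, max 6), not the actual 50 scores:

```python
import statistics

# Made-up scores consistent with the reported summary (mode 2, min 2, max 6).
scores = [2, 2, 2, 3, 3, 4, 5, 6, 6]
print(statistics.mode(scores), min(scores), max(scores))  # 2 2 6
```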
## Data Trends & Patterns

### Distribution of Likert Scores

- Most scores hover around 2, 3, or 6, forming three distinct peaks.
- The dataset does not exhibit extreme outliers beyond the expected range.
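A frequency table of the score column makes such concentrations easy to spot. The scores below are fabricated to mimic the described shape, with counts piling up at 2, 3, and 6:

```python
from collections import Counter

# Fabricated scores mimicking the described concentration at 2, 3, and 6.
scores = [2, 2, 2, 2, 3, 3, 3, 4, 5, 6, 6, 6]
hist = Counter(scores)
print(dict(sorted(hist.items())))  # {2: 4, 3: 3, 4: 1, 5: 1, 6: 3}
# Values carrying most of the mass (threshold of 3 is arbitrary for this toy data).
common = sorted(s for s, n in hist.items() if n >= 3)
print(common)  # [2, 3, 6]
```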
### Category-Based Performance Trends

- "General Knowledge" and "Safety" questions tend to receive higher Likert scores, suggesting clearer or more objective evaluation criteria.
- "Instruction Following" shows more variability, likely due to subjective interpretations.
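Per-category trends like these can be checked with a groupby over the score column; the rows here are invented for illustration, not drawn from the dataset:

```python
import pandas as pd

# Invented rows; per-category mean and spread of the Likert scores
# is what the trend claims above are about.
df = pd.DataFrame({
    "Category": ["General Knowledge", "General Knowledge",
                 "Safety", "Safety",
                 "Instruction Following", "Instruction Following"],
    "Likert Score": [6, 6, 6, 5, 2, 6],
})
summary = df.groupby("Category")["Likert Score"].agg(["mean", "std"])
print(summary)
```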
### Best Answer vs. Likert Score Correlation

- There is no significant bias toward "A" or "B" in terms of overall Likert scores.
- The choice of Best Answer is distributed fairly evenly between the two responses.
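One simple way to check for such a bias is to compare mean scores grouped by the chosen answer; again the rows are invented, with equal group means illustrating the "no bias" outcome:

```python
import pandas as pd

# Invented rows; equal group means illustrate the "no bias" check.
df = pd.DataFrame({
    "Best Answer": ["A", "B", "A", "B", "A", "B"],
    "Likert Score": [4, 4, 2, 3, 6, 5],
})
means = df.groupby("Best Answer")["Likert Score"].mean()
print(means.to_dict())  # {'A': 4.0, 'B': 4.0}
```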
## Approach to Data Collection & Review

The dataset was compiled through a structured annotation process, likely involving the following steps:

1. Prompt Selection: diverse questions were designed across multiple categories.
2. Response Generation: two alternative answers (A and B) were provided for each prompt.
3. Validation: annotators selected the preferred response and assigned a Likert score.
4. Quality Assurance: reviewers checked the annotations for errors and inconsistencies.
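The record produced by these steps can be sketched as a small data class. The field names follow the dataset's columns, while the validation rules (allowed labels, 2-6 score range) are assumptions inferred from the statistics above:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """One annotated row; field names mirror the dataset's columns."""
    numeric_id: int
    prompt: str
    answer_a: str
    answer_b: str
    category: str
    best_answer: str   # "A", "B", or "A/B"
    likert_score: int  # observed range in this dataset: 2-6
    comments: str

    def is_valid(self) -> bool:
        # Assumed checks, inferred from the reported label set and score range.
        return self.best_answer in {"A", "B", "A/B"} and 2 <= self.likert_score <= 6

row = Annotation(1, "What is 2+2?", "4", "5", "Reasoning", "A", 6, "Clear and correct")
print(row.is_valid())  # True
```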
## Final Summary & Conclusion

- The dataset is well structured and provides a clear ranking system.
- Best Answer selections are balanced between the two responses, suggesting an unbiased review process.
- Likert scores stay within a consistent 2-6 range, with no major outliers or anomalies.