Commit 49889c0 (verified) by NicholasOgenstad · 1 parent: b442365 · Update README.md
# Original Dataset + Tokenized Data + (Buggy + Fixed Embedding Pairs) + Difference Embeddings

## Overview

This repository contains 4 related datasets for training a transformation from buggy to fixed code embeddings.

## Datasets Included

### 1. Original Dataset
- **Description**: Legacy RunBugRun Dataset
- **Format**: Parquet file with buggy-fixed code pairs, bug labels, and language
- **Size**: 456,749 samples
- **Load with**:
```python
from datasets import load_dataset

# Load the full training split of the original dataset
dataset = load_dataset(
    "ASSERT-KTH/RunBugRun-Final",
    split="train"
)
buggy = dataset['buggy_code']
fixed = dataset['fixed_code']
```
### 2. Difference Embeddings (`diff_embeddings_chunk_XXXX.pkl`)
- **Description**: ModernBERT-large difference embeddings for buggy-fixed pairs: each row is the fixed embedding minus the buggy embedding, as a 1024-dimensional vector
- **Format**: Pickle files (23 chunks), each containing a NumPy array of difference vectors
- **Dimensions**: 456,749 × 1024 in total, split across chunk files of 20,000 rows each (the last chunk is shorter)
- **Load with**:
```python
from huggingface_hub import hf_hub_download
import pickle

repo_id = "ASSERT-KTH/RunBugRun-Final"
diff_embeddings = []

# Download and concatenate all 23 chunks in order
for chunk_num in range(23):
    file_path = hf_hub_download(
        repo_id=repo_id,
        filename=f"Embeddings_RBR/diff_embeddings/diff_embeddings_chunk_{chunk_num:04d}.pkl",
        repo_type="dataset"
    )
    with open(file_path, 'rb') as f:
        data = pickle.load(f)
    diff_embeddings.extend(data.tolist())
```
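Since each stored vector is `fixed - buggy`, a fixed embedding can be recovered by adding the difference back onto the corresponding buggy embedding. A minimal sketch with synthetic 1024-dimensional vectors (the variable names here are illustrative, not keys in the pickle files):

```python
import numpy as np

# Synthetic stand-ins for one buggy/fixed embedding pair (1024-dim,
# matching the real data's dimensionality)
rng = np.random.default_rng(0)
buggy_emb = rng.standard_normal(1024)
fixed_emb = rng.standard_normal(1024)

diff_emb = fixed_emb - buggy_emb   # what each chunk stores
recovered = buggy_emb + diff_emb   # reconstructs the fixed embedding

assert np.allclose(recovered, fixed_emb)
```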
 
### 3. Tokens (`tokenized_data/chunk_XXXX.pkl`)
- **Description**: The original dataset tokenized, as pairs of buggy and fixed code token sequences
- **Format**: Pickle files (23 chunks), each containing a list of tokenized pairs
- **Load with**:
```python
from huggingface_hub import hf_hub_download
import pickle

repo_id = "ASSERT-KTH/RunBugRun-Final"
tokenized_data = []

# Download and concatenate all 23 chunks in order
for chunk_num in range(23):
    file_path = hf_hub_download(
        repo_id=repo_id,
        filename=f"Embeddings_RBR/tokenized_data/chunk_{chunk_num:04d}.pkl",
        repo_type="dataset"
    )
    with open(file_path, 'rb') as f:
        data = pickle.load(f)
    tokenized_data.extend(data)
```
### 4. Buggy + Fixed Embeddings (`buggy_fixed_embeddings_chunk_XXXX.pkl`)
- **Description**: Buggy and fixed code embeddings (1024-dimensional), stored as aligned pairs
- **Format**: Pickle files (23 chunks), each containing a dictionary with 'buggy_embeddings' and 'fixed_embeddings' keys
- **Load with**:
```python
from huggingface_hub import hf_hub_download
import pickle

repo_id = "ASSERT-KTH/RunBugRun-Final"
buggy_list = []
fixed_list = []

# Download each chunk and split it into buggy and fixed embedding lists
for chunk_num in range(23):
    file_path = hf_hub_download(
        repo_id=repo_id,
        filename=f"Embeddings_RBR/buggy_fixed_embeddings/buggy_fixed_embeddings_chunk_{chunk_num:04d}.pkl",
        repo_type="dataset"
    )
    with open(file_path, 'rb') as f:
        data = pickle.load(f)
    buggy_list.extend(data['buggy_embeddings'].tolist())
    fixed_list.extend(data['fixed_embeddings'].tolist())
```
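The `range(23)` used by the loaders above follows from the chunking scheme: 456,749 rows split into chunks of 20,000 give 22 full chunk files plus one shorter final file. A quick sanity check of that arithmetic:

```python
import math

TOTAL_ROWS = 456_749   # samples in the dataset
CHUNK_ROWS = 20_000    # rows per full chunk file

n_chunks = math.ceil(TOTAL_ROWS / CHUNK_ROWS)          # number of chunk files
last_chunk = TOTAL_ROWS - (n_chunks - 1) * CHUNK_ROWS  # rows in the final chunk

print(n_chunks, last_chunk)  # 23 16749
```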