Modalities: Tabular, Text · Formats: parquet · Libraries: Datasets, pandas
NicholasOgenstad committed (verified) · Commit 261813a · 1 parent: 49889c0

Update README.md

Files changed (1): README.md (+7 −5)

README.md CHANGED
@@ -12,17 +12,17 @@ This repository contains 4 related datasets for training a transformation from b
 - **Load with**:
 ```python
 from datasets import load_dataset
-
 dataset = load_dataset(
     "ASSERT-KTH/RunBugRun-Final",
     split="train"
 )
 buggy = dataset['buggy_code']
 fixed = dataset['fixed_code']
+```

 ### 2. Difference Embeddings (`diff_embeddings_chunk_XXXX.pkl`)
 - **Description**: ModernBERT-large embeddings for buggy-fixed pairs. The difference is Fixed embedding - Buggy embedding. 1024 dimensional vector.
-- **Format**: Pickle file with XXXX arrays
+- **Format**: Pickle file
 - **Dimensions**: 456,749 × 1024, split among the different files, most 20000, last one shorter.
 - **Load with**:
 ```python
@@ -41,10 +41,11 @@ for chunk_num in range(23):
     with open(file_path, 'rb') as f:
         data = pickle.load(f)
     diff_embeddings.extend(data.tolist())
+```

 ### 3. Tokens (`token_embeddings.pkl`)
 - **Description**: Original Dataset tokenized, pairs of Buggy and Fixed code.
-- **Format**: Pickle file XXXX
+- **Format**: Pickle file
 - **Load with**:
 ```python
 from huggingface_hub import hf_hub_download
@@ -62,10 +63,11 @@ for chunk_num in range(23):
     with open(file_path, 'rb') as f:
         data = pickle.load(f)
     tokenized_data.extend(data)
+```

 ### 4. Buggy + Fixed Embeddings (`tokenized_data.json`)
 - **Description**: Preprocessed tokenized sequences
-- **Format**: Pickle file with dictionaries containing 'buggy_embeddings' and 'fixed_embeddings' keys
+- **Format**: Pickle file
 - **Load with**:
 ```python
 from huggingface_hub import hf_hub_download
@@ -85,4 +87,4 @@ for chunk_num in range(23):
         data = pickle.load(f)
     buggy_list.extend(data['buggy_embeddings'].tolist())
     fixed_list.extend(data['fixed_embeddings'].tolist())
-
+```
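The column access in the README's first snippet can be illustrated without network access or the `datasets` library; a minimal offline stand-in, with two toy rows invented for illustration (not real RunBugRun data):

```python
# Offline stand-in for the loaded "train" split: like a datasets.Dataset,
# indexing by column name yields the full list of values for that column.
# The two toy rows below are invented for illustration.
dataset = {
    "buggy_code": ["print('helo')", "x = 1 +"],
    "fixed_code": ["print('hello')", "x = 1 + 2"],
}
buggy = dataset["buggy_code"]   # list of buggy programs
fixed = dataset["fixed_code"]   # list of their fixed counterparts
assert len(buggy) == len(fixed)  # columns are aligned pairwise
```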
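The chunk layout stated above (456,749 × 1024 rows total, most files holding 20,000 rows, the last one shorter) can be sanity-checked with a little arithmetic; a sketch, assuming a fixed chunk size of 20,000:

```python
# Chunk layout implied by the README: 456,749 embedding rows split into
# files of 20,000 rows each, with the final file holding the remainder.
TOTAL_ROWS = 456_749
CHUNK_SIZE = 20_000

n_chunks = -(-TOTAL_ROWS // CHUNK_SIZE)                  # ceiling division
last_chunk_rows = TOTAL_ROWS - (n_chunks - 1) * CHUNK_SIZE

print(n_chunks)         # matches range(23) in the loading loop
print(last_chunk_rows)  # rows in the shorter final chunk
```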
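The chunk-loading loop above can be exercised offline; a minimal sketch that writes tiny synthetic pickle chunks to a temporary directory in place of files fetched with `hf_hub_download` (filenames and contents here are invented):

```python
import os
import pickle
import tempfile

# Synthetic stand-ins for the real chunk files: each pickle holds a list
# of tokenized examples; the loop then reassembles them in order, like the
# README's loop over files fetched with hf_hub_download.
with tempfile.TemporaryDirectory() as tmp:
    for chunk_num in range(3):
        chunk = [f"example_{chunk_num}_{i}" for i in range(4)]
        with open(os.path.join(tmp, f"chunk_{chunk_num:04d}.pkl"), "wb") as f:
            pickle.dump(chunk, f)

    tokenized_data = []
    for chunk_num in range(3):
        file_path = os.path.join(tmp, f"chunk_{chunk_num:04d}.pkl")
        with open(file_path, "rb") as f:
            data = pickle.load(f)
        tokenized_data.extend(data)

print(len(tokenized_data))  # 3 chunks x 4 examples
```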
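A sketch of the per-chunk structure described for dataset 4 (a dict with `'buggy_embeddings'` and `'fixed_embeddings'` keys), using tiny invented 4-dimensional vectors in place of the real 1024-dimensional ones; it also illustrates the fixed-minus-buggy relationship that dataset 2 stores:

```python
import io
import pickle

# One synthetic chunk in the dict format described for dataset 4.
# Real chunks hold arrays of 1024-dim vectors; 4-dim lists suffice here.
chunk = {
    "buggy_embeddings": [[0.5, 1.0, -0.25, 2.0]],
    "fixed_embeddings": [[1.0, 1.0, 0.75, 1.5]],
}
buf = io.BytesIO()
pickle.dump(chunk, buf)   # round-trip through pickle, as the chunk files do
buf.seek(0)
data = pickle.load(buf)

buggy_list = list(data["buggy_embeddings"])
fixed_list = list(data["fixed_embeddings"])

# Dataset 2's difference embeddings are fixed minus buggy, elementwise:
diff = [f - b for f, b in zip(fixed_list[0], buggy_list[0])]
print(diff)
```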