Update README.md
README.md
Jam-so is a GPT2-like model for research in fine-grained Java analysis.
- We train the model on the [training set](https://huggingface.co/datasets/apcl/so13m/blob/main/train.bin) for 1 epoch, roughly 300,000 training iterations.
- Our [GitHub repo](https://github.com/apcl-research/jam/blob/main) contains the code for re-training using the [raw data](https://huggingface.co/datasets/apcl/so13m/blob/main/so13m.pkl).
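The training-set file above is a flat stream of token IDs. Assuming a nanoGPT-style layout (a raw `uint16` array on disk — an assumption here, not stated in this README), a minimal sketch of loading such a file and sampling a next-token-prediction batch:

```python
import numpy as np

def sample_batch(path, batch_size=4, block_size=8, seed=0):
    """Sample (x, y) next-token pairs from a flat uint16 token file.

    The uint16 memmap layout is an assumption (nanoGPT convention),
    not something this README specifies.
    """
    data = np.memmap(path, dtype=np.uint16, mode="r")
    rng = np.random.default_rng(seed)
    # Random window starts; y is x shifted one token to the right.
    ix = rng.integers(0, len(data) - block_size - 1, size=batch_size)
    x = np.stack([data[i : i + block_size] for i in ix]).astype(np.int64)
    y = np.stack([data[i + 1 : i + 1 + block_size] for i in ix]).astype(np.int64)
    return x, y

# Tiny synthetic token file standing in for train.bin.
np.arange(1000, dtype=np.uint16).tofile("demo.bin")
x, y = sample_batch("demo.bin")
print(x.shape, y.shape)  # (4, 8) (4, 8)
```

Because the memmap never loads the whole file, the same pattern scales to the full multi-gigabyte `train.bin`.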
| Hyperparameter | Description | Value |
| -------------- | ----------- | ----- |
| e | embedding dimensions | 1024 |
| … | … | … |
| r | learning rate | 3e-5 |
| y | weight decay | 1e-1 |
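For illustration, the tabled values can be collected into a config object, and the weight decay applied in the decoupled, AdamW-style fashion; the field names and the AdamW detail are assumptions for this sketch, not taken from the repo:

```python
from dataclasses import dataclass

@dataclass
class TrainConfig:
    # Values from the hyperparameter table above (field names assumed).
    n_embd: int = 1024           # embedding dimensions
    learning_rate: float = 3e-5
    weight_decay: float = 1e-1

cfg = TrainConfig()

# Decoupled weight decay (AdamW-style, an assumption): each step shrinks
# a weight w by lr * wd * w, independently of the gradient term.
w = 1.0
w -= cfg.learning_rate * cfg.weight_decay * w
print(w)  # just under 1.0: shrunk by a factor of (1 - 3e-6)
```

With these values the per-step shrink factor is 1 − 3e-5 × 1e-1 = 1 − 3e-6, i.e. a very gentle regularizer over the 300,000 iterations.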
We train our models using a single NVIDIA A5000 GPU.
---
## Jam Projects