Update README.md
README.md
CHANGED
@@ -1,13 +1,44 @@
- # Jam-sojm
- ## Epochs: Two (one with each dataset, with the learning rate and decay reset in between)
- ## Iterations: ~600,000

---
license: bigscience-openrail-m
datasets:
- apcl/so13m
- apcl/jm52m
---

# Jam-sojm
Jam-sojm is a GPT-2-like model for research on fine-grained analysis of Java source code at the level of methods, statements, and variables. It is intended as a foundation for downstream tasks such as code completion, comment generation, and automated bug repair.

---

## Jam-sojm Training Details

- We trained the jam-sojm model using the training procedures from Daniel Grittner's [NanoGPT-LoRA](https://github.com/danielgrittner/nanoGPT-LoRA).
- The model is trained on two of our own datasets: [so13m](https://huggingface.co/datasets/apcl/so13m) and [jm52m](https://huggingface.co/datasets/apcl/jm52m).
- First, we train the model on the [so13m training set](https://huggingface.co/datasets/apcl/so13m/blob/main/train.bin) for 1 epoch, roughly 300,000 training iterations.
- We then reset the learning rate and weight decay and train for 1 more epoch on the [jm52m training set](https://huggingface.co/datasets/apcl/jm52m/blob/main/train.bin), roughly 300,000 more training iterations, for a total of ~600,000 iterations (see the sketch after this list).
- Our [GitHub repo](https://github.com/apcl-research/jam/blob/main) contains the code for re-training the model from the [raw data](https://huggingface.co/datasets/apcl/so13m/blob/main/so13m.pkl).

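To make the reset-and-continue step concrete, here is a minimal sketch of the two phases written as nanoGPT-style config values (shown back to back, as if they were two config files). The variable names follow nanoGPT's `train.py` conventions; the dataset directory names, `out_dir`, and the exact iteration counts are illustrative assumptions, and the authoritative configs are in our GitHub repo.

```python
# Sketch of the two-phase schedule as nanoGPT-style config values.
# Paths and directory names below are illustrative assumptions.

# Phase 1: train from scratch on so13m for ~1 epoch (~300k iterations).
init_from = 'scratch'
dataset = 'so13m'            # expects data/so13m/train.bin
out_dir = 'out-jam-sojm'     # assumed checkpoint directory
max_iters = 300_000
learning_rate = 3e-5
weight_decay = 1e-1

# Phase 2: resume the phase-1 weights, switch to jm52m, and reset the
# learning rate and weight decay before training ~300k more iterations.
init_from = 'resume'         # load model weights from out_dir
dataset = 'jm52m'            # expects data/jm52m/train.bin
max_iters = 600_000          # total iteration budget across both phases
learning_rate = 3e-5         # reset, not carried over from the decayed phase-1 value
weight_decay = 1e-1
```
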
| Hyperparameter | Description | Value |
| -------------- | ----------- | ----- |
| e | embedding dimensions | 1024 |
| L | number of layers | 24 |
| h | attention heads | 16 |
| c | block size / context length | 256 |
| b | batch size | 4 |
| a | accumulation steps | 32 |
| d | dropout | 0.20 |
| r | learning rate | 3e-5 |
| y | weight decay | 1e-1 |

We train our models using a single NVIDIA A5000 GPU.
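For readers re-training with the nanoGPT-based code, the single-letter hyperparameters in the table above map roughly onto the nanoGPT configuration names shown below. This is a sketch for orientation only; the `model_args`/`optim_args` grouping is ours, and the authoritative values are in the configs in our GitHub repo.

```python
# Rough mapping of the table above onto nanoGPT-style configuration names.
# The dict grouping is for readability only; nanoGPT reads flat assignments.
model_args = dict(
    n_embd=1024,     # e: embedding dimensions
    n_layer=24,      # L: number of transformer layers
    n_head=16,       # h: attention heads
    block_size=256,  # c: context length in tokens
    dropout=0.20,    # d: dropout
)
optim_args = dict(
    batch_size=4,                    # b: micro-batch size per step
    gradient_accumulation_steps=32,  # a: effective batch of 4 * 32 = 128 sequences
    learning_rate=3e-5,              # r: peak learning rate
    weight_decay=1e-1,               # y: AdamW weight decay
)
```
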

---

## Jam Projects

Current projects using the jam_sojm pre-trained model can be found at our GitHub repository:
https://github.com/apcl-research/jam
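
As a starting point, a checkpoint trained with nanoGPT-style code can typically be loaded as sketched below. This is a minimal sketch, assuming the released checkpoint follows nanoGPT's `ckpt.pt` format and that `model.py` (providing `GPT` and `GPTConfig`) from the jam repo is on your path; the actual file names in this repository may differ.

```python
# Minimal sketch for loading a nanoGPT-style checkpoint of jam-sojm.
# The checkpoint file name and keys are assumptions based on nanoGPT's conventions.
import torch
from model import GPT, GPTConfig  # model.py from the jam / nanoGPT codebase

checkpoint = torch.load('ckpt.pt', map_location='cpu')
gptconf = GPTConfig(**checkpoint['model_args'])  # e.g. n_layer=24, n_head=16, n_embd=1024
model = GPT(gptconf)

state_dict = checkpoint['model']
# Checkpoints saved from a torch.compile()'d model carry an '_orig_mod.' prefix.
prefix = '_orig_mod.'
for k in list(state_dict.keys()):
    if k.startswith(prefix):
        state_dict[k[len(prefix):]] = state_dict.pop(k)

model.load_state_dict(state_dict)
model.eval()
```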