aakashba committed on
Commit daa54e8 · 1 Parent(s): 6454c58

Update README.md

Files changed (1)
  1. README.md +40 -5
README.md CHANGED
@@ -3,8 +3,43 @@ license: bigscience-openrail-m
  datasets:
  - apcl/jm52m
  ---
- # Jam
- Jam-so is a GPT2-like model for research in fine-grained Java analysis. It is intended for fine-grained analysis of Java source code at the level of methods, statements, and variables, as a foundation for downstream tasks like code completion, comment generation, and automated bug repair.
- ## Dataset: [jm52m dataset](https://huggingface.co/datasets/apcl/jm52m)
- ## Epochs: One
- ## Iterations : ~300,000
+ # Jam
+
+ Jam is a GPT-2-like model for research in fine-grained analysis of Java source code. It operates at the level of methods, statements, and variables, and is intended as a foundation for downstream tasks like code completion, comment generation, and automated bug repair.
+
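Because Jam is trained with a nanoGPT fork rather than the `transformers` GPT-2 implementation, a checkpoint would be loaded directly with `torch`. A minimal sketch, assuming a nanoGPT-style checkpoint; both the repo id `apcl/jam` and the filename `ckpt.pt` are assumptions here, so check this repository's file listing for the actual names:

```python
# Minimal loading sketch, assuming a nanoGPT-style checkpoint.
# NOTE: repo_id and filename are hypothetical -- verify against the
# actual file listing of this model repository.
from huggingface_hub import hf_hub_download
import torch

ckpt_path = hf_hub_download(repo_id="apcl/jam", filename="ckpt.pt")
checkpoint = torch.load(ckpt_path, map_location="cpu")

# nanoGPT checkpoints keep the model config under "model_args"
# and the weights under "model".
print(checkpoint["model_args"])
```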
+ ---
+ ## Jam Training Details
+
+ - We trained the Jam model using the training procedures from Daniel Grittner's [NanoGPT-LoRA](https://github.com/danielgrittner/nanoGPT-LoRA).
+
+ - The training data comes from our own [jm52m dataset](https://huggingface.co/datasets/apcl/jm52m), which consists of the processed source code of 52 million Java methods.
+
+ - We train the model on its [training set](https://huggingface.co/datasets/apcl/jm52m/blob/main/train.bin) for one epoch, roughly 300,000 training iterations, with the hyperparameters below.
+
+ | Hyperparameter | Description                 | Value |
+ | -------------- | --------------------------- | ----- |
+ | e              | embedding dimensions        | 1024  |
+ | L              | number of layers            | 24    |
+ | h              | attention heads             | 16    |
+ | c              | block size / context length | 256   |
+ | b              | batch size                  | 4     |
+ | a              | accumulation steps          | 32    |
+ | d              | dropout                     | 0.20  |
+ | r              | learning rate               | 3e-5  |
+ | y              | weight decay                | 1e-1  |
+
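For readers reproducing the setup, the table maps onto nanoGPT-style configuration variables roughly as follows. This is a sketch of the mapping; the names follow nanoGPT's `train.py` conventions and are our reading of the table, not a copy of the actual training config:

```python
# Hyperparameters from the table above, expressed as nanoGPT-style
# config variables. A sketch of the mapping, not the training script.
n_embd = 1024                      # e: embedding dimensions
n_layer = 24                       # L: number of layers
n_head = 16                        # h: attention heads
block_size = 256                   # c: block size / context length
batch_size = 4                     # b: batch size
gradient_accumulation_steps = 32   # a: accumulation steps
dropout = 0.20                     # d: dropout
learning_rate = 3e-5               # r: learning rate
weight_decay = 1e-1                # y: weight decay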
+ We train our models using a single NVIDIA A5000 GPU. Our [GitHub repo](https://github.com/apcl-research/jam/blob/main) contains the code for re-training from the [raw data](https://huggingface.co/datasets/apcl/jm52m/blob/main/fundats-j1.pkl), which can be fetched as sketched below.
+
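For example, the raw data file can be downloaded programmatically with `huggingface_hub` (a sketch; the re-training code itself lives in the GitHub repo above):

```python
# Fetch the raw data file referenced above from the jm52m dataset repo.
from huggingface_hub import hf_hub_download

raw_path = hf_hub_download(
    repo_id="apcl/jm52m",
    filename="fundats-j1.pkl",
    repo_type="dataset",  # jm52m is a dataset repo, not a model repo
)
print(raw_path)
```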
+ ---
+ ## Jam Projects
+
+ Current projects using the Jam pre-trained model can be found at our GitHub repository:
+
+ https://github.com/apcl-research/jam