aakashba committed
Commit 6454c58
1 Parent(s): 2d14e45

Update README.md

Files changed (1)
  1. README.md +9 -2
README.md CHANGED
@@ -1,3 +1,10 @@
+---
+license: bigscience-openrail-m
+datasets:
+- apcl/jm52m
+---
 # Jam
-
-Jam is a GPT2-like model for research in fine-grained Java analysis. It is intended for fine-grained analysis of Java source code at the level of methods, statements, and variables, as a foundation for downstream tasks like code completion, comment generation, and automated bug repair. This model is trained on jm52m only and trained for one epoch, which is ~300,000 iterations.
+Jam-so is a GPT2-like model for research in fine-grained Java analysis. It is intended for fine-grained analysis of Java source code at the level of methods, statements, and variables, as a foundation for downstream tasks like code completion, comment generation, and automated bug repair.
+## Dataset: [jm52m dataset](https://huggingface.co/datasets/apcl/jm52m)
+## Epochs: One
+## Iterations: ~300,000