---
datasets:
- huawei-noah/python_text2code
tags:
- python
- code
---

# Model Card for pangu-CodeCLM-300m

- **Repository:** https://github.com/huawei-noah/noah-research/tree/master/NLP/text2code_mrpt
- **Paper:** https://aclanthology.org/2024.eacl-long.72.pdf

## Model Description

This model is a PanGu-Alpha model further trained on text-to-code pairs collected from public GitHub repositories.
Training was performed with the CodeCLM objective, i.e. causal language modeling in which the loss is computed only over the code tokens.
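
As a minimal sketch of what "loss only over code tokens" means in practice (an illustration, not the training code from the repository), the natural-language prefix of each text-code pair can be excluded from the causal-LM loss by setting its label positions to the ignore index of the cross-entropy loss:

```python
import torch


def codeclm_labels(input_ids: torch.Tensor, text_len: int, ignore_index: int = -100) -> torch.Tensor:
    """Build causal-LM labels that score only the code tokens.

    `input_ids` holds a concatenated [text ; code] sequence and `text_len` is the
    number of leading natural-language tokens. Those positions are masked with
    `ignore_index` (the value ignored by PyTorch's cross-entropy), so the loss is
    computed over the code continuation only.
    """
    labels = input_ids.clone()
    labels[:text_len] = ignore_index  # no loss contribution from the text prompt
    return labels
```

With Hugging Face causal-LM models, labels built this way can be passed directly as `labels=`, since the shift for next-token prediction happens inside the model.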

To use the model, first download it from the Hugging Face Hub and have a look at the [evaluation section](https://github.com/huawei-noah/noah-research/blob/master/NLP/text2code_mrpt/README.md#evaluation) of the repository.
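
For loading, a minimal sketch with the `transformers` library is given below. The model ID and the `trust_remote_code` flag are assumptions (PanGu-Alpha is not a built-in `transformers` architecture), and the prompt format may differ from the one used during training, so the repository's evaluation scripts remain the authoritative reference:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint name; adjust if the model is hosted under a different ID.
model_id = "huawei-noah/pangu-CodeCLM-300m"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Hypothetical prompt; the actual text-to-code prompt format is defined in the repository.
prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```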

## Citation

**BibTeX:**

```bibtex
@inproceedings{christopoulou-etal-2024-text,
    title = "Text-to-Code Generation with Modality-relative Pre-training",
    author = "Christopoulou, Fenia  and
      Zhang, Guchun  and
      Lampouras, Gerasimos",
    editor = "Graham, Yvette  and
      Purver, Matthew",
    booktitle = "Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = mar,
    year = "2024",
    address = "St. Julian{'}s, Malta",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.eacl-long.72",
    pages = "1194--1208",
    abstract = "Large pre-trained language models have recently been expanded and applied to programming language tasks with great success, often through further pre-training of a strictly-natural language model{--}where training sequences typically contain both natural and (linearised) programming language. Such approaches effectively map both modalities of the sequence into the same embedding space. However, programming language keywords (e.g. {``}while{''}) often have very strictly defined semantics. As such, transfer learning from their natural language usage may not necessarily be beneficial to their code application and vise versa. Assuming an already pre-trained language model, in this work we investigate how sequence tokens can be adapted and represented differently, depending on which modality they belong to, and to the ultimate benefit of the downstream task. We experiment with separating embedding spaces between modalities during further model pre-training with modality-relative training objectives. We focus on text-to-code generation and observe consistent improvements across two backbone models and two test sets, measuring pass@$k$ and a novel incremental variation.",
}
```

## Model Card Authors

[Fenia Christopoulou](mailto:efstathia.christopoulou@huawei.com)