malteklaes committed · verified
Commit 7d20e61 · 1 Parent(s): 696b358

Update README.md

Files changed (1): README.md (+28 -10)

README.md CHANGED
@@ -1,11 +1,15 @@
 ---
 library_name: transformers
-tags: []
 ---
 
 # Model Card for Model ID
 
-<!-- Provide a quick summary of what the model is/does. -->
 
 
 
@@ -17,25 +21,41 @@ tags: []
 
 This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
 
-- **Developed by:** [More Information Needed]
 - **Funded by [optional]:** [More Information Needed]
 - **Shared by [optional]:** [More Information Needed]
 - **Model type:** [More Information Needed]
 - **Language(s) (NLP):** [More Information Needed]
-- **License:** [More Information Needed]
-- **Finetuned from model [optional]:** [More Information Needed]
 
 ### Model Sources [optional]
 
 <!-- Provide the basic links for the model. -->
 
-- **Repository:** [More Information Needed]
 - **Paper [optional]:** [More Information Needed]
 - **Demo [optional]:** [More Information Needed]
 
 ## Uses
 
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
 
 ### Direct Use
 
@@ -79,7 +99,7 @@ Use the code below to get started with the model.
 
 <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
 
-[More Information Needed]
 
 ### Training Procedure
 
@@ -196,6 +216,4 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
 
 ## Model Card Contact
 
-[More Information Needed]
-
-
 ---
 library_name: transformers
+license: apache-2.0
+datasets:
+- code_search_net
 ---
 
 # Model Card for Model ID
 
+This model identifies the programming language of a given piece of source code.
+It has not been further trained and is adopted entirely from huggingface/CodeBERTa-language-id.
+[source: https://huggingface.co/huggingface/CodeBERTa-language-id]
 
 
 
 
 This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
 
+- **Developed by:** Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt
 - **Funded by [optional]:** [More Information Needed]
 - **Shared by [optional]:** [More Information Needed]
 - **Model type:** [More Information Needed]
 - **Language(s) (NLP):** [More Information Needed]
+- **License:** apache-2.0
+- **Finetuned from model [optional]:** huggingface/CodeBERTa-language-id
+- **Base model:** huggingface/CodeBERTa-small-v1
 
 ### Model Sources [optional]
 
 <!-- Provide the basic links for the model. -->
 
+- **Repository:** https://huggingface.co/huggingface/CodeBERTa-language-id
 - **Paper [optional]:** [More Information Needed]
 - **Demo [optional]:** [More Information Needed]
 
 ## Uses
 
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+```python
+from transformers import (
+    RobertaForSequenceClassification,
+    RobertaTokenizer,
+    TextClassificationPipeline,
+)
+
+checkpoint = "malteklaes/based-CodeBERTa-language-id-llm-module"
+
+# Build a text-classification pipeline from the checkpoint
+myPipeline = TextClassificationPipeline(
+    model=RobertaForSequenceClassification.from_pretrained(checkpoint),
+    tokenizer=RobertaTokenizer.from_pretrained(checkpoint),
+)
+
+CODE_TO_IDENTIFY = "print('hello world')"
+myPipeline(CODE_TO_IDENTIFY)
+```
 
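A `TextClassificationPipeline` run with `top_k=None` returns one `{"label", "score"}` dict per class; a small helper (hypothetical name `top_language`, not part of the model card) to pull out the most likely language from such output could look like:

```python
def top_language(predictions):
    """Return (label, score) of the highest-scoring prediction.

    `predictions` is a list of {"label": ..., "score": ...} dicts,
    in the shape a TextClassificationPipeline returns with top_k=None.
    """
    best = max(predictions, key=lambda pred: pred["score"])
    return best["label"], best["score"]

# Mocked pipeline output, for illustration only:
sample = [
    {"label": "python", "score": 0.98},
    {"label": "ruby", "score": 0.01},
]
print(top_language(sample))  # ('python', 0.98)
```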
 ### Direct Use
 
 
 <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
 
+[code_search_net](https://huggingface.co/datasets/code_search_net)
 
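The CodeSearchNet corpus covers six programming languages; assuming this classifier inherits the upstream CodeBERTa-language-id label set (an assumption based on the upstream model card, not stated here), a quick sanity check on a predicted label might look like:

```python
# Label set of the CodeSearchNet corpus (and, by assumption, of the
# upstream CodeBERTa-language-id classifier).
CODESEARCHNET_LANGUAGES = {"go", "java", "javascript", "php", "python", "ruby"}

def is_known_language(label: str) -> bool:
    """True if `label` is one of the six CodeSearchNet languages."""
    return label.lower() in CODESEARCHNET_LANGUAGES

print(is_known_language("Python"))   # True
print(is_known_language("haskell"))  # False
```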
 ### Training Procedure
 
 
 ## Model Card Contact
 
+[More Information Needed]