---
license: llama2
base_model: meta-llama/CodeLlama-70b-Python-hf
tags:
- code
- code-generation
- tab-completion
- python
- llama
- finetuned
language:
- code
---

# Python Tab Completion CodeLlama 70B

## Model Description

This is a finetuned version of CodeLlama-70B optimized specifically for Python tab completion. The model predicts the next tokens in Python code, making it well suited to IDE autocomplete features and code-assistance tools.

## Intended Use

- **Primary use case**: Python code tab completion in IDEs and code editors
- **Secondary uses**:
  - Code generation
  - Code explanation
  - Python snippet completion

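For tab completion, an editor integration typically sends everything before the cursor as the prompt. A minimal sketch of that prompt construction (the `prompt_from_cursor` helper is illustrative, not part of this repository):

```python
def prompt_from_cursor(source: str, line: int, col: int) -> str:
    """Return all text before the cursor (0-indexed line/column),
    to be used as the completion prompt."""
    lines = source.splitlines(keepends=True)
    # Everything on earlier lines, plus the current line up to the cursor.
    return "".join(lines[:line]) + lines[line][:col]

code = "def add(a, b):\n    ret"
# Cursor at line 1, column 7 (right after "ret")
print(prompt_from_cursor(code, 1, 7))
```

The returned prefix is what you would pass to the model as `prompt` in the Quick Start below.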
## Usage

### Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "emissary-ai/Python-Tab-Completion-CodeLlama-70b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto"
)

# Example: complete a Python snippet
prompt = "def calculate_average(numbers):\n    "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# temperature only takes effect with sampling enabled
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
completion = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(completion)
```
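Because `generate` echoes the prompt and may continue for several lines, an IDE would usually strip the prompt and keep only the first line of new code as the tab suggestion. A sketch of that post-processing (the `extract_suggestion` helper name is hypothetical):

```python
def extract_suggestion(prompt: str, generated: str) -> str:
    """Strip the echoed prompt and keep only the first line of the
    continuation, as a single tab-completion suggestion."""
    continuation = generated[len(prompt):] if generated.startswith(prompt) else generated
    # Cut at the first newline so the suggestion stays on one line.
    return continuation.split("\n", 1)[0].rstrip()

full = "def calculate_average(numbers):\n    return sum(numbers) / len(numbers)\nprint('done')"
prompt = "def calculate_average(numbers):\n    "
print(extract_suggestion(prompt, full))  # return sum(numbers) / len(numbers)
```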