Update README.md
tags:
- gpt2
- dpo
- code
---

This model is a fine-tuned version of [Sharathhebbar24/code_gpt2_mini_model](https://huggingface.co/Sharathhebbar24/code_gpt2_mini_model), trained on the [Sharathhebbar24/Evol-Instruct-Code-80k-v1](https://huggingface.co/datasets/Sharathhebbar24/Evol-Instruct-Code-80k-v1) dataset.

## Model description
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> model_name = "Sharathhebbar24/code_gpt2"
>>> model = AutoModelForCausalLM.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
>>> def generate_text(prompt):
...     inputs = tokenizer.encode(prompt, return_tensors="pt")
...     outputs = model.generate(inputs, max_length=64, pad_token_id=tokenizer.eos_token_id)
...     generated = tokenizer.decode(outputs[0], skip_special_tokens=True)
...     return generated[:generated.rfind(".") + 1]
>>> prompt = "Can you write a Linear search program in Python"
>>> res = generate_text(prompt)
>>> res
```
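The last line of `generate_text` slices the output at the final period, which drops a trailing unfinished sentence from the truncated generation. One edge case worth knowing: when the text contains no period at all, `str.rfind` returns `-1`, so the slice `[: -1 + 1]` yields an empty string. A small standalone illustration of that trimming step:

```python
def trim_to_last_period(generated):
    # Keep everything up to and including the last "." in the string.
    # If there is no ".", rfind returns -1 and the slice [:0] is empty.
    return generated[:generated.rfind(".") + 1]

print(trim_to_last_period("First sentence. Second sent"))  # -> First sentence.
print(repr(trim_to_last_period("no period here")))         # -> ''
```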
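For reference, the task the example prompt asks about is a plain linear search. A minimal reference implementation (written by hand, not model output) might look like:

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if it is absent."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

print(linear_search([4, 2, 7, 1], 7))  # -> 2
print(linear_search([4, 2, 7, 1], 9))  # -> -1
```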