Update README.md
README.md CHANGED
@@ -28,6 +28,12 @@ Nxcode-CQ-7B-orpo is an ORPO fine-tune of Qwen/CodeQwen1.5-7B-Chat on 100k sampl
 | HumanEval | 86.0 |
 | HumanEval+ | 81.1 |
 
+We use the following simple template to generate solutions for evalplus:
+
+```python
+"Complete the following Python function:\n{prompt}"
+```
+
 [Evalplus Leaderboard](https://evalplus.github.io/leaderboard.html)
 | Models | HumanEval | HumanEval+|
 |------ | ------ | ------ |

@@ -58,7 +64,7 @@ model = AutoModelForCausalLM.from_pretrained(
 )
 tokenizer = AutoTokenizer.from_pretrained("NTQAI/Nxcode-CQ-7B-orpo")
 
-prompt = "Write a quicksort algorithm in python
+prompt = "Write a quicksort algorithm in python"
 messages = [
     {"role": "system", "content": "You are a helpful assistant."},
     {"role": "user", "content": prompt}