Commit 2330aac (parent: 6ad1c63): Update README.md

README.md (changed):

datasets:
- ajibawa-2023/Python-Code-23k-ShareGPT
language:
- en
---

**Python-Code-13B**

Large Language Models (LLMs) are good at code generation, but they sometimes make mistakes. What if they could give a detailed explanation along with the code?

That is what I have tried here. The base Llama-2 model was used for training, on around 23,000 code samples, each consisting of 2 conversations in Vicuna/ShareGPT format. Each sample pairs the code with a detailed explanation.

We have released the [data](https://huggingface.co/datasets/ajibawa-2023/Python-Code-23k-ShareGPT).
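
A record can be inspected with the `datasets` library. This is a minimal sketch, assuming the usual ShareGPT layout of a `conversations` list with `from`/`value` keys; the actual field names in the release may differ:

```python
# Minimal sketch: inspect one record of the released dataset.
# The "conversations" field with "from"/"value" keys is assumed
# from the common ShareGPT convention, not confirmed by the card.
from datasets import load_dataset

data = load_dataset("ajibawa-2023/Python-Code-23k-ShareGPT", split="train")

sample = data[0]
for turn in sample["conversations"]:
    # Print each speaker and the start of their message.
    print(f'{turn["from"]}: {turn["value"][:80]}')
```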

**Training:**

The entire dataset was trained on Azure using 4 x A100 80GB GPUs. Training for 3 epochs took 13 hours. The DeepSpeed codebase was used for training. This model is built on Meta's Llama-2.
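
As a rough illustration, a comparable run with the Hugging Face `Trainer` and DeepSpeed might look like the sketch below. The card only states Llama-2, 3 epochs, DeepSpeed, and 4 x A100 80GB; the base checkpoint, batch size, learning rate, and the `ds_config_zero3.json` path are assumptions, not the author's actual setup:

```python
# Rough sketch of a comparable fine-tuning run; hyperparameters and
# the DeepSpeed config path are assumptions, not the author's setup.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "meta-llama/Llama-2-13b-hf"  # assumed base, per the "13B" model name
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

def to_text(example):
    # Flatten one ShareGPT conversation into a single training string
    # ("from"/"value" keys assumed, as in the dataset sketch above).
    turns = [f'{t["from"].upper()}: {t["value"]}' for t in example["conversations"]]
    return {"text": "\n".join(turns)}

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=2048)

data = load_dataset("ajibawa-2023/Python-Code-23k-ShareGPT", split="train")
data = data.map(to_text)
data = data.map(tokenize, remove_columns=data.column_names)

args = TrainingArguments(
    output_dir="python-code-13b",
    num_train_epochs=3,                # stated in the card
    per_device_train_batch_size=4,     # assumption
    gradient_accumulation_steps=4,     # assumption
    learning_rate=2e-5,                # assumption
    bf16=True,
    deepspeed="ds_config_zero3.json",  # DeepSpeed ZeRO config file (assumed)
)

Trainer(
    model=model,
    args=args,
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

Launched with the `deepspeed` launcher (e.g. `deepspeed --num_gpus 4 train.py`), the `Trainer` picks up the ZeRO config automatically.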

**GPTQ, GGUF & AWQ**

GPTQ: TBA

GGUF: TBA

AWQ: TBA

**Example Prompt:**
```
This is a conversation with your helpful AI assistant. AI assistant can generate Python Code along with necessary explanation.

Context
You are a helpful AI assistant.

USER: <prompt>
ASSISTANT:
```
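
For completeness, here is one way to apply this prompt format with `transformers`. The repo id `ajibawa-2023/Python-Code-13B` and the generation settings are assumptions, not confirmed by the card:

```python
# Minimal inference sketch using the prompt format above; repo id
# and sampling parameters are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ajibawa-2023/Python-Code-13B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

prompt = (
    "This is a conversation with your helpful AI assistant. "
    "AI assistant can generate Python Code along with necessary explanation.\n\n"
    "Context\n"
    "You are a helpful AI assistant.\n\n"
    "USER: Write a function that checks whether a number is prime.\n"
    "ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs, max_new_tokens=512, temperature=0.7, do_sample=True
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
))
```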