Update README.md
README.md (changed)
---
tags:
- conversational
---

# AclevoGPT — The Best Small LLM

**Lazarus** is a next-generation small LLM based on `gpt2-medium`, distilled from LLaMA 3 3B using the [`vicgalle/alpaca-gpt4`](https://huggingface.co/datasets/vicgalle/alpaca-gpt4) dataset.

Lazarus demonstrates exceptional performance for its size, especially in questio…

## 🔍 Example Responses

**User:** What is AI?

**AclevoGPT:** Artificial Intelligence (AI) is a technology that helps humans solve complex problems and communicate with each other. It can help humans solve complex problems by analyzing and modeling data, and can help them understand complex information.

**User:** What is the Pythagorean Theorem?

**AclevoGPT:** The Pythagorean theorem is a theorem used to find the hypotenuse of a triangle.

---

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

print("CUDA Available:", torch.cuda.is_available())

model_name = "Aclevo/AclevoGPT-100M-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
```
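The snippet above only loads the model and checks for CUDA; a minimal generation sketch is shown below. The Alpaca-style prompt template is an assumption based on the `vicgalle/alpaca-gpt4` training data, not something this model card specifies — adjust it if the model expects a different format.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

def build_prompt(instruction: str) -> str:
    # Alpaca-style template -- an assumption based on the training data,
    # not something the model card specifies.
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

if __name__ == "__main__":
    model_name = "Aclevo/AclevoGPT-100M-Instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    inputs = tokenizer(build_prompt("What is AI?"), return_tensors="pt")
    with torch.no_grad():
        output_ids = model.generate(
            **inputs,
            max_new_tokens=100,
            do_sample=True,
            temperature=0.7,
            pad_token_id=tokenizer.eos_token_id,  # GPT-2 models ship no pad token
        )
    # Strip the prompt tokens and print only the generated reply
    reply = tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[1]:],
        skip_special_tokens=True,
    )
    print(reply)
```

Sampling parameters (`temperature`, `max_new_tokens`) are illustrative starting points, not tuned values.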
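The card says Lazarus was distilled from LLaMA 3 3B but does not state which distillation objective was used. As background only, the classic soft-label formulation (Hinton-style knowledge distillation) can be sketched as follows; this is a generic reference, not a claim about how Lazarus was actually trained:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label KD loss: KL divergence between temperature-softened
    teacher and student distributions, scaled by T^2 to keep gradient
    magnitudes comparable across temperatures."""
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t * t)
```

When the student matches the teacher exactly, the loss is zero; otherwise it penalizes divergence from the teacher's full output distribution rather than just the hard labels.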