---
license: mit
---
# Lazarus Instruct

Lazarus Instruct is an instruction-tuned model fine-tuned on WizardLMTeam/WizardLM_evol_instruct_V2_196k.

- Small enough to run on a phone
- 124 million parameters (you can verify this with the snippet below)
- Comparable performance to TinyLlama-Chat
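
A quick way to confirm the parameter count locally, using the standard transformers API (a minimal sketch; only the repo name above is assumed):

```python
from transformers import AutoModelForCausalLM

# Load the model and count its parameters; expect roughly 124M.
model = AutoModelForCausalLM.from_pretrained("Aclevo/Lazarus-Instruct")
n_params = sum(p.numel() for p in model.parameters())
print(f"Parameters: {n_params / 1e6:.0f}M")
```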

We ran zero-shot tests comparing Lazarus Instruct with the much larger TinyLlama-Chat:
![Zero-shot Comparison](https://huggingface.co/Aclevo/Lazarus-Instruct/resolve/main/benchmark.png)
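
The exact evaluation suite behind this plot isn't listed here. If you want to reproduce zero-shot numbers yourself, EleutherAI's lm-evaluation-harness is a common choice; recent versions expose a `simple_evaluate` entry point (a sketch under that assumption, and the task names below are illustrative, not the suite used for the plot):

```python
import lm_eval

# Illustrative zero-shot run; swap in whichever tasks you care about.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Aclevo/Lazarus-Instruct",
    tasks=["hellaswag", "arc_easy"],
    num_fewshot=0,
)
print(results["results"])
```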



## 🚀 Usage

You can interact with Lazarus using the script below:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

print("CUDA Available:", torch.cuda.is_available())

model_name = "Aclevo/Lazarus-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

system_prompt = (
    "Your name is Lazarus. You are an intelligent AI assistant. You help users with whatever they need. "
    "You always think before answering, and explain your reasoning out loud step by step.\n"
)

chat_history = []

def chat():
    print("Chatting with Lazarus (type 'exit' to quit)\n")

    while True:
        user_input = input("You: ")
        if user_input.lower() == "exit":
            break

        chat_history.append(f"You: {user_input}")
        # Keep only the last six turns so the prompt stays within the context window.
        recent_history = chat_history[-6:]
        full_prompt = system_prompt + "\n".join(recent_history) + "\nAI:"

        inputs = tokenizer(full_prompt, return_tensors="pt", truncation=True).to(device)

        # Known low-quality one-liners; regenerate if one comes back.
        bad_responses = {"I hope that", "I don't know", "", "I'm excited"}

        response = ""
        for _ in range(3):  # retry a few times on low-quality output
            with torch.no_grad():
                outputs = model.generate(
                    **inputs,
                    max_new_tokens=150,
                    pad_token_id=tokenizer.eos_token_id,
                    do_sample=True,
                    top_k=100,
                    top_p=0.92,
                    temperature=0.7,
                    eos_token_id=tokenizer.eos_token_id,
                )

            # Decode only the newly generated tokens, not the prompt.
            response = tokenizer.decode(
                outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
            ).strip()

            if response not in bad_responses:
                break
            print("AI: [Regenerating due to low-quality response]")

        print(f"AI: {response}")
        chat_history.append(f"AI: {response}")

if __name__ == "__main__":
    chat()
```
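
For a quick single-turn sanity check without the chat loop, the high-level transformers pipeline API also works (a minimal sketch; the example prompt and sampling settings here are illustrative):

```python
from transformers import pipeline

# One-off generation; the prompt mirrors the "You: ... / AI:" format used above.
generator = pipeline("text-generation", model="Aclevo/Lazarus-Instruct")
out = generator(
    "You: What is the capital of France?\nAI:",
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,
)
print(out[0]["generated_text"])
```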
Please consider citing us if you find this model useful.

**Aclevo is not responsible for misuse of this model.**