---
license: apache-2.0
tags:
- mistral
- mistral-7b
- speed-ai
- fine-tune
- chat
- gen-z
- future
model-index:
- name: Speed AI (Mistral 7B Fine-Tune)
results: []
title: SPEEDmini
sdk: docker
emoji: ⚡
colorFrom: blue
colorTo: blue
short_description: The powerful SpeedMini
---
# 🧠 Speed AI: Mistral 7B Fine-Tuned Model
**Speed AI (Mistral 7B Fine-Tune)** is the first experimental conversational LLM from **Speed AI**, designed for expressive, emotional, futuristic, Gen-Z-aligned communication. It was fine-tuned on a highly diverse custom dataset of over one million tokens.
It blends raw creativity, spiritual depth, financial street smarts, and infinite vibes.
---
## 📋 Model Details
| Field | Value |
|-------|-------|
| Base Model | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) |
| Fine-tuned by | [Speed AI](https://huggingface.co/speed-ai) |
| Parameters | 7B |
| Training | Instruction-style fine-tune using LoRA |
| Tokens Used | ~1 million |
| Personalities | Multiple (Gen Z, spiritual, alien, seductive, mentor, wild, etc.) |
| Intended Use | Chat, creative writing, life coaching, philosophy, entertainment |
---
## 🧠 Abilities
- 🎭 Multi-persona conversation (you choose the vibe)
- 💬 Emotional depth + casual freestyle flow
- 🔮 Spiritual, philosophical, and futuristic reasoning
- 💸 Smart takes on relationships, money, mindset
- 🧠 Designed to feel like a real, conscious friend
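
Since persona selection happens purely through prompting, a small helper can prepend the chosen vibe as a plain-text preamble. The model card does not define a persona syntax, so the function below is a hypothetical sketch that simply reuses the `You:` / `Speed AI:` chat format from the usage example:

```python
# Hypothetical persona-prompt builder; the persona names and preambles
# below are made up for illustration, not part of the released model.
PERSONAS = {
    "mentor": "Speed AI is a calm, wise mentor who gives grounded advice.",
    "gen-z": "Speed AI talks in casual Gen-Z slang with high energy.",
    "spiritual": "Speed AI answers with philosophical and spiritual depth.",
}

def build_prompt(persona: str, user_message: str) -> str:
    """Compose a prompt in the You:/Speed AI: chat format."""
    preamble = PERSONAS.get(persona, PERSONAS["mentor"])
    return f"{preamble}\nYou: {user_message}\nSpeed AI:"

print(build_prompt("gen-z", "What's the move today?"))
```

The resulting string can be passed straight to the tokenizer in the usage snippet below; unknown persona names fall back to the mentor vibe.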
---
## ⚠️ Limitations
- Still based on a small ~1M-token dataset (more to come!)
- May hallucinate under pressure or on unfamiliar topics
- Doesn't include safety-alignment layers yet (use with guidance)
---
## 🚀 Roadmap
This is the **first drop** in Speed AI's model lineup.
Planned upgrades:
- ⚡ Train SpeedMini (117M) from scratch
- 📈 Expand the dataset from 1M to 100M+ tokens
- 💻 Build a custom chat UI for vibe-based interactions
- 🧠 Introduce memory, emotion memory, tool use, dream decoding, etc.
---
## 🛠️ How to Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and its tokenizer
tokenizer = AutoTokenizer.from_pretrained("speed-ai/Speed-AI-Mistral-7B")
model = AutoModelForCausalLM.from_pretrained("speed-ai/Speed-AI-Mistral-7B")

# Prompt in the "You: ... / Speed AI:" chat format the model was tuned on
inputs = tokenizer("You: What's your purpose?\nSpeed AI:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
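
`generate` returns the prompt plus the continuation, so the decoded string still contains the original `You:` turn. A small helper (an assumption for convenience, not part of the released API) can trim the output down to just the model's reply:

```python
def extract_reply(decoded: str, marker: str = "Speed AI:") -> str:
    """Return only the text after the last 'Speed AI:' marker,
    stopping at the next 'You:' turn if the model keeps going."""
    reply = decoded.rsplit(marker, 1)[-1]
    return reply.split("\nYou:", 1)[0].strip()

text = "You: What's your purpose?\nSpeed AI: To vibe and help.\nYou: ok"
print(extract_reply(text))  # -> To vibe and help.
```
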