---
language:
- en
tags:
- coding
- ui
- chat
- math
- factual
- agent
- multimodal
base_model:
- mistralai/Mixtral-8x7B-v0.1
- OpenBuddy/openbuddy-openllama-7b-v12-bf16
- HuggingFaceH4/mistral-7b-grok
- togethercomputer/RedPajama-INCITE-7B-Chat
datasets:
- hotboxxgenn/mix-openhermes-openorca-platypus-airoboros-chatalpaca-opencode
- microsoft/rStar-Coder
- ed001/ds-coder-instruct-v1
- bigcode/starcoderdata
- bigcode/starcoder2data-extras
- codeparrot/self-instruct-starcoder
- mrtoy/mobile-ui-design
- YashJain/UI-Elements-Detection-Dataset
- tecky-tech/Tecky-UI-Elements-VLM
- Tesslate/UIGEN-T2
- FineWeb
- OpenWebMath
- UltraChat
- WizardCoderData
library_name: transformers
---

# BerryAI

**Author:** [@hotboxxgenn](https://huggingface.co/hotboxxgenn)

**Version:** 1.1

**Type:** Conversational + Coding + UI + Math + Factual Model

**Base:** Mixtral 8x7B, OpenBuddy, Mistral-Grok, RedPajama

---
|
|
|
|
|
## Overview

BerryAI is a **multi-skill LLM** designed to perform:

- **Coding:** Python, JS, React, Tailwind, multi-step reasoning
- **UI generation:** clean, responsive interfaces
- **Conversational chat:** helpful, creative, engaging tone
- **Math reasoning:** step-by-step calculations
- **Factual grounding:** reduced hallucination, improved accuracy

---
|
|
|
|
|
## Usage
|
|
|
|
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model weights from the Hub
tokenizer = AutoTokenizer.from_pretrained("hotboxxgenn/BerryAI")
model = AutoModelForCausalLM.from_pretrained("hotboxxgenn/BerryAI")

# Tokenize a single-turn prompt and generate up to 200 new tokens
prompt = "Generate a responsive React login form with Tailwind CSS."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
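
For multi-turn chat, the conversation must be flattened into a single prompt string before tokenizing. The helper below is a minimal sketch: the `### User:` / `### Assistant:` markers are an illustrative assumption, not BerryAI's confirmed template — inspect `tokenizer.chat_template` for the format the model was actually trained on.

```python
def build_chat_prompt(turns):
    """Flatten (role, message) pairs into one prompt string.

    NOTE: the "### Role:" markers are an illustrative assumption,
    not BerryAI's documented chat format; check tokenizer.chat_template
    before relying on them.
    """
    parts = [f"### {role.capitalize()}:\n{message}" for role, message in turns]
    parts.append("### Assistant:\n")  # cue the model to respond next
    return "\n\n".join(parts)

prompt = build_chat_prompt([
    ("user", "Center a div with Tailwind CSS."),
])
```

The resulting `prompt` can be passed to `tokenizer(...)` and `model.generate(...)` exactly as in the single-turn example above.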