---
language:
- en
tags:
- coding
- ui
- chat
- math
- factual
- agent
- multimodal
base_model:
- mistralai/Mixtral-8x7B-v0.1
- OpenBuddy/openbuddy-openllama-7b-v12-bf16
- HuggingFaceH4/mistral-7b-grok
- togethercomputer/RedPajama-INCITE-7B-Chat
datasets:
- hotboxxgenn/mix-openhermes-openorca-platypus-airoboros-chatalpaca-opencode
- microsoft/rStar-Coder
- ed001/ds-coder-instruct-v1
- bigcode/starcoderdata
- bigcode/starcoder2data-extras
- codeparrot/self-instruct-starcoder
- mrtoy/mobile-ui-design
- YashJain/UI-Elements-Detection-Dataset
- tecky-tech/Tecky-UI-Elements-VLM
- Tesslate/UIGEN-T2
- FineWeb
- OpenWebMath
- UltraChat
- WizardCoderData
library_name: transformers
---
# ๐Ÿ“ BerryAI
**Author:** [@hotboxxgenn](https://huggingface.co/hotboxxgenn)
**Version:** 1.1
**Type:** Conversational + Coding + UI + Math + Factual Model
**Base:** Mixtral 8x7B, OpenBuddy, Mistral-Grok, RedPajama
---
## ✨ Overview
BerryAI is a **multi-skill LLM** designed for:
- 💻 **Coding** – Python, JavaScript, React, Tailwind, and multi-step reasoning
- 🎨 **UI generation** – clean, responsive interface code
- 💬 **Conversational chat** – a helpful, creative, engaging tone
- 🧮 **Math reasoning** – step-by-step calculations
- 🔍 **Factual grounding** – tuned to reduce hallucination and improve accuracy
---
## 🚀 Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("hotboxxgenn/BerryAI")
model = AutoModelForCausalLM.from_pretrained(
    "hotboxxgenn/BerryAI",
    device_map="auto",    # place weights on available GPU(s); requires accelerate
    torch_dtype="auto",   # load in the checkpoint's native precision
)

prompt = "Generate a responsive React login form with Tailwind CSS."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
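The call above uses the model's default (greedy) decoding. For longer code or UI generations, sampling behavior can be bundled into a `GenerationConfig` and passed to `model.generate`. The values below are illustrative starting points, not settings tuned or recommended for BerryAI specifically:

```python
from transformers import GenerationConfig

# Illustrative sampling configuration (assumed values, adjust per task).
gen_config = GenerationConfig(
    max_new_tokens=400,      # allow longer completions for full components
    do_sample=True,          # enable sampling instead of greedy decoding
    temperature=0.7,         # moderate creativity
    top_p=0.9,               # nucleus sampling cutoff
    repetition_penalty=1.1,  # discourage repeated boilerplate
)
```

Pass it at generation time with `model.generate(**inputs, generation_config=gen_config)`.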