# 🧠 Mineral-1B

Mineral-1B is a lightweight open language model released under the Apache License 2.0, built with Hugging Face AutoTrain for automated model training and management. The project aims to create a flexible LLM capable of natural conversation, reasoning, and code assistance.
## ⚙️ Configuration

- Base model: microsoft/phi-2
- Pipeline: text-generation
- Library: transformers
- License: Apache 2.0
- Training: AutoTrain (no dataset yet; pretrained mode)
## 🚀 Goals
- Serve as a foundation for text generation and conversational tasks
- Support English and optionally other languages
- Enable later fine-tuning with domain-specific data
## 📘 Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("prelington/Mineral-1B")
model = AutoModelForCausalLM.from_pretrained("prelington/Mineral-1B")

# Generate a short completion for a prompt
prompt = "Hello! What is Mineral-1B?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
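By default, `generate` uses greedy decoding, which can make conversational replies repetitive. A minimal sketch of enabling sampling via a `GenerationConfig` (the parameter values here are illustrative assumptions, not tuned for Mineral-1B):

```python
from transformers import GenerationConfig

# Illustrative sampling settings; values are assumptions, not tuned for Mineral-1B
gen_config = GenerationConfig(
    max_new_tokens=100,
    do_sample=True,   # sample from the distribution instead of greedy decoding
    temperature=0.7,  # soften the token distribution
    top_p=0.9,        # nucleus sampling cutoff
)
```

The config can then be passed to generation as `model.generate(**inputs, generation_config=gen_config)`.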