# MINDI 1.0 420M

MINDI 1.0 420M is a 420M-parameter coding language model focused primarily on Python, with secondary support for JavaScript. It is built for local, offline code-generation workflows.
## Capabilities
- Code generation from natural language prompts
- Code completion
- Bug-fix suggestions
- Code explanation
## Model Details
- Parameters: 423,934,848
- Architecture: Decoder-only Transformer
- Context length: 2048 tokens
- Focus languages: Python, JavaScript
## Hardware Requirements
Recommended:
- NVIDIA GPU with 8GB+ VRAM
- CUDA-enabled PyTorch
Minimum:
- CPU-only inference is supported but significantly slower
## Quick Start (GPU)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

repo_id = "YOUR_USERNAME/MINDI-1.0-420M"

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    trust_remote_code=True,
    torch_dtype=torch.float16,
).cuda()

prompt = "Write a Python function to check if a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=220,
        temperature=0.2,
        top_p=0.9,
        do_sample=True,
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```
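Note that `generate` returns the prompt tokens followed by the new tokens, so the decoded string begins with the prompt itself. If you only want the completion, you can strip the prompt prefix after decoding. A minimal sketch (the `extract_completion` helper is illustrative, and the `decoded` string below is a stand-in, not real model output):

```python
def extract_completion(decoded: str, prompt: str) -> str:
    """Return only the newly generated text from a decoded model output.

    Causal LMs echo the prompt, so the decoded string normally starts
    with the prompt verbatim.
    """
    if decoded.startswith(prompt):
        return decoded[len(prompt):].lstrip("\n")
    # Tokenizer round-tripping altered the prompt; fall back to everything.
    return decoded

# Stand-in decoded output for illustration only:
prompt = "Write a Python function to check if a string is a palindrome."
decoded = (
    prompt
    + "\ndef is_palindrome(s):\n    return s == s[::-1]\n"
)
print(extract_completion(decoded, prompt))
```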
## Limitations
- The model can still produce syntax or logic errors.
- Generated code should always be reviewed and tested.
- Not intended for safety-critical production use without validation.
## Safety
Always run tests and static checks before using generated code in production.
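As one concrete pre-merge check, generated Python can be parsed with the standard-library `ast` module before anything executes, which catches syntax errors cheaply. A minimal sketch (the `passes_static_check` helper is illustrative, not part of this repository):

```python
import ast

def passes_static_check(code: str) -> bool:
    """Return True if the generated snippet at least parses as valid Python."""
    try:
        ast.parse(code)
    except SyntaxError:
        return False
    return True

# A well-formed generated snippet passes; one with a missing colon does not.
good = "def is_palindrome(s):\n    return s == s[::-1]\n"
bad = "def is_palindrome(s)\n    return s == s[::-1]\n"

print(passes_static_check(good))  # True
print(passes_static_check(bad))   # False
```

This only guarantees the code parses; running your test suite and a linter on the output is still required before production use.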