Instructions for using Conlanger-LLM-CLEM/Nelya-neko-1b with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Conlanger-LLM-CLEM/Nelya-neko-1b with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Conlanger-LLM-CLEM/Nelya-neko-1b")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Conlanger-LLM-CLEM/Nelya-neko-1b")
model = AutoModelForCausalLM.from_pretrained("Conlanger-LLM-CLEM/Nelya-neko-1b")
```

- Notebooks
- Google Colab
- Kaggle
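One practical note on the Transformers pipeline shown above: by default, `text-generation` pipelines return the prompt concatenated with the continuation. Below is a minimal sketch of a post-processing helper for trimming the echoed prompt; the helper name is hypothetical, and the actual model call is shown commented out because the repository is gated and must be downloaded first.

```python
def strip_prompt(generated_text: str, prompt: str) -> str:
    """Remove the echoed prompt from a text-generation pipeline result.

    Transformers' "text-generation" pipeline returns the prompt followed
    by the continuation unless return_full_text=False is passed, so
    trimming it manually is a common post-processing step.
    """
    if generated_text.startswith(prompt):
        return generated_text[len(prompt):].lstrip()
    return generated_text


# Typical usage once access to the gated model has been granted (not run here):
# from transformers import pipeline
# pipe = pipeline("text-generation", model="Conlanger-LLM-CLEM/Nelya-neko-1b")
# out = pipe("Ji eta Nekolien", max_new_tokens=64)
# print(strip_prompt(out[0]["generated_text"], "Ji eta Nekolien"))
```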
- Local Apps
- vLLM
How to use Conlanger-LLM-CLEM/Nelya-neko-1b with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Conlanger-LLM-CLEM/Nelya-neko-1b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Conlanger-LLM-CLEM/Nelya-neko-1b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:

```shell
docker model run hf.co/Conlanger-LLM-CLEM/Nelya-neko-1b
```
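The curl call above can also be made from Python. Here is a minimal sketch using only the standard library; it assumes a vLLM server is already running locally on the default port 8000, and the `complete` helper name is hypothetical:

```python
import json
import urllib.request


def build_completion_request(prompt: str, max_tokens: int = 512,
                             temperature: float = 0.5) -> dict:
    """Build the same OpenAI-compatible payload as the curl example."""
    return {
        "model": "Conlanger-LLM-CLEM/Nelya-neko-1b",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }


def complete(prompt: str, base_url: str = "http://localhost:8000") -> str:
    """POST to the local vLLM server and return the first completion's text."""
    req = urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(build_completion_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["text"]


# complete("Once upon a time,")  # requires a running `vllm serve` instance
```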
- SGLang
How to use Conlanger-LLM-CLEM/Nelya-neko-1b with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Conlanger-LLM-CLEM/Nelya-neko-1b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Conlanger-LLM-CLEM/Nelya-neko-1b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Conlanger-LLM-CLEM/Nelya-neko-1b" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Conlanger-LLM-CLEM/Nelya-neko-1b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use Conlanger-LLM-CLEM/Nelya-neko-1b with Docker Model Runner:
```shell
docker model run hf.co/Conlanger-LLM-CLEM/Nelya-neko-1b
```
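The vLLM and SGLang servers shown earlier expose the same OpenAI-compatible `/v1/completions` route, so client code only needs to swap the base URL. A small illustrative helper (hypothetical, with ports taken from the serve commands in this document) that maps a backend name to its local endpoint:

```python
def completions_url(backend: str) -> str:
    """Return the /v1/completions URL for a locally served backend.

    Ports match the serve commands in this document: vLLM defaults to
    8000, and the SGLang example passes --port 30000.
    """
    ports = {"vllm": 8000, "sglang": 30000}
    if backend not in ports:
        raise ValueError(f"unknown backend: {backend!r}")
    return f"http://localhost:{ports[backend]}/v1/completions"
```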
This repository is publicly accessible, but you must accept its conditions to access the files and content.

✨🩷 To access Nelya-neko-1b, you must accept our conditions and our license, and use the model only for what it was trained to do. ✨🩷 Please fill in the form here to submit your application.
🐱 Nelya-neko-1b: Professional Documentation
📌 Model Overview
Nelya-neko-1b is a Large Language Model (LLM) engineered specifically for the Nekolien constructed language. It has 1 billion parameters and is built with an original "from scratch" philosophy that prioritizes linguistic texture and unique syntax over standard, polished conventions: it is designed to forge its own original syntax rather than follow generic or "smooth" solutions.
🚀 Key Features
- Native Nekolien Fluency: Operates using the native logic and vocabulary of Nekolien, such as "Juklok" and specialized verb structures.
- From-Scratch Originality: Developed without reliance on generic pre-existing datasets to ensure the language remains unpolished and authentic.
- Advanced Linguistic Texture: Focuses on the distinctive texture of the Nekolien conlang.
🛠 Technical Specifications
- Parameters: 1 Billion (LLM scale).
- Architecture: Optimized for Nekolien (conlang) text prediction and proprietary "from scratch" modeling.
- Dataset Strategy: Trained on highly specialized, proprietary datasets created from A to Z.
- Creator: Developed by Finisha, a developer specialized in original AI architectures and hardware.
📖 Usage & Interaction
Nelya-neko-1b does not use "polite" or generic filler. It generates dense, structured Nekolien text that integrates concepts of coding, music, and nature into a single semantic flow.

Raw Output Example:
"Ji eta Nekolien qui scriba Juklok. modellia dona nouvia ab scriba veda ma ab ta dona poa veda Nekolien veda modellia utilisallia ? poa la de codia ma donna ** scriba donna senti multia que mota veda ma dona musica veda scriba scriba codia veda scriba ma de nouvia codia poa poa scriba utilisallia ma utilisallia ** ma la nouvia ave la codia dona codia ma poa la ? poa dona eta Juklok. codia ma dona scriba mota en dona ' ma ma que utilisallia multia et scriba grandia una ? poa la maxia suella nota la arboria ma poa"
🌟 About the Developer
Nelya-neko-1b is a creation of Finisha (Clémence). Born in 2007, she is a specialist in creating Small and Large Language Models.
