Instructions to use chatitcloud/UZI1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use chatitcloud/UZI1 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="chatitcloud/UZI1")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("chatitcloud/UZI1")
model = AutoModelForCausalLM.from_pretrained("chatitcloud/UZI1")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use chatitcloud/UZI1 with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "chatitcloud/UZI1"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "chatitcloud/UZI1",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```
Use Docker
```shell
docker model run hf.co/chatitcloud/UZI1
```
- SGLang
How to use chatitcloud/UZI1 with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "chatitcloud/UZI1" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "chatitcloud/UZI1",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```
Use Docker images
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "chatitcloud/UZI1" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "chatitcloud/UZI1",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```
- Unsloth Studio
How to use chatitcloud/UZI1 with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for chatitcloud/UZI1 to start chatting
```
Install Unsloth Studio (Windows)
```shell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for chatitcloud/UZI1 to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for chatitcloud/UZI1 to start chatting
```
Load model with FastModel
```shell
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="chatitcloud/UZI1",
    max_seq_length=2048,
)
```
- Docker Model Runner
How to use chatitcloud/UZI1 with Docker Model Runner:
```shell
docker model run hf.co/chatitcloud/UZI1
```
Model Overview: chatitcloud/UZI1
chatitcloud/UZI1 is a conversational AI model fine-tuned from google/gemma-3-270m. It is trained to act as an agent that can call external tools, which lets it return more accurate and up-to-date information by integrating tool results into its responses. Because its training data included tool-usage patterns, the model learns when and how to invoke a tool, making it well suited to applications that require dynamic information retrieval and interaction with external services.
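The card does not include a concrete tool-calling example, but the agent loop it describes usually follows the sketch below: the model emits a structured tool call, the application runs the matching tool, and the result is appended to the chat history as a tool message. The `get_weather` tool, its argument names, and the JSON call format here are illustrative assumptions, not part of the model's documented interface.

```python
import json

# Hypothetical tool the agent can call (illustrative only).
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch_tool_call(raw: str) -> dict:
    """Parse a JSON tool call emitted by the model and run the matching tool.

    Returns a "tool" role message that can be appended to the chat history,
    so the model can weave the result into its next response.
    """
    call = json.loads(raw)  # e.g. {"name": "get_weather", "arguments": {"city": "Paris"}}
    result = TOOLS[call["name"]](**call["arguments"])
    return {"role": "tool", "name": call["name"], "content": result}

# Simulated model output requesting a tool call:
raw_call = '{"name": "get_weather", "arguments": {"city": "Paris"}}'
tool_msg = dispatch_tool_call(raw_call)
print(tool_msg["content"])  # Sunny in Paris
```

The exact tool-call syntax the model was trained on may differ; check the chat template shipped with the tokenizer before relying on any particular format.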
System Prompt
```python
task_msg = (
    "You are Chatit-UZI1 Model, a highly capable conversational AI model. "
    "Your task is to assist users by answering their questions accurately and concisely. "
    "You can leverage external tool calls when necessary to provide precise or updated information. "
    "Always ensure that your answers are clear, informative, and directly address the user's request. "
    "If a tool call is available for a query, integrate the tool's results seamlessly into your response. "
    "Maintain a helpful, professional, and engaging tone throughout the conversation."
)
```
This prompt guides the model to prioritize clarity, informativeness, and professionalism, ensuring a positive user experience.
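One way to wire this prompt into a chat, as a sketch: prepend it as a system turn before the user's message. Whether this model's chat template accepts a dedicated `system` role is an assumption here; a common fallback for small models is to fold the prompt into the first user turn.

```python
task_msg = (
    "You are Chatit-UZI1 Model, a highly capable conversational AI model. "
    "Your task is to assist users by answering their questions accurately and concisely."
)

def build_messages(user_query: str, system_prompt: str = task_msg) -> list:
    """Prepend the system prompt to a single-turn chat.

    If the model's chat template rejects a "system" role, merge the prompt
    into the first user message instead.
    """
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_query},
    ]

messages = build_messages("Who are you?")
# Pass `messages` to pipe(messages) or tokenizer.apply_chat_template(messages, ...)
# exactly as in the Transformers examples above.
print(messages[0]["role"])  # system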
Visit us: Chatit.cloud® | Email: loaiabdalslam@gmail.com