Instructions for using deepnight-research/SaiLy_experiment_v1 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use deepnight-research/SaiLy_experiment_v1 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="deepnight-research/SaiLy_experiment_v1")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("deepnight-research/SaiLy_experiment_v1")
model = AutoModelForCausalLM.from_pretrained("deepnight-research/SaiLy_experiment_v1")
```
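Loading the model directly gives you the tokenizer and weights but no generation loop. The sketch below shows one way to generate from the loaded model; it is a minimal example that assumes the repository's tokenizer ships a chat template, and the dtype, device placement, and `max_new_tokens` values are illustrative assumptions (a 70B model will generally need multi-GPU sharding or quantization to load at all).

```python
# Minimal generation sketch for the directly loaded model (illustrative values).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("deepnight-research/SaiLy_experiment_v1")
model = AutoModelForCausalLM.from_pretrained(
    "deepnight-research/SaiLy_experiment_v1",
    torch_dtype=torch.float16,  # assumption: half precision to fit a 70B model
    device_map="auto",          # assumption: shard across available GPUs
)

messages = [{"role": "user", "content": "Who are you?"}]
# Assumes the repo's tokenizer config includes a chat template.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)  # illustrative length
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```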
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use deepnight-research/SaiLy_experiment_v1 with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "deepnight-research/SaiLy_experiment_v1"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "deepnight-research/SaiLy_experiment_v1",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
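Since the server exposes an OpenAI-compatible API, you can also call it from Python. Below is a minimal sketch using the official `openai` client; the `base_url` and placeholder API key are assumptions that match the server settings above.

```python
# Query the vLLM server from Python via its OpenAI-compatible API.
# Sketch only: assumes the server started above is running on localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # key is unused locally

response = client.chat.completions.create(
    model="deepnight-research/SaiLy_experiment_v1",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```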
- SGLang
How to use deepnight-research/SaiLy_experiment_v1 with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "deepnight-research/SaiLy_experiment_v1" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "deepnight-research/SaiLy_experiment_v1",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
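SGLang's server is OpenAI-compatible as well, so the same request can be sent from Python. Here is a minimal sketch using `requests` against the host and port configured above (both inherited from that command, so adjust if you change them):

```python
# Same request as the curl call above, sent from Python with requests.
# Sketch only: assumes an SGLang server is listening on localhost:30000.
import requests

resp = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "deepnight-research/SaiLy_experiment_v1",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```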
Use Docker images

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "deepnight-research/SaiLy_experiment_v1" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "deepnight-research/SaiLy_experiment_v1",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
- Docker Model Runner
How to use deepnight-research/SaiLy_experiment_v1 with Docker Model Runner:
```bash
docker model run hf.co/deepnight-research/SaiLy_experiment_v1
```
Greetings, Enthusiast! If you are reading this, you are just like us. So, here's the thing... we built this. What does it do? WE DON'T KNOW. What do we know? Well, it's a 70-billion-parameter model with an 8k context length. The model can use up to 5k of context perfectly, without any precision loss; most of the precision loss and weakening of contextual relations occurs between 5k and 7k tokens. How was it made? Random things. This is an experimental model, but we didn't conduct the experiment. Our experiment conducted this experiment.

Now everything that we know about this model, you know too. Also, yes, it is uncensored; please use it responsibly.
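Given the reported behavior, it may be worth checking that prompts stay inside the loss-free window before sending them. A minimal sketch, assuming the repository's tokenizer and treating 5k tokens as a soft budget (the helper name and threshold are illustrative, not part of the model's API):

```python
# Hedged sketch: keep prompts under ~5k tokens, where the team reports no
# precision loss (degradation is reported between 5k and 7k tokens).
from transformers import AutoTokenizer

MAX_RELIABLE_TOKENS = 5000  # from the description above; not a hard model limit

tokenizer = AutoTokenizer.from_pretrained("deepnight-research/SaiLy_experiment_v1")

def fits_reliable_window(prompt: str) -> bool:
    """Return True if the prompt stays inside the reported loss-free window."""
    return len(tokenizer(prompt).input_ids) <= MAX_RELIABLE_TOKENS

print(fits_reliable_window("Who are you?"))  # True for short prompts
```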
Cheers!
- Team DEEPNIGHT