OpceanAI-With-Omnireasoning/OwO

Text Generation · Transformers · PyTorch · English · Spanish · reasoning · omnireasoning · unsloth · axolotl · bilingual · opceanai · owo · qwq · YuuKi · yuuki · chat · math · code

Instructions for using OpceanAI-With-Omnireasoning/OwO with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.

  • Libraries
  • Transformers

    How to use OpceanAI-With-Omnireasoning/OwO with Transformers:

    # Use a pipeline as a high-level helper
    from transformers import pipeline

    pipe = pipeline("text-generation", model="OpceanAI-With-Omnireasoning/OwO")

    # Or load the model and tokenizer directly; for text generation,
    # use AutoModelForCausalLM (AutoModel returns the bare model without the LM head)
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("OpceanAI-With-Omnireasoning/OwO")
    model = AutoModelForCausalLM.from_pretrained("OpceanAI-With-Omnireasoning/OwO", dtype="auto")
  • Notebooks
  • Google Colab
  • Kaggle
  • Local Apps
  • vLLM

    How to use OpceanAI-With-Omnireasoning/OwO with vLLM:

    Install with pip and serve the model
    # Install vLLM from pip:
    pip install vllm
    # Start the vLLM server:
    vllm serve "OpceanAI-With-Omnireasoning/OwO"
    # Call the server using curl (OpenAI-compatible API):
    curl -X POST "http://localhost:8000/v1/completions" \
    	-H "Content-Type: application/json" \
    	--data '{
    		"model": "OpceanAI-With-Omnireasoning/OwO",
    		"prompt": "Once upon a time,",
    		"max_tokens": 512,
    		"temperature": 0.5
    	}'
    Use the official vLLM Docker image
    docker run --runtime nvidia --gpus all \
        -v ~/.cache/huggingface:/root/.cache/huggingface \
        --env "HF_TOKEN=<secret>" \
        -p 8000:8000 \
        --ipc=host \
        vllm/vllm-openai:latest \
        --model "OpceanAI-With-Omnireasoning/OwO"
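The OpenAI-compatible endpoint above can also be called from Python instead of curl. A minimal sketch using only the standard library, assuming the vLLM server from the commands above is already running on localhost:8000; the `build_completion_request` and `complete` helper names are illustrative, not part of vLLM:

```python
import json
import urllib.request

def build_completion_request(prompt: str, model: str = "OpceanAI-With-Omnireasoning/OwO") -> dict:
    """Build the same JSON body as the curl example above."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": 512,
        "temperature": 0.5,
    }

def complete(prompt: str, base_url: str = "http://localhost:8000") -> str:
    """POST to the OpenAI-compatible /v1/completions endpoint and return the generated text."""
    req = urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(build_completion_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["text"]

# text = complete("Once upon a time,")  # requires the vLLM server to be running
```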
  • SGLang

    How to use OpceanAI-With-Omnireasoning/OwO with SGLang:

    Install with pip and serve the model
    # Install SGLang from pip:
    pip install sglang
    # Start the SGLang server:
    python3 -m sglang.launch_server \
        --model-path "OpceanAI-With-Omnireasoning/OwO" \
        --host 0.0.0.0 \
        --port 30000
    # Call the server using curl (OpenAI-compatible API):
    curl -X POST "http://localhost:30000/v1/completions" \
    	-H "Content-Type: application/json" \
    	--data '{
    		"model": "OpceanAI-With-Omnireasoning/OwO",
    		"prompt": "Once upon a time,",
    		"max_tokens": 512,
    		"temperature": 0.5
    	}'
    Use Docker images
    docker run --gpus all \
        --shm-size 32g \
        -p 30000:30000 \
        -v ~/.cache/huggingface:/root/.cache/huggingface \
        --env "HF_TOKEN=<secret>" \
        --ipc=host \
        lmsysorg/sglang:latest \
        python3 -m sglang.launch_server \
            --model-path "OpceanAI-With-Omnireasoning/OwO" \
            --host 0.0.0.0 \
            --port 30000
    # Call the server using curl (OpenAI-compatible API):
    curl -X POST "http://localhost:30000/v1/completions" \
    	-H "Content-Type: application/json" \
    	--data '{
    		"model": "OpceanAI-With-Omnireasoning/OwO",
    		"prompt": "Once upon a time,",
    		"max_tokens": 512,
    		"temperature": 0.5
    	}'
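SGLang serves the same OpenAI-compatible API, which also includes /v1/chat/completions for chat-formatted requests. A sketch using only the standard library, assuming the SGLang server from the commands above is running on localhost:30000; `build_chat_request` and `chat` are illustrative helper names, not SGLang APIs:

```python
import json
import urllib.request

def build_chat_request(user_message: str, model: str = "OpceanAI-With-Omnireasoning/OwO") -> dict:
    """Chat-style body for the OpenAI-compatible /v1/chat/completions endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": 512,
        "temperature": 0.5,
    }

def chat(user_message: str, base_url: str = "http://localhost:30000") -> str:
    """Send one user turn to the server and return the assistant reply."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_chat_request(user_message)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# reply = chat("Solve 12 * 7 step by step.")  # requires the SGLang server to be running
```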
  • Unsloth Studio

    How to use OpceanAI-With-Omnireasoning/OwO with Unsloth Studio:

    Install Unsloth Studio (macOS, Linux, WSL)
    curl -fsSL https://unsloth.ai/install.sh | sh
    # Run unsloth studio
    unsloth studio -H 0.0.0.0 -p 8888
    # Then open http://localhost:8888 in your browser
    # Search for OpceanAI-With-Omnireasoning/OwO to start chatting
    Install Unsloth Studio (Windows)
    irm https://unsloth.ai/install.ps1 | iex
    # Run unsloth studio
    unsloth studio -H 0.0.0.0 -p 8888
    # Then open http://localhost:8888 in your browser
    # Search for OpceanAI-With-Omnireasoning/OwO to start chatting
    Using HuggingFace Spaces for Unsloth
    # No setup required
    # Open https://huggingface.co/spaces/unsloth/studio in your browser
    # Search for OpceanAI-With-Omnireasoning/OwO to start chatting
    Load model with FastModel
    pip install unsloth
    from unsloth import FastModel
    model, tokenizer = FastModel.from_pretrained(
        model_name="OpceanAI-With-Omnireasoning/OwO",
        max_seq_length=2048,
    )
  • Docker Model Runner

    How to use OpceanAI-With-Omnireasoning/OwO with Docker Model Runner:

    docker model run hf.co/OpceanAI-With-Omnireasoning/OwO