Instructions to use dimsavva/phi4-noesis with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use dimsavva/phi4-noesis with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="dimsavva/phi4-noesis",
    filename="phi4-noesis.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Hello!"},
    ]
)
```
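`create_chat_completion` returns an OpenAI-style completion dict. A minimal sketch of pulling out the assistant's reply; the `response` dict below is a hand-written stand-in for a real response (shapes may vary slightly by llama-cpp-python version):

```python
# Stand-in for the dict returned by llm.create_chat_completion(...):
# an OpenAI-compatible chat completion with a list of choices.
response = {
    "choices": [
        {"message": {"role": "assistant", "content": "Hello! How can I help?"}}
    ]
}

# The generated text lives under choices[0].message.content.
reply = response["choices"][0]["message"]["content"]
print(reply)
```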
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use dimsavva/phi4-noesis with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf dimsavva/phi4-noesis

# Run inference directly in the terminal:
llama-cli -hf dimsavva/phi4-noesis
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf dimsavva/phi4-noesis

# Run inference directly in the terminal:
llama-cli -hf dimsavva/phi4-noesis
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf dimsavva/phi4-noesis

# Run inference directly in the terminal:
./llama-cli -hf dimsavva/phi4-noesis
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf dimsavva/phi4-noesis

# Run inference directly in the terminal:
./build/bin/llama-cli -hf dimsavva/phi4-noesis
```
Use Docker
```sh
docker model run hf.co/dimsavva/phi4-noesis
```
- LM Studio
- Jan
- Ollama
How to use dimsavva/phi4-noesis with Ollama:
```sh
ollama run hf.co/dimsavva/phi4-noesis
```
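Once pulled, Ollama also serves a local HTTP API (by default on port 11434). A hedged sketch of building a chat request for it; the payload shape follows Ollama's documented `/api/chat` format, but verify against your installed Ollama version:

```python
import json

# Assumed model tag: the same hf.co/... reference passed to `ollama run`.
payload = {
    "model": "hf.co/dimsavva/phi4-noesis",
    "messages": [
        {"role": "user", "content": "Explain gravity in one sentence."}
    ],
    "stream": False,  # request a single complete response, not a token stream
}

# POST this body to http://localhost:11434/api/chat
# (e.g. with urllib.request or curl) while the Ollama server is running.
body = json.dumps(payload)
print(body)
```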
- Unsloth Studio
How to use dimsavva/phi4-noesis with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for dimsavva/phi4-noesis to start chatting
```
Install Unsloth Studio (Windows)
```sh
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for dimsavva/phi4-noesis to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for dimsavva/phi4-noesis to start chatting
```
- Docker Model Runner
How to use dimsavva/phi4-noesis with Docker Model Runner:
```sh
docker model run hf.co/dimsavva/phi4-noesis
```
- Lemonade
How to use dimsavva/phi4-noesis with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull dimsavva/phi4-noesis
```
Run and chat with the model
```sh
lemonade run user.phi4-noesis-{{QUANT_TAG}}
```
List all available models
```sh
lemonade list
```
Phi-4 is a 14B parameter, state-of-the-art open model built upon a blend of synthetic datasets, data from filtered public domain websites, and acquired academic books and Q&A datasets.
The model underwent a rigorous enhancement and alignment process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures.
Context length: 16k tokens
Primary use cases
The model is designed to accelerate research on language models and to serve as a building block for generative AI powered features. It is intended for general purpose AI systems and applications (primarily in English) that require:
- Memory/compute constrained environments
- Latency bound scenarios
- Reasoning and logic

Out-of-scope use cases
The models are not specifically designed or evaluated for all downstream purposes, thus:
- Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios.
- Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case, including the model's focus on English.
- Nothing contained in this readme should be interpreted as or deemed a restriction or modification to the license the model is released under.
About Noesis:
Phi4 Noesis was designed specifically for reasoning behaviour, leveraging the power of Phi4 with Deep Reasoning and an option for Fast Reasoning.
The default behaviour is Deep Reasoning. To activate Fast Reasoning, start your prompt with "Quick Think: ".
For example: Quick Think: If an object is dropped from a certain height and takes 10 seconds to hit the ground, how long would it take to hit the ground if it was dropped from twice that height?
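Programmatically, the Fast Reasoning switch is just a string prefix on the user prompt. A minimal helper sketch (the `make_prompt` function name is hypothetical, not part of the model's API):

```python
def make_prompt(question: str, fast: bool = False) -> str:
    """Prefix the prompt with "Quick Think: " to request Fast Reasoning;
    without the prefix the model defaults to Deep Reasoning."""
    return f"Quick Think: {question}" if fast else question

# Fast Reasoning: prefixed prompt
q = "If a fall from height h takes 10 s, how long from 2h?"
print(make_prompt(q, fast=True))

# Deep Reasoning (default): prompt passed through unchanged
print(make_prompt(q))
```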
The full model weights are available at https://huggingface.co/dimsavva/phi4-noesis
Reach out to me at https://www.linkedin.com/in/dimsavva/ if you would like to collaborate on innovating on this model, if you have any questions, or if you would like your own finetuned version on your own company data, running 100% locally.