Instructions to use QuantFactory/Sensei-7B-V1-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use QuantFactory/Sensei-7B-V1-GGUF with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Sensei-7B-V1-GGUF",
    filename="Sensei-7B-V1.Q2_K.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
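Beyond plain completion, llama-cpp-python also exposes a chat-style interface on the same object. A minimal sketch, reusing the `llm` loaded above; the message content here is only illustrative:

# Chat-style usage with the `llm` object from the snippet above
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Summarize Fermat's last theorem in one sentence."}
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])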
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use QuantFactory/Sensei-7B-V1-GGUF with llama.cpp:
Install with Homebrew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/Sensei-7B-V1-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/Sensei-7B-V1-GGUF:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/Sensei-7B-V1-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/Sensei-7B-V1-GGUF:Q4_K_M
Use a pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf QuantFactory/Sensei-7B-V1-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf QuantFactory/Sensei-7B-V1-GGUF:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf QuantFactory/Sensei-7B-V1-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf QuantFactory/Sensei-7B-V1-GGUF:Q4_K_M
Use Docker
docker model run hf.co/QuantFactory/Sensei-7B-V1-GGUF:Q4_K_M
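However you start it, llama-server exposes an OpenAI-compatible HTTP API. A minimal sketch of querying it from Python, assuming the server is running on its default port 8080; the `model` field is required by the client but llama-server simply serves whichever model it loaded:

# pip install openai
from openai import OpenAI

# Point the OpenAI client at the local llama-server (default port 8080)
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

completion = client.chat.completions.create(
    model="QuantFactory/Sensei-7B-V1-GGUF",  # name is not used for routing
    messages=[{"role": "user", "content": "Once upon a time,"}],
    max_tokens=128,
)
print(completion.choices[0].message.content)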
- LM Studio
- Jan
- Ollama
How to use QuantFactory/Sensei-7B-V1-GGUF with Ollama:
ollama run hf.co/QuantFactory/Sensei-7B-V1-GGUF:Q4_K_M
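Once pulled, the model can also be queried over Ollama's local REST API. A minimal sketch, assuming Ollama is running on its default port 11434 and the model tag matches the one pulled above:

# pip install requests
import requests

# Query the local Ollama server (default port 11434)
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "hf.co/QuantFactory/Sensei-7B-V1-GGUF:Q4_K_M",
        "prompt": "Once upon a time,",
        "stream": False,  # return the full response as a single JSON object
    },
)
print(resp.json()["response"])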
- Unsloth Studio
How to use QuantFactory/Sensei-7B-V1-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/Sensei-7B-V1-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/Sensei-7B-V1-GGUF to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for QuantFactory/Sensei-7B-V1-GGUF to start chatting
- Docker Model Runner
How to use QuantFactory/Sensei-7B-V1-GGUF with Docker Model Runner:
docker model run hf.co/QuantFactory/Sensei-7B-V1-GGUF:Q4_K_M
- Lemonade
How to use QuantFactory/Sensei-7B-V1-GGUF with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull QuantFactory/Sensei-7B-V1-GGUF:Q4_K_M
Run and chat with the model
lemonade run user.Sensei-7B-V1-GGUF-Q4_K_M
List all available models
lemonade list
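Lemonade can also run as a local server with an OpenAI-compatible API. A minimal sketch of querying it from Python; the base URL http://localhost:8000/api/v1 and the served model name are assumptions here, so check `lemonade list` and the Lemonade docs for the values on your machine:

# pip install openai
from openai import OpenAI

# Assumed Lemonade endpoint; verify host/port against the Lemonade docs
client = OpenAI(base_url="http://localhost:8000/api/v1", api_key="not-needed")

completion = client.chat.completions.create(
    model="user.Sensei-7B-V1-GGUF-Q4_K_M",  # name as registered by `lemonade pull`
    messages=[{"role": "user", "content": "Once upon a time,"}],
    max_tokens=128,
)
print(completion.choices[0].message.content)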
QuantFactory/Sensei-7B-V1-GGUF
This is a quantized version of SciPhi/Sensei-7B-V1, created using llama.cpp.
Original Model Card
Sensei-7B-V1 Model Card
Sensei-7B-V1 is a Large Language Model (LLM) fine-tuned from OpenPipe's mistral-ft-optimized-1218, which is based on Mistral-7B. Sensei-7B-V1 was fine-tuned on a fully synthetic dataset to specialize in retrieval-augmented generation (RAG) over detailed web search results. The model is trained to use search tools such as AgentSearch to generate accurate, well-cited summaries from a range of search results, providing more accurate answers to user queries. Please refer to the docs here for more information on how to run Sensei end-to-end.
Currently, Sensei is available via a hosted API at https://www.sciphi.ai. You can try a demonstration here.
Model Architecture
Base Model: mistral-ft-optimized-1218
Architecture Features:
- Transformer-based model
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
Using the Model
It is recommended to use a single search query. The model will return an answer using search results as context.
An example using the AgentSearch package is shown below.
export SCIPHI_API_KEY=MY_SCIPHI_API_KEY
# Use `Sensei` for LLM RAG w/ AgentSearch
python -m agent_search.scripts.run_rag run --query="What is Fermat's last theorem?"
Alternatively, you may provide your own search context directly to the model by adhering to the following format:
### Instruction:
Your task is to perform retrieval augmented generation (RAG) over the given query and search results. Return your answer in a json format that includes a summary of the search results and a list of related queries.
Query:
{prompt}
\n\n
Search Results:
{context}
\n\n
Query:
{prompt}
### Response:
{"summary":
Note: The inclusion of the text '{"summary":' after the '### Response:' marker is intentional; it ensures that the model responds in the proper JSON format, and omitting this leading prefix can cause small deviations. Combining the model's output with the leading string '{"summary":' results in properly formatted JSON with keys 'summary' and 'other_queries'.
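To make the note above concrete, here is a minimal sketch of assembling the prompt and recovering the JSON. It assumes the llama-cpp-python `llm` object loaded near the top of this page; `search_context` is a placeholder for your own retrieved results, and a real pipeline may need stop sequences or stricter parsing if the model emits trailing text:

import json

def build_sensei_prompt(query, search_context):
    # Assemble the documented RAG prompt, ending with the intentional prefix
    return (
        "### Instruction:\n"
        "Your task is to perform retrieval augmented generation (RAG) over the given "
        "query and search results. Return your answer in a json format that includes "
        "a summary of the search results and a list of related queries.\n\n"
        f"Query:\n{query}\n\n"
        f"Search Results:\n{search_context}\n\n"
        f"Query:\n{query}\n"
        '### Response:\n{"summary":'
    )

# `llm` is the llama-cpp-python model from the snippet near the top of this page
prompt = build_sensei_prompt("What is Fermat's last theorem?", search_context="...")
raw = llm(prompt, max_tokens=512)

# Re-attach the leading prefix to recover well-formed JSON
result = json.loads('{"summary":' + raw["choices"][0]["text"])
print(result["summary"])
print(result["other_queries"])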
References
- OpenPipe AI. (2023). Model Card for mistral-ft-optimized-1218. The mistral-ft-optimized-1218 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters, optimized for downstream fine-tuning on a variety of tasks; for full details, refer to the release blog post. Model architecture: Transformer with Grouped-Query Attention, Sliding-Window Attention, and a byte-fallback BPE tokenizer.
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit.
