Text Generation
llama-cpp-python
GGUF
English
rag
healthcare
clinical-decision-support
medical
merck-manual
retrieval-augmented-generation
mistral
Instructions to use jeremygracey-ai/FetchMerck_AI with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use jeremygracey-ai/FetchMerck_AI with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="jeremygracey-ai/FetchMerck_AI",
    filename="mistral-7b-instruct-v0.1.Q4_K_M.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
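The completion call above is just a smoke test. Since the underlying file is a Mistral-7B-Instruct GGUF intended for clinical question answering, a chat-style call is usually more useful. A minimal sketch using llama-cpp-python's create_chat_completion API, which applies the chat template embedded in the GGUF (falling back to a generic format if none is present); the n_ctx value, sampling settings, and example question are assumptions you can adjust.

# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="jeremygracey-ai/FetchMerck_AI",
    filename="mistral-7b-instruct-v0.1.Q4_K_M.gguf",
    n_ctx=4096,  # assumed context window; raise or lower to fit your hardware
)

# Chat-style inference using the model's instruction format.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Summarize the typical presentation of iron-deficiency anemia."},
    ],
    max_tokens=512,
    temperature=0.2,
)
print(response["choices"][0]["message"]["content"])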
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use jeremygracey-ai/FetchMerck_AI with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf jeremygracey-ai/FetchMerck_AI:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf jeremygracey-ai/FetchMerck_AI:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf jeremygracey-ai/FetchMerck_AI:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf jeremygracey-ai/FetchMerck_AI:Q4_K_M
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf jeremygracey-ai/FetchMerck_AI:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf jeremygracey-ai/FetchMerck_AI:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf jeremygracey-ai/FetchMerck_AI:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf jeremygracey-ai/FetchMerck_AI:Q4_K_M
Use Docker
docker model run hf.co/jeremygracey-ai/FetchMerck_AI:Q4_K_M
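However you install it, llama-server exposes an OpenAI-compatible HTTP API alongside the web UI, so the model can also be queried from Python. A minimal sketch using requests; the default port 8080 and the generation parameters are assumptions you may need to adjust (use --port when starting the server to change it).

import requests

# llama-server listens on http://localhost:8080 by default.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "jeremygracey-ai/FetchMerck_AI",  # informational for a single-model server
        "messages": [
            {"role": "user", "content": "List common causes of acute pancreatitis."},
        ],
        "max_tokens": 512,
        "temperature": 0.2,
    },
    timeout=300,
)
print(resp.json()["choices"][0]["message"]["content"])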
- LM Studio
- Jan
- vLLM
How to use jeremygracey-ai/FetchMerck_AI with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "jeremygracey-ai/FetchMerck_AI"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "jeremygracey-ai/FetchMerck_AI",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'

Use Docker
docker model run hf.co/jeremygracey-ai/FetchMerck_AI:Q4_K_M
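Because vllm serve exposes the same OpenAI-compatible API shown in the curl call above, the server can also be queried with the official openai Python client pointed at the local endpoint. A minimal sketch, assuming the server is running on the default port 8000; install the client with pip install openai, and note the api_key is a placeholder since vLLM does not check it by default.

from openai import OpenAI

# Point the OpenAI client at the local vLLM server (OpenAI-compatible API).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="jeremygracey-ai/FetchMerck_AI",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)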
- Ollama
How to use jeremygracey-ai/FetchMerck_AI with Ollama:
ollama run hf.co/jeremygracey-ai/FetchMerck_AI:Q4_K_M
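Once the model has been pulled this way, Ollama also serves it through its local REST API, so it can be called from Python as well. A minimal sketch with requests, assuming Ollama's default address http://localhost:11434; the example prompt is illustrative.

import requests

# Ollama's generate endpoint; the model name matches the `ollama run` target above.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "hf.co/jeremygracey-ai/FetchMerck_AI:Q4_K_M",
        "prompt": "What are the red-flag symptoms of a headache?",
        "stream": False,
    },
    timeout=300,
)
print(resp.json()["response"])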
- Unsloth Studio
How to use jeremygracey-ai/FetchMerck_AI with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for jeremygracey-ai/FetchMerck_AI to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for jeremygracey-ai/FetchMerck_AI to start chatting
Use Hugging Face Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for jeremygracey-ai/FetchMerck_AI to start chatting
- Docker Model Runner
How to use jeremygracey-ai/FetchMerck_AI with Docker Model Runner:
docker model run hf.co/jeremygracey-ai/FetchMerck_AI:Q4_K_M
- Lemonade
How to use jeremygracey-ai/FetchMerck_AI with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull jeremygracey-ai/FetchMerck_AI:Q4_K_M
Run and chat with the model
lemonade run user.FetchMerck_AI-Q4_K_M
List all available models
lemonade list
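The repository also ships a Dockerfile that packages the full RAG service: it installs llama-cpp-python, LangChain, and ChromaDB, copies in the application code, the persisted Chroma vector store, and the quantized Mistral GGUF, and serves a FastAPI app on port 8000.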
FROM python:3.12-slim

# Install system dependencies for building llama-cpp-python
RUN apt-get update && apt-get install -y \
    build-essential \
    python3-dev \
    cmake \
    && rm -rf /var/lib/apt/lists/*

# Set working directory
WORKDIR /app

# Install required Python libraries
RUN pip install --no-cache-dir \
    fastapi \
    uvicorn \
    pydantic \
    huggingface_hub==0.35.3 \
    pandas==2.2.2 \
    tiktoken==0.12.0 \
    pymupdf==1.26.5 \
    langchain==0.3.27 \
    langchain-community==0.3.31 \
    chromadb==1.1.1 \
    sentence-transformers==5.1.1 \
    llama-cpp-python==0.2.28 \
    'numpy<2.1.0'

# Copy the application logic and FastAPI server
COPY app_logic.py main.py ./

# Copy the persistent vector database directory
COPY chroma_db/ ./chroma_db/

# Create the directory structure for the model to match main.py's MODEL_PATH
RUN mkdir -p /root/.cache/huggingface/hub/models--TheBloke--Mistral-7B-Instruct-v0.1-GGUF/snapshots/731a9fc8f06f5f5e2db8a0cf9d256197eb6e05d1/

# Copy the quantized model file to the snapshot directory
COPY mistral-7b-instruct-v0.1.Q4_K_M.gguf /root/.cache/huggingface/hub/models--TheBloke--Mistral-7B-Instruct-v0.1-GGUF/snapshots/731a9fc8f06f5f5e2db8a0cf9d256197eb6e05d1/

# Expose port 8000 for FastAPI
EXPOSE 8000

# Command to start the FastAPI application
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
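After building the image and starting a container with port 8000 published (for example docker build -t fetchmerck-ai . followed by docker run -p 8000:8000 fetchmerck-ai, where the image name is only an example), the FastAPI service can be exercised from Python. A minimal sketch: the /query route and its payload below are hypothetical, since main.py is not shown here; FastAPI's auto-generated docs at /docs list the actual endpoints.

import requests

BASE_URL = "http://localhost:8000"  # container started with -p 8000:8000

# FastAPI publishes interactive API docs at /docs by default; use it to confirm
# the routes actually defined in main.py.
print(requests.get(f"{BASE_URL}/docs").status_code)

# Hypothetical query endpoint and payload; adjust to match main.py.
resp = requests.post(
    f"{BASE_URL}/query",
    json={"question": "What are the first-line treatments for community-acquired pneumonia?"},
    timeout=300,
)
print(resp.json())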