# Custom-LLM-Chat / .env.example
# Author: Bhaskar Ram
# Last commit (2623b17): sentence-aware chunking, score threshold, DOCX tables,
# streaming error handling, LLM_MODEL env var
# Environment variable template — copy to .env and fill in your values
# Required: Your Hugging Face API token (get one at https://huggingface.co/settings/tokens)
HF_TOKEN=hf_...

# Optional: Override the default LLM model (defaults to Llama 3.1 8B if not set)
# LLM_MODEL=meta-llama/Llama-3.1-8B-Instruct
# LLM_MODEL=mistralai/Mistral-7B-Instruct-v0.3
# LLM_MODEL=mistralai/Mixtral-8x7B-Instruct-v0.1

# Optional: Gradio server settings
# GRADIO_SERVER_PORT=7860
# GRADIO_SERVER_NAME=0.0.0.0
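
# Illustrative only (kept as comments so this file stays valid dotenv): the app
# is assumed to read these variables via os.environ, falling back to the
# documented default model when LLM_MODEL is unset, e.g.:
#   import os
#   token = os.environ["HF_TOKEN"]  # required; KeyError here means .env was not loaded
#   model = os.environ.get("LLM_MODEL", "meta-llama/Llama-3.1-8B-Instruct")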