## semmyKG / .env.example
OPENAI_API_KEY=your-openai-api-key
OPENAI_API_BASE=your-LLM-inference-provider-endpoint
## (for a locally hosted LLM inference server such as LMStudio or Jan.ai, use the local host URL with /v1 appended, e.g. http://localhost:1234/v1)
OPENAI_API_EMBED_BASE=your-embedding-provider-endpoint
## (for a locally hosted endpoint, do not append /embedding)
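## Example (hypothetical, assuming the same local server also serves embeddings):
## OPENAI_API_EMBED_BASE=http://localhost:1234/v1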
GEMINI_API_KEY=your-gemini-api-key-if-using-GenAI
LLM_MODEL=your-LLM-model-Name
## (in the format: provider/model-identifier)
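## Example (hypothetical; substitute your own provider prefix and model identifier):
## LLM_MODEL=openai/gpt-4o-mini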
LLM_MODEL_EMBED=your-embedding-model
## (in the format: provider/embedding-name)
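## Example (hypothetical; substitute your own provider prefix and embedding name):
## LLM_MODEL_EMBED=openai/text-embedding-3-small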
OLLAMA_HOST=http://localhost:11434
## (change the port number if your Ollama server runs elsewhere)
OLLAMA_API_KEY=  ## (set only if your Ollama server requires an API key)
## LightRAG settings
LOG_DIR=desired-directory-for-lightRAG-logfile
VERBOSE_DEBUG=set-to-false-or-true
MAX_EMBED_TOKENS=default-is-8192
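## Example LightRAG settings (hypothetical values; 8192 is the stated default for MAX_EMBED_TOKENS):
## LOG_DIR=./logs/lightrag
## VERBOSE_DEBUG=false
## MAX_EMBED_TOKENS=8192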