# professor-bode / settings.yaml
# Commit 5752c15 ("Create settings.yaml") by wfmedeiros
server:
  port: 8001
  cors:
    enabled: true
    allow_origins: ["*"]
    allow_methods: ["*"]
    allow_headers: ["*"]
  auth:
    enabled: false
    secret: "Basic c2VjcmV0OmtleQ=="
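# Editor note (sketch): when auth is enabled, "secret" holds the literal
# Authorization header value the server expects; "Basic c2VjcmV0OmtleQ==" is
# base64 of "secret:key". To use your own credentials, encode them the same
# way (assuming a POSIX shell with base64 available):
#   echo -n "user:pass" | base64
# and set: secret: "Basic <encoded output>"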

data:
  local_ingestion:
    enabled: false
    allow_ingest_from: ["*"]
  local_data_folder: local_data/private_gpt

ui:
  enabled: true
  path: /
  default_chat_system_prompt: >
    Você é um assistente de IA especializado em educação, agindo como um
    professor experiente e criativo. Sua principal função é criar aulas,
    planos de aula e planos de ensino detalhados para facilitar o trabalho de
    outros professores. Responda sempre em Português do Brasil.
  default_query_system_prompt: >
    You can only answer questions about the provided context. If you know the
    answer but it is not based on the provided context, don't provide the
    answer; just state that the answer is not in the context provided.
  default_summarization_system_prompt: >
    Provide a comprehensive summary of the provided context information. The
    summary should cover all the key points and main ideas presented in the
    original text, while also condensing the information into a concise and
    easy-to-understand format. Please ensure that the summary includes
    relevant details and examples that support the main ideas, while avoiding
    any unnecessary information or repetition.
  delete_file_button_enabled: true
  delete_all_files_button_enabled: true

llm:
  mode: llamacpp
  prompt_style: "llama2"
  max_new_tokens: 512
  context_window: 3900
  temperature: 0.1

rag:
  similarity_top_k: 2
  rerank:
    enabled: false
    model: cross-encoder/ms-marco-MiniLM-L-2-v2
    top_n: 1

summarize:
  use_async: true

clickhouse:
  host: localhost
  port: 8443
  username: admin
  password: clickhouse
  database: embeddings

llamacpp:
  llm_hf_repo_id: "TheBloke/Mistral-7B-Instruct-v0.2-GGUF"
  llm_hf_model_file: "mistral-7b-instruct-v0.2.Q4_K_M.gguf"

embedding:
  mode: huggingface
  ingest_mode: simple
  embed_dim: 1024  # BAAI/bge-large-en-v1.5 produces 1024-dimensional vectors

huggingface:
  embedding_hf_model_name: "BAAI/bge-large-en-v1.5"
  access_token: ${HF_TOKEN:}
  trust_remote_code: true
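# Editor note: values of the form ${ENV_NAME:default} are resolved from
# environment variables by the settings loader, with the text after ":" used
# as the default when the variable is unset (empty here). For example,
# HF_TOKEN can be exported in the environment before launching the app
# instead of being written into this file.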

vectorstore:
  database: qdrant

nodestore:
  database: simple

milvus:
  uri: local_data/private_gpt/milvus/milvus_local.db
  collection_name: milvus_db
  overwrite: false

qdrant:
  path: local_data/private_gpt/qdrant

postgres:
  host: localhost
  port: 5432
  database: postgres
  user: postgres
  password: postgres
  schema_name: private_gpt

sagemaker:
  llm_endpoint_name: huggingface-pytorch-tgi-inference-2023-09-25-19-53-32-140
  embedding_endpoint_name: huggingface-pytorch-inference-2023-11-03-07-41-36-479

openai:
  api_key: ${OPENAI_API_KEY:}
  model: gpt-3.5-turbo
  embedding_api_key: ${OPENAI_API_KEY:}

ollama:
  llm_model: llama3.1
  embedding_model: nomic-embed-text
  api_base: http://localhost:11434
  embedding_api_base: http://localhost:11434
  keep_alive: 5m
  request_timeout: 120.0
  autopull_models: true
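# Editor note (assumption): in PrivateGPT-style setups, backend sections such
# as ollama, azopenai, and gemini are inert unless the active llm/embedding
# mode (or a settings profile) selects them. Profiles are typically activated
# with the PGPT_PROFILES environment variable, e.g.:
#   PGPT_PROFILES=ollama make run
# which overlays a settings-ollama.yaml on top of this base file.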

azopenai:
  api_key: ${AZ_OPENAI_API_KEY:}
  azure_endpoint: ${AZ_OPENAI_ENDPOINT:}
  embedding_deployment_name: ${AZ_OPENAI_EMBEDDING_DEPLOYMENT_NAME:}
  llm_deployment_name: ${AZ_OPENAI_LLM_DEPLOYMENT_NAME:}
  api_version: "2023-05-15"
  embedding_model: text-embedding-ada-002
  llm_model: gpt-35-turbo

gemini:
  api_key: ${GOOGLE_API_KEY:}
  model: models/gemini-pro
  embedding_model: models/embedding-001