---
title: Prototyp Chatbot Kontextanalyse
emoji: 📝
colorFrom: blue
colorTo: yellow
sdk: gradio
python_version: 3.13
app_file: app.py
short_description: Aktueller Prototyp für KI-gestützte Kontextanalysen.
---
# AI-Supported Context Analyses for Country Evaluations
This is the code repository for the DEval project "Durchführung und Unterstützung von KI-gestützten Text- und Datenanalysen und deren Aufbereitung in strukturierter Form im Bereich Länderkontexte" (conducting and supporting AI-assisted text and data analyses and their preparation in structured form in the area of country contexts).
---
## Table of Contents
- **[Get Started](#get-started)**
- [Set up `.env` file](#set-up-env-file)
- [Set up the environment](#set-up-the-environment)
- [Setup for Langfuse Prompt Management](#setup-for-langfuse-prompt-management)
- **[Run the code](#run-the-code)**
- **[Contributing](#contributing)**
- [Deploy code changes to HuggingFace Space](#deploy-code-changes-to-huggingface-space)
---
## Get started
### Set up `.env` file
The project expects secret keys from a `.env` file, which you need to set up locally. You require an `AWS_ACCESS_KEY_ID` and an `AWS_SECRET_ACCESS_KEY` to connect to Amazon Bedrock LLMs and embedding models. In addition, API keys for the Langfuse integration are needed; you can obtain these after setting up a Langfuse project in the Langfuse GUI, either through a local deployment or via Langfuse Cloud.
This is how the `.env` file should look:
```
# .env
# AWS keys
AWS_ACCESS_KEY_ID= # Your AWS access key
AWS_SECRET_ACCESS_KEY= # Your AWS secret access key
# Langfuse credentials
LANGFUSE_PUBLIC_API_KEY= # Your Langfuse public API key
LANGFUSE_SECRET_API_KEY= # Your Langfuse secret API key
LANGFUSE_HOST="https://cloud.langfuse.com" # EU server for Langfuse Cloud; may differ for other deployments
```
**Note**: For security reasons, this file must not be committed to version control!
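If you want to fail fast on an incomplete `.env`, a small startup check can report which required variables are missing. This is an illustrative sketch, not code from the repo; the variable names follow the template above:

```python
import os

# Required secrets, matching the .env template above.
REQUIRED_ENV_VARS = [
    "AWS_ACCESS_KEY_ID",
    "AWS_SECRET_ACCESS_KEY",
    "LANGFUSE_PUBLIC_API_KEY",
    "LANGFUSE_SECRET_API_KEY",
]

def missing_env_vars(required=REQUIRED_ENV_VARS):
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]
```

Calling `missing_env_vars()` after loading the `.env` file (e.g. with `python-dotenv`) tells you immediately which keys still need to be filled in.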
### Set up the environment
We use `uv` as our Python version and package dependency manager. Follow these [instructions](https://docs.astral.sh/uv/getting-started/installation/) to install it with the standalone installer and `curl`.
Next, set up the local dependencies. You can find further information [here](https://docs.astral.sh/uv/guides/projects/#managing-dependencies):
```Bash
uv sync
```
This should give you a package structure like this with a `.venv` directory:
```
.
├── .venv
├── .python-version
├── app.py
├── pyproject.toml
├── README.md
└── uv.lock
```
#### uv.lock
`uv.lock` is a cross-platform lockfile that contains exact information about the project's dependencies. Unlike the `pyproject.toml` which is used to specify the broad requirements of the project, the lockfile contains the exact resolved versions that are installed in the project environment via `uv`. This file should be checked into version control, allowing for consistent and reproducible installations across machines. `uv.lock` is a human-readable TOML file but is managed by `uv` and should NOT be edited manually.
Alternatively, without `uv`, you can create a virtual environment (e.g. with `venv`) and install directly from `pyproject.toml` using `pip`:
```Bash
(.venv) $ pip install .
```
**Note**: in that case, dependencies need to be documented manually in `pyproject.toml`.
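For orientation, a minimal `pyproject.toml` dependency table for a stack like this might look as follows. The package list is illustrative, inferred from the tools mentioned in this README (Gradio, Amazon Bedrock, Langfuse); the actual file in the repo is authoritative:

```toml
[project]
name = "prototyp-chatbot-kontextanalyse"
version = "0.1.0"
requires-python = ">=3.13"
dependencies = [
    "gradio",        # web user interface
    "boto3",         # Amazon Bedrock LLMs and embedding models
    "langfuse",      # prompt management and tracing
    "python-dotenv", # load secrets from the .env file
]
```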
### Setup for Langfuse Prompt Management
This project uses Langfuse Prompt Management, so setting up the prompt templates within Langfuse is mandatory. Because the prompts integrate directly with the code, the templates must stay aligned with what the code expects. The maintainers can provide the current setup requirements.
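To make the integration contract concrete: in a Langfuse-managed setup, the application fetches a template by name at runtime and fills in its variables (in the real code this would go through the Langfuse Python SDK, e.g. `langfuse.get_prompt(...)` and `.compile(...)`). The stand-alone helper below only mirrors the `{{variable}}` substitution pattern; the prompt text and variable names are made up:

```python
import re

def compile_prompt(template: str, **variables: str) -> str:
    """Substitute {{name}} placeholders in a prompt template,
    mirroring the compile step of Langfuse Prompt Management."""
    def replace(match: re.Match) -> str:
        name = match.group(1).strip()
        if name not in variables:
            # Surfacing missing variables early keeps prompts and code aligned.
            raise KeyError(f"Missing prompt variable: {name}")
        return variables[name]
    return re.sub(r"\{\{(.*?)\}\}", replace, template)
```

This is why alignment matters: if a template in Langfuse expects a variable the code does not supply (or vice versa), compilation fails at runtime.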
## Run the code
After installing the needed dependencies for the project and setting up the environment, execute the code from the root of the repository by running the `app.py` script via `uv` with the following command:
```Bash
uv run app.py
```
You will see logging output in the terminal and receive a link to the locally hosted Gradio user interface.
## Contributing
### Deploy code changes to HuggingFace Space
#### Option 1: Manually force pushing to Space from `main`
1. Install the HuggingFace CLI via `uv`:
```Bash
uv pip install -U "huggingface_hub[cli]"
```
2. Log in to HuggingFace via the terminal.
**Important**: prefix the command with `uv run`; otherwise it will not run from the uv environment!
```Bash
uv run huggingface-cli login
```
3. Verify which user you are logged in as:
```Bash
uv run huggingface-cli whoami
```
4. Add the HuggingFace Space as a second remote:
```Bash
git remote add space https://huggingface.co/spaces/evaluatorhub42/Prototyp_Chatbot_Kontextanalyse_2
```
5. Check that both remotes are now configured:
```Bash
git remote -v
```
6. Force-push the current state of the `main` branch to the Space:
```Bash
git push --force space main
```
#### Option 2: Automatic Sync with GitHub Actions (Recommended for Ongoing Synchronicity)
This keeps the Space in sync every time the `main` branch is updated.
Steps:
1. Add the HF token as a secret called `HF_TOKEN` in your GitHub repository's settings (Settings → Secrets and variables → Actions).
2. Add a workflow YAML file (e.g., `.github/workflows/push_to_hf_space.yml`) to the GitHub repo. More information on setting up a GitHub Action is available in the HuggingFace docs: https://huggingface.co/docs/hub/spaces-github-actions
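Following the pattern from the HuggingFace docs linked above, such a workflow could look roughly like this. The `HF_USERNAME` placeholder and the checkout/LFS options are assumptions to adapt; the Space URL is the one used for the manual remote in Option 1:

```yaml
name: Sync to HuggingFace Space
on:
  push:
    branches: [main]

jobs:
  sync-to-hub:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history is needed for the push
          lfs: true        # include files tracked via Git LFS
      - name: Push to HuggingFace Space
        env:
          HF_TOKEN: ${{ secrets.HF_TOKEN }}
        run: git push --force https://HF_USERNAME:$HF_TOKEN@huggingface.co/spaces/evaluatorhub42/Prototyp_Chatbot_Kontextanalyse_2 main
```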