---
title: Prototyp Chatbot Kontextanalyse
emoji: 📝
colorFrom: blue
colorTo: yellow
sdk: gradio
python_version: 3.13
app_file: app.py
short_description: Current prototype for AI-assisted context analyses.
---
# AI-Assisted Context Analyses for Country Evaluations
This is the code repository for the DEval project "Durchführung und Unterstützung von KI-gestützten Text- und Datenanalysen und deren Aufbereitung in strukturierter Form im Bereich Länderkontexte" (implementation and support of AI-assisted text and data analyses and their preparation in structured form in the area of country contexts).
---
## Table of Contents
- **[Get Started](#get-started)**
- [Set up `.env` file](#set-up-env-file)
- [Set up the environment](#set-up-the-environment)
- [Setup for Langfuse Prompt Management](#setup-for-langfuse-prompt-management)
- **[Run the code](#run-the-code)**
- **[Contributing](#contributing)**
- [Deploy code changes to HuggingFace Space](#deploy-code-changes-to-huggingface-space)
---
## Get Started
### Set up `.env` file
The project expects its secrets to be provided via a `.env` file, which you set up locally. You need an `AWS_ACCESS_KEY_ID` and an `AWS_SECRET_ACCESS_KEY` to connect to Amazon Bedrock LLMs and embedding models. In addition, API keys for the Langfuse integration are required; you can obtain them after creating a Langfuse project in the Langfuse GUI, either in a local deployment or on Langfuse Cloud.
Here is how the `.env` file should look:
```
# .env
# AWS keys
AWS_ACCESS_KEY_ID= # Your AWS access key
AWS_SECRET_ACCESS_KEY= # Your AWS secret access key
# Langfuse credentials
LANGFUSE_PUBLIC_API_KEY= # Your Langfuse public API key
LANGFUSE_SECRET_API_KEY= # Your Langfuse secret API key
LANGFUSE_HOST="https://cloud.langfuse.com" # EU server for Langfuse Cloud; may differ for other deployments
```
**Note**: For security reasons, this file must not be committed to version control!
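The application presumably loads these variables at startup, for example via a library like `python-dotenv`. As a stdlib-only illustration of what such a loader does with the file above, here is a minimal sketch (not the project's actual loading code):

```Python
import os

def load_env(path: str = ".env") -> None:
    """Minimal .env loader: parses KEY=VALUE lines, ignores '#' comments,
    strips optional surrounding quotes, and exports the values as
    environment variables (without overwriting existing ones)."""
    with open(path, encoding="utf-8") as fh:
        for raw in fh:
            line = raw.split("#", 1)[0].strip()  # drop inline comments
            if not line or "=" not in line:
                continue  # skip blank/comment-only lines
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip('"'))

# Usage (assumes a .env file in the working directory):
# load_env()
# host = os.environ["LANGFUSE_HOST"]
```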
### Set up the environment
We use `uv` as our Python version and package dependency manager. Follow these [instructions](https://docs.astral.sh/uv/getting-started/installation/) to install it with the standalone installer and `curl`.
Next, set up the local dependencies. You can find further information [here](https://docs.astral.sh/uv/guides/projects/#managing-dependencies):
```Bash
uv sync
```
This should give you a project structure like the following, including a `.venv` directory:
```
.
├── .venv
├── .python-version
├── app.py
├── pyproject.toml
├── README.md
└── uv.lock
```
#### uv.lock
`uv.lock` is a cross-platform lockfile that contains exact information about the project's dependencies. Unlike the `pyproject.toml` which is used to specify the broad requirements of the project, the lockfile contains the exact resolved versions that are installed in the project environment via `uv`. This file should be checked into version control, allowing for consistent and reproducible installations across machines. `uv.lock` is a human-readable TOML file but is managed by `uv` and should NOT be edited manually.
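For illustration, the broad requirements in `pyproject.toml` might look like this; the package list below is hypothetical and will differ from the actual project file:

```toml
[project]
name = "prototyp-chatbot-kontextanalyse"  # hypothetical
requires-python = ">=3.13"
dependencies = [
    "gradio",    # UI framework (see the frontmatter above)
    "boto3",     # AWS SDK, for Amazon Bedrock
    "langfuse",  # tracing and prompt management
]
```

`uv sync` resolves these broad constraints into the exact versions recorded in `uv.lock`.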
Alternatively, if you use a plain virtual environment (e.g. `venv`) with `pip`, install directly from `pyproject.toml`:
```Bash
(.venv) $ pip install .
```
**Note**: new dependencies then need to be added to `pyproject.toml` manually.
### Setup for Langfuse Prompt Management
Langfuse Prompt Management is used within this project, so setting up the prompt templates in Langfuse is mandatory. As the prompts integrate directly with the code, the template names and variables must stay aligned with what the code expects. The current setup requirements can be provided by the maintainers.
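Langfuse prompt templates use `{{variable}}` placeholders, and those variable names must match what the code supplies when compiling the prompt. A small stdlib-only sketch of such an alignment check (the template text and variable names are hypothetical, not the project's actual prompts):

```Python
import re

def template_variables(template: str) -> set[str]:
    """Extract the {{variable}} placeholders used by Langfuse prompt templates."""
    return set(re.findall(r"\{\{\s*(\w+)\s*\}\}", template))

# Hypothetical template as it might be stored in Langfuse Prompt Management:
template = "Analysiere den Länderkontext von {{country}} im Zeitraum {{period}}."

# Variables the application code intends to supply:
code_variables = {"country", "period"}

# The prompt in Langfuse and the code must stay aligned:
assert template_variables(template) == code_variables
```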
## Run the code
After installing the required dependencies and setting up the environment, run the `app.py` script via `uv` from the root of the repository:
```Bash
uv run app.py
```
You will see logging output in the terminal and receive a link to the locally hosted Gradio user interface.
## Contributing
### Deploy code changes to HuggingFace Space
#### Option 1: Manually force pushing to Space from `main`
1. Install HuggingFace CLI via uv
```
uv pip install -U "huggingface_hub[cli]"
```
2. Log into HuggingFace via the terminal
Important: prefix the command with `uv run`, otherwise it will not run from the `uv` environment!
```
uv run huggingface-cli login
```
3. Verify which user you are logged in as
```
uv run huggingface-cli whoami
```
4. Add HuggingFace Space as 2nd remote
```
git remote add space https://huggingface.co/spaces/evaluatorhub42/Prototyp_Chatbot_Kontextanalyse_2
```
5. Check that both remotes are configured:
```
git remote -v
```
6. Force push the current state of the `main` branch to the Space
```
git push --force space main
```
#### Option 2: Automatic Sync with GitHub Actions (Recommended for Ongoing Synchronicity)
This keeps the Space in sync every time we update the main branch.
Steps:
1. Add the HF token as a secret called `HF_TOKEN` in your GitHub repository's settings (Settings → Secrets and variables → Actions).
2. Add a workflow YAML file (e.g., `.github/workflows/push_to_hf_space.yml`) to the GitHub repo. More information on setting up such a GitHub Action can be found in the HuggingFace docs: https://huggingface.co/docs/hub/spaces-github-actions
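A workflow along the lines of the example in the linked HuggingFace docs could look like this; the username `evaluatorhub42` is taken from the Space URL above, so adapt the branch and Space path as needed:

```yaml
name: Sync to HuggingFace Space
on:
  push:
    branches: [main]
  workflow_dispatch:

jobs:
  sync-to-hub:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # full history is required for a clean push
          lfs: true
      - name: Push to Space
        env:
          HF_TOKEN: ${{ secrets.HF_TOKEN }}
        run: git push --force https://evaluatorhub42:$HF_TOKEN@huggingface.co/spaces/evaluatorhub42/Prototyp_Chatbot_Kontextanalyse_2 main
```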