short_description: A sample MLFlow server for demo
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

# 🚀 MLflow Tracking Server Configuration

This Space hosts a remote MLflow Tracking Server. It allows you to log parameters, metrics, and models from your training pipelines (such as GitHub Actions) to a centralized location.

## 🛠 Environment Variables (Secrets)

To make this server functional, go to **Settings > Variables and secrets** in your Hugging Face Space and add the following secrets.

| Variable Name | Description | Example Value |
| :--- | :--- | :--- |
| `BACKEND_STORE_URI` | The database URI where metrics and parameters are stored. | `postgresql://user:password@host:port/db` or `sqlite:///mlflow.db` |
| `ARTIFACT_STORE_URI` | The remote storage location for model artifacts (S3, GCS, etc.). | `s3://my-mlflow-bucket/artifacts` |
| `AWS_ACCESS_KEY_ID` | Required if your artifact store is on S3. | `AKIA...` |
| `AWS_SECRET_ACCESS_KEY` | Required if your artifact store is on S3. | `wJalr...` |
| `PORT` | The port the application listens on (HF Spaces defaults to 7860). | `7860` |

> **Note:** The `AWS_` credentials are only needed when your artifact store is on S3; the Dockerfile installs the AWS CLI so the server can interact with S3 buckets.
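
Once the Space is running, a training job (for example a GitHub Actions step) can point the MLflow client at it through environment variables. A minimal sketch is shown below; the tracking URL is a placeholder, not this Space's actual endpoint:

```shell
# Client-side configuration for logging to this tracking server.
# Replace the placeholder URL with your Space's public endpoint.
export MLFLOW_TRACKING_URI="https://your-username-your-space.hf.space"

# Only needed if your client talks to S3 directly; with --serve-artifacts
# enabled (see below), the server proxies artifact traffic on its behalf.
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="wJalr..."
```

After exporting these, any `mlflow` invocation in the same shell (or a Python script calling `mlflow.log_metric`) reports to the Space instead of a local `mlruns/` directory.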

## 🐳 Dockerfile Overview

The `Dockerfile` is built on top of `continuumio/miniconda3` to ensure a robust Python environment. Here is what happens during the build:

1. **Tool Installation:** Installs utility tools (`nano`, `unzip`, `curl`) and the **AWS CLI** (via the official installer) to allow MLflow to communicate with S3 buckets.
2. **Dependencies:** Installs the Python packages listed in `requirements.txt` (typically `mlflow`, `boto3`, `psycopg2`, etc.).
3. **Startup Command:** Launches the MLflow server with specific flags to ensure it works in a cloud environment.
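
The tool-installation step can be sketched roughly as follows. The installer URL is the official AWS CLI v2 bundle for x86_64 Linux; the actual `RUN` lines in the Dockerfile may differ:

```shell
# Install utilities, then the AWS CLI via the official installer.
apt-get update && apt-get install -y nano unzip curl
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip
unzip awscliv2.zip
./aws/install
rm -rf awscliv2.zip aws   # keep the image small
```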

### Understanding the Launch Parameters

The `CMD` in the Dockerfile uses the following flags:

* `--host 0.0.0.0`: Binds the server to all network interfaces so it is accessible from outside the container.
* `--serve-artifacts`: Enables the MLflow server to act as a proxy for artifact downloads and uploads (useful if the client doesn't have direct S3 access).
* `--allowed-hosts '*'`: Disables the host-header validation check. This is necessary in cloud environments (like HF Spaces) where the internal IP and public DNS might mismatch.
* `--cors-allowed-origins '*'`: Sets the Cross-Origin Resource Sharing policy to allow any domain to access this API. This is convenient for teaching environments but should be restricted for production.
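
Putting the flags together, the launch command looks roughly like the sketch below. The store-URI flag names (`--backend-store-uri`, `--artifacts-destination`) and the `${PORT:-7860}` default are assumptions inferred from the variables described above, not a verbatim copy of the Dockerfile's `CMD`:

```shell
# Sketch of the server launch, wiring in the Space secrets.
mlflow server \
  --host 0.0.0.0 \
  --port "${PORT:-7860}" \
  --backend-store-uri "$BACKEND_STORE_URI" \
  --artifacts-destination "$ARTIFACT_STORE_URI" \
  --serve-artifacts \
  --allowed-hosts '*' \
  --cors-allowed-origins '*'
```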