---
title: Quote Request Handler MVP
emoji: 🧾
colorFrom: indigo
colorTo: blue
sdk: gradio
app_file: app.py
python_version: '3.10'
pinned: false
---
# Quote Request Handler MVP

A Hugging Face Gradio Space for handling quote requests today while building a cleaner training dataset for tomorrow.
## What this MVP does
This app treats your workbook as both:
- a quote-request handler for current intake, and
- a supervised learning dataset for a future ML model.
It uses your core schema on Sheet1:

- **Column A — Request**: the incoming customer request
- **Column B — Information Extracted**: subject matter expert knowledge, domain interpretation, or normalized technical context
- **Column C — Design**: quote-facing design guidance that should influence the final quote
It also supports a separate SME_Notes sheet for broader business/domain notes that should influence generations across all requests.
## Why your Space failed to build before

In a Hugging Face Space, metadata is read from the YAML block at the top of README.md, and `python_version` is a string field. Keeping it quoted prevents YAML from coercing `3.10` into the float `3.1`, which can trigger a `python:3.1 not found` build error.

This MVP fixes that by using:

```yaml
python_version: "3.10"
```
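The coercion is easy to demonstrate in plain Python, since YAML's unquoted-number rules mirror Python's float behavior here:

```python
# YAML (like Python) treats an unquoted 3.10 as a float, and floats drop
# trailing zeros -- so the build tooling ends up asking for a "3.1" image.
unquoted = 3.10   # what YAML parses when the value is unquoted
quoted = "3.10"   # what YAML parses when the value is quoted

print(str(unquoted))  # "3.1"  -- the doomed version tag
print(quoted)         # "3.10" -- the intended version
```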
## Files

- `app.py` — main Gradio app
- `requirements.txt` — Python dependencies
- `data/quote_request_training.xlsx` — seed workbook used if no persisted workbook exists yet
## Hugging Face Space setup

Create a Gradio Space and upload these files.

Then add this Secret in the Space settings:

- `ANTHROPIC_API_KEY`

Optional environment variable:

- `CLAUDE_MODEL` — defaults to `claude-sonnet-4-6`
The app uses Anthropic's Messages API via the Python SDK. Anthropic's current examples use `Anthropic(...).messages.create(...)`, which is the pattern used in this MVP.
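As a sketch of that call pattern (the prompt shape, `max_tokens` value, and helper name below are illustrative, not copied from `app.py`):

```python
import os

def build_claude_request(request_text: str, sme_notes: str = "") -> dict:
    """Assemble the kwargs for Anthropic's messages.create (hypothetical helper)."""
    prompt = f"Customer request:\n{request_text}"
    if sme_notes:
        prompt += f"\n\nSME notes:\n{sme_notes}"
    return {
        "model": os.environ.get("CLAUDE_MODEL", "claude-sonnet-4-6"),
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }

# The app then calls, roughly:
#   from anthropic import Anthropic
#   client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
#   response = client.messages.create(**build_claude_request("..."))
#   text = response.content[0].text
```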
## Hardware recommendation

This MVP does not require a GPU: model inference happens remotely through Anthropic's API, while the Space itself mainly handles form input, workbook I/O, retrieval, and response rendering. Start on a CPU Space unless you later add local embedding models, local rerankers, or local generation. Hugging Face documents both standard GPU upgrades and ZeroGPU for workloads that actually perform local GPU computation.
## Persistence

The app automatically uses `/data/quote_request_handler` when `/data` is available, which is the path Hugging Face mounts for attached persistent storage. If persistent storage is not attached, it falls back to the repository `./data` folder.
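That fallback logic amounts to a directory check; a minimal sketch (the helper name is hypothetical, not necessarily what `app.py` uses):

```python
import os

def pick_data_dir() -> str:
    """Use HF persistent storage at /data when attached, else the repo folder."""
    if os.path.isdir("/data"):
        data_dir = "/data/quote_request_handler"
    else:
        data_dir = os.path.join(".", "data")
    os.makedirs(data_dir, exist_ok=True)  # idempotent: safe on every startup
    return data_dir
```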
## Main workflow

### 1) Generate Quote Guidance

Use the generation tab to:

- paste a new customer request
- optionally add SME/domain notes for that request
- retrieve similar historical rows
- generate:
  - `information_extracted`
  - `design`
  - structured `quote_inputs`
  - assumptions
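The retrieval step can be as simple as ranking historical rows by word overlap with the new request; this toy Jaccard-similarity sketch (not the app's actual retriever) illustrates the idea:

```python
def retrieve_similar(request: str, rows: list[dict], k: int = 3) -> list[dict]:
    """Rank historical rows by word overlap with the new request (toy retriever)."""
    query = set(request.lower().split())

    def score(row: dict) -> float:
        words = set(row["Request"].lower().split())
        # Jaccard similarity: shared words over total distinct words
        return len(query & words) / (len(query | words) or 1)

    return sorted(rows, key=score, reverse=True)[:k]
```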
### 2) Add Data to Column A or Column B
The intake/curation tab lets you:
- append a request-only row to Column A
- append a full A/B/C row
- append new SME knowledge to Column B for an existing row
- edit any existing row
- add reusable global SME notes for all future generations
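The two append operations can be sketched in memory as follows (helper names are hypothetical; the real app persists rows to the workbook rather than plain lists):

```python
def append_row(rows: list[dict], request: str, info: str = "", design: str = "") -> int:
    """Append a row in the Request / Information Extracted / Design schema.

    Returns the new row's index; request-only rows simply leave B and C empty.
    """
    rows.append({"Request": request, "Information Extracted": info, "Design": design})
    return len(rows) - 1

def append_sme_knowledge(rows: list[dict], row_index: int, note: str) -> None:
    """Append SME knowledge to Column B of an existing row, one note per line."""
    existing = rows[row_index]["Information Extracted"]
    rows[row_index]["Information Extracted"] = f"{existing}\n{note}".strip()
```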
### 3) Build the future ML dataset
The export tools produce:
- CSV for analysis and training prep
- JSONL for later fine-tuning, distillation, or evaluation pipelines
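The JSONL export can be sketched in a few lines; the lowercase field names here are illustrative variants of the column headers, not necessarily the app's exact schema:

```python
import io
import json

def export_jsonl(rows: list[dict]) -> str:
    """Serialize A/B/C rows to JSONL for fine-tuning or evaluation pipelines."""
    buf = io.StringIO()
    for row in rows:
        record = {
            "request": row.get("Request", ""),
            "information_extracted": row.get("Information Extracted", ""),
            "design": row.get("Design", ""),
        }
        # One JSON object per line, as JSONL requires
        buf.write(json.dumps(record, ensure_ascii=False) + "\n")
    return buf.getvalue()
```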
## Future ML architecture
This MVP supports the progression you described:
- Near term: retrieval + Anthropic generation
- Mid term: use accepted quote outcomes as labeled targets
- Later: train a dedicated model where `(Request, SME knowledge) -> Design/quote guidance`
## Notes

- The first worksheet should contain the headers: `Request`, `Information Extracted`, `Design`
- Additional sheets are preserved when you replace the workbook
- The app includes a JSON repair pass if the model returns malformed JSON
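A JSON repair pass of this kind typically strips markdown fences, isolates the outermost object, and removes trailing commas; this is a hedged sketch of the general technique, not the app's exact implementation:

```python
import json
import re

def repair_json(text: str):
    """Best-effort repair of common LLM JSON glitches (generic sketch)."""
    try:
        return json.loads(text)  # fast path: already valid
    except json.JSONDecodeError:
        pass
    # Strip markdown code fences such as ```json ... ```
    cleaned = re.sub(r"```(?:json)?", "", text)
    # Keep only the outermost {...} span, dropping chatty pre/post text
    start, end = cleaned.find("{"), cleaned.rfind("}")
    if start != -1 and end > start:
        cleaned = cleaned[start : end + 1]
    # Remove trailing commas before a closing brace or bracket
    cleaned = re.sub(r",\s*([}\]])", r"\1", cleaned)
    return json.loads(cleaned)
```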