---
title: hf-hub-query
emoji: π
colorFrom: blue
colorTo: indigo
sdk: docker
app_port: 7860
short_description: Raw fast-agent MCP server for HF Hub queries.
---
# hf-hub-query
This Space runs a raw-passthrough fast-agent MCP server for Hugging Face Hub queries, backed by the released Monty build.
The deployed card uses `tool_result_mode: passthrough`, so tool results are returned directly rather than rewritten by a second LLM pass.
## Auth
This Space is configured for Hugging Face OAuth/token passthrough:
- `FAST_AGENT_SERVE_OAUTH=hf`
- `FAST_AGENT_OAUTH_SCOPES=inference-api`
- `--instance-scope request`
These are configured as Space settings:
- Variables:
- `FAST_AGENT_SERVE_OAUTH`
- `FAST_AGENT_OAUTH_SCOPES`
- `FAST_AGENT_OAUTH_RESOURCE_URL`
- Secret:
- `HF_TOKEN` (dummy token used only at startup)
Clients can either:
- send `Authorization: Bearer <HF_TOKEN>` directly, or
- use the MCP OAuth discovery/authorization flow
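For the bearer-token path, a minimal sketch of what a client request looks like. The `/mcp` endpoint path, the Space owner placeholder, and the JSON-RPC `initialize` shape are assumptions for illustration, not confirmed details of this Space:

```python
import json

# Hypothetical endpoint; substitute the real Space URL.
SPACE_URL = "https://<owner>-hf-hub-query.hf.space/mcp"

def build_initialize_request(hf_token: str) -> tuple[dict, bytes]:
    """Build headers and body for an MCP initialize call over HTTP.

    The Authorization header carries the Hugging Face token, which the
    Space passes through per its OAuth/token-passthrough configuration.
    """
    headers = {
        "Authorization": f"Bearer {hf_token}",
        "Content-Type": "application/json",
        "Accept": "application/json, text/event-stream",
    }
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1"},
        },
    }).encode()
    return headers, body
```

POSTing this body with these headers to the MCP endpoint is the direct-token route; MCP-aware clients would instead discover the OAuth flow automatically.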
## Model
The deployed card uses:
- `hf.openai/gpt-oss-120b:sambanova`
## Main files
- `hf-hub-query.md` – raw MCP card
- `monty_api_tool_v2.py` – Hub query tool implementation
- `_monty_codegen_shared.md` – shared codegen instructions
- `wheels/` – optional local fast-agent wheel staging directory for one-off deploys
## Note on Monty
The Space now installs the released `pydantic-monty==0.0.8` package from PyPI, so the custom bundled Monty wheel is no longer required.
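To mirror the Space's Monty dependency locally, installing the same pinned release from PyPI should suffice (a config fragment, not a full environment setup):

```shell
pip install pydantic-monty==0.0.8
```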