Dhanushsaireddy144 committed
Commit 9682111 · 1 Parent(s): 661e5da

MCP configured for HF Spaces deployment
Dockerfile ADDED
@@ -0,0 +1,20 @@
+ # Use Python 3.10 as the base image
+ FROM python:3.10-slim
+
+ WORKDIR /app
+
+ # Copy dependency definition
+ COPY pyproject.toml .
+
+ # Install dependencies (kept in sync with pyproject.toml so utils.py and deploy.py work)
+ RUN pip install --no-cache-dir mcp huggingface_hub python-dotenv requests pillow uvicorn sse-starlette
+
+ # Copy application code, including the deploy.py entrypoint
+ COPY server.py utils.py deploy.py ./
+
+ EXPOSE 7860
+
+ # For local use, `python server.py` runs the server over stdio (mcp.run()).
+ # On HF Spaces we need an HTTP server instead, so the image starts deploy.py,
+ # which serves FastMCP's SSE app via uvicorn on 0.0.0.0:7860.
+ CMD ["python", "deploy.py"]
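A quick way to sanity-check the container locally is to build and run it (`docker build -t hf-mcp . && docker run -p 7860:7860 -e HF_TOKEN=hf_... hf-mcp`, tag and port mapping illustrative) and then poll the SSE endpoint. A minimal sketch, assuming FastMCP's default SSE path of `/sse`:

```python
# Hypothetical smoke test for the running container (assumes the default /sse path).
import requests

resp = requests.get("http://localhost:7860/sse", stream=True, timeout=10)
# A healthy SSE endpoint responds 200 with an event-stream content type.
print(resp.status_code, resp.headers.get("content-type"))
resp.close()
```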
README.md CHANGED
@@ -1,10 +1,88 @@
  ---
- title: Multi Task Codefetch Mcp
- emoji: 🐨
- colorFrom: gray
- colorTo: gray
+ title: MCP Server
  sdk: docker
- pinned: false
+ app_port: 7860
+ emoji: 🤖
  ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # Hugging Face MCP Server
+
+ A Model Context Protocol (MCP) server that exposes Hugging Face Inference tools for Multimodal, Computer Vision, NLP, and Audio tasks. This server allows LLMs to interact with the Hugging Face Inference API to perform complex tasks.
+
+ ## Features
+
+ - **Multimodal**: Visual Question Answering, Text-to-Image, Image-to-Text.
+ - **Computer Vision**: Image Classification, Object Detection.
+ - **NLP**: Text Generation, Summarization, Translation, Text Classification.
+ - **Audio**: Text-to-Speech, Automatic Speech Recognition.
+ - **Generic Support**: Run any HF Inference task via `generic_hf_inference`.
+
+ ## Setup
+
+ ### Prerequisites
+
+ - Python 3.10+
+ - A Hugging Face account and access token (a read token is usually sufficient for inference; a write token is only needed if you also post data).
+
+ ### Installation
+
+ 1. Clone this repository.
+ 2. Install dependencies:
+ ```bash
+ pip install .
+ ```
+ Or manually:
+ ```bash
+ pip install mcp huggingface_hub python-dotenv requests pillow uvicorn sse-starlette
+ ```
+
+ ### Configuration
+
+ Create a `.env` file or export the variable:
+
+ ```bash
+ export HF_TOKEN="hf_..."
+ ```
+
+ ## Usage
+
+ ### Local Running (Stdio)
+
+ Run the server using `mcp`:
+
+ ```bash
+ mcp run server.py
+ ```
+
+ Or with plain Python:
+
+ ```bash
+ python server.py
+ ```
+
+ ### Hugging Face Spaces Deployment (Docker)
+
+ 1. Create a new Space on Hugging Face.
+ 2. Select **Docker** as the SDK.
+ 3. Upload the files in this repository (including `deploy.py` and the `Dockerfile`).
+ 4. Add your `HF_TOKEN` in the Space's "Settings" -> "Variables and secrets" section.
+ 5. The server starts on port 7860 and serves MCP over SSE. The access URL is your Space's URL (e.g., `https://huggingface.co/spaces/user/space-name`).
+ *Note: The `Dockerfile` starts `deploy.py` so that the server listens on 0.0.0.0:7860.*
+
+ ## Tools List
+
+ - `visual_question_answering`
+ - `text_to_image`
+ - `image_classification`
+ - `object_detection`
+ - `image_to_text` (Captioning)
+ - `text_generation`
+ - `summarization`
+ - `translation`
+ - `text_classification`
+ - `automatic_speech_recognition`
+ - `text_to_speech`
+ - `generic_hf_inference`
+
+ ## Federated Projects
+
+ This server is designed to be stateless and can be deployed as a node in a larger federated system. Ensure network connectivity and proper token management.
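For reference, a hedged sketch of how an MCP client could talk to the deployed Space over SSE using the `mcp` Python SDK. The Space URL is a placeholder, and the `sse_client`/`ClientSession` usage reflects the current `mcp` package but may differ across versions:

```python
# Hypothetical client-side check against the deployed SSE endpoint.
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

async def main() -> None:
    # Replace with your Space's direct URL; /sse is FastMCP's default SSE path.
    url = "https://user-space-name.hf.space/sse"
    async with sse_client(url) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])

asyncio.run(main())
```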
__pycache__/server.cpython-313.pyc ADDED
Binary file (10.3 kB)
 
__pycache__/utils.cpython-313.pyc ADDED
Binary file (2.98 kB)
 
api_dump.txt ADDED
@@ -0,0 +1 @@
+ ['__annotations__', '__class__', '__class_getitem__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__firstlineno__', '__format__', '__ge__', '__getattribute__', '__getstate__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__orig_bases__', '__parameters__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__static_attributes__', '__str__', '__subclasshook__', '__weakref__', '_auth_server_provider', '_custom_starlette_routes', '_event_store', '_mcp_server', '_normalize_path', '_prompt_manager', '_resource_manager', '_retry_interval', '_session_manager', '_setup_handlers', '_token_verifier', '_tool_manager', 'add_prompt', 'add_resource', 'add_tool', 'call_tool', 'completion', 'custom_route', 'dependencies', 'get_context', 'get_prompt', 'icons', 'instructions', 'list_prompts', 'list_resource_templates', 'list_resources', 'list_tools', 'name', 'prompt', 'read_resource', 'remove_tool', 'resource', 'run', 'run_sse_async', 'run_stdio_async', 'run_streamable_http_async', 'session_manager', 'settings', 'sse_app', 'streamable_http_app', 'tool', 'website_url']
check.py ADDED
@@ -0,0 +1,4 @@
+ # Write dir(mcp) to api_dump.txt to inspect the FastMCP API surface
+ from server import mcp
+ with open("api_dump.txt", "w") as f:
+     f.write(str(dir(mcp)))
deploy.py ADDED
@@ -0,0 +1,10 @@
+ from server import mcp
+
+ # FastMCP's sse_app() returns the Starlette ASGI app for the SSE transport.
+ # We assign it to 'app' so uvicorn can find it (e.g. `uvicorn deploy:app`).
+ app = mcp.sse_app()
+
+ if __name__ == "__main__":
+     import uvicorn
+     # HF Spaces expects the server on 0.0.0.0:7860
+     uvicorn.run(app, host="0.0.0.0", port=7860)
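The `api_dump.txt` listing above also shows a `streamable_http_app` attribute on the server object, so the same pattern should extend to the newer streamable HTTP transport. A hedged sketch, assuming `streamable_http_app()` mirrors `sse_app()` in returning an ASGI app:

```python
# Hypothetical alternative entrypoint using the streamable HTTP transport.
from server import mcp

app = mcp.streamable_http_app()  # assumed to return an ASGI app like sse_app()

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=7860)
```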
pyproject.toml ADDED
@@ -0,0 +1,23 @@
+ [project]
+ name = "hf-mcp-server"
+ version = "0.1.0"
+ description = "A Model Context Protocol server exposing Hugging Face Inference capabilities"
+ readme = "README.md"
+ requires-python = ">=3.10"
+ dependencies = [
+     "mcp>=1.0.0",
+     "huggingface_hub>=0.27.0",
+     "python-dotenv>=1.0.0",
+     "requests>=2.0.0",
+     "pillow>=10.0.0",  # Added for image handling
+     "uvicorn>=0.20.0",
+     "sse-starlette>=1.8.0",
+ ]
+
+ [build-system]
+ requires = ["hatchling"]
+ build-backend = "hatchling.build"
+
+ [tool.hatch.build.targets.wheel]
+ # The modules live at the repository root; there is no src/ layout.
+ only-include = ["server.py", "utils.py", "deploy.py"]
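With the modules listed explicitly, `pip install .` builds the wheel from the flat layout and installs `server` and `utils` as top-level modules. A quick hedged check after installing (importing `server` will print the HF_TOKEN warning if the token is unset):

```python
# Sanity check that an installed copy exposes the expected modules.
import importlib

for mod in ("server", "utils"):
    importlib.import_module(mod)
print("modules import cleanly")
```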
server.py ADDED
@@ -0,0 +1,221 @@
+ from mcp.server.fastmcp import FastMCP
+ import os
+ from typing import Optional, Any, Dict
+ from huggingface_hub import InferenceClient
+
+ # Initialize the MCP server
+ mcp = FastMCP("Hugging Face tools")
+
+ # Get token from environment
+ HF_TOKEN = os.environ.get("HF_TOKEN")
+ if not HF_TOKEN:
+     print("Warning: HF_TOKEN environment variable not set. Some authenticated requests may fail.")
+
+ client = InferenceClient(token=HF_TOKEN)
+
+ @mcp.tool()
+ def list_available_tasks() -> str:
+     """Lists all the AI tasks supported by this server."""
+     tasks = [
+         "Audio-Text-to-Text", "Image-Text-to-Text", "Image-Text-to-Image",
+         "Image-Text-to-Video", "Visual Question Answering", "Document Question Answering",
+         "Video-Text-to-Text", "Visual Document Retrieval", "Depth Estimation",
+         "Image Classification", "Object Detection", "Image Segmentation",
+         "Text-to-Image", "Image-to-Text", "Image-to-Image", "Image-to-Video",
+         "Unconditional Image Generation", "Video Classification", "Text-to-Video",
+         "Zero-Shot Image Classification", "Mask Generation", "Zero-Shot Object Detection",
+         "Text-to-3D", "Image-to-3D", "Image Feature Extraction", "Keypoint Detection",
+         "Video-to-Video", "Text Classification", "Token Classification",
+         "Table Question Answering", "Question Answering", "Zero-Shot Classification",
+         "Translation", "Summarization", "Feature Extraction", "Text Generation",
+         "Fill-Mask", "Sentence Similarity", "Text Ranking", "Text-to-Speech",
+         "Text-to-Audio", "Automatic Speech Recognition", "Audio-to-Audio",
+         "Audio Classification", "Voice Activity Detection", "Tabular Classification",
+         "Tabular Regression", "Time Series Forecasting", "Reinforcement Learning",
+         "Robotics", "Graph Machine Learning"
+     ]
+     return f"Supported Tasks: {', '.join(tasks)}"
+
+ @mcp.tool()
+ def visual_question_answering(image: str, question: str, model: Optional[str] = None) -> str:
+     """
+     Answer questions about an image.
+     Args:
+         image: URL or Base64 string of the image.
+         question: The question to answer.
+         model: Optional model ID (e.g., 'dandelin/vilt-b32-finetuned-vqa').
+     """
+     try:
+         # InferenceClient.visual_question_answering accepts a URL, file path,
+         # or raw bytes, so URLs can be passed through without manual decoding.
+         result = client.visual_question_answering(image, question, model=model)
+         # Result is typically a list of answer dicts, depending on the API
+         return str(result)
+     except Exception as e:
+         return f"Error: {e}"
+
+ @mcp.tool()
+ def text_to_image(prompt: str, model: Optional[str] = None) -> str:
+     """
+     Generate an image from text.
+     Returns: Base64 encoded image string.
+     """
+     try:
+         img = client.text_to_image(prompt, model=model)
+         # Depending on the client version this may be a PIL Image or raw bytes
+         import io
+         import utils
+         if not isinstance(img, utils.Image.Image):
+             img = utils.Image.open(io.BytesIO(img))
+         return utils.encode_image(img)
+     except Exception as e:
+         return f"Error: {e}"
+
+ @mcp.tool()
+ def image_classification(image: str, model: Optional[str] = None) -> str:
+     """
+     Classify an image.
+     Args:
+         image: URL or Base64 string.
+     """
+     try:
+         result = client.image_classification(image, model=model)
+         return str(result)
+     except Exception as e:
+         return f"Error: {e}"
+
+ @mcp.tool()
+ def object_detection(image: str, model: Optional[str] = None) -> str:
+     """
+     Detect objects in an image.
+     Args:
+         image: URL or Base64 string.
+     """
+     try:
+         result = client.object_detection(image, model=model)
+         return str(result)
+     except Exception as e:
+         return f"Error: {e}"
+
+ @mcp.tool()
+ def image_to_text(image: str, model: Optional[str] = None) -> str:
+     """
+     Generate a caption or text description for an image.
+     Args:
+         image: URL or Base64 string.
+     """
+     try:
+         result = client.image_to_text(image, model=model)
+         return str(result)
+     except Exception as e:
+         return f"Error: {e}"
+
+ @mcp.tool()
+ def text_generation(prompt: str, model: Optional[str] = None, max_new_tokens: int = 500) -> str:
+     """
+     Generate text based on a prompt.
+     Args:
+         prompt: Input text.
+         model: Model ID.
+         max_new_tokens: Maximum tokens to generate.
+     """
+     try:
+         return client.text_generation(prompt, model=model, max_new_tokens=max_new_tokens)
+     except Exception as e:
+         return f"Error: {e}"
+
+ @mcp.tool()
+ def summarization(text: str, model: Optional[str] = None) -> str:
+     """
+     Summarize a text.
+     """
+     try:
+         result = client.summarization(text, model=model)
+         # Result is typically a list containing {'summary_text': ...}
+         if isinstance(result, list) and len(result) > 0:
+             return result[0].get('summary_text', str(result))
+         return str(result)
+     except Exception as e:
+         return f"Error: {e}"
+
+ @mcp.tool()
+ def translation(text: str, model: Optional[str] = None) -> str:
+     """
+     Translate text. The model usually determines the source/target languages.
+     """
+     try:
+         # Some models expect src_lang/tgt_lang parameters, but the simple API
+         # just takes the text.
+         result = client.translation(text, model=model)
+         if isinstance(result, list) and len(result) > 0:
+             return result[0].get('translation_text', str(result))
+         return str(result)
+     except Exception as e:
+         return f"Error: {e}"
+
+ @mcp.tool()
+ def text_classification(text: str, model: Optional[str] = None) -> str:
+     """
+     Classify text (e.g. sentiment analysis).
+     """
+     try:
+         result = client.text_classification(text, model=model)
+         return str(result)
+     except Exception as e:
+         return f"Error: {e}"
+
+ @mcp.tool()
+ def automatic_speech_recognition(audio: str, model: Optional[str] = None) -> str:
+     """
+     Transcribe audio.
+     Args:
+         audio: URL or Base64 string of the audio file.
+     """
+     try:
+         # Pass URLs straight through; otherwise assume base64 and decode to bytes
+         import base64
+         if not (audio.startswith("http://") or audio.startswith("https://")):
+             audio_data = base64.b64decode(audio)
+             result = client.automatic_speech_recognition(audio_data, model=model)
+         else:
+             result = client.automatic_speech_recognition(audio, model=model)
+
+         if isinstance(result, dict):
+             return result.get('text', str(result))
+         return str(result)
+     except Exception as e:
+         return f"Error: {e}"
+
+ @mcp.tool()
+ def text_to_speech(text: str, model: Optional[str] = None) -> str:
+     """
+     Generate audio from text.
+     Returns: Base64 encoded audio.
+     """
+     try:
+         audio_bytes = client.text_to_speech(text, model=model)
+         import base64
+         return base64.b64encode(audio_bytes).decode('utf-8')
+     except Exception as e:
+         return f"Error: {e}"
+
+ @mcp.tool()
+ def generic_hf_inference(task: str, inputs: Dict[str, Any], model: Optional[str] = None) -> str:
+     """
+     Run any Hugging Face inference task that doesn't have a specific tool.
+     Args:
+         task: The task name (e.g., 'text-generation', 'translation').
+         inputs: Dictionary of inputs required for the task.
+         model: Model ID to use.
+     """
+     try:
+         # Fallback: raw access via client.post; the expected payload shape
+         # depends heavily on the task.
+         result = client.post(json=inputs, model=model, task=task)
+         return str(result)
+     except Exception as e:
+         return f"Error: {e}"
+
+ if __name__ == "__main__":
+     # Stdio transport for local use; HF Spaces uses deploy.py instead
+     mcp.run()
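Since `text_to_image` and `text_to_speech` return base64 strings, callers have to decode them before use. A minimal helper sketch (the function name and output path are illustrative, not part of the repo):

```python
# Hypothetical caller-side helper for the base64 payloads these tools return.
import base64

def save_b64_output(b64: str, path: str = "output.png") -> None:
    """Decode a base64 payload (e.g. from text_to_image) and write it to disk."""
    with open(path, "wb") as f:
        f.write(base64.b64decode(b64))
```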
utils.py ADDED
@@ -0,0 +1,44 @@
+ import base64
+ import io
+ from typing import Optional, Union
+ from huggingface_hub import InferenceClient
+ from PIL import Image
+
+ def encode_image(image: Image.Image) -> str:
+     """Encodes a PIL Image to a base64 string."""
+     buffered = io.BytesIO()
+     image.save(buffered, format="PNG")
+     return base64.b64encode(buffered.getvalue()).decode("utf-8")
+
+ def decode_image(image_data: Union[str, bytes]) -> Image.Image:
+     """Decodes a base64 string, URL, or bytes to a PIL Image."""
+     if isinstance(image_data, str):
+         # Check if it's a URL or base64
+         if image_data.startswith("http://") or image_data.startswith("https://"):
+             import requests
+             response = requests.get(image_data)
+             response.raise_for_status()
+             image_data = response.content
+         else:
+             # Assume base64
+             image_data = base64.b64decode(image_data)
+     return Image.open(io.BytesIO(image_data))
+
+ def handle_hf_error(func):
+     """Decorator to handle Hugging Face API errors gracefully."""
+     def wrapper(*args, **kwargs):
+         try:
+             return func(*args, **kwargs)
+         except Exception as e:
+             return f"Error executing task: {str(e)}"
+     return wrapper
+
+ @handle_hf_error
+ def run_text_generation(client: InferenceClient, prompt: str, model: Optional[str] = None, **kwargs) -> str:
+     return client.text_generation(prompt, model=model, **kwargs)
+
+ @handle_hf_error
+ def run_image_generation(client: InferenceClient, prompt: str, model: Optional[str] = None, **kwargs) -> Image.Image:
+     return client.text_to_image(prompt, model=model, **kwargs)
+
+ # Add more specific wrappers as needed to normalize outputs
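As a quick illustration of the helpers above, a round trip through `encode_image`/`decode_image` (a throwaway solid-color image, purely for demonstration):

```python
# Round-trip a small image through the base64 helpers in utils.py.
from PIL import Image
from utils import decode_image, encode_image

img = Image.new("RGB", (64, 64), color="red")
b64 = encode_image(img)
restored = decode_image(b64)
assert restored.size == (64, 64)
```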