---
title: MCP Server
sdk: docker
app_port: 7860
emoji: 🤖
---
# Hugging Face MCP Server

A Model Context Protocol (MCP) server that exposes Hugging Face Inference tools for Multimodal, Computer Vision, NLP, and Audio tasks. This server allows LLMs to interact with the Hugging Face Inference API to perform complex tasks.
## Features
- **Multimodal**: Visual Question Answering, Text-to-Image, Image-to-Text.
- **Computer Vision**: Image Classification, Object Detection.
- **NLP**: Text Generation, Summarization, Translation, Text Classification.
- **Audio**: Text-to-Speech, Automatic Speech Recognition.
- **Generic Support**: Run any HF Inference task via `generic_hf_inference`.
## Setup

### Prerequisites

- Python 3.10+
- A Hugging Face account and an access token (read access is usually sufficient for inference; a write-capable token is only needed if you also post data).
### Installation

- Clone this repository.
- Install dependencies:

```bash
pip install .
```

Or manually:

```bash
pip install mcp huggingface_hub python-dotenv returns requests pillow
```
## Configuration

Create a `.env` file or export the variable:

```bash
export HF_TOKEN="hf_..."
```
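As a sketch of how the server might read this variable at startup (a hypothetical helper; the actual code in `server.py` may differ):

```python
import os

def get_hf_token() -> str:
    """Read the Hugging Face token from the environment (hypothetical helper)."""
    token = os.environ.get("HF_TOKEN", "")
    if not token.startswith("hf_"):
        raise RuntimeError('HF_TOKEN is missing or malformed; run export HF_TOKEN="hf_..."')
    return token

# Demo fallback so the sketch runs standalone; remove in real use.
os.environ.setdefault("HF_TOKEN", "hf_demo_token")
print(get_hf_token())
```

Failing fast on a missing token at startup gives a clearer error than letting the first inference call fail with an authentication error.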
## Usage

### Local Running (Stdio)

Run the server using the `mcp` CLI:

```bash
mcp run server.py
```

Or with plain Python:

```bash
python server.py
```
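Under the hood, an MCP client talks to a stdio server with JSON-RPC 2.0 messages. A minimal sketch of the envelope a client would send to invoke one of this server's tools (the argument key `text` is an assumption — this README does not document tool signatures):

```python
import json

# Sketch of the JSON-RPC 2.0 request an MCP client sends over stdio to
# call a tool. The "text" argument key is illustrative, not documented here.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "summarization",
        "arguments": {"text": "A long article to summarize ..."},
    },
}
wire = json.dumps(request)
print(wire)
```

You normally never build these messages by hand; an MCP client library (or the LLM host application) does it for you.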
### Hugging Face Spaces Deployment (Docker)

- Create a new Space on Hugging Face.
- Select Docker as the SDK.
- Upload the files in this repository (including `deploy.py` and `Dockerfile`).
- Add your `HF_TOKEN` in the Space's "Settings" -> "Variables and secrets" section.
- The server will start on port 7860 using SSE. The access URL will be your Space's URL (e.g., `https://huggingface.co/spaces/user/space-name`).

Note: The `Dockerfile` uses `deploy.py` to ensure the server listens on `0.0.0.0:7860`.
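The repository's actual `Dockerfile` is authoritative; as a rough sketch of the shape such a file typically takes for this setup (base image and exact commands are assumptions):

```dockerfile
FROM python:3.10-slim
WORKDIR /app
COPY . .
RUN pip install .
# Spaces routes traffic to app_port (7860); deploy.py binds 0.0.0.0:7860
EXPOSE 7860
CMD ["python", "deploy.py"]
```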
## Tools List

- `visual_question_answering`
- `text_to_image`
- `image_classification`
- `object_detection`
- `image_to_text` (Captioning)
- `text_generation`
- `summarization`
- `translation`
- `text_classification`
- `automatic_speech_recognition`
- `text_to_speech`
- `generic_hf_inference`
## Federated Projects

This server is designed to be stateless, so it can be deployed as a node in a larger federated system. Ensure each node has network access to the Hugging Face Inference API and its own properly scoped `HF_TOKEN`.
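For example, a coordinating client might track each node with a small config record. This is a hypothetical sketch; the field names and the `/sse` path convention are assumptions, not part of this repository:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NodeConfig:
    """Hypothetical per-node record for a federated deployment."""
    base_url: str                     # the node's Space URL
    token_env_var: str = "HF_TOKEN"   # where this node's token lives

    @property
    def sse_endpoint(self) -> str:
        # MCP servers exposed over SSE conventionally serve at /sse
        return self.base_url.rstrip("/") + "/sse"

node = NodeConfig("https://user-space-name.hf.space")
print(node.sse_endpoint)
```

Keeping per-node tokens in environment variables rather than in the config record itself avoids accidentally logging or serializing credentials.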