metadata
title: MCP Server
sdk: docker
app_port: 7860
emoji: 🤖

Hugging Face MCP Server

A Model Context Protocol (MCP) server that exposes Hugging Face Inference tools for Multimodal, Computer Vision, NLP, and Audio tasks. This server allows LLMs to interact with the Hugging Face Inference API to perform complex tasks.

Features

  • Multimodal: Visual Question Answering, Text-to-Image, Image-to-Text.
  • Computer Vision: Image Classification, Object Detection.
  • NLP: Text Generation, Summarization, Translation, Text Classification.
  • Audio: Text-to-Speech, Automatic Speech Recognition.
  • Generic Support: Run any HF Inference task via generic_hf_inference.

Setup

Prerequisites

  • Python 3.10+
  • A Hugging Face account and access token (a read token is usually sufficient for inference; write access is only needed if the server posts data).

Installation

  1. Clone this repository.
  2. Install dependencies:
    pip install .
    
    Or manually:
    pip install mcp huggingface_hub python-dotenv returns requests pillow
    

Configuration

Create a .env file or export the variable:

export HF_TOKEN="hf_..."
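Either way, the server can resolve the token at startup. A minimal sketch of that lookup (the helper name get_hf_token is an assumption for illustration, not part of this repository):

```python
import os

try:
    # python-dotenv is among the listed dependencies; fall back gracefully if absent
    from dotenv import load_dotenv
    load_dotenv()  # reads HF_TOKEN from a local .env file, if one exists
except ImportError:
    pass

def get_hf_token() -> str:
    # Hypothetical helper: resolve the token from the environment or fail loudly
    token = os.environ.get("HF_TOKEN", "")
    if not token:
        raise RuntimeError("HF_TOKEN is not set; export it or add it to .env")
    return token
```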

Usage

Local Running (Stdio)

Run the server with the mcp CLI:

mcp run server.py

Or directly with Python:

python server.py

Hugging Face Spaces Deployment (Docker)

  1. Create a new Space on Hugging Face.
  2. Select Docker as the SDK.
  3. Upload the files in this repository (include deploy.py and Dockerfile).
  4. Add your HF_TOKEN in the Space's "Settings" -> "Variables and secrets" section.
  5. The server starts on port 7860 using SSE (Server-Sent Events). The access URL is your Space's URL (e.g., https://huggingface.co/spaces/user/space-name). Note: the Dockerfile runs deploy.py, which binds the server to 0.0.0.0:7860.
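The host/port requirement in step 5 can be sketched as a small helper (hypothetical; deploy.py in this repository may resolve these values differently):

```python
import os

def spaces_bind_address(env: dict[str, str]) -> tuple[str, int]:
    # Spaces routes traffic to the container on app_port (7860 in the metadata),
    # and the server must bind 0.0.0.0 so the Spaces proxy can reach it.
    host = env.get("HOST", "0.0.0.0")
    port = int(env.get("PORT", "7860"))
    return host, port

host, port = spaces_bind_address(dict(os.environ))
```

The resolved pair is what the SSE transport would listen on inside the container.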

Tools List

  • visual_question_answering
  • text_to_image
  • image_classification
  • object_detection
  • image_to_text (Captioning)
  • text_generation
  • summarization
  • translation
  • text_classification
  • automatic_speech_recognition
  • text_to_speech
  • generic_hf_inference
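As a rough illustration of what generic_hf_inference might do under the hood, the request to the Hugging Face Inference API can be assembled like this (a sketch under assumptions; the actual tool in server.py may differ):

```python
HF_API = "https://api-inference.huggingface.co/models"

def build_inference_request(model: str, payload: dict, token: str) -> dict:
    # Assemble the pieces of a POST to the HF Inference API for any model.
    # The keys mirror the keyword arguments accepted by requests.post().
    return {
        "url": f"{HF_API}/{model}",
        "headers": {"Authorization": f"Bearer {token}"},
        "json": payload,
        "timeout": 60,
    }

# Usage sketch (requests is among the listed dependencies):
# requests.post(**build_inference_request(
#     "facebook/bart-large-cnn", {"inputs": "long article ..."}, token))
```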

Federated Projects

This server is designed to be stateless and can be deployed as a node in a larger federated system. Ensure network connectivity and proper token management.