---
title: MCP Server
sdk: docker
app_port: 7860
emoji: 🤖
---
# Hugging Face MCP Server

A Model Context Protocol (MCP) server that exposes Hugging Face Inference tools for Multimodal, Computer Vision, NLP, and Audio tasks. This server allows LLMs to interact with the Hugging Face Inference API to perform complex tasks.

## Features

- **Multimodal**: Visual Question Answering, Text-to-Image, Image-to-Text.
- **Computer Vision**: Image Classification, Object Detection.
- **NLP**: Text Generation, Summarization, Translation, Text Classification.
- **Audio**: Text-to-Speech, Automatic Speech Recognition.
- **Generic Support**: Run any HF Inference task via `generic_hf_inference`.
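
For context on the generic path: `generic_hf_inference` presumably forwards a model id and inputs to the raw HF Inference API. A sketch of the request such a tool would build (the helper name and payload shape are assumptions, not the server's actual code; the real tool may call through `huggingface_hub` instead):

```python
def build_inference_request(model_id, inputs, parameters=None):
    """Build the URL and JSON payload for a raw HF Inference API call.
    (Hypothetical helper illustrating the generic_hf_inference path.)"""
    url = f"https://api-inference.huggingface.co/models/{model_id}"
    payload = {"inputs": inputs}
    if parameters:
        payload["parameters"] = parameters
    return url, payload

# Example: a summarization request with a length cap.
url, payload = build_inference_request(
    "facebook/bart-large-cnn",
    "Long article text ...",
    parameters={"max_length": 60},
)
```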

## Setup

### Prerequisites

- Python 3.10+
- A Hugging Face account and access token (a read token is usually sufficient for inference; a write-capable token is only needed if the server posts data).

### Installation

1.  Clone this repository.
2.  Install dependencies:
    ```bash
    pip install .
    ```
    Or manually:
    ```bash
    pip install mcp huggingface_hub python-dotenv returns requests pillow
    ```

### Configuration

Create a `.env` file or export the variable:

```bash
export HF_TOKEN="hf_..."
```
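
At startup the server presumably looks up `HF_TOKEN` and fails fast if it is missing. A minimal sketch of that lookup (`get_hf_token` is a hypothetical helper, not the server's actual code; the real server may use `python-dotenv` to load `.env` first):

```python
import os

def get_hf_token(env=os.environ):
    """Return the Hugging Face token or fail fast with a clear message.
    (Hypothetical helper; illustrates the HF_TOKEN requirement above.)"""
    token = env.get("HF_TOKEN")
    if not token:
        raise RuntimeError("HF_TOKEN is not set; export it or put it in .env")
    return token
```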

## Usage

### Running Locally (stdio)

Run the server using `mcp`:

```bash
mcp run server.py
```

Or directly with Python:

```bash
python server.py
```
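
To use the server over stdio from an MCP client with a JSON config (e.g., Claude Desktop), an entry along these lines should work (the `mcpServers` key follows the common MCP client convention; adjust the command path to where you cloned the repository):

```json
{
  "mcpServers": {
    "huggingface": {
      "command": "python",
      "args": ["server.py"],
      "env": { "HF_TOKEN": "hf_..." }
    }
  }
}
```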

### Hugging Face Spaces Deployment (Docker)

1.  Create a new Space on Hugging Face.
2.  Select **Docker** as the SDK.
3.  Upload the files in this repository (include `deploy.py` and `Dockerfile`).
4.  Add your `HF_TOKEN` in the Space's "Settings" -> "Variables and secrets" section.
5.  The server will start on port 7860 and serve MCP over SSE. For programmatic access, use the Space's direct app URL (`https://<user>-<space-name>.hf.space`) rather than the Space page (`https://huggingface.co/spaces/user/space-name`).
    *Note: The `Dockerfile` runs `deploy.py` so that the server binds to `0.0.0.0:7860`, as Spaces requires.*
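
Hugging Face Spaces expose the running app at a direct `*.hf.space` host, and the `mcp` SDK's SSE transport serves its endpoint at `/sse` by default. A small helper to derive the endpoint a client should connect to (the helper name is ours, and the sketch assumes simple lowercase Space names):

```python
def space_sse_url(user, space):
    """Derive the SSE endpoint for a deployed Space.
    Spaces serve the app at https://{user}-{space}.hf.space; underscores
    become hyphens in the hostname. (Hypothetical helper; assumes the
    SDK-default /sse path.)"""
    host = f"{user}-{space}".replace("_", "-").lower()
    return f"https://{host}.hf.space/sse"
```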

## Tools List

- `visual_question_answering`
- `text_to_image`
- `image_classification`
- `object_detection`
- `image_to_text` (Captioning)
- `text_generation`
- `summarization`
- `translation`
- `text_classification`
- `automatic_speech_recognition`
- `text_to_speech`
- `generic_hf_inference`
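
At the protocol level, each tool above is invoked with a `tools/call` JSON-RPC request (clients normally send this through an MCP SDK rather than by hand). A sketch of the message shape for `text_to_image` (the argument names are assumptions; discover the real input schema via `tools/list`):

```python
import json

def tool_call_request(request_id, name, arguments):
    """Build an MCP tools/call JSON-RPC message."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Example: the argument name "prompt" is assumed, not taken from this server.
msg = tool_call_request(1, "text_to_image", {"prompt": "a red bicycle"})
print(json.dumps(msg))
```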

## Federated Projects

This server is designed to be stateless and can be deployed as a node in a larger federated system. Ensure network connectivity and proper token management.