ohmp committed
Commit 7da81aa · verified · 1 Parent(s): 30561c6

Upload folder using huggingface_hub

Files changed (8)
  1. .gitignore +3 -0
  2. .gradio/certificate.pem +31 -0
  3. README.md +60 -6
  4. agent.py +59 -0
  5. app.py +16 -0
  6. dummy_treasure.txt +5 -0
  7. mcp_server.py +54 -0
  8. requirements.txt +5 -0
.gitignore ADDED
@@ -0,0 +1,3 @@
+ .env
+ __pycache__
+ .venv
.gradio/certificate.pem ADDED
@@ -0,0 +1,31 @@
+ -----BEGIN CERTIFICATE-----
+ MIIFazCCA1OgAwIBAgIRAIIQz7DSQONZRGPgu2OCiwAwDQYJKoZIhvcNAQELBQAw
+ TzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2Vh
+ cmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwHhcNMTUwNjA0MTEwNDM4
+ WhcNMzUwNjA0MTEwNDM4WjBPMQswCQYDVQQGEwJVUzEpMCcGA1UEChMgSW50ZXJu
+ ZXQgU2VjdXJpdHkgUmVzZWFyY2ggR3JvdXAxFTATBgNVBAMTDElTUkcgUm9vdCBY
+ MTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAK3oJHP0FDfzm54rVygc
+ h77ct984kIxuPOZXoHj3dcKi/vVqbvYATyjb3miGbESTtrFj/RQSa78f0uoxmyF+
+ 0TM8ukj13Xnfs7j/EvEhmkvBioZxaUpmZmyPfjxwv60pIgbz5MDmgK7iS4+3mX6U
+ A5/TR5d8mUgjU+g4rk8Kb4Mu0UlXjIB0ttov0DiNewNwIRt18jA8+o+u3dpjq+sW
+ T8KOEUt+zwvo/7V3LvSye0rgTBIlDHCNAymg4VMk7BPZ7hm/ELNKjD+Jo2FR3qyH
+ B5T0Y3HsLuJvW5iB4YlcNHlsdu87kGJ55tukmi8mxdAQ4Q7e2RCOFvu396j3x+UC
+ B5iPNgiV5+I3lg02dZ77DnKxHZu8A/lJBdiB3QW0KtZB6awBdpUKD9jf1b0SHzUv
+ KBds0pjBqAlkd25HN7rOrFleaJ1/ctaJxQZBKT5ZPt0m9STJEadao0xAH0ahmbWn
+ OlFuhjuefXKnEgV4We0+UXgVCwOPjdAvBbI+e0ocS3MFEvzG6uBQE3xDk3SzynTn
+ jh8BCNAw1FtxNrQHusEwMFxIt4I7mKZ9YIqioymCzLq9gwQbooMDQaHWBfEbwrbw
+ qHyGO0aoSCqI3Haadr8faqU9GY/rOPNk3sgrDQoo//fb4hVC1CLQJ13hef4Y53CI
+ rU7m2Ys6xt0nUW7/vGT1M0NPAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNV
+ HRMBAf8EBTADAQH/MB0GA1UdDgQWBBR5tFnme7bl5AFzgAiIyBpY9umbbjANBgkq
+ hkiG9w0BAQsFAAOCAgEAVR9YqbyyqFDQDLHYGmkgJykIrGF1XIpu+ILlaS/V9lZL
+ ubhzEFnTIZd+50xx+7LSYK05qAvqFyFWhfFQDlnrzuBZ6brJFe+GnY+EgPbk6ZGQ
+ 3BebYhtF8GaV0nxvwuo77x/Py9auJ/GpsMiu/X1+mvoiBOv/2X/qkSsisRcOj/KK
+ NFtY2PwByVS5uCbMiogziUwthDyC3+6WVwW6LLv3xLfHTjuCvjHIInNzktHCgKQ5
+ ORAzI4JMPJ+GslWYHb4phowim57iaztXOoJwTdwJx4nLCgdNbOhdjsnvzqvHu7Ur
+ TkXWStAmzOVyyghqpZXjFaH3pO3JLF+l+/+sKAIuvtd7u+Nxe5AW0wdeRlN8NwdC
+ jNPElpzVmbUq4JUagEiuTDkHzsxHpFKVK7q4+63SM1N95R1NbdWhscdCb+ZAJzVc
+ oyi3B43njTOQ5yOf+1CceWxG1bQVs5ZufpsMljq4Ui0/1lvh+wjChP4kqKOJ2qxq
+ 4RgqsahDYVvTH9w7jXbyLeiNdd8XM2w9U/t7y0Ff/9yi0GE44Za4rF2LN9d11TPA
+ mRGunUHBcnWEvgJBQl9nJEiU0Zsnvgc/ubhPgXRR4Xq37Z0j4r7g1SgEEzwxA57d
+ emyPxgcYxn/eR44/KJ4EBs+lVDR3veyJm+kXQ99b21/+jh5Xos1AnX5iItreGCc=
+ -----END CERTIFICATE-----
README.md CHANGED
@@ -1,12 +1,66 @@
  ---
  title: SecrectDocumentLocker
- emoji: 💻
- colorFrom: red
- colorTo: red
+ app_file: app.py
  sdk: gradio
  sdk_version: 6.3.0
- app_file: app.py
- pinned: false
  ---
+ # SmolAgents with FastMCP Demo
+
+ This project demonstrates a [SmolAgents](https://github.com/huggingface/smolagents) agent interacting with a local [MCP](https://modelcontextprotocol.io/) server created using [FastMCP](https://github.com/jlowin/fastmcp).
+
+ ## Features
+
+ - **MCP Server** (`mcp_server.py`):
+   - Implements a dummy authentication tool.
+   - Protects a "treasure" resource (`treasure://secret`) that is only accessible after authentication.
+   - Exposes a `read_treasure` tool.
+ - **Agent** (`agent.py`):
+   - Uses `smolagents.MCPClient` to connect to the local MCP server.
+   - Dynamically loads tools from the MCP server.
+   - Uses the `meta-llama/Llama-4-Scout-17B-16E-Instruct` model via the Hugging Face Inference API.
+ - **UI** (`app.py`):
+   - Provides a Gradio chat interface for the agent.
+
+ ## Setup
+
+ 1. **Install dependencies**:
+    ```bash
+    pip install -r requirements.txt
+    ```
+
+ 2. **Environment variables**:
+    You need a Hugging Face token to use the Inference API.
+    Create a `.env` file in the root directory:
+    ```bash
+    HF_TOKEN=your_hf_token_here
+    ```
+
+ ## Running Locally
+
+ 1. Run the Gradio app:
+    ```bash
+    python app.py
+    ```
+ 2. Open the link printed in the terminal (usually http://127.0.0.1:7860).
+ 3. Chat with the agent! Try:
+    > "Read the treasure file."
+    > (The agent should try, fail, then realize it needs to authenticate. Hint: the password is "open sesame".)
+
+ ## Deploying to Hugging Face Spaces
+
+ 1. Create a new Space on Hugging Face (SDK: **Gradio**).
+ 2. Upload the files:
+    - `app.py`
+    - `agent.py`
+    - `mcp_server.py`
+    - `dummy_treasure.txt`
+    - `requirements.txt`
+ 3. Set `HF_TOKEN` under the Space's **Settings > Variables and secrets**. Spaces can sometimes inherit the owner's token, but setting it explicitly is safer for gated model access.
+ 4. The Space should build and run automatically!
+
+ ## Files
 
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ - `mcp_server.py`: The FastMCP server implementation.
+ - `agent.py`: Agent logic including the MCP connection.
+ - `app.py`: Entry point for Gradio.
+ - `dummy_treasure.txt`: The protected content.
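The `.env` file in Setup is the only configuration the project needs. What `load_dotenv()` in `agent.py` does with that file is simple enough to sketch in stdlib Python. This is a stand-in for `python-dotenv`, not its actual implementation; the real loader also exports values into `os.environ`, while this sketch collects them into a dict to avoid side effects:

```python
import os
import tempfile

# A throwaway .env file with the shape the README describes.
env_text = "HF_TOKEN=your_hf_token_here\n"
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write(env_text)
    env_path = f.name

# Minimal stand-in for python-dotenv's load_dotenv(): parse KEY=VALUE
# lines, skipping blanks and comments.
env_vars = {}
with open(env_path) as f:
    for line in f:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env_vars[key.strip()] = value.strip()

token = env_vars.get("HF_TOKEN")
os.unlink(env_path)  # clean up the throwaway file
```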
agent.py ADDED
@@ -0,0 +1,59 @@
+ from smolagents import CodeAgent, InferenceClientModel, MCPClient
+ from mcp import StdioServerParameters
+ import os
+ from dotenv import load_dotenv
+
+ load_dotenv()
+
+ def create_agent_with_mcp():
+     # Path to the MCP server script
+     server_path = os.path.join(os.path.dirname(__file__), "mcp_server.py")
+
+     # Stdio parameters for launching the server as a subprocess
+     params = StdioServerParameters(
+         command="python",
+         args=[server_path],
+         env=os.environ.copy()
+     )
+
+     # Initialize the client.
+     # We create the client but need to manage its lifecycle.
+     # For a simple demo script or Gradio app, we can keep the context-manager
+     # logic in the caller OR just instantiate it and manually disconnect later.
+     client = MCPClient(params, structured_output=True)
+
+     # The tools are now available via client.get_tools().
+     # Note: connect() is called inside MCPClient.__init__.
+     mcp_tools = client.get_tools()
+     print(f"Debug: mcp_tools type: {type(mcp_tools)}")
+     if mcp_tools:
+         print(f"Debug: first tool type: {type(mcp_tools[0])}")
+     print(f"Debug: mcp_tools: {mcp_tools}")
+
+     # Use InferenceClientModel with Llama 4 Scout
+     model = InferenceClientModel(
+         model_id="meta-llama/Llama-4-Scout-17B-16E-Instruct",
+         api_key=os.getenv("HF_TOKEN"),
+     )
+
+     agent = CodeAgent(
+         tools=mcp_tools,
+         model=model,
+     )
+     return agent, client
+
+ if __name__ == "__main__":
+     try:
+         agent, client = create_agent_with_mcp()
+         tools_list = agent.tools.values() if isinstance(agent.tools, dict) else agent.tools
+         print("Agent created with tools:", [t.name for t in tools_list])
+
+         # Test run
+         print("Running agent test...")
+         response = agent.run("Please authenticate with 'open sesame' and then read the treasure.")
+         print("Agent Response:", response)
+     except Exception as e:
+         print(f"An error occurred: {e}")
+     finally:
+         if 'client' in locals():
+             client.disconnect()
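The lifecycle comment in `create_agent_with_mcp` leaves disconnection to the caller. The alternative it mentions, wrapping the client in a context manager so `disconnect()` always runs, can be sketched with a stand-in `FakeMCPClient` (a hypothetical class, used here so the pattern can be shown without spawning `mcp_server.py`):

```python
from contextlib import contextmanager

# Stand-in for smolagents.MCPClient. Assumption: like the real client,
# it "connects" in __init__ and exposes get_tools() and disconnect().
class FakeMCPClient:
    def __init__(self):
        self.connected = True

    def get_tools(self):
        return ["authenticate", "read_treasure"]

    def disconnect(self):
        self.connected = False

@contextmanager
def mcp_session():
    client = FakeMCPClient()
    try:
        yield client
    finally:
        client.disconnect()  # always runs, even if the agent raises

with mcp_session() as client:
    tools = client.get_tools()

# After the with-block, the connection is closed regardless of errors.
```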
app.py ADDED
@@ -0,0 +1,16 @@
+ from smolagents import GradioUI
+ from agent import create_agent_with_mcp
+ import os
+
+ # Initialize the agent and client.
+ # The client connection stays alive as long as this process is running.
+ agent, client = create_agent_with_mcp()
+
+ def main():
+     # Create the Gradio UI
+     ui = GradioUI(agent)
+     # Launch locally; Gradio prints the URL in the terminal
+     ui.launch()
+
+ if __name__ == "__main__":
+     main()
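`app.py` never disconnects the client explicitly; the connection simply dies with the process. If a cleaner shutdown were wanted, one option (not part of this commit) is registering the disconnect with `atexit`, sketched here with a dummy client standing in for the real `MCPClient`:

```python
import atexit

# Dummy stand-in for the client returned by create_agent_with_mcp().
# Assumption (consistent with agent.py's finally-block): the real
# MCPClient exposes a disconnect() method.
class DummyClient:
    def __init__(self):
        self.connected = True

    def disconnect(self):
        self.connected = False

client = DummyClient()

# Run disconnect() when the interpreter exits normally, e.g. after the
# Gradio server is stopped with Ctrl+C.
atexit.register(client.disconnect)
```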
dummy_treasure.txt ADDED
@@ -0,0 +1,5 @@
+ CONFIDENTIAL TREASURE DOCUMENT
+ ------------------------------
+ This is the hidden wisdom of the MCP server.
+ You have successfully authenticated and retrieved this file.
+ The secret code is: 42-OCTOPUS-OMEGA
mcp_server.py ADDED
@@ -0,0 +1,54 @@
+ from fastmcp import FastMCP
+ import os
+
+ # Create an MCP server
+ mcp = FastMCP("TreasureKeeper")
+
+ # Global state for authentication (mock).
+ # In a real scenario, this would be session- or token-based.
+ auth_state = {
+     "authenticated": False
+ }
+
+ TREASURE_PATH = os.path.join(os.path.dirname(__file__), "dummy_treasure.txt")
+
+ @mcp.tool()
+ def authenticate(password: str) -> str:
+     """
+     Authenticate to access the restricted treasure.
+
+     Args:
+         password: The password to unlock the treasure. Hint: It's 'open sesame'.
+     """
+     if password.lower() == "open sesame":
+         auth_state["authenticated"] = True
+         return "Authentication successful! You can now access the treasure."
+     else:
+         return "Authentication failed. Incorrect password."
+
+ def _read_treasure_logic() -> str:
+     if not auth_state["authenticated"]:
+         return "ACCESS DENIED: You must authenticate first using the 'authenticate' tool."
+
+     try:
+         with open(TREASURE_PATH, "r") as f:
+             return f.read()
+     except Exception as e:
+         return f"Error reading treasure: {str(e)}"
+
+ @mcp.resource("treasure://secret")
+ def get_treasure() -> str:
+     """
+     Get the secret treasure content. Requires authentication first.
+     """
+     return _read_treasure_logic()
+
+ @mcp.tool()
+ def read_treasure() -> str:
+     """
+     Read the secret treasure content. Requires authentication first.
+     """
+     return _read_treasure_logic()
+
+ if __name__ == "__main__":
+     mcp.run()
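The gate enforced by `_read_treasure_logic` can be exercised without FastMCP at all. Below is a plain-Python reproduction of the server's state machine, with the same state dict and return strings but no `@mcp.tool()` decorators, and a literal string standing in for the contents of `dummy_treasure.txt`:

```python
# Same mock auth state as mcp_server.py
auth_state = {"authenticated": False}

def authenticate(password: str) -> str:
    # Case-insensitive match, as in the server
    if password.lower() == "open sesame":
        auth_state["authenticated"] = True
        return "Authentication successful! You can now access the treasure."
    return "Authentication failed. Incorrect password."

def read_treasure() -> str:
    if not auth_state["authenticated"]:
        return "ACCESS DENIED: You must authenticate first using the 'authenticate' tool."
    # Literal stand-in for reading dummy_treasure.txt
    return "CONFIDENTIAL TREASURE DOCUMENT"

denied = read_treasure()     # before authenticating: access denied
authenticate("open sesame")  # flips the shared auth_state flag
granted = read_treasure()    # after authenticating: content is returned
```

This is the sequence the agent is expected to discover on its own: the first `read_treasure` call fails, prompting it to call `authenticate` and retry.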
requirements.txt ADDED
@@ -0,0 +1,5 @@
+ smolagents[mcp]
+ fastmcp
+ gradio
+ huggingface_hub
+ python-dotenv