burtenshaw committed
Commit 7b4813b · unverified · 2 Parent(s): 0f7763976b6c77

Merge pull request #2 from huggingface/first-release-unit-1-2
units/en/_toctree.yml CHANGED
@@ -15,40 +15,30 @@
     title: The Communication Protocol
   - local: unit1/capabilities
     title: Understanding MCP Capabilities
+  - local: unit1/sdk
+    title: MCP SDK
+  - local: unit1/mcp-clients
+    title: MCP Clients
   - local: unit1/gradio-mcp
     title: Gradio MCP Integration
 
-- title: "2. Use Case: Building with MCP"
+- title: "2. Use Case: End-to-End MCP Application"
   sections:
   - local: unit2/introduction
-    title: Introduction
-  - local: unit2/environment-setup
-    title: Setting Up Your Development Environment & SDKs
-  - local: unit2/building-server
-    title: Building Your First MCP Server
-  - local: unit2/server-capabilities
-    title: Implementing Server Capabilities
-  - local: unit2/developing-clients
-    title: Developing MCP Clients
-  - local: unit2/configuration
-    title: Configuration, Authentication, and Debugging
-  - local: unit2/hub-mcp-servers
-    title: MCP Servers on Hugging Face Hub
+    title: Introduction to Building an MCP Application
+  - local: unit2/gradio-server
+    title: Building the Gradio MCP Server
+  - local: unit2/clients
+    title: Using MCP Clients with your application
+  - local: unit2/gradio-client
+    title: Building an MCP Client with Gradio
+  - local: unit2/tiny-agents
+    title: Building a Tiny Agent with TypeScript
 
-- title: "3. Use Case: Deploying with MCP"
+- title: "3. Use Case: Advanced MCP Development"
   sections:
   - local: unit3/introduction
     title: Introduction
-  - local: unit3/advanced-features
-    title: Exploring Advanced MCP Features
-  - local: unit3/security
-    title: Security Deep Dive - Threats and Mitigation Strategies
-  - local: unit3/limitations
-    title: Limitations, Challenges, and Comparisons
-  - local: unit3/huggingface-ecosystem
-    title: Hugging Face's Tiny Agents and MCP
-  - local: unit3/final-project
-    title: Final Project - Building a Complete MCP Application
 
 - title: "Bonus Units"
   sections:
units/en/unit0/introduction.mdx ADDED
@@ -0,0 +1,139 @@
# Welcome to the 🤗 Model Context Protocol (MCP) Course

![MCP Course thumbnail](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit0/1.png)

Welcome to the most exciting topic in AI today: the **Model Context Protocol (MCP)**!

This free course will take you on a journey, **from beginner to informed**, in understanding, using, and building applications with MCP.

This first unit will help you onboard:

* Discover the **course's syllabus**.
* **Learn about the certification process and the schedule**.
* Get to know the team behind the course.
* Create your **account**.
* **Sign up for our Discord server**, and meet your classmates and us.

Let's get started!

## What to expect from this course?

In this course, you will:

* 📖 Study the Model Context Protocol in **theory, design, and practice.**
* 🧑‍💻 Learn to **use established MCP SDKs and frameworks**.
* 💾 **Share your projects** and explore applications created by the community.
* 🏆 Participate in challenges where you will **evaluate your MCP implementations against other students'.**
* 🎓 **Earn a certificate of completion** by completing assignments.

And more!

At the end of this course, you'll understand **how MCP works and how to build your own AI applications that leverage external data and tools using the latest MCP standards**.

Don't forget to [**sign up to the course!**](https://huggingface.co/mcp-course)

## What does the course look like?

The course is composed of:

* _Foundational Units_: where you learn MCP **concepts in theory**.
* _Hands-on_: where you'll learn **to use established MCP SDKs** to build your applications. These hands-on sections will have pre-configured environments.
* _Use case assignments_: where you'll apply the concepts you've learned to solve a real-world problem of your choice.
* _Collaborations_: We're collaborating with Hugging Face's partners to give you the latest MCP implementations and tools.

This **course is a living project, evolving with your feedback and contributions!** Feel free to open issues and PRs on GitHub, and engage in discussions on our Discord server.

After you have gone through the course, you can also send your feedback 👉 using this form [LINK TO FEEDBACK FORM]

## What's the syllabus?

Here is the **general syllabus for the course**. A more detailed list of topics will be released with each unit.

| Chapter | Topic | Description |
| ------- | ------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------ |
| 0 | Onboarding | Set you up with the tools and platforms that you will use. |
| 1 | MCP Fundamentals, Architecture and Core Concepts | Explain core concepts, architecture, and components of the Model Context Protocol. Show a simple use case using MCP. |
| 2 | End-to-end Use case: MCP in Action | Build a simple end-to-end MCP application that you can share with the community. |
| 3 | Deployed Use case: MCP in Action | Build a deployed MCP application using the Hugging Face ecosystem and partners' services. |
| 4 | Bonus Units | Bonus units to help you get more out of the course, working with partners' libraries and services. |

## What are the prerequisites?

To be able to follow this course, you should have:

* A basic understanding of AI and LLM concepts
* Familiarity with software development principles and API concepts
* Experience with at least one programming language (Python or TypeScript examples will be shown)

If you don't have any of these, don't worry! Here are some resources that can help you:

* The [LLM Course](https://huggingface.co/learn/llm-course/en/chapter1/10) will guide you through the basics of using and building with LLMs.
* The [Agents Course](https://huggingface.co/learn/agents-course/en/chapter1/10) will guide you through building AI agents with LLMs.

<Tip>

The above courses are not prerequisites in themselves, so if you understand the concepts of LLMs and agents, you can start the course now!

</Tip>

## What tools do I need?

You only need 2 things:

* _A computer_ with an internet connection.
* An _account_: to access the course resources and create projects. If you don't have an account yet, you can create one [here](https://huggingface.co/join) (it's free).

## The Certification Process

You can choose to follow this course _in audit mode_, or do the activities and _get one of the two certificates we'll issue_. If you audit the course, you can participate in all the challenges and do assignments if you want, and **you don't need to notify us**.

The certification process is **completely free**:

* _To get a certification for fundamentals_: you need to complete Unit 1 of the course. This is intended for students who want to get up to date with the latest trends in MCP, without the need to build a full application.
* _To get a certificate of completion_: you need to complete the use case units (2 and 3). This is intended for students who want to build a full application and share it with the community.

## What is the recommended pace?

Each chapter in this course is designed **to be completed in 1 week, with approximately 3-4 hours of work per week**.

To help you stay on track before the deadline, we provide a recommended pace:

![Recommended Pace](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit0/2.png)

## How to get the most out of the course?

To get the most out of the course, we have some advice:

1. **Join study groups in Discord**: studying in groups is always easier. To do that, you need to join our Discord server and verify your account.
2. **Do the quizzes and assignments**: the best way to learn is through hands-on practice and self-assessment.
3. **Define a schedule to stay in sync**: you can use our recommended pace schedule above or create your own.

![Course advice](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit0/3.png)

## Who are we?

About the authors:

### Ben Burtenshaw

Ben is a Machine Learning Engineer at Hugging Face who focuses on building LLM applications with post-training and agentic approaches.

<!-- ## Acknowledgments -->

<!-- We would like to extend our gratitude to the following individuals and partners for their invaluable contributions and support: -->

<!-- TODO: @burtenshaw add contributors and partners -->

## I found a bug, or I want to improve the course

Contributions are **welcome** 🤗

* If you _found a bug 🐛 in a notebook_, please open an issue and **describe the problem**.
* If you _want to improve the course_, you can open a Pull Request.
* If you _want to add a full section or a new unit_, the best approach is to open an issue and **describe what content you want to add before starting to write it, so that we can guide you**.

## I still have questions

Please ask your question in the #mcp-course-questions channel on our Discord server.

Now that you have all the information, let's get on board ⛵
units/en/unit1/architectural-components.mdx ADDED
@@ -0,0 +1,85 @@
# Architectural Components of MCP

In the previous section, we discussed the key concepts and terminology of MCP. Now, let's dive deeper into the architectural components that make up the MCP ecosystem.

## Host, Client, and Server

The Model Context Protocol (MCP) is built on a client-server architecture that enables structured communication between AI models and external systems.

![MCP Architecture](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit1/4.png)

The MCP architecture consists of three primary components, each with well-defined roles and responsibilities: Host, Client, and Server. We touched on these in the previous section, but let's dive deeper into each component and its responsibilities.

### Host

The **Host** is the user-facing AI application that end users interact with directly.

Examples include:
- AI chat apps like OpenAI ChatGPT or Anthropic's Claude Desktop
- AI-enhanced IDEs like Cursor, or integrations with tools like Continue.dev
- Custom AI agents and applications built with libraries like LangChain or smolagents

The Host's responsibilities include:
- Managing user interactions and permissions
- Initiating connections to MCP Servers via MCP Clients
- Orchestrating the overall flow between user requests, LLM processing, and external tools
- Rendering results back to users in a coherent format

In most cases, users will select their host application based on their needs and preferences. For example, a developer may choose Cursor for its powerful code editing capabilities, while domain experts may use custom applications built in smolagents.

### Client

The **Client** is a component within the Host application that manages communication with a specific MCP Server. Key characteristics include:

- Each Client maintains a 1:1 connection with a single Server
- It handles the protocol-level details of MCP communication
- It acts as the intermediary between the Host's logic and the external Server

### Server

The **Server** is an external program or service that exposes capabilities to AI models via the MCP protocol. Servers:

- Provide access to specific external tools, data sources, or services
- Act as lightweight wrappers around existing functionality
- Can run locally (on the same machine as the Host) or remotely (over a network)
- Expose their capabilities in a standardized format that Clients can discover and use

## Communication Flow

Let's examine how these components interact in a typical MCP workflow:

<Tip>

In the next section, we'll dive deeper into the communication protocol that enables these components to work together, with practical examples.

</Tip>

1. **User Interaction**: The user interacts with the **Host** application, expressing an intent or query.

2. **Host Processing**: The **Host** processes the user's input, potentially using an LLM to understand the request and determine which external capabilities might be needed.

3. **Client Connection**: The **Host** directs its **Client** component to connect to the appropriate Server(s).

4. **Capability Discovery**: The **Client** queries the **Server** to discover what capabilities (Tools, Resources, Prompts) it offers.

5. **Capability Invocation**: Based on the user's needs or the LLM's determination, the Host instructs the **Client** to invoke specific capabilities from the **Server**.

6. **Server Execution**: The **Server** executes the requested functionality and returns results to the **Client**.

7. **Result Integration**: The **Client** relays these results back to the **Host**, which incorporates them into the context for the LLM or presents them directly to the user.

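To make the seven steps above concrete, here is a minimal, framework-free Python sketch of the flow. Every class and method name is illustrative only, not part of any MCP SDK:

```python
# Minimal, framework-free sketch of the Host -> Client -> Server flow.
# All class and method names here are illustrative, not MCP SDK APIs.

class Server:
    """Exposes capabilities in a discoverable, standardized form."""

    def list_tools(self):
        return ["get_weather"]

    def call_tool(self, name, arguments):
        if name == "get_weather":  # Step 6: Server execution
            return {"temperature": 72, "conditions": "Sunny"}
        raise ValueError(f"Unknown tool: {name}")


class Client:
    """Maintains a 1:1 connection with a single Server."""

    def __init__(self, server):
        self.server = server  # Step 3: Client connection

    def discover(self):
        return self.server.list_tools()  # Step 4: Capability discovery

    def invoke(self, name, arguments):
        return self.server.call_tool(name, arguments)  # Step 5: Invocation


class Host:
    """User-facing application orchestrating the whole flow."""

    def __init__(self, client):
        self.client = client

    def handle(self, query):  # Step 1: User interaction
        # Step 2: Host processing (a real Host would consult an LLM here)
        if "get_weather" in self.client.discover():
            result = self.client.invoke("get_weather", {"location": query})
            # Step 7: Result integration
            return f"{result['conditions']}, {result['temperature']}°F"
        return "No suitable tool available"


host = Host(Client(Server()))
print(host.handle("San Francisco"))  # Sunny, 72°F
```

In a real Host, step 2 would be driven by an LLM's tool-selection logic and each Client would talk to its Server over a transport rather than a direct method call.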
A key advantage of this architecture is its modularity. A single **Host** can connect to multiple **Servers** simultaneously via different **Clients**. New **Servers** can be added to the ecosystem without requiring changes to existing **Hosts**. Capabilities can be easily composed across different **Servers**.

<Tip>

As we discussed in the previous section, this modularity transforms the traditional M×N integration problem (M AI applications connecting to N tools/services) into a more manageable M+N problem, where each Host and Server needs to implement the MCP standard only once.

</Tip>

The architecture might appear simple, but its power lies in the standardization of the communication protocol and the clear separation of responsibilities between components. This design allows for a cohesive ecosystem where AI models can seamlessly connect with an ever-growing array of external tools and data sources.

## Conclusion

These interaction patterns are guided by several key principles that shape the design and evolution of MCP. The protocol emphasizes **standardization** by providing a universal protocol for AI connectivity, while maintaining **simplicity** by keeping the core protocol straightforward yet enabling advanced features. **Safety** is prioritized by requiring explicit user approval for sensitive operations, and **discoverability** enables dynamic discovery of capabilities. The protocol is built with **extensibility** in mind, supporting evolution through versioning and capability negotiation, and ensures **interoperability** across different implementations and environments.

In the next section, we'll explore the communication protocol that enables these components to work together effectively.
units/en/unit1/capabilities.mdx ADDED
@@ -0,0 +1,243 @@
# Understanding MCP Capabilities

MCP Servers expose a variety of capabilities to Clients through the communication protocol. These capabilities fall into four main categories, each with distinct characteristics and use cases. Let's explore these core primitives that form the foundation of MCP's functionality.

<Tip>

In this section, we'll show examples as framework-agnostic functions in each language. This is to focus on the concepts and how they work together, rather than the complexities of any framework.

In the coming units, we'll show how these concepts are implemented in MCP-specific code.

</Tip>

## Tools

Tools are executable functions or actions that the AI model can invoke through the MCP protocol.

- **Control**: Tools are typically **model-controlled**, meaning that the AI model (LLM) decides when to call them based on the user's request and context.
- **Safety**: Because tools can perform actions with side effects, tool execution can be dangerous. Therefore, they typically require explicit user approval.
- **Use Cases**: Sending messages, creating tickets, querying APIs, performing calculations.

**Example**: A weather tool that fetches current weather data for a given location:

<hfoptions id="tool-example">
<hfoption id="python">

```python
def get_weather(location: str) -> dict:
    """Get the current weather for a specified location."""
    # Connect to weather API and fetch data
    return {
        "temperature": 72,
        "conditions": "Sunny",
        "humidity": 45
    }
```

</hfoption>
<hfoption id="javascript">

```javascript
function getWeather(location) {
  // Connect to weather API and fetch data
  return {
    temperature: 72,
    conditions: 'Sunny',
    humidity: 45
  };
}
```

</hfoption>
</hfoptions>

## Resources

Resources provide read-only access to data sources, allowing the AI model to retrieve context without executing complex logic.

- **Control**: Resources are **application-controlled**, meaning the Host application typically decides when to access them.
- **Nature**: They are designed for data retrieval with minimal computation, similar to GET endpoints in REST APIs.
- **Safety**: Since they are read-only, they typically present lower security risks than Tools.
- **Use Cases**: Accessing file contents, retrieving database records, reading configuration information.

**Example**: A resource that provides access to file contents:

<hfoptions id="resource-example">
<hfoption id="python">

```python
def read_file(file_path: str) -> str:
    """Read the contents of a file at the specified path."""
    with open(file_path, 'r') as f:
        return f.read()
```

</hfoption>
<hfoption id="javascript">

```javascript
function readFile(filePath) {
  // Using fs.readFile to read file contents
  const fs = require('fs');
  return new Promise((resolve, reject) => {
    fs.readFile(filePath, 'utf8', (err, data) => {
      if (err) {
        reject(err);
        return;
      }
      resolve(data);
    });
  });
}
```

</hfoption>
</hfoptions>

## Prompts

Prompts are predefined templates or workflows that guide the interaction between the user, the AI model, and the Server's capabilities.

- **Control**: Prompts are **user-controlled**, often presented as options in the Host application's UI.
- **Purpose**: They structure interactions for optimal use of available Tools and Resources.
- **Selection**: Users typically select a prompt before the AI model begins processing, setting context for the interaction.
- **Use Cases**: Common workflows, specialized task templates, guided interactions.

**Example**: A prompt template for generating a code review:

<hfoptions id="prompt-example">
<hfoption id="python">

```python
def code_review(code: str, language: str) -> list:
    """Generate a code review for the provided code snippet."""
    return [
        {
            "role": "system",
            "content": f"You are a code reviewer examining {language} code. Provide a detailed review highlighting best practices, potential issues, and suggestions for improvement."
        },
        {
            "role": "user",
            "content": f"Please review this {language} code:\n\n```{language}\n{code}\n```"
        }
    ]
```

</hfoption>
<hfoption id="javascript">

```javascript
function codeReview(code, language) {
  return [
    {
      role: 'system',
      content: `You are a code reviewer examining ${language} code. Provide a detailed review highlighting best practices, potential issues, and suggestions for improvement.`
    },
    {
      role: 'user',
      content: `Please review this ${language} code:\n\n\`\`\`${language}\n${code}\n\`\`\``
    }
  ];
}
```

</hfoption>
</hfoptions>

## Sampling

Sampling allows Servers to request that the Client (specifically, the Host application) perform LLM interactions.

- **Control**: Sampling is **server-initiated** but requires Client/Host facilitation.
- **Purpose**: It enables server-driven agentic behaviors and potentially recursive or multi-step interactions.
- **Safety**: Like Tools, sampling operations typically require user approval.
- **Use Cases**: Complex multi-step tasks, autonomous agent workflows, interactive processes.

**Example**: A Server might request the Client to analyze data it has processed:

<hfoptions id="sampling-example">
<hfoption id="python">

```python
def request_sampling(messages, system_prompt=None, include_context="none"):
    """Request LLM sampling from the client."""
    # In a real implementation, this would send a request to the client
    return {
        "role": "assistant",
        "content": "Analysis of the provided data..."
    }
```

</hfoption>
<hfoption id="javascript">

```javascript
function requestSampling(messages, systemPrompt = null, includeContext = 'none') {
  // In a real implementation, this would send a request to the client
  return {
    role: 'assistant',
    content: 'Analysis of the provided data...'
  };
}

function handleSamplingRequest(request) {
  const { messages, systemPrompt, includeContext } = request;
  // In a real implementation, this would process the request and return a response
  return {
    role: 'assistant',
    content: 'Response to the sampling request...'
  };
}
```

</hfoption>
</hfoptions>

The sampling flow follows these steps:
1. The Server sends a `sampling/createMessage` request to the Client
2. The Client reviews the request and can modify it
3. The Client samples from an LLM
4. The Client reviews the completion
5. The Client returns the result to the Server

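As an illustration of step 1, a `sampling/createMessage` request from the Server might look roughly like this (the field values here are placeholders; see the MCP specification for the full message schema):

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "sampling/createMessage",
  "params": {
    "messages": [
      {
        "role": "user",
        "content": {
          "type": "text",
          "text": "Summarize the processed data."
        }
      }
    ],
    "systemPrompt": "You are a helpful data analyst.",
    "includeContext": "thisServer",
    "maxTokens": 200
  }
}
```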
<Tip>

This human-in-the-loop design ensures users maintain control over what the LLM sees and generates. When implementing sampling, it's important to provide clear, well-structured prompts and include relevant context.

</Tip>

## How Capabilities Work Together

Let's look at how these capabilities work together to enable complex interactions. In the table below, we've outlined the capabilities, who controls them, the direction of control, and some other details.

| Capability | Controlled By | Direction | Side Effects | Approval Needed | Typical Use Cases |
|------------|---------------|-----------|--------------|-----------------|-------------------|
| Tools | Model (LLM) | Client → Server | Yes (potentially) | Yes | Actions, API calls, data manipulation |
| Resources | Application | Client → Server | No (read-only) | Typically no | Data retrieval, context gathering |
| Prompts | User | Server → Client | No | No (selected by user) | Guided workflows, specialized templates |
| Sampling | Server | Server → Client → Server | Indirectly | Yes | Multi-step tasks, agentic behaviors |

These capabilities are designed to work together in complementary ways:

1. A user might select a **Prompt** to start a specialized workflow
2. The Prompt might include context from **Resources**
3. During processing, the AI model might call **Tools** to perform specific actions
4. For complex operations, the Server might use **Sampling** to request additional LLM processing

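As a toy illustration of that sequence, here is a framework-free Python sketch in the same spirit as the examples above. The functions are simplified stand-ins (with stubbed return values), not MCP APIs:

```python
# Framework-free sketch of capabilities composing into one workflow.
# These functions are simplified stand-ins with stubbed values, not MCP APIs.

def read_file(file_path: str) -> str:
    """Resource (application-controlled): provide read-only context."""
    return "def add(a, b):\n    return a + b\n"  # stubbed file contents

def code_review(code: str, language: str) -> list:
    """Prompt (user-controlled): structure the interaction."""
    return [
        {"role": "system", "content": f"You are a {language} code reviewer."},
        {"role": "user", "content": f"Please review:\n{code}"},
    ]

def run_linter(code: str) -> dict:
    """Tool (model-controlled): perform an action with a concrete result."""
    return {"issues": 0}  # stubbed linter output

# 1. The user selects the code-review Prompt
# 2. The Prompt pulls context from the file Resource
messages = code_review(read_file("example.py"), "python")
# 3. While processing, the model decides to call the linter Tool
lint_report = run_linter(read_file("example.py"))

print(len(messages), lint_report["issues"])  # 2 0
```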
228
+
229
+ ## Discovery Process
230
+
231
+ One of MCP's key features is dynamic capability discovery. When a Client connects to a Server, it can query the available Tools, Resources, and Prompts through specific list methods:
232
+
233
+ - `tools/list`: Discover available Tools
234
+ - `resources/list`: Discover available Resources
235
+ - `prompts/list`: Discover available Prompts
236
+
237
+ This dynamic discovery mechanism allows Clients to adapt to the specific capabilities each Server offers without requiring hardcoded knowledge of the Server's functionality.
238
+
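For example, a `tools/list` exchange might look like the following (the tool definition shown is illustrative, reusing the weather example from earlier in this section):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/list"
}
```

The Server replies with a description of each Tool, including a JSON Schema for its inputs:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "tools": [
      {
        "name": "get_weather",
        "description": "Get the current weather for a specified location.",
        "inputSchema": {
          "type": "object",
          "properties": {
            "location": { "type": "string" }
          },
          "required": ["location"]
        }
      }
    ]
  }
}
```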
## Conclusion

Understanding these core primitives is essential for working with MCP effectively. By providing distinct types of capabilities with clear control boundaries, MCP enables powerful interactions between AI models and external systems while maintaining appropriate safety and control mechanisms.

In the next section, we'll explore how Gradio integrates with MCP to provide easy-to-use interfaces for these capabilities.
units/en/unit1/communication-protocol.mdx ADDED
@@ -0,0 +1,274 @@
# The Communication Protocol

MCP defines a standardized communication protocol that enables Clients and Servers to exchange messages in a consistent, predictable way. This standardization is critical for interoperability across the community. In this section, we'll explore the protocol structure and transport mechanisms used in MCP.

<Tip warning={true}>

We're getting down to the nitty-gritty details of the MCP protocol. You won't need to know all of this to build with MCP, but it's good to know that it exists and how it works.

</Tip>

## JSON-RPC: The Foundation

At its core, MCP uses **JSON-RPC 2.0** as the message format for all communication between Clients and Servers. JSON-RPC is a lightweight remote procedure call protocol encoded in JSON, which makes it:

- Human-readable and easy to debug
- Language-agnostic, supporting implementation in any programming environment
- Well-established, with clear specifications and widespread adoption

![message types](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit1/5.png)

The protocol defines three types of messages:

### 1. Requests

Sent from Client to Server to initiate an operation. A Request message includes:
- A unique identifier (`id`)
- The method name to invoke (e.g., `tools/call`)
- Parameters for the method (if any)

Example Request:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "weather",
    "arguments": {
      "location": "San Francisco"
    }
  }
}
```

### 2. Responses

Sent from Server to Client in reply to a Request. A Response message includes:
- The same `id` as the corresponding Request
- Either a `result` (for success) or an `error` (for failure)

Example Success Response:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "temperature": 62,
    "conditions": "Partly cloudy"
  }
}
```

Example Error Response:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32602,
    "message": "Invalid location parameter"
  }
}
```

### 3. Notifications

One-way messages that don't require a response. They are typically sent from Server to Client to provide updates or notifications about events.

Example Notification:
```json
{
  "jsonrpc": "2.0",
  "method": "progress",
  "params": {
    "message": "Processing data...",
    "percent": 50
  }
}
```

## Transport Mechanisms

JSON-RPC defines the message format, but MCP also specifies how these messages are transported between Clients and Servers. Two primary transport mechanisms are supported:

### stdio (Standard Input/Output)

The stdio transport is used for local communication, where the Client and Server run on the same machine:

The Host application launches the Server as a subprocess and communicates with it by writing to its standard input (stdin) and reading from its standard output (stdout).

<Tip>

**Use cases** for this transport are local tools like file system access or running local scripts.

</Tip>

The main **advantages** of this transport are that it's simple, requires no network configuration, and is securely sandboxed by the operating system.

+ ### HTTP + SSE (Server-Sent Events) / Streamable HTTP
111
+
112
+ The HTTP+SSE transport is used for remote communication, where the Client and Server might be on different machines:
113
+
114
+ Communication happens over HTTP, with the Server using Server-Sent Events (SSE) to push updates to the Client over a persistent connection.
115
+
116
+ <Tip>
117
+
118
+ **Use cases** for this transport are connecting to remote APIs, cloud services, or shared resources.
119
+
120
+ </Tip>
121
+
122
+ The main **Advantages** of this transport are that it works across networks, enables integration with web services, and is compatible with serverless environments.
123
+
124
+ Recent updates to the MCP standard have introduced or refined "Streamable HTTP," which offers more flexibility by allowing servers to dynamically upgrade to SSE for streaming when needed, while maintaining compatibility with serverless environments.
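At the wire level, an SSE response is a long-lived HTTP body made of `data:` lines separated by blank lines. A minimal parser sketch of that framing (real MCP clients rely on an SDK rather than hand-parsing the stream):

```python
def parse_sse_events(lines):
    """Yield the data payload of each event in a Server-Sent Events stream."""
    data_lines = []
    for line in lines:
        if line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "":
            # A blank line terminates the current event.
            if data_lines:
                yield "\n".join(data_lines)
                data_lines = []

stream = [
    'data: {"jsonrpc": "2.0", "method": "progress", "params": {"percent": 50}}',
    "",
]
events = list(parse_sse_events(stream))
print(events[0])  # {"jsonrpc": "2.0", "method": "progress", "params": {"percent": 50}}
```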
125
+
126
+ ## The Interaction Lifecycle
127
+
128
+ In the previous section, we discussed the structure of individual messages exchanged between a Client (💻) and a Server (🌐). Let's now look at the lifecycle of a complete interaction between a Client and a Server in the context of the MCP protocol.
129
+
130
+ The MCP protocol defines a structured interaction lifecycle between Clients and Servers:
131
+
132
+ <style>
133
+ .diagram {
134
+ margin: 20px 0;
135
+ font-family: monospace;
136
+ }
137
+ .client {
138
+ background-color: lightgreen;
139
+ }
140
+ .server {
141
+ background-color: lightblue;
142
+ }
143
+ .client, .server {
144
+ display: inline-block;
145
+ width: 50px;
146
+ text-align: center;
147
+ padding: 10px;
148
+ border: 1px solid #ccc;
149
+ border-radius: 4px;
150
+ margin: 0px;
151
+ }
152
+ .arrow {
153
+ display: inline-block;
154
+ width: 100px;
155
+ text-align: center;
156
+ color: #666;
157
+ position: relative;
158
+ }
159
+ .arrow::before {
160
+ content: "→";
161
+ position: absolute;
162
+ bottom: -15px;
163
+ left: 50%;
164
+ transform: translateX(-50%);
165
+ }
166
+ .arrow.reverse::before {
167
+ content: "←";
168
+ }
169
+ .message {
170
+ display: block;
171
+ margin: 5px 0 20px 0;
172
+ color: #333;
173
+ }
174
+ </style>
175
+
176
+ ### Initialization
177
+
178
+ The Client connects to the Server and sends its supported protocol version and capabilities; the Server responds with its own supported protocol version and capabilities.
179
+
180
+ <div class="diagram">
181
+ <div class="client">💻</div>
182
+ <div class="arrow">
183
+ <span class="message">initialize</span>
184
+ </div>
185
+ <div class="server">🌐</div>
186
+ <br>
187
+ <div class="client">💻</div>
188
+ <div class="arrow reverse">
189
+ <span class="message">response</span>
190
+ </div>
191
+ <div class="server">🌐</div>
192
+ <br>
193
+ <div class="client">💻</div>
194
+ <div class="arrow">
195
+ <span class="message">initialized</span>
196
+ </div>
197
+ <div class="server">🌐</div>
198
+ </div>
199
+
200
+ The Client confirms the initialization is complete via a notification message.
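As a sketch, the Client's opening `initialize` request looks like the following (field names follow the MCP specification; the protocol version string and client name are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "1.0.0" }
  }
}
```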
201
+
202
+ ### Discovery
203
+
204
+ The Client requests information about available capabilities and the Server responds with a list of available tools.
205
+
206
+ <div class="diagram">
207
+ <div class="client">💻</div>
208
+ <div class="arrow">
209
+ <span class="message">tools/list</span>
210
+ </div>
211
+ <div class="server">🌐</div>
212
+ <br>
213
+ <div class="client">💻</div>
214
+ <div class="arrow reverse">
215
+ <span class="message">response</span>
216
+ </div>
217
+ <div class="server">🌐</div>
218
+ </div>
219
+
220
+ This process could be repeated for each tool, resource, or prompt type.
221
+
222
+ ### Execution
223
+
224
+ The Client invokes capabilities based on the Host's needs.
225
+
226
+ <div class="diagram">
227
+ <div class="client">💻</div>
228
+ <div class="arrow">
229
+ <span class="message">tools/call</span>
230
+ </div>
231
+ <div class="server">🌐</div>
232
+ <br>
233
+ <div class="client">💻</div>
234
+ <div class="arrow reverse">
235
+ <span class="message">notification (optional progress)</span>
236
+ </div>
237
+ <div class="server">🌐</div>
238
+ <br>
239
+ <div class="client">💻</div>
240
+ <div class="arrow reverse">
241
+ <span class="message">response</span>
242
+ </div>
243
+ <div class="server">🌐</div>
244
+ </div>
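As a sketch, a `tools/call` request for a hypothetical `get_weather` tool looks like this (the tool and argument names are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": { "location": "San Francisco" }
  }
}
```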
245
+
246
+ ### Termination
247
+
248
+ The connection is gracefully closed when no longer needed and the Server acknowledges the shutdown request.
249
+
250
+ <div class="diagram">
251
+ <div class="client">💻</div>
252
+ <div class="arrow">
253
+ <span class="message">shutdown</span>
254
+ </div>
255
+ <div class="server">🌐</div>
256
+ <br>
257
+ <div class="client">💻</div>
258
+ <div class="arrow reverse">
259
+ <span class="message">response</span>
260
+ </div>
261
+ <div class="server">🌐</div>
262
+ <br>
263
+ <div class="client">💻</div>
264
+ <div class="arrow">
265
+ <span class="message">exit</span>
266
+ </div>
267
+ <div class="server">🌐</div>
268
+ </div>
269
+
270
+ The Client sends the final exit message to complete the termination.
271
+
272
+ ## Protocol Evolution
273
+
274
+ The MCP protocol is designed to be extensible and adaptable. The initialization phase includes version negotiation, allowing for backward compatibility as the protocol evolves. Additionally, capability discovery enables Clients to adapt to the specific features each Server offers, enabling a mix of basic and advanced Servers in the same ecosystem.
units/en/unit1/gradio-mcp.mdx ADDED
@@ -0,0 +1,154 @@
1
+ # Gradio MCP Integration
2
+
3
+ We've now explored the core concepts of the MCP protocol and how to implement MCP Servers and Clients. In this section, we're going to make things slightly easier by using Gradio to create an MCP Server!
4
+
5
+ <Tip>
6
+
7
+ Gradio is a popular Python library for quickly creating customizable web interfaces for machine learning models.
8
+
9
+ </Tip>
10
+
11
+ ## Introduction to Gradio
12
+
13
+ Gradio allows developers to create UIs for their models with just a few lines of Python code. It's particularly useful for:
14
+
15
+ - Creating demos and prototypes
16
+ - Sharing models with non-technical users
17
+ - Testing and debugging model behavior
18
+
19
+ With the addition of MCP support, Gradio now offers a straightforward way to expose AI model capabilities through the standardized MCP protocol.
20
+
21
+ Combining Gradio with MCP allows you to create both human-friendly interfaces and AI-accessible tools with minimal code. Best of all, Gradio is already widely used by the AI community, so you can use it to share your MCP Servers with others.
22
+
23
+ ## Prerequisites
24
+
25
+ To use Gradio with MCP support, you'll need to install Gradio with the MCP extra:
26
+
27
+ ```bash
28
+ pip install "gradio[mcp]"
29
+ ```
30
+
31
+ You'll also need an LLM application that supports tool calling via the MCP protocol, such as Cursor (such applications are known as "MCP Hosts").
32
+
33
+ ## Creating an MCP Server with Gradio
34
+
35
+ Let's walk through a basic example of creating an MCP Server using Gradio:
36
+
37
+ ```python
38
+ import gradio as gr
39
+
40
+ def letter_counter(word: str, letter: str) -> int:
41
+ """
42
+ Count the number of occurrences of a letter in a word or text.
43
+
44
+ Args:
45
+ word (str): The input text to search through
46
+ letter (str): The letter to search for
47
+
48
+ Returns:
49
+ int: The number of times the letter appears in the text
50
+ """
51
+ word = word.lower()
52
+ letter = letter.lower()
53
+ count = word.count(letter)
54
+ return count
55
+
56
+ # Create a standard Gradio interface
57
+ demo = gr.Interface(
58
+ fn=letter_counter,
59
+ inputs=["textbox", "textbox"],
60
+ outputs="number",
61
+ title="Letter Counter",
62
+ description="Enter text and a letter to count how many times the letter appears in the text."
63
+ )
64
+
65
+ # Launch both the Gradio web interface and the MCP server
66
+ if __name__ == "__main__":
67
+ demo.launch(mcp_server=True)
68
+ ```
69
+
70
+ With this setup, your letter counter function is now accessible through:
71
+
72
+ 1. A traditional Gradio web interface for direct human interaction
73
+ 2. An MCP Server that can be connected to compatible clients
74
+
75
+ The MCP server will be accessible at:
76
+ ```
77
+ http://your-server:port/gradio_api/mcp/sse
78
+ ```
79
+
80
+ The web application itself remains accessible, and it looks like this:
81
+
82
+ ![Gradio MCP Server](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit1/7.png)
83
+
84
+ ## How It Works Behind the Scenes
85
+
86
+ When you set `mcp_server=True` in `launch()`, several things happen:
87
+
88
+ 1. Gradio functions are automatically converted to MCP Tools
89
+ 2. Input components map to tool argument schemas
90
+ 3. Output components determine the response format
91
+ 4. The Gradio server now also listens for MCP protocol messages
92
+ 5. JSON-RPC over HTTP+SSE is set up for client-server communication
93
+
94
+ ## Key Features of the Gradio <> MCP Integration
95
+
96
+ 1. **Tool Conversion**: Each API endpoint in your Gradio app is automatically converted into an MCP tool with a corresponding name, description, and input schema. To view the tools and schemas, visit `http://your-server:port/gradio_api/mcp/schema` or go to the "View API" link in the footer of your Gradio app, and then click on "MCP".
97
+
98
+ 2. **Environment Variable Support**: There are two ways to enable the MCP server functionality:
99
+ - Using the `mcp_server` parameter in `launch()`:
100
+ ```python
101
+ demo.launch(mcp_server=True)
102
+ ```
103
+ - Using environment variables:
104
+ ```bash
105
+ export GRADIO_MCP_SERVER=True
106
+ ```
107
+
108
+ 3. **File Handling**: The server automatically handles file data conversions, including:
109
+ - Converting base64-encoded strings to file data
110
+ - Processing image files and returning them in the correct format
111
+ - Managing temporary file storage
112
+
113
+ It is **strongly** recommended that input images and files be passed as full URLs ("http://..." or "https://...") as MCP Clients do not always handle local files correctly.
114
+
115
+ 4. **Hosted MCP Servers on 🤗 Spaces**: You can publish your Gradio application for free on Hugging Face Spaces, which will allow you to have a free hosted MCP server. Here's an example of such a Space: https://huggingface.co/spaces/abidlabs/mcp-tools
116
+
117
+ ## Troubleshooting Tips
118
+
119
+ 1. **Type Hints and Docstrings**: Ensure you provide type hints and valid docstrings for your functions. The docstring should include an "Args:" block with indented parameter names.
120
+
121
+ 2. **String Inputs**: When in doubt, accept input arguments as `str` and convert them to the desired type inside the function.
122
+
123
+ 3. **SSE Support**: Some MCP Hosts don't support SSE-based MCP Servers. In those cases, you can use `mcp-remote`:
124
+ ```json
125
+ {
126
+ "mcpServers": {
127
+ "gradio": {
128
+ "command": "npx",
129
+ "args": [
130
+ "mcp-remote",
131
+ "http://your-server:port/gradio_api/mcp/sse"
132
+ ]
133
+ }
134
+ }
135
+ }
136
+ ```
137
+
138
+ 4. **Restart**: If you encounter connection issues, try restarting both your MCP Client and MCP Server.
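As an illustration of tip 2, here is a hypothetical tool that accepts its numeric arguments as strings and converts them internally; it would be wrapped in `gr.Interface` and launched with `mcp_server=True` exactly like the letter counter above:

```python
def slice_text(text: str, start: str, end: str) -> str:
    """
    Return a substring of the input text.

    Args:
        text (str): The input text to slice
        start (str): The start index, passed as a string
        end (str): The end index, passed as a string

    Returns:
        str: The sliced substring
    """
    # Some MCP clients send every argument as a string, so convert here.
    return text[int(start):int(end)]

print(slice_text("Model Context Protocol", "0", "5"))  # Model
```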
139
+
140
+ ## Share your MCP Server
141
+
142
+ You can share your MCP Server by publishing your Gradio app to Hugging Face Spaces. The video below shows how to create a Hugging Face Space.
143
+
144
+ <Youtube id="3bSVKNKb_PY" />
145
+
146
+ Now, you can share your MCP Server with others by sharing your Hugging Face Space.
147
+
148
+ ## Conclusion
149
+
150
+ Gradio's integration with MCP provides an accessible entry point to the MCP ecosystem. By leveraging Gradio's simplicity and adding MCP's standardization, developers can quickly create both human-friendly interfaces and AI-accessible tools with minimal code.
151
+
152
+ As we progress through this course, we'll explore more sophisticated MCP implementations, but Gradio offers an excellent starting point for understanding and experimenting with the protocol.
153
+
154
+ In the next unit, we'll dive deeper into building MCP applications, focusing on setting up development environments, exploring SDKs, and implementing more advanced MCP Servers and Clients.
units/en/unit1/introduction.mdx ADDED
@@ -0,0 +1,33 @@
1
+ # Introduction to Model Context Protocol (MCP)
2
+
3
+ Welcome to Unit 1 of the MCP Course! In this unit, we'll explore the fundamentals of Model Context Protocol.
4
+
5
+ ## What You Will Learn
6
+
7
+ In this unit, you will:
8
+
9
+ * Understand what Model Context Protocol is and why it's important
10
+ * Learn the key concepts and terminology associated with MCP
11
+ * Explore the integration challenges that MCP solves
12
+ * Walk through the key benefits and goals of MCP
13
+ * See a simple example of MCP integration in action
14
+
15
+ By the end of this unit, you'll have a solid understanding of the foundational concepts of MCP and be ready to dive deeper into its architecture and implementation in the next unit.
16
+
17
+ ## Importance of MCP
18
+
19
+ The AI ecosystem is evolving rapidly, with Large Language Models (LLMs) and other AI systems becoming increasingly capable. However, these models are often limited by their training data and lack access to real-time information or specialized tools. This limitation hinders the potential of AI systems to provide truly relevant, accurate, and helpful responses in many scenarios.
20
+
21
+ This is where Model Context Protocol (MCP) comes in. MCP enables AI models to connect with external data sources, tools, and environments, allowing for the seamless transfer of information and capabilities between AI systems and the broader digital world. This interoperability is crucial for the growth and adoption of truly useful AI applications.
22
+
23
+ ## Overview of Unit 1
24
+
25
+ Here's a brief overview of what we'll cover in this unit:
26
+
27
+ 1. **What is Model Context Protocol?** - We'll start by defining what MCP is and discussing its role in the AI ecosystem.
28
+ 2. **Key Concepts** - We'll explore the fundamental concepts and terminology associated with MCP.
29
+ 3. **Integration Challenges** - We'll examine the problems that MCP aims to solve, particularly the "M×N Integration Problem."
30
+ 4. **Benefits and Goals** - We'll discuss the key benefits and goals of MCP, including standardization, enhanced AI capabilities, and interoperability.
31
+ 5. **Simple Example** - Finally, we'll walk through a simple example of MCP integration to see how it works in practice.
32
+
33
+ Let's dive in and explore the exciting world of Model Context Protocol!
units/en/unit1/key-concepts.mdx ADDED
@@ -0,0 +1,88 @@
1
+ # Key Concepts and Terminology
2
+
3
+ Before diving deeper into the Model Context Protocol, it's important to understand the key concepts and terminology that form the foundation of MCP. This section will introduce the fundamental ideas that underpin the protocol and provide a common vocabulary for discussing MCP implementations throughout the course.
4
+
5
+ MCP is often described as the "USB-C for AI applications." Just as USB-C provides a standardized physical and logical interface for connecting various peripherals to computing devices, MCP offers a consistent protocol for linking AI models to external capabilities. This standardization benefits the entire ecosystem:
6
+
7
+ - **users** enjoy simpler and more consistent experiences across AI applications
8
+ - **AI application developers** gain easy integration with a growing ecosystem of tools and data sources
9
+ - **tool and data providers** need only create a single implementation that works with multiple AI applications
10
+ - the broader ecosystem benefits from increased interoperability, innovation, and reduced fragmentation
11
+
12
+ ## The Integration Problem
13
+
14
+ The **M×N Integration Problem** refers to the challenge of connecting M different AI applications to N different external tools or data sources without a standardized approach.
15
+
16
+ ### Without MCP (M×N Problem)
17
+
18
+ Without a protocol like MCP, developers would need to create M×N custom integrations—one for each possible pairing of an AI application with an external capability.
19
+
20
+ ![Without MCP](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit1/1.png)
21
+
22
+ Each AI application would need to integrate with each tool/data source individually. This is a complex and expensive process that creates significant friction for developers and high maintenance costs.
23
+
24
+ ### With MCP (M+N Solution)
25
+
26
+ MCP transforms this into an M+N problem by providing a standard interface: each AI application implements the client side of MCP once, and each tool/data source implements the server side once. This dramatically reduces integration complexity and maintenance burden.
27
+
28
+ ![With MCP](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit1/2.png)
29
+
30
+ With this approach, each integration point needs to be implemented only once, so the total work grows as M+N rather than M×N.
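To make the arithmetic concrete, with hypothetical counts of 10 AI applications and 20 tools:

```python
M, N = 10, 20  # 10 AI applications, 20 external tools/data sources

# Without MCP: one custom integration per (application, tool) pair.
custom_integrations = M * N
# With MCP: each application implements a client once, each tool a server once.
mcp_implementations = M + N

print(custom_integrations)  # 200
print(mcp_implementations)  # 30
```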
31
+
32
+ ## Core MCP Terminology
33
+
34
+ Now that we understand the problem that MCP solves, let's dive into the core terminology and concepts that make up the MCP protocol.
35
+
36
+ <Tip>
37
+
38
+ MCP is a standard, like HTTP or USB-C: a protocol for connecting AI applications to external tools and data sources. Using consistent terminology is therefore crucial to working with MCP effectively.
39
+
40
+ When documenting our applications and communicating with the community, we should use the following terminology.
41
+
42
+ </Tip>
43
+
44
+ ### Components
45
+
46
+ Just like the client-server relationship in HTTP, MCP has a client and a server.
47
+
48
+ ![MCP Components](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit1/3.png)
49
+
50
+ - **Host**: The user-facing AI application that end-users interact with directly. Examples include Anthropic's Claude Desktop, AI-enhanced IDEs like Cursor, inference libraries like Hugging Face Python SDK, or custom applications built in libraries like LangChain or smolagents. Hosts initiate connections to MCP Servers and orchestrate the overall flow between user requests, LLM processing, and external tools.
51
+
52
+ - **Client**: A component within the host application that manages communication with a specific MCP Server. Each Client maintains a 1:1 connection with a single Server, handling the protocol-level details of MCP communication and acting as an intermediary between the Host's logic and the external Server.
53
+
54
+ - **Server**: An external program or service that exposes capabilities (Tools, Resources, Prompts) via the MCP protocol.
55
+
56
+ <Tip warning={true}>
57
+
58
+ A lot of content uses 'Client' and 'Host' interchangeably. Technically speaking, the host is the user-facing application, and the client is the component within the host application that manages communication with a specific MCP Server.
59
+
60
+ </Tip>
61
+
62
+ ### Capabilities
63
+
64
+ Your application's value is the sum of the capabilities it offers, so capabilities are the most important part of your application. MCP can connect with any software service, but there is a common set of capabilities that many AI applications rely on.
65
+
66
+ | Capability | Description | Example |
67
+ | ---------- | ----------- | ------- |
68
+ | **Tools** | Executable functions that the AI model can invoke to perform actions or retrieve computed data. Typically relating to the use case of the application. | A tool for a weather application might be a function that returns the weather in a specific location. |
69
+ | **Resources** | Read-only data sources that provide context without significant computation. | A researcher assistant might have a resource for scientific papers. |
70
+ | **Prompts** | Pre-defined templates or workflows that guide interactions between users, AI models, and the available capabilities. | A summarization prompt. |
71
+ | **Sampling** | Server-initiated requests for the Client/Host to perform LLM interactions, enabling recursive actions where the LLM can review generated content and make further decisions. | A writing application reviewing its own output and deciding to refine it further. |
72
+
73
+ In the following diagram, we can see the collective capabilities applied to a use case for a code agent.
74
+
75
+ ![collective diagram](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit1/8.png)
76
+
77
+ This application might use its MCP entities in the following way:
78
+
79
+ | Entity | Name | Description |
80
+ | --- | --- | --- |
81
+ | Tool | Code Interpreter | A tool that can execute code that the LLM writes. |
82
+ | Resource | Documentation | A resource that contains the documentation of the application. |
83
+ | Prompt | Code Style | A prompt that guides the LLM to generate code. |
84
+ | Sampling | Code Review | A sampling request that lets the LLM review the generated code and decide whether to refine it further. |
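To ground the table above, a Tool is ultimately just a typed function with a docstring that the Server exposes to the model. A minimal hypothetical sketch of the code interpreter Tool (a real implementation would sandbox arbitrary code; this one only evaluates simple expressions):

```python
def code_interpreter(code: str) -> str:
    """Execute a Python expression written by the LLM and return the result.

    Args:
        code: A single Python expression to evaluate.
    """
    # A real code interpreter would run arbitrary code in a sandbox;
    # this sketch evaluates one expression with builtins disabled.
    return str(eval(code, {"__builtins__": {}}, {}))

print(code_interpreter("2 + 2"))  # 4
```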
85
+
86
+ ### Conclusion
87
+
88
+ Understanding these key concepts and terminology provides the foundation for working with MCP effectively. In the following sections, we'll build on this foundation to explore the architectural components, communication protocol, and capabilities that make up the Model Context Protocol.
units/en/unit1/mcp-clients.mdx ADDED
@@ -0,0 +1,342 @@
1
+ # MCP Clients
2
+
3
+ Now that we have a basic understanding of the Model Context Protocol, we can explore the essential role of MCP Clients in the MCP ecosystem.
6
+
7
+ In this section, you will:
8
+
9
+ * Understand what MCP Clients are and their role in the MCP architecture
10
+ * Learn about the key responsibilities of MCP Clients
11
+ * Explore the major MCP Client implementations
12
+ * Discover how to use Hugging Face's MCP Client implementation
13
+ * See practical examples of MCP Client usage
14
+
15
+ ## Understanding MCP Clients
16
+
17
+ MCP Clients are crucial components that act as the bridge between AI applications (Hosts) and external capabilities provided by MCP Servers. Think of the Host as your main application (like an AI assistant or IDE) and the Client as a specialized module within that Host responsible for handling MCP communications.
18
+
19
+ ## User Interface Client
20
+
21
+ Let's start by exploring the user interface clients that are available for MCP.
22
+
23
+ ### Chat Interface Clients
24
+
25
+ Anthropic's Claude Desktop stands as one of the most prominent MCP Clients, providing integration with various MCP Servers.
26
+
27
+ ### Interactive Development Clients
28
+
29
+ Cursor's MCP Client implementation enables AI-powered coding assistance through direct integration with code editing capabilities. It supports multiple MCP Server connections and provides real-time tool invocation during coding, making it a powerful tool for developers.
30
+
31
+ Continue.dev is another example of an interactive development client that supports MCP and connects to an MCP server from VS Code.
32
+
33
+ ## Configuring MCP Clients
34
+
35
+ Now that we've covered the core of the MCP protocol, let's look at how to configure your MCP servers and clients.
36
+
37
+ Effective deployment of MCP servers and clients requires proper configuration.
38
+
39
+ <Tip>
40
+
41
+ The MCP specification is still evolving, so configuration methods are subject to change. We'll focus on current best practices for configuration.
42
+
43
+ </Tip>
44
+
45
+ ### MCP Configuration Files
46
+
47
+ MCP hosts use configuration files to manage server connections. These files define which servers are available and how to connect to them.
48
+
49
+ Fortunately, the configuration files are very simple, easy to understand, and consistent across major MCP hosts.
50
+
51
+ #### `mcp.json` Structure
52
+
53
+ The standard configuration file for MCP is named `mcp.json`. Here's the basic structure:
54
+
55
+ ```json
56
+ {
57
+ "servers": [
58
+ {
59
+ "name": "Server Name",
60
+ "transport": {
61
+ "type": "stdio|sse",
62
+ // Transport-specific configuration
63
+ }
64
+ }
65
+ ]
66
+ }
67
+ ```
68
+
69
+ In this example, we have a single server with a name and a transport type. The transport type is either `stdio` or `sse`.
70
+
71
+ #### Configuration for stdio Transport
72
+
73
+ For local servers using stdio transport, the configuration includes the command and arguments to launch the server process:
74
+
75
+ ```json
76
+ {
77
+ "servers": [
78
+ {
79
+ "name": "File Explorer",
80
+ "transport": {
81
+ "type": "stdio",
82
+ "command": "python",
83
+ "args": ["/path/to/file_explorer_server.py"]
84
+ }
85
+ }
86
+ ]
87
+ }
88
+ ```
89
+
90
+ Here, we have a server called "File Explorer" that is a local script.
91
+
92
+ #### Configuration for HTTP+SSE Transport
93
+
94
+ For remote servers using HTTP+SSE transport, the configuration includes the server URL:
95
+
96
+ ```json
97
+ {
98
+ "servers": [
99
+ {
100
+ "name": "Remote API Server",
101
+ "transport": {
102
+ "type": "sse",
103
+ "url": "https://example.com/mcp-server"
104
+ }
105
+ }
106
+ ]
107
+ }
108
+ ```
109
+
110
+ #### Environment Variables in Configuration
111
+
112
+ Environment variables can be passed to server processes using the `env` field. Here's how to access them in your server code:
113
+
114
+ <hfoptions id="env-variables">
115
+ <hfoption id="python">
116
+
117
+ In Python, we use the `os` module to access environment variables:
118
+
119
+ ```python
120
+ import os
121
+
122
+ # Access environment variables
123
+ github_token = os.environ.get("GITHUB_TOKEN")
124
+ if not github_token:
125
+ raise ValueError("GITHUB_TOKEN environment variable is required")
126
+
127
+ # Use the token in your server code
128
+ def make_github_request():
129
+ headers = {"Authorization": f"Bearer {github_token}"}
130
+ # ... rest of your code
131
+ ```
132
+
133
+ </hfoption>
134
+ <hfoption id="javascript">
135
+
136
+ In JavaScript, we use the `process.env` object to access environment variables:
137
+
138
+ ```javascript
139
+ // Access environment variables
140
+ const githubToken = process.env.GITHUB_TOKEN;
141
+ if (!githubToken) {
142
+ throw new Error("GITHUB_TOKEN environment variable is required");
143
+ }
144
+
145
+ // Use the token in your server code
146
+ function makeGithubRequest() {
147
+ const headers = { "Authorization": `Bearer ${githubToken}` };
148
+ // ... rest of your code
149
+ }
150
+ ```
151
+
152
+ </hfoption>
153
+ </hfoptions>
154
+
155
+ The corresponding configuration in `mcp.json` would look like this:
156
+
157
+ ```json
158
+ {
159
+ "servers": [
160
+ {
161
+ "name": "GitHub API",
162
+ "transport": {
163
+ "type": "stdio",
164
+ "command": "python",
165
+ "args": ["/path/to/github_server.py"],
166
+ "env": {
167
+ "GITHUB_TOKEN": "your_github_token"
168
+ }
169
+ }
170
+ }
171
+ ]
172
+ }
173
+ ```
174
+
175
+ ### Configuration Examples
176
+
177
+ Let's look at some real-world configuration scenarios:
178
+
179
+ #### Scenario 1: Local Server Configuration
180
+
181
+ In this scenario, we have a local server that is a Python script which could be a file explorer or a code editor.
182
+
183
+ ```json
184
+ {
185
+ "servers": [
186
+ {
187
+ "name": "File Explorer",
188
+ "transport": {
189
+ "type": "stdio",
190
+ "command": "python",
191
+ "args": ["/path/to/file_explorer_server.py"]
192
+ }
193
+ }
194
+ ]
195
+ }
196
+ ```
197
+
198
+ #### Scenario 2: Remote Server Configuration
199
+
200
+ In this scenario, we have a remote server that is a weather API.
201
+
202
+ ```json
203
+ {
204
+ "servers": [
205
+ {
206
+ "name": "Weather API",
207
+ "transport": {
208
+ "type": "sse",
209
+ "url": "https://example.com/mcp-server"
210
+ }
211
+ }
212
+ ]
213
+ }
214
+ ```
215
+
216
+ Proper configuration is essential for successfully deploying MCP integrations. By understanding these aspects, you can create robust and reliable connections between AI applications and external capabilities.
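As a sketch of what a host does with such a file, here is a minimal loader that validates the transport type of each entry (real hosts perform far more validation; the structure follows the `mcp.json` examples above):

```python
import json

EXAMPLE_CONFIG = """
{
  "servers": [
    {
      "name": "File Explorer",
      "transport": {
        "type": "stdio",
        "command": "python",
        "args": ["/path/to/file_explorer_server.py"]
      }
    },
    {
      "name": "Weather API",
      "transport": {"type": "sse", "url": "https://example.com/mcp-server"}
    }
  ]
}
"""

def parse_mcp_config(text: str) -> dict:
    """Parse mcp.json content into a name -> transport mapping."""
    config = json.loads(text)
    servers = {}
    for server in config.get("servers", []):
        transport = server.get("transport", {})
        # Reject transports this sketch doesn't know how to launch.
        if transport.get("type") not in ("stdio", "sse"):
            raise ValueError(f"Unsupported transport for {server.get('name')!r}")
        servers[server["name"]] = transport
    return servers

servers = parse_mcp_config(EXAMPLE_CONFIG)
print(sorted(servers))  # ['File Explorer', 'Weather API']
```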
217
+
218
+ Later in the course, we'll explore the ecosystem of MCP servers available on the Hugging Face Hub and how to publish your own servers there.
219
+
220
+ ## Code Clients
221
+
222
+ You can also use the MCP Client from within code so that the tools are available to the LLM. Let's explore some examples in `smolagents`.
223
+
224
+ First, let's explore our weather server from the previous page. In `smolagents`, we can use the `ToolCollection` class to automatically discover and register tools from an MCP server. This is done by passing the `StdioServerParameters` or `SSEServerParameters` to the `ToolCollection.from_mcp` method. We can then print the tools to the console.
225
+
226
+ ```python
227
+ from smolagents import ToolCollection, CodeAgent
228
+ from mcp.client.stdio import StdioServerParameters
229
+
230
+ server_parameters = StdioServerParameters(command="uv", args=["run", "server.py"])
231
+
232
+ with ToolCollection.from_mcp(
233
+ server_parameters, trust_remote_code=True
234
+ ) as tools:
235
+ print("\n".join(f"{t.name}: {t.description}" for t in tools))
236
+
237
+ ```
238
+
239
+ <details>
240
+ <summary>
241
+ Output
242
+ </summary>
243
+
244
+ ```sh
245
+ Weather API: Get the weather in a specific location
246
+
247
+ ```
248
+
249
+ </details>
250
+
251
+ We can also connect to an MCP server that is hosted on a remote machine. In this case, we pass the server's SSE endpoint URL to the `MCPClient` class.
252
+
253
+ ```python
254
+ from smolagents.mcp_client import MCPClient
255
+
256
+ with MCPClient(
257
+ {"url": "https://abidlabs-mcp-tools.hf.space/gradio_api/mcp/sse"}
258
+ ) as tools:
259
+ # Tools from the remote server are available
260
+ print("\n".join(f"{t.name}: {t.description}" for t in tools))
261
+ ```
262
+
263
+ <details>
264
+ <summary>
265
+ Output
266
+ </summary>
267
+
268
+ ```sh
269
+ prime_factors: Compute the prime factorization of a positive integer.
270
+ generate_cheetah_image: Generate a cheetah image.
271
+ image_orientation: Returns whether image is portrait or landscape.
272
+ sepia: Apply a sepia filter to the input image.
273
+ ```
274
+
275
+ </details>
276
+
277
+ Now, let's see how we can use the MCP Client in a code agent.
278
+
279
+ ```python
280
+ from smolagents import ToolCollection, CodeAgent, InferenceClientModel
+ from mcp.client.stdio import StdioServerParameters
283
+
284
+ model = InferenceClientModel()
285
+
286
+ server_parameters = StdioServerParameters(command="uv", args=["run", "server.py"])
287
+
288
+ with ToolCollection.from_mcp(
289
+ server_parameters, trust_remote_code=True
290
+ ) as tool_collection:
291
+ agent = CodeAgent(tools=[*tool_collection.tools], model=model)
292
+ agent.run("What's the weather in Tokyo?")
293
+
294
+ ```
295
+
296
+ <details>
297
+ <summary>
298
+ Output
299
+ </summary>
300
+
301
+ ```sh
302
+ The weather in Tokyo is sunny with a temperature of 20 degrees Celsius.
303
+ ```
304
+
305
+ </details>
306
+
307
+ We can also connect to MCP servers distributed as packages. Here's an example of connecting to the `pubmedmcp` package.
308
+
309
+ ```python
310
+ import os
+
+ from smolagents import ToolCollection, CodeAgent
311
+ from mcp import StdioServerParameters
312
+
313
+ server_parameters = StdioServerParameters(
314
+ command="uvx",
315
+ args=["--quiet", "pubmedmcp@0.1.3"],
316
+ env={"UV_PYTHON": "3.12", **os.environ},
317
+ )
318
+
319
+ with ToolCollection.from_mcp(server_parameters, trust_remote_code=True) as tool_collection:
320
+ agent = CodeAgent(tools=[*tool_collection.tools], add_base_tools=True)
321
+ agent.run("Please find a remedy for hangover.")
322
+ ```
323
+
324
+ <details>
325
+ <summary>
326
+ Output
327
+ </summary>
328
+
329
+ ```sh
330
+ The remedy for hangover is to drink water.
331
+ ```
332
+
333
+ </details>
334
+
335
+ ## Next Steps
336
+
337
+ Now that you understand MCP Clients, you're ready to:
338
+ * Explore specific MCP Server implementations
339
+ * Learn about creating custom MCP Clients
340
+ * Dive into advanced MCP integration patterns
341
+
342
+ Let's continue our journey into the world of Model Context Protocol!
units/en/unit1/sdk.mdx ADDED
@@ -0,0 +1,168 @@
1
+ # MCP SDK
2
+
3
+ The Model Context Protocol provides official SDKs for JavaScript, Python, and several other languages. This makes it easy to implement MCP clients and servers in your applications. These SDKs handle the low-level protocol details, allowing you to focus on building your application's capabilities.
4
+
5
+ ## SDK Overview
6
+
7
+ Both SDKs provide similar core functionality, following the MCP protocol specification we discussed earlier. They handle:
8
+
9
+ - Protocol-level communication
10
+ - Capability registration and discovery
11
+ - Message serialization/deserialization
12
+ - Connection management
13
+ - Error handling
14
+
15
+ ## Core Primitives Implementation
16
+
17
+ Let's explore how to implement each of the core primitives (Tools, Resources, and Prompts) using both SDKs.
18
+
19
+ <hfoptions id="server-implementation">
20
+ <hfoption id="python">
21
+
22
+ <Youtube id="exzrb5QNUis" />
23
+
24
+ ```python
25
+ from mcp.server.fastmcp import FastMCP
26
+
27
+ # Create an MCP server
28
+ mcp = FastMCP("Weather Service")
29
+
30
+
31
+ @mcp.tool()
32
+ def get_weather(location: str) -> str:
33
+ """Get the current weather for a specified location."""
34
+ return f"Weather in {location}: Sunny, 72°F"
35
+
36
+
37
+ @mcp.resource("weather://{location}")
38
+ def weather_resource(location: str) -> str:
39
+ """Provide weather data as a resource."""
40
+ return f"Weather data for {location}: Sunny, 72°F"
41
+
42
+
43
+ @mcp.prompt()
44
+ def weather_report(location: str) -> str:
45
+ """Create a weather report prompt."""
46
+ return f"""You are a weather reporter. Weather report for {location}?"""
47
+
48
+
49
+ # Run the server
50
+ if __name__ == "__main__":
51
+ mcp.run()
52
+
53
+ ```
54
+
55
+ </hfoption>
56
+ <hfoption id="javascript">
57
+
58
+ ```javascript
59
+ import { McpServer, ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";
60
+ import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
61
+ import { z } from "zod";
62
+
63
+ // Create an MCP server
64
+ const server = new McpServer({
65
+ name: "Weather Service",
66
+ version: "1.0.0"
67
+ });
68
+
69
+ // Tool implementation
70
+ server.tool("get_weather",
71
+ { location: z.string() },
72
+ async ({ location }) => ({
73
+ content: [{
74
+ type: "text",
75
+ text: `Weather in ${location}: Sunny, 72°F`
76
+ }]
77
+ })
78
+ );
79
+
80
+ // Resource implementation
81
+ server.resource(
82
+ "weather",
83
+ new ResourceTemplate("weather://{location}", { list: undefined }),
84
+ async (uri, { location }) => ({
85
+ contents: [{
86
+ uri: uri.href,
87
+ text: `Weather data for ${location}: Sunny, 72°F`
88
+ }]
89
+ })
90
+ );
91
+
92
+ // Prompt implementation
93
+ server.prompt(
94
+ "weather_report",
95
+ { location: z.string() },
96
+ async ({ location }) => ({
97
+ messages: [
98
+ {
99
+ role: "assistant",
100
+ content: {
101
+ type: "text",
102
+ text: "You are a weather reporter."
103
+ }
104
+ },
105
+ {
106
+ role: "user",
107
+ content: {
108
+ type: "text",
109
+ text: `Weather report for ${location}?`
110
+ }
111
+ }
112
+ ]
113
+ })
114
+ );
115
+
116
+ // Run the server
117
+ const transport = new StdioServerTransport();
118
+ await server.connect(transport);
119
+ ```
120
+
121
+ </hfoption>
122
+ </hfoptions>
123
+
124
+ Once you have your server implemented, you can start it by running the server script.
125
+
126
+ ```bash
127
+ mcp dev server.py
128
+ ```
129
+
130
+ This will initialize a development server running the file `server.py` and log output like the following:
131
+
132
+ ```bash
133
+ Starting MCP inspector...
134
+ ⚙️ Proxy server listening on port 6277
135
+ Spawned stdio transport
136
+ Connected MCP client to backing server transport
137
+ Created web app transport
138
+ Set up MCP proxy
139
+ 🔍 MCP Inspector is up and running at http://127.0.0.1:6274 🚀
140
+ ```
141
+
142
+ You can then open the MCP Inspector at [http://127.0.0.1:6274](http://127.0.0.1:6274) to see the server's capabilities and interact with them.
143
+
144
+ You'll see the server's capabilities and the ability to call them via the UI.
145
+
146
+ ![MCP Inspector](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit1/6.png)
147
+
148
+ ## MCP SDKs
149
+
150
+ MCP is designed to be language-agnostic, and there are official SDKs available for several popular programming languages:
151
+
152
+ | Language | Repository | Maintainer(s) | Status |
153
+ |----------|------------|---------------|--------|
154
+ | TypeScript | [github.com/modelcontextprotocol/typescript-sdk](https://github.com/modelcontextprotocol/typescript-sdk) | Anthropic | Active |
155
+ | Python | [github.com/modelcontextprotocol/python-sdk](https://github.com/modelcontextprotocol/python-sdk) | Anthropic | Active |
156
+ | Java | [github.com/modelcontextprotocol/java-sdk](https://github.com/modelcontextprotocol/java-sdk) | Spring AI (VMware) | Active |
157
+ | Kotlin | [github.com/modelcontextprotocol/kotlin-sdk](https://github.com/modelcontextprotocol/kotlin-sdk) | JetBrains | Active |
158
+ | C# | [github.com/modelcontextprotocol/csharp-sdk](https://github.com/modelcontextprotocol/csharp-sdk) | Microsoft | Active (Preview) |
159
+ | Swift | [github.com/modelcontextprotocol/swift-sdk](https://github.com/modelcontextprotocol/swift-sdk) | loopwork-ai | Active |
160
+ | Rust | [github.com/modelcontextprotocol/rust-sdk](https://github.com/modelcontextprotocol/rust-sdk) | Anthropic/Community | Active |
161
+
162
+ These SDKs provide language-specific abstractions that simplify working with the MCP protocol, allowing you to focus on implementing the core logic of your servers or clients rather than dealing with low-level protocol details.
163
+
164
+ ## Next Steps
165
+
166
+ We've only scratched the surface of what you can do with the MCP but you've already got a basic server running. In fact, you've also connected to it using the MCP Client in the browser.
167
+
168
+ In the next section, we'll look at how to connect to your server from an LLM.
units/en/unit2/clients.mdx ADDED
@@ -0,0 +1,80 @@
1
+ # Building MCP Clients
2
+
3
+ In this section, we'll create clients that can interact with our MCP server using different programming languages. We'll implement both a JavaScript client using HuggingFace.js and a Python client using smolagents.
4
+
5
+ ## Configuring MCP Clients
6
+
7
+ Effective deployment of MCP servers and clients requires proper configuration. The MCP specification is still evolving, so the configuration methods are subject to change. We'll focus on current best practices for configuration.
8
+
9
+ ### MCP Configuration Files
10
+
11
+ MCP hosts use configuration files to manage server connections. These files define which servers are available and how to connect to them.
12
+
13
+ The configuration files are very simple, easy to understand, and consistent across major MCP hosts.
14
+
15
+ #### `mcp.json` Structure
16
+
17
+ The standard configuration file for MCP is named `mcp.json`. Here's the basic structure:
18
+
19
+ ```json
20
+ {
21
+ "servers": [
22
+ {
23
+ "name": "MCP Server",
24
+ "transport": {
25
+ "type": "sse",
26
+ "url": "http://localhost:7860/gradio_api/mcp/sse"
27
+ }
28
+ }
29
+ ]
30
+ }
31
+ ```
32
+
33
+ In this example, we have a single server configured to use SSE transport, connecting to a local Gradio server running on port 7860.
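Because the file is plain JSON, a client can inspect it with nothing more than the standard library. A minimal sketch, with the configuration above inlined as a string for illustration:

```python
import json

# The mcp.json content from above, inlined as a string for illustration
config_text = """
{
  "servers": [
    {
      "name": "MCP Server",
      "transport": {
        "type": "sse",
        "url": "http://localhost:7860/gradio_api/mcp/sse"
      }
    }
  ]
}
"""

config = json.loads(config_text)
for server in config["servers"]:
    transport = server["transport"]
    print(f"{server['name']}: {transport['type']} -> {transport['url']}")
```

A real host would read the file from disk and use the transport entries to decide how to open each connection.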
34
+
35
+ <Tip>
36
+
37
+ We've connected to the Gradio app via SSE transport because we assume the Gradio app is running on a remote server. If you want to connect to a local script instead, `stdio` transport is the better option.
38
+
39
+ </Tip>
40
+
41
+ #### Configuration for HTTP+SSE Transport
42
+
43
+ For remote servers using HTTP+SSE transport, the configuration includes the server URL:
44
+
45
+ ```json
46
+ {
47
+ "servers": [
48
+ {
49
+ "name": "Remote MCP Server",
50
+ "transport": {
51
+ "type": "sse",
52
+ "url": "https://example.com/gradio_api/mcp/sse"
53
+ }
54
+ }
55
+ ]
56
+ }
57
+ ```
58
+
59
+ This configuration allows your client to communicate with a remotely hosted Gradio MCP server over the MCP protocol.
60
+
61
+ ## Configuring a UI MCP Client
62
+
63
+ When working with Gradio MCP servers, you can configure your UI client to connect to the server using the MCP protocol. Here's how to set it up:
64
+
65
+ ### Basic Configuration
66
+
67
+ Create a new file called `config.json` with the following configuration:
68
+
69
+ ```json
70
+ {
71
+ "mcpServers": {
72
+ "mcp": {
73
+ "url": "http://localhost:7860/gradio_api/mcp/sse"
74
+ }
75
+ }
76
+ }
77
+ ```
78
+
79
+ This configuration allows your UI client to communicate with the Gradio MCP server using the MCP protocol, enabling seamless integration between your frontend and the MCP service.
80
+
units/en/unit2/gradio-client.mdx ADDED
@@ -0,0 +1,147 @@
1
+ # Gradio as an MCP Client
2
+
3
+ In the previous section, we explored how to create an MCP Server using Gradio and connect to it using an MCP Client. In this section, we're going to explore how to use Gradio as an MCP Client to connect to an MCP Server.
4
+
5
+ <Tip>
6
+
7
+ Gradio is best suited to the creation of UI clients and MCP servers, but it is also possible to use it as an MCP Client and expose that as a UI.
8
+
9
+ </Tip>
10
+
11
+ We'll connect to the MCP server we created in the previous section and use it to answer questions.
12
+
13
+ ## MCP Client in Gradio
14
+
15
+ First, we need to install the `smolagents`, `gradio`, and `mcp` libraries, if we haven't already:
16
+
17
+ ```bash
18
+ pip install smolagents[mcp] gradio[mcp] mcp
19
+ ```
20
+
21
+ Now, we can import the necessary libraries and create a simple Gradio interface that uses the MCP Client to connect to the MCP Server.
22
+
23
+ ```python
25
+ import gradio as gr
26
+
27
+ from smolagents import CodeAgent, InferenceClientModel
+ from smolagents.mcp_client import MCPClient
31
+ ```
32
+
33
+ Next, we'll connect to the MCP Server and get the tools that we can use to answer questions.
34
+
35
+ ```python
36
+ mcp_client = MCPClient(
37
+ {"url": "http://localhost:7860/gradio_api/mcp/sse"}
38
+ )
39
+ tools = mcp_client.get_tools()
40
+ ```
41
+
42
+ Now that we have the tools, we can create a simple agent that uses them to answer questions. We'll just use a simple `InferenceClientModel` and the default model from `smolagents` for now.
43
+
44
+ ```python
45
+ model = InferenceClientModel()
46
+ agent = CodeAgent(tools=[*tools], model=model)
47
+ ```
48
+
49
+ Now, we can create a simple Gradio interface that uses the agent to answer questions.
50
+
51
+ ```python
52
+ demo = gr.ChatInterface(
53
+ fn=lambda message, history: str(agent.run(message)),
54
+ type="messages",
55
+ examples=["Prime factorization of 68"],
56
+ title="Agent with MCP Tools",
57
+ description="This is a simple agent that uses MCP tools to answer questions.",
59
+ )
60
+
61
+ demo.launch()
62
+ ```
63
+
64
+ And that's it! We've created a simple Gradio interface that uses the MCP Client to connect to the MCP Server and answer questions.
65
+
66
+ <iframe
67
+ src="https://mcp-course-unit2-gradio-client.hf.space"
68
+ frameborder="0"
69
+ width="850"
70
+ height="450"
71
+ ></iframe>
72
+
73
+
74
+ ## Complete Example
75
+
76
+ Here's the complete example of the MCP Client in Gradio:
77
+
78
+ ```python
79
+ import gradio as gr
80
+
81
+ from smolagents import CodeAgent, InferenceClientModel
+ from smolagents.mcp_client import MCPClient
85
+
86
+
87
+ try:
88
+ mcp_client = MCPClient(
89
+ # {"url": "https://abidlabs-mcp-tools.hf.space/gradio_api/mcp/sse"}
90
+ {"url": "http://localhost:7860/gradio_api/mcp/sse"}
91
+ )
92
+ tools = mcp_client.get_tools()
93
+
94
+ model = InferenceClientModel()
95
+ agent = CodeAgent(tools=[*tools], model=model)
96
+
97
+ def call_agent(message, history):
98
+ return str(agent.run(message))
99
+
100
+
101
+ demo = gr.ChatInterface(
102
+ fn=call_agent,
103
+ type="messages",
104
+ examples=["Prime factorization of 68"],
105
+ title="Agent with MCP Tools",
106
+ description="This is a simple agent that uses MCP tools to answer questions.",
108
+ )
109
+
110
+ demo.launch()
111
+ finally:
112
+ mcp_client.close()
113
+ ```
114
+
115
+ You'll notice that we're closing the MCP Client in the `finally` block. This is important because the MCP Client is a long-lived object that needs to be closed when the program exits.
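The same cleanup guarantee can be expressed as a small context manager. This is a generic sketch, not part of smolagents: `managed` and `FakeClient` are illustrative names, and `FakeClient` stands in for any object with a `close()` method so the example runs without a real server.

```python
from contextlib import contextmanager

# Generic sketch: guarantee cleanup for any client exposing close().
@contextmanager
def managed(client):
    try:
        yield client
    finally:
        client.close()

# Stand-in for a long-lived client such as an MCP client
class FakeClient:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

client = FakeClient()
with managed(client):
    pass  # use the client here

print(client.closed)  # True
```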
116
+
117
+ ## Deploying to Hugging Face Spaces
118
+
119
+ To make your server available to others, you can deploy it to Hugging Face Spaces, just like we did in the previous section.
120
+ To deploy your Gradio MCP client to Hugging Face Spaces:
121
+
122
+ 1. Create a new Space on Hugging Face:
123
+ - Go to huggingface.co/spaces
124
+ - Click "Create new Space"
125
+ - Choose "Gradio" as the SDK
126
+ - Name your space (e.g., "mcp-client")
127
+
128
+ 2. Create a `requirements.txt` file:
129
+ ```txt
130
+ gradio[mcp]
131
+ smolagents[mcp]
132
+ ```
133
+
134
+ 3. Push your code to the Space:
135
+ ```bash
136
+ git init
137
+ git add server.py requirements.txt
138
+ git commit -m "Initial commit"
139
+ git remote add origin https://huggingface.co/spaces/YOUR_USERNAME/mcp-client
140
+ git push -u origin main
141
+ ```
142
+
143
+ ## Conclusion
144
+
145
+ In this section, we've explored how to use Gradio as an MCP Client to connect to an MCP Server. We've also seen how to deploy the MCP Client in Hugging Face Spaces.
146
+
147
+
units/en/unit2/gradio-server.mdx ADDED
@@ -0,0 +1,188 @@
1
+ # Building the Gradio MCP Server
2
+
3
+ In this section, we'll create our sentiment analysis MCP server using Gradio. This server will expose a sentiment analysis tool that can be used by both human users through a web interface and AI models through the MCP protocol.
4
+
5
+ ## Introduction to Gradio MCP Integration
6
+
7
+ Gradio provides a straightforward way to create MCP servers by automatically converting your Python functions into MCP tools. When you set `mcp_server=True` in `launch()`, Gradio:
8
+
9
+ 1. Automatically converts your functions into MCP Tools
10
+ 2. Maps input components to tool argument schemas
11
+ 3. Determines response formats from output components
12
+ 4. Sets up JSON-RPC over HTTP+SSE for client-server communication
13
+ 5. Creates both a web interface and an MCP server endpoint
14
+
15
+ ## Setting Up the Project
16
+
17
+ First, let's create a new directory for our project and set up the required dependencies:
18
+
19
+ ```bash
20
+ mkdir mcp-sentiment
21
+ cd mcp-sentiment
22
+ python -m venv venv
23
+ source venv/bin/activate # On Windows: venv\Scripts\activate
24
+ pip install "gradio[mcp]" textblob
25
+ ```
26
+
27
+ ## Creating the Server
28
+
29
+ Create a new file called `server.py` with the following code:
30
+
31
+ ```python
32
+ import gradio as gr
33
+ from textblob import TextBlob
34
+
35
+ def sentiment_analysis(text: str) -> dict:
36
+ """
37
+ Analyze the sentiment of the given text.
38
+
39
+ Args:
40
+ text (str): The text to analyze
41
+
42
+ Returns:
43
+ dict: A dictionary containing polarity, subjectivity, and assessment
44
+ """
45
+ blob = TextBlob(text)
46
+ sentiment = blob.sentiment
47
+
48
+ return {
49
+ "polarity": round(sentiment.polarity, 2), # -1 (negative) to 1 (positive)
50
+ "subjectivity": round(sentiment.subjectivity, 2), # 0 (objective) to 1 (subjective)
51
+ "assessment": "positive" if sentiment.polarity > 0 else "negative" if sentiment.polarity < 0 else "neutral"
52
+ }
53
+
54
+ # Create the Gradio interface
55
+ demo = gr.Interface(
56
+ fn=sentiment_analysis,
57
+ inputs=gr.Textbox(placeholder="Enter text to analyze..."),
58
+ outputs=gr.JSON(),
59
+ title="Text Sentiment Analysis",
60
+ description="Analyze the sentiment of text using TextBlob"
61
+ )
62
+
63
+ # Launch the interface and MCP server
64
+ if __name__ == "__main__":
65
+ demo.launch(mcp_server=True)
66
+ ```
67
+
68
+ ## Understanding the Code
69
+
70
+ Let's break down the key components:
71
+
72
+ 1. **Function Definition**:
73
+ - The `sentiment_analysis` function takes a text input and returns a dictionary
74
+ - It uses TextBlob to analyze the sentiment
75
+ - The docstring is crucial as it helps Gradio generate the MCP tool schema
76
+ - Type hints (`str` and `dict`) help define the input/output schema
77
+
78
+ 2. **Gradio Interface**:
79
+ - `gr.Interface` creates both the web UI and MCP server
80
+ - The function is exposed as an MCP tool automatically
81
+ - Input and output components define the tool's schema
82
+ - The JSON output component ensures proper serialization
83
+
84
+ 3. **MCP Server**:
85
+ - Setting `mcp_server=True` enables the MCP server
86
+ - The server will be available at `http://localhost:7860/gradio_api/mcp/sse`
87
+ - You can also enable it using the environment variable:
88
+ ```bash
89
+ export GRADIO_MCP_SERVER=True
90
+ ```
91
+
92
+ ## Running the Server
93
+
94
+ Start the server by running:
95
+
96
+ ```bash
97
+ python server.py
98
+ ```
99
+
100
+ You should see output indicating that both the web interface and MCP server are running. The web interface will be available at `http://localhost:7860`, and the MCP server at `http://localhost:7860/gradio_api/mcp/sse`.
101
+
102
+ ## Testing the Server
103
+
104
+ You can test the server in two ways:
105
+
106
+ 1. **Web Interface**:
107
+ - Open `http://localhost:7860` in your browser
108
+ - Enter some text and click "Submit"
109
+ - You should see the sentiment analysis results
110
+
111
+ 2. **MCP Schema**:
112
+ - Visit `http://localhost:7860/gradio_api/mcp/schema`
113
+ - This shows the MCP tool schema that clients will use
114
+ - You can also find this in the "View API" link in the footer of your Gradio app
115
+
116
+ ## Troubleshooting Tips
117
+
118
+ 1. **Type Hints and Docstrings**:
119
+ - Always provide type hints for your function parameters and return values
120
+ - Include a docstring with an "Args:" block for each parameter
121
+ - This helps Gradio generate accurate MCP tool schemas
122
+
123
+ 2. **String Inputs**:
124
+ - When in doubt, accept input arguments as `str`
125
+ - Convert them to the desired type inside the function
126
+ - This provides better compatibility with MCP clients
127
+
128
+ 3. **SSE Support**:
129
+ - Some MCP clients don't support SSE-based MCP Servers
130
+ - In those cases, use `mcp-remote`:
131
+ ```json
132
+ {
133
+ "mcpServers": {
134
+ "gradio": {
135
+ "command": "npx",
136
+ "args": [
137
+ "mcp-remote",
138
+ "http://localhost:7860/gradio_api/mcp/sse"
139
+ ]
140
+ }
141
+ }
142
+ }
143
+ ```
144
+
145
+ 4. **Connection Issues**:
146
+ - If you encounter connection problems, try restarting both the client and server
147
+ - Check that the server is running and accessible
148
+ - Verify that the MCP schema is available at the expected URL
149
+
150
+ ## Deploying to Hugging Face Spaces
151
+
152
+ To make your server available to others, you can deploy it to Hugging Face Spaces:
153
+
154
+ 1. Create a new Space on Hugging Face:
155
+ - Go to huggingface.co/spaces
156
+ - Click "Create new Space"
157
+ - Choose "Gradio" as the SDK
158
+ - Name your space (e.g., "mcp-sentiment")
159
+
160
+ 2. Create a `requirements.txt` file:
161
+ ```txt
162
+ gradio[mcp]
163
+ textblob
164
+ ```
165
+
166
+ 3. Push your code to the Space:
167
+ ```bash
168
+ git init
169
+ git add server.py requirements.txt
170
+ git commit -m "Initial commit"
171
+ git remote add origin https://huggingface.co/spaces/YOUR_USERNAME/mcp-sentiment
172
+ git push -u origin main
173
+ ```
174
+
175
+ Your MCP server will now be available at:
176
+ ```
177
+ https://YOUR_USERNAME-mcp-sentiment.hf.space/gradio_api/mcp/sse
178
+ ```
179
+
180
+ ## Next Steps
181
+
182
+ Now that we have our MCP server running, we'll create clients to interact with it. In the next sections, we'll:
183
+
184
+ 1. Create a HuggingFace.js-based client inspired by Tiny Agents
185
+ 2. Implement a SmolAgents-based Python client
186
+ 3. Test both clients with our deployed server
187
+
188
+ Let's move on to building our first client!
units/en/unit2/introduction.mdx ADDED
@@ -0,0 +1,64 @@
1
+ # Building an End-to-End MCP Application
2
+
3
+ Welcome to Unit 2 of the MCP Course!
4
+
5
+ In this unit, we'll build a complete MCP application from scratch, focusing on creating a server with Gradio and connecting it with multiple clients. This hands-on approach will give you practical experience with the entire MCP ecosystem.
6
+
7
+ <Tip>
8
+
9
+ In this unit, we're going to build a simple MCP server and client using Gradio and the HuggingFace hub. In the next unit, we'll build a more complex server that tackles a real-world use case.
10
+
11
+ </Tip>
12
+
13
+ ## What You'll Learn
14
+
15
+ In this unit, you will:
16
+
17
+ - Create an MCP Server using Gradio's built-in MCP support
18
+ - Build a sentiment analysis tool that can be used by AI models
19
+ - Connect to the server using different client implementations:
20
+ - A HuggingFace.js-based client
21
+ - A SmolAgents-based client for Python
22
+ - Deploy your MCP Server to Hugging Face Spaces
23
+ - Test and debug the complete system
24
+
25
+ By the end of this unit, you'll have a working MCP application that demonstrates the power and flexibility of the protocol.
26
+
27
+ ## Prerequisites
28
+
29
+ Before proceeding with this unit, make sure you:
30
+
31
+ - Have completed Unit 1 or have a basic understanding of MCP concepts
32
+ - Are comfortable with both Python and JavaScript/TypeScript
33
+ - Have a basic understanding of APIs and client-server architecture
34
+ - Have a development environment with:
35
+ - Python 3.10+
36
+ - Node.js 18+
37
+ - A Hugging Face account (for deployment)
38
+
39
+ ## Our End-to-End Project
40
+
41
+ We'll build a sentiment analysis application that consists of three main parts: the server, the client, and the deployment.
42
+
43
+ ![sentiment analysis application](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit2/1.png)
44
+
45
+ ### Server Side
46
+
47
+ - Uses Gradio to create a web interface and MCP server via `gr.Interface`
48
+ - Implements a sentiment analysis tool using TextBlob
49
+ - Exposes the tool through both HTTP and MCP protocols
50
+
51
+ ### Client Side
52
+
53
+ - Implements a HuggingFace.js client
54
+ - Or, creates a smolagents Python client
55
+ - Demonstrates how to use the same server with different client implementations
56
+
57
+ ### Deployment
58
+
59
+ - Deploys the server to Hugging Face Spaces
60
+ - Configures the clients to work with the deployed server
61
+
62
+ ## Let's Get Started!
63
+
64
+ Are you ready to build your first end-to-end MCP application? Let's begin by setting up the development environment and creating our Gradio MCP server.
units/en/unit2/tiny-agents.mdx ADDED
@@ -0,0 +1,457 @@
1
+ # Tiny Agents: an MCP-powered agent in 50 lines of code
2
+
3
+ Now that we've built MCP servers in Gradio, let's explore MCP clients even further. This section builds on the experimental project [Tiny Agents](https://huggingface.co/blog/tiny-agents), which demonstrates a super simple way of deploying MCP clients that can connect to services like our Gradio sentiment analysis server.
4
+
5
+ In this short exercise, we will walk you through how to implement a TypeScript (JS) MCP client that can communicate with any MCP server, including the Gradio-based sentiment analysis server we built in the previous section. You'll see how MCP standardizes the way agents interact with tools, making Agentic AI development significantly simpler.
6
+
7
+ ![meme](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/tiny-agents/thumbnail.jpg)
8
+ <figcaption>Image credit https://x.com/adamdotdev</figcaption>
9
+
10
+ We will show you how to connect your tiny agent to Gradio-based MCP servers, allowing it to leverage both your custom sentiment analysis tool and other pre-built tools.
11
+
12
+ ## How to run the complete demo
13
+
14
+ If you have NodeJS (with `pnpm` or `npm`), just run this in a terminal:
15
+
16
+ ```bash
17
+ npx @huggingface/mcp-client
18
+ ```
19
+
20
+ or if using `pnpm`:
21
+
22
+ ```bash
23
+ pnpx @huggingface/mcp-client
24
+ ```
25
+
26
+ This installs the package into a temporary folder then executes its command.
27
+
28
+ You'll see your simple Agent connect to multiple MCP servers (running locally), loading their tools (similar to how it would load your Gradio sentiment analysis tool), then prompting you for a conversation.
29
+
30
+ <video controls autoplay loop>
31
+ <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/tiny-agents/use-filesystem.mp4" type="video/mp4">
32
+ </video>
33
+
34
+ By default our example Agent connects to the following two MCP servers:
35
+
36
+ - the "canonical" [file system server](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem), which gets access to your Desktop,
37
+ - and the [Playwright MCP](https://github.com/microsoft/playwright-mcp) server, which knows how to use a sandboxed Chromium browser for you.
38
+
39
+ You can easily add your Gradio sentiment analysis server to this list, as we'll demonstrate later in this section.
40
+
41
+ > [!NOTE]
42
+ > Note: this is a bit counter-intuitive, but currently all MCP servers in tiny agents are actually local processes (though remote servers are coming soon). This doesn't include our Gradio server running on localhost:7860, which we reach through a local `mcp-remote` bridge process.
43
+
44
+ Our input for this first video was:
45
+
46
+ > write a haiku about the Hugging Face community and write it to a file named "hf.txt" on my Desktop
47
+
48
+ Now let us try this prompt that involves some Web browsing:
49
+
50
+ > do a Web Search for HF inference providers on Brave Search and open the first 3 results
51
+
52
+ <video controls autoplay loop>
53
+ <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/tiny-agents/brave-search.mp4" type="video/mp4">
54
+ </video>
55
+
56
+ With our Gradio sentiment analysis tool connected, we could similarly ask:
57
+ > analyze the sentiment of this review: "I absolutely loved the product, it exceeded all my expectations!"
58
+
59
+ ### Default model and provider
60
+
61
+ In terms of model/provider pair, our example Agent uses by default:
62
+ - ["Qwen/Qwen2.5-72B-Instruct"](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)
63
+ - running on [Nebius](https://huggingface.co/docs/inference-providers/providers/nebius)
64
+
65
+ This is all configurable through env variables! Here, we'll also show how to add our Gradio MCP server:
66
+
67
+ ```ts
68
+ const agent = new Agent({
69
+ provider: process.env.PROVIDER ?? "nebius",
70
+ model: process.env.MODEL_ID ?? "Qwen/Qwen2.5-72B-Instruct",
71
+ apiKey: process.env.HF_TOKEN,
72
+ servers: [
73
+ // Default servers
74
+ {
75
+ command: "npx",
76
+ args: ["@modelcontextprotocol/server-filesystem", "~/Desktop"]
77
+ },
78
+ {
79
+ command: "npx",
80
+ args: ["@playwright/mcp@latest"]
81
+ },
82
+ // Our Gradio sentiment analysis server
83
+ {
84
+ command: "npx",
85
+ args: [
86
+ "mcp-remote",
87
+ "http://localhost:7860/gradio_api/mcp/sse"
88
+ ]
89
+ }
90
+ ],
91
+ });
92
+ ```
93
+
94
+ <Tip>
95
+
96
+ We connect to our Gradio based MCP server via the [`mcp-remote`](https://www.npmjs.com/package/mcp-remote) package.
97
+
98
+ </Tip>
99
+
100
+
101
+ ## The foundation for this: native tool calling support in LLMs
102
+
103
+ What makes connecting Gradio MCP servers to our Tiny Agent possible is that recent LLMs (both closed and open) have been trained for function calling, a.k.a. tool use. This same capability powers our integration with the sentiment analysis tool we built with Gradio.
104
+
105
+ A tool is defined by its name, a description, and a JSONSchema representation of its parameters - exactly how we defined our sentiment analysis function in the Gradio server. Let's look at a simple example:
106
+
107
+ ```ts
108
+ const weatherTool = {
109
+ type: "function",
110
+ function: {
111
+ name: "get_weather",
112
+ description: "Get current temperature for a given location.",
113
+ parameters: {
114
+ type: "object",
115
+ properties: {
116
+ location: {
117
+ type: "string",
118
+ description: "City and country e.g. Bogotá, Colombia",
119
+ },
120
+ },
121
+ },
122
+ },
123
+ };
124
+ ```
125
+
126
+ Our Gradio sentiment analysis tool would have a similar structure, with `text` as the input parameter instead of `location`.
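Following the same shape, a hypothetical definition for our sentiment analysis tool could look like the sketch below. The name, description, and parameter wording here are illustrative assumptions: in practice, Gradio derives the real values from the Python function's name, docstring, and type hints.

```typescript
// Hypothetical tool definition for the Gradio sentiment analysis server.
// Name and description are assumptions for illustration; Gradio generates
// the actual schema from the wrapped Python function.
const sentimentTool = {
  type: "function",
  function: {
    name: "sentiment_analysis",
    description: "Analyze the sentiment of the given text.",
    parameters: {
      type: "object",
      properties: {
        text: {
          type: "string",
          description: "The text to analyze",
        },
      },
      required: ["text"],
    },
  },
};
```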
127
+
128
+ The canonical documentation I will link to here is [OpenAI's function calling doc](https://platform.openai.com/docs/guides/function-calling?api-mode=chat). (Yes... OpenAI pretty much defines the LLM standards for the whole community 😅).
129
+
130
+ Inference engines let you pass a list of tools when calling the LLM, and the LLM is free to call zero, one, or more of those tools.
131
+ As a developer, you run the tools and feed their result back into the LLM to continue the generation.
132
+
133
+ > [!NOTE]
134
+ > Note that in the backend (at the inference engine level), the tools are simply passed to the model in a specially-formatted `chat_template`, like any other message, and then parsed out of the response (using model-specific special tokens) to expose them as tool calls.
135
+
136
+ ## Implementing an MCP client on top of InferenceClient
137
+
138
+ Now that we know what a tool is in recent LLMs, let's implement the actual MCP client that will communicate with our Gradio server and other MCP servers.
139
+
140
+ The official doc at https://modelcontextprotocol.io/quickstart/client is fairly well-written. You only have to replace any mention of the Anthropic client SDK with any other OpenAI-compatible client SDK. (There is also a [llms.txt](https://modelcontextprotocol.io/llms-full.txt) you can feed into your LLM of choice to help you code along.)
141
+
142
+ As a reminder, we use HF's `InferenceClient` for our inference client.
143
+
144
+ > [!TIP]
145
+ > The complete `McpClient.ts` code file is [here](https://github.com/huggingface/huggingface.js/blob/main/packages/mcp-client/src/McpClient.ts) if you want to follow along using the actual code 🤓
146
+
147
+ Our `McpClient` class has:
148
+ - an Inference Client (works with any Inference Provider, and `huggingface/inference` supports both remote and local endpoints)
149
+ - a set of MCP client sessions, one for each connected MCP server (this allows us to connect to multiple servers, including our Gradio server)
150
+ - and a list of available tools that is going to be filled from the connected servers and just slightly re-formatted.
151
+
152
+ ```ts
153
+ export class McpClient {
154
+ protected client: InferenceClient;
155
+ protected provider: string;
156
+ protected model: string;
157
+ private clients: Map<ToolName, Client> = new Map();
158
+ public readonly availableTools: ChatCompletionInputTool[] = [];
159
+
160
+ constructor({ provider, model, apiKey }: { provider: InferenceProvider; model: string; apiKey: string }) {
161
+ this.client = new InferenceClient(apiKey);
162
+ this.provider = provider;
163
+ this.model = model;
164
+ }
165
+
166
+ // [...]
167
+ }
168
+ ```
169
+
170
+ To connect to an MCP server (like our Gradio sentiment analysis server), the official `@modelcontextprotocol/sdk/client` TypeScript SDK provides a `Client` class with a `listTools()` method:
171
+
172
+ ```ts
173
+ async addMcpServer(server: StdioServerParameters): Promise<void> {
174
+ const transport = new StdioClientTransport({
175
+ ...server,
176
+ env: { ...server.env, PATH: process.env.PATH ?? "" },
177
+ });
178
+ const mcp = new Client({ name: "@huggingface/mcp-client", version: packageVersion });
179
+ await mcp.connect(transport);
180
+
181
+ const toolsResult = await mcp.listTools();
182
+ debug(
183
+ "Connected to server with tools:",
184
+ toolsResult.tools.map(({ name }) => name)
185
+ );
186
+
187
+ for (const tool of toolsResult.tools) {
188
+ this.clients.set(tool.name, mcp);
189
+ }
190
+
191
+ this.availableTools.push(
192
+ ...toolsResult.tools.map((tool) => {
193
+ return {
194
+ type: "function",
195
+ function: {
196
+ name: tool.name,
197
+ description: tool.description,
198
+ parameters: tool.inputSchema,
199
+ },
200
+ } satisfies ChatCompletionInputTool;
201
+ })
202
+ );
203
+ }
204
+ ```
205
+
206
+ `StdioServerParameters` is an interface from the MCP SDK that lets you easily spawn a local process: as we mentioned earlier, currently all MCP servers are spawned as local processes (our Gradio server is reached over HTTP, but through the local `mcp-remote` proxy process).
207
+
208
+ For each MCP server we connect to (including our Gradio sentiment analysis server), we slightly re-format its list of tools and add them to `this.availableTools`.
209
+
210
+ ### How to use the tools
211
+
212
+ Using our sentiment analysis tool (or any other MCP tool) is straightforward. You just pass `this.availableTools` to your LLM chat-completion, in addition to your usual array of messages:
213
+
214
+ ```ts
215
+ const stream = this.client.chatCompletionStream({
216
+ provider: this.provider,
217
+ model: this.model,
218
+ messages,
219
+ tools: this.availableTools,
220
+ tool_choice: "auto",
221
+ });
222
+ ```
223
+
224
+ `tool_choice: "auto"` is the parameter you pass for the LLM to generate zero, one, or multiple tool calls.
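When streaming, a tool call does not arrive in one piece: it comes as incremental deltas that must be stitched together before the arguments can be parsed. Here is a minimal sketch of that accumulation, assuming the OpenAI-style streaming delta shape (the field names below are illustrative, not the exact `@huggingface/inference` types):

```typescript
// Sketch: stitching a streamed tool call back together from chunk deltas.
// The delta shape mirrors the OpenAI-compatible streaming format; exact
// field names in the real client types may differ slightly.
interface ToolCallDelta {
  index: number;
  id?: string;
  function?: { name?: string; arguments?: string };
}

const pending: { id: string; name: string; arguments: string }[] = [];

function accumulate(delta: ToolCallDelta): void {
  // The first chunk for a given index carries the id and function name;
  // later chunks append fragments of the JSON-encoded arguments string.
  const slot = (pending[delta.index] ??= { id: "", name: "", arguments: "" });
  if (delta.id) slot.id = delta.id;
  if (delta.function?.name) slot.name = delta.function.name;
  if (delta.function?.arguments) slot.arguments += delta.function.arguments;
}
```

Once the stream ends, each pending entry's `arguments` string can be `JSON.parse`d and routed to the matching tool.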
225
+
226
+ When parsing or streaming the output, the LLM will generate some tool calls (i.e., a function name and some JSON-encoded arguments), which you (as a developer) need to execute. The MCP client SDK once again makes that very easy; it has a `client.callTool()` method:
227
+
228
+ ```ts
229
+ const toolName = toolCall.function.name;
230
+ const toolArgs = JSON.parse(toolCall.function.arguments);
231
+
232
+ const toolMessage: ChatCompletionInputMessageTool = {
233
+ role: "tool",
234
+ tool_call_id: toolCall.id,
235
+ content: "",
236
+ name: toolName,
237
+ };
238
+
239
+ // Get the appropriate session for this tool
240
+ const client = this.clients.get(toolName);
241
+ if (client) {
242
+ const result = await client.callTool({ name: toolName, arguments: toolArgs });
243
+ toolMessage.content = result.content[0].text;
244
+ } else {
245
+ toolMessage.content = `Error: No session found for tool: ${toolName}`;
246
+ }
247
+ ```
248
+
249
+ If the LLM chooses to use our sentiment analysis tool, this code will automatically route the call to our Gradio server, execute the analysis, and return the result back to the LLM.
250
+
251
+ Finally, you add the resulting tool message to your `messages` array and feed it back into the LLM.
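Sketched with simplified message shapes (the real client uses the `ChatCompletionInput*` types, and the ids and contents below are illustrative values), that feedback step looks like this:

```typescript
// Minimal sketch of the tool-result feedback step. Message shapes are
// simplified; "call_1" and the weather payload are made-up values.
type Message = {
  role: "system" | "user" | "assistant" | "tool";
  content: string;
  tool_call_id?: string;
  name?: string;
};

const messages: Message[] = [
  { role: "user", content: "What's the weather in Paris?" },
  { role: "assistant", content: "" }, // this turn carried the tool call
];

// Append the tool result; the next chat-completion call receives the
// full history so the LLM can produce its final, grounded answer.
messages.push({
  role: "tool",
  tool_call_id: "call_1",
  name: "get_weather",
  content: '{"temperature": 21, "unit": "C"}',
});
```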
252
+
253
+ ## Our 50-lines-of-code Agent 🤯
254
+
255
+ Now that we have an MCP client capable of connecting to arbitrary MCP servers (including our Gradio sentiment analysis server) to get their lists of tools, and capable of injecting those tools into the LLM inference and parsing the resulting tool calls, well... what is an Agent?
256
+
257
+ > Once you have an inference client with a set of tools, then an Agent is just a while loop on top of it.
258
+
259
+ In more detail, an Agent is simply a combination of:
260
+ - a system prompt
261
+ - an LLM Inference client
262
+ - an MCP client to hook a set of Tools into it from a bunch of MCP servers (including our Gradio server)
263
+ - some basic control flow (see below for the while loop)
264
+
265
+ > [!TIP]
266
+ > The complete `Agent.ts` code file is [here](https://github.com/huggingface/huggingface.js/blob/main/packages/mcp-client/src/Agent.ts).
267
+
268
+ Our `Agent` class simply extends `McpClient`:
269
+
270
+ ```ts
271
+ export class Agent extends McpClient {
272
+ private readonly servers: StdioServerParameters[];
273
+ protected messages: ChatCompletionInputMessage[];
274
+
275
+ constructor({
276
+ provider,
277
+ model,
278
+ apiKey,
279
+ servers,
280
+ prompt,
281
+ }: {
282
+ provider: InferenceProvider;
283
+ model: string;
284
+ apiKey: string;
285
+ servers: StdioServerParameters[];
286
+ prompt?: string;
287
+ }) {
288
+ super({ provider, model, apiKey });
289
+ this.servers = servers;
290
+ this.messages = [
291
+ {
292
+ role: "system",
293
+ content: prompt ?? DEFAULT_SYSTEM_PROMPT,
294
+ },
295
+ ];
296
+ }
297
+ }
298
+ ```
299
+
300
+ By default, we use a very simple system prompt inspired by the one shared in the [GPT-4.1 prompting guide](https://cookbook.openai.com/examples/gpt4-1_prompting_guide).
301
+
302
+ Even though this comes from OpenAI 😈, this sentence in particular applies to more and more models, both closed and open:
303
+
304
+ > We encourage developers to exclusively use the tools field to pass tools, rather than manually injecting tool descriptions into your prompt and writing a separate parser for tool calls, as some have reported doing in the past.
305
+
306
+ Which is to say, we don't need to provide painstakingly formatted lists of tool use examples in the prompt. The `tools: this.availableTools` param is enough, and the LLM will know how to use both the filesystem tools and our Gradio sentiment analysis tool.
307
+
308
+ Loading the tools on the Agent is literally just connecting to the MCP servers we want (in parallel because it's so easy to do in JS):
309
+
310
+ ```ts
311
+ async loadTools(): Promise<void> {
312
+ await Promise.all(this.servers.map((s) => this.addMcpServer(s)));
313
+ }
314
+ ```
315
+
316
+ We add two extra tools (outside of MCP) that can be used by the LLM for our Agent's control flow:
317
+
318
+ ```ts
319
+ const taskCompletionTool: ChatCompletionInputTool = {
320
+ type: "function",
321
+ function: {
322
+ name: "task_complete",
323
+ description: "Call this tool when the task given by the user is complete",
324
+ parameters: {
325
+ type: "object",
326
+ properties: {},
327
+ },
328
+ },
329
+ };
330
+ const askQuestionTool: ChatCompletionInputTool = {
331
+ type: "function",
332
+ function: {
333
+ name: "ask_question",
334
+ description: "Ask a question to the user to get more info required to solve or clarify their problem.",
335
+ parameters: {
336
+ type: "object",
337
+ properties: {},
338
+ },
339
+ },
340
+ };
341
+ const exitLoopTools = [taskCompletionTool, askQuestionTool];
342
+ ```
343
+
344
+ When calling any of these tools, the Agent will break its loop and give control back to the user for new input.
345
+
346
+ ### The complete while loop
347
+
348
+ Behold our complete while loop! 🎉
349
+
350
+ The gist of our Agent's main while loop is that we simply iterate with the LLM alternating between tool calling and feeding it the tool results, and we do so **until the LLM starts to respond with two non-tool messages in a row**.
351
+
352
+ This is the complete while loop:
353
+
354
+ ```ts
355
+ let numOfTurns = 0;
356
+ let nextTurnShouldCallTools = true;
357
+ while (true) {
358
+ try {
359
+ yield* this.processSingleTurnWithTools(this.messages, {
360
+ exitLoopTools,
361
+ exitIfFirstChunkNoTool: numOfTurns > 0 && nextTurnShouldCallTools,
362
+ abortSignal: opts.abortSignal,
363
+ });
364
+ } catch (err) {
365
+ if (err instanceof Error && err.message === "AbortError") {
366
+ return;
367
+ }
368
+ throw err;
369
+ }
370
+ numOfTurns++;
371
+ const currentLast = this.messages.at(-1)!;
372
+ if (
373
+ currentLast.role === "tool" &&
374
+ currentLast.name &&
375
+ exitLoopTools.map((t) => t.function.name).includes(currentLast.name)
376
+ ) {
377
+ return;
378
+ }
379
+ if (currentLast.role !== "tool" && numOfTurns > MAX_NUM_TURNS) {
380
+ return;
381
+ }
382
+ if (currentLast.role !== "tool" && nextTurnShouldCallTools) {
383
+ return;
384
+ }
385
+ if (currentLast.role === "tool") {
386
+ nextTurnShouldCallTools = false;
387
+ } else {
388
+ nextTurnShouldCallTools = true;
389
+ }
390
+ }
391
+ ```
392
+
393
+ ## Connecting Tiny Agents with Gradio MCP Servers
394
+
395
+ Now that we understand both Tiny Agents and Gradio MCP servers, let's see how they work together! The beauty of MCP is that it provides a standardized way for agents to interact with any MCP-compatible server, including our Gradio-based sentiment analysis server.
396
+
397
+ ### Using the Gradio Server with Tiny Agents
398
+
399
+ To connect our Tiny Agent to the Gradio sentiment analysis server we built earlier, we just need to add it to our list of servers. Here's how we can modify our agent configuration:
400
+
401
+ ```ts
402
+ const agent = new Agent({
403
+ provider: process.env.PROVIDER ?? "nebius",
404
+ model: process.env.MODEL_ID ?? "Qwen/Qwen2.5-72B-Instruct",
405
+ apiKey: process.env.HF_TOKEN,
406
+ servers: [
407
+ // ... existing servers ...
408
+ {
409
+ command: "npx",
410
+ args: [
411
+ "mcp-remote",
412
+ "http://localhost:7860/gradio_api/mcp/sse" // Your Gradio MCP server
413
+ ]
414
+ }
415
+ ],
416
+ });
417
+ ```
418
+
419
+ Now our agent can use the sentiment analysis tool alongside other tools! For example, it could:
420
+ 1. Read text from a file using the filesystem server
421
+ 2. Analyze its sentiment using our Gradio server
422
+ 3. Write the results back to a file
423
+
424
+ ### Example Interaction
425
+
426
+ Here's what a conversation with our agent might look like:
427
+
428
+ ```
429
+ User: Read the file "feedback.txt" from my Desktop and analyze its sentiment
430
+
431
+ Agent: I'll help you analyze the sentiment of the feedback file. Let me break this down into steps:
432
+
433
+ 1. First, I'll read the file using the filesystem tool
434
+ 2. Then, I'll analyze its sentiment using the sentiment analysis tool
435
+ 3. Finally, I'll write the results to a new file
436
+
437
+ [Agent proceeds to use the tools and provide the analysis]
438
+ ```
439
+
440
+ ### Deployment Considerations
441
+
442
+ When deploying your Gradio MCP server to Hugging Face Spaces, you'll need to update the server URL in your agent configuration to point to your deployed Space:
443
+
444
+ ```ts
445
+ {
446
+ command: "npx",
447
+ args: [
448
+ "mcp-remote",
449
+ "https://YOUR_USERNAME-mcp-sentiment.hf.space/gradio_api/mcp/sse"
450
+ ]
451
+ }
452
+ ```
453
+
454
+ This allows your agent to use the sentiment analysis tool from anywhere, not just locally!
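Since the rest of the agent is already configured through environment variables, you can handle the local-vs-deployed switch the same way. A small sketch (`GRADIO_MCP_URL` is a hypothetical variable name, not part of any official configuration):

```typescript
// Sketch: selecting the Gradio MCP server URL from an environment
// variable, falling back to the local dev server. GRADIO_MCP_URL is a
// hypothetical name chosen for this example.
const gradioServerUrl =
  process.env.GRADIO_MCP_URL ??
  "http://localhost:7860/gradio_api/mcp/sse";

const gradioServer = {
  command: "npx",
  args: ["mcp-remote", gradioServerUrl],
};
```

Set `GRADIO_MCP_URL` to your Space's `/gradio_api/mcp/sse` endpoint in production, and leave it unset during local development.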
455
+
456
+
457
+
units/en/unit3/introduction.mdx ADDED
@@ -0,0 +1 @@
 
 
1
+ # Introduction
units/en/unit4/introduction.mdx ADDED
@@ -0,0 +1 @@
 
 
1
+ # Advanced Topics, Security, and the Future of MCP