---
title: Final Assignment Template
emoji: 📚
colorFrom: red
colorTo: green
sdk: gradio
sdk_version: 5.47.0
app_file: app.py
hf_oauth: true
pinned: false
license: mit
short_description: Final Assignment for agents course
---

<!--- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference --->

### Graph

The graph is built with the LangGraph framework. The number of iterations between the agent and the tools node is controlled through a field in the state: the **route_tools** function checks whether the **iterations** counter, updated in the **increase** node, is greater than the limit established in the MAX_ITERATIONS constant.
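The routing logic described above can be sketched in plain Python. The node and field names follow the description, but the cap value and the `"tools"`/`"end"` labels are assumptions for illustration, not the actual implementation:

```python
# Sketch of the iteration-capped routing between the agent and the tools node.
# Field and function names follow the README; the cap value and the
# "tools"/"end" labels are assumptions, not the real implementation.
MAX_ITERATIONS = 5  # assumed cap


def increase(state: dict) -> dict:
    """'increase' node: bump the iteration counter kept in the graph state."""
    return {**state, "iterations": state.get("iterations", 0) + 1}


def route_tools(state: dict) -> str:
    """Conditional edge: stop once the cap is exceeded, otherwise call tools."""
    if state["iterations"] > MAX_ITERATIONS:
        return "end"    # would map to END in the real LangGraph graph
    if state.get("tool_calls"):
        return "tools"  # route to the tools node
    return "end"
```

In the real graph these functions would be wired up with `add_node` and `add_conditional_edges`; here they only illustrate the control flow.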

### Models
I have tried two different models so far.
* **gemini-2.5-flash**. Free to use, taking advantage of the generous limits Google AI provides for developers.
* **gpt-oss-120b**. Used through Hugging Face inference providers. Since I have a Pro account, the $2/month of included credits is more than enough to develop the Final Assignment project.

Note that **gpt-oss-120b does not work through the _Together_ inference provider**: Together does not seem to play well with LangGraph. It worked just fine through the **_Fireworks_** inference provider.

### Tools
* python_tool
* reverse_tool
* excel_file_to_markdown
* sum_numbers
* web_search
* get_wikipedia_info
* ask_audio_model
* chess_tool (this one cannot run on a Hugging Face Space)
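To give a rough idea of what the simpler tools do, here is a stdlib-only sketch of two of them. The implementations are assumed from the names (the real tools may differ, and in the actual agent they are registered as LangChain tools):

```python
def reverse_tool(text: str) -> str:
    """Reverse a string (handy for task prompts written backwards)."""
    return text[::-1]


def sum_numbers(numbers: list[float]) -> float:
    """Return the sum of a list of numbers."""
    return sum(numbers)
```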


### Results

The answers of the two models were cached after generation and then submitted. I modified the Gradio app to do so, as suggested in the template comments.
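The caching step can be sketched as a JSON file keyed by task id. The filename and helper names here are made up for illustration; the actual app may store answers differently:

```python
import json
from pathlib import Path

CACHE_FILE = Path("answers_cache.json")  # assumed filename


def load_cache() -> dict:
    """Return the cached task_id -> answer mapping, or an empty dict."""
    if CACHE_FILE.exists():
        return json.loads(CACHE_FILE.read_text())
    return {}


def cache_answer(task_id: str, answer: str) -> None:
    """Store one answer so reruns can submit without regenerating it."""
    cache = load_cache()
    cache[task_id] = answer
    CACHE_FILE.write_text(json.dumps(cache, indent=2))
```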

* gemini-2.5-flash: 40%
* gpt-oss-120b: 60%
* combined (taking the correct answers from both models): 65%. Gemini contributed only one additional correct answer.