---
title: Final Assignment Template
emoji: 📚
colorFrom: red
colorTo: green
sdk: gradio
sdk_version: 5.47.0
app_file: app.py
hf_oauth: true
pinned: false
license: mit
short_description: Final Assignment for agents course
---

## Graph

The graph is built with the LangGraph framework. A field in the state limits the number of iterations between the agent and the tools node: the route_tools function checks whether the iteration count, updated in the increase node, exceeds the MAX_ITERATIONS constant.
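A minimal sketch of that routing logic, assuming a MAX_ITERATIONS value of 5 and the node names described above (the project's actual code may differ). In LangGraph, returning the sentinel string `"__end__"` (the value of `langgraph.graph.END`) from a conditional edge terminates the run:

```python
MAX_ITERATIONS = 5  # assumed budget; the real constant may differ

def increase(state: dict) -> dict:
    # "increase" node: bump the iteration counter kept in the state.
    return {"iterations": state.get("iterations", 0) + 1}

def route_tools(state: dict) -> str:
    # Conditional edge: stop once the iteration budget is exhausted,
    # otherwise go to the tools node if the last message requested a tool call.
    if state["iterations"] > MAX_ITERATIONS:
        return "__end__"
    last_message = state["messages"][-1]
    if getattr(last_message, "tool_calls", None):
        return "tools"
    return "__end__"
```

Wired into a StateGraph, route_tools would be registered with add_conditional_edges on the agent node, and increase as a regular node on the agent→tools path.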

## Models

I have tried two different models so far.

- gemini-2.5-flash. This model is free to use, taking advantage of the generous limits that Google AI offers developers.
- gpt-oss-120b. I have used it through Hugging Face inference providers. Since I have a PRO account, the $2/month of included inference credits is more than enough to develop the Final Assignment project. gpt-oss-120b does not work through the Together inference provider; Together does not seem to play well with LangGraph. It worked just fine through the Fireworks inference provider.

## Tools

- python_tool
- reverse_tool
- excel_file_to_markdown
- sum_numbers
- web_search
- get_wikipedia_info
- ask_audio_model
- chess_tool (this one cannot run on a Hugging Face Space)
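The simpler tools can be sketched as plain functions; in the project they are presumably wrapped with LangChain's `@tool` decorator so the model can call them. The bodies below are illustrative guesses from the tool names, not the project's actual code:

```python
def reverse_tool(text: str) -> str:
    """Reverse a string (handy when a question is written backwards)."""
    return text[::-1]

def sum_numbers(numbers: list[float]) -> float:
    """Return the sum of a list of numbers."""
    return sum(numbers)
```

The docstrings matter when these are exposed as tools: the model sees them as the tool descriptions and uses them to decide when to call each one.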

## Results

The answers of the two models were cached after generation and then submitted. I modified the Gradio app to do so, as suggested in the template comments.
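A minimal sketch of such a cache, assuming answers are keyed by task id and stored in a JSON file (the filename and helper names are illustrative, not the project's actual code):

```python
import json
from pathlib import Path

CACHE_FILE = Path("answers_cache.json")  # assumed cache location

def cached_answers() -> dict:
    # Load previously generated answers, or an empty dict on first run.
    return json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}

def cache_answer(task_id: str, answer: str) -> None:
    # Persist one answer so a later submission run can skip regeneration.
    answers = cached_answers()
    answers[task_id] = answer
    CACHE_FILE.write_text(json.dumps(answers, indent=2))
```

With this in place, the submit handler can read the cached answers and post them directly, instead of re-running the agent on every attempt.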

- gemini-2.5-flash: 40%
- gpt-oss-120b: 60%
- Combined (taking the correct answers from both models): 65%. Gemini contributed only one additional correct answer.