# Huge IFX API
An AI-powered sports fan assistant that creates natural language, multimodal, and personalized experiences around professional sports teams, players, games, rules, and fan communities. The app delivers conversational responses enhanced with visuals and session memory.
## Fictional League: Huge League
- International soccer league, 23-player squads, 4-3-3 base formation.
- Teams:
  - Yucatán Force (Mérida, Mexico): Mayan pride, fortress stadium "El Templo del Sol".
  - Tierra Alta FC (San José, Costa Rica): Highlanders, eco-friendly, smart play.
  - Everglade FC (Miami, USA): Flashy, wild, South Florida flair.
  - Fraser Valley United (Abbotsford, Canada): Vineyard roots, top youth academy.
## Table of Contents
- Features
- Architecture
- Tech Stack
- Setup
- Deployment
- API Usage
- Prompt Management (Freeplay)
- Memory (Zep)
- Agents (LangGraph)
- LLM (OpenAI)
- Development & Contributing
- License
## Features
- Conversational AI assistant for Huge League soccer fans
- Multimodal responses (text, visuals)
- Personalized session memory
- Prompt management and logging
- Modular agent-based workflow
## Architecture

```
+------------------------+
|   Gradio UI (Spaces)   |
+-----------+------------+
            |
            v
+------------------------+
|  LangGraph Agent Flow  |
+------------------------+
    |        |       |       |
    v        v       v       v
 Freeplay   Zep    OpenAI   Tools
(Prompts) (Memory)  (LLM)  (Custom)
```

- UI: Gradio-based, runs via `server_gradio.py`
- Workflow: orchestrated by LangGraph (`workflows/base.py`)
- Prompt Management: Freeplay (`utils/freeplay_helpers.py`)
- Memory: Zep (`utils/zep_helpers.py`)
- LLM: OpenAI (`gpt-4o-mini`)
- Tools: Player/Game search (`tools/`)
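The flow above can be sketched in plain Python. This is illustrative only — the real orchestration lives in `workflows/base.py` and uses LangGraph; every function name below is a hypothetical stand-in, not this repo's API:

```python
# Minimal illustration of one conversational turn:
# prompt (Freeplay) -> memory (Zep) -> LLM (OpenAI) -> response.
# All names here are hypothetical stand-ins, not the project's real API.

def fetch_prompt(name: str) -> str:
    # Stand-in for Freeplay prompt retrieval.
    return "You are a Huge League fan assistant. Context: {memory}"

def load_memory(session_id: str) -> str:
    # Stand-in for a Zep session-memory lookup.
    return "User supports Yucatán Force."

def call_llm(system_prompt: str, user_msg: str) -> str:
    # Stand-in for the OpenAI call (gpt-4o-mini in the real app).
    return f"[LLM reply to: {user_msg}]"

def handle_turn(session_id: str, user_msg: str) -> str:
    # Wire the pieces together the way the diagram suggests.
    system = fetch_prompt("fan_assistant").format(memory=load_memory(session_id))
    return call_llm(system, user_msg)

print(handle_turn("s1", "Who plays at El Templo del Sol?"))
```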
## Tech Stack
- Gradio (HuggingFace Spaces)
- FastAPI (minimal development/debug API only; the primary interface is Gradio)
- Freeplay (prompt management)
- Zep (session memory)
- LangGraph (agent workflow)
- OpenAI (LLM)
- Docker and Poetry for builds and dependency management
## Setup
Prerequisites:
- Docker installed

Quickstart:

```shell
make build
make up
```

The app will be available at http://localhost:8000/

Environment Variables:
- See `.env.example` (TODO: Document required env vars for Freeplay, Zep, OpenAI, etc.)
- Ask Liss for env var values.
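Until the env vars are documented, the fragment below is a plausible shape based on the integrations this README lists (Freeplay, Zep, OpenAI). The exact variable names are assumptions — confirm them against `.env.example`:

```
# Hypothetical variable names -- confirm against .env.example
OPENAI_API_KEY=...
FREEPLAY_API_KEY=...
ZEP_API_KEY=...
```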
## Deployment

### HuggingFace Spaces
App is hosted at: https://huggingface.co/spaces/ryanbalch/IFX-huge-league

To deploy, clone this repo under your HuggingFace repo (in the `IFX-huge-league` folder). You will need it for deploys, etc.

Deployment Targets:
- GitHub: aliss77777/IFX-sandbox
- HuggingFace: ryanbalch/IFX-huge-league
- Docker image: ghcr.io/ylassohugeinc/ifx-huge-league-api:prod

Deployment Steps:

```shell
# build the final image
make build-prod
# push to GitHub Container Registry
make push-prod-ghcr
# trigger a build in the HuggingFace repo
cd IFX-huge-league && make trigger-build
```

Then wait for the build to finish; check the HuggingFace container logs.
## API Usage
- Gradio UI: the main entrypoint is `server_gradio.py`. It launches the conversational interface for Huge League fans.
- FastAPI: `server.py` provides a minimal API (mostly for development/debug).

### Endpoints
| Path | Method | Description |
|---|---|---|
| `/` | `GET` | Healthcheck/home |

(TODO: Document additional endpoints if present)
## Prompt Management (Freeplay)
- Integrated via `utils/freeplay_helpers.py`.
- Prompts are fetched, formatted, and logged using Freeplay.
- See `scripts/freeplay_playground.py` for usage examples.
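The fetch/format/log pattern can be sketched in plain Python. This is not Freeplay's SDK — the real integration is in `utils/freeplay_helpers.py`; the prompt name, template, and player below are invented for illustration:

```python
# Illustrative prompt-management pattern: fetch a template, format it,
# and log the resulting completion. Not Freeplay's actual API.

PROMPTS = {
    # Hypothetical template registry; Freeplay would serve these remotely.
    "player_lookup": "Summarize player {player} for a {team} fan.",
}

completion_log = []

def format_prompt(name: str, **variables) -> str:
    # Fetch a template by name and fill in its variables.
    return PROMPTS[name].format(**variables)

def log_completion(prompt: str, completion: str) -> None:
    # Record the prompt/completion pair for later review.
    completion_log.append({"prompt": prompt, "completion": completion})

prompt = format_prompt("player_lookup", player="A. Canek", team="Yucatán Force")
log_completion(prompt, "A. Canek anchors the midfield ...")
print(prompt)
```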
## Memory (Zep)
- Integrated via `utils/zep_helpers.py`.
- Session/user memory is managed via Zep.
- See `scripts/zep_playground.py` for usage examples.
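The session-memory pattern Zep provides can be illustrated with a small in-memory store. This is not Zep's API — the real integration lives in `utils/zep_helpers.py`; the class and method names are assumptions for illustration:

```python
# Illustrative in-memory stand-in for per-session chat memory.
# Zep persists and enriches this server-side; here it is just a dict.
from collections import defaultdict

class SessionMemory:
    def __init__(self) -> None:
        # One message list per session id.
        self._sessions = defaultdict(list)

    def add_message(self, session_id: str, role: str, text: str) -> None:
        self._sessions[session_id].append({"role": role, "text": text})

    def history(self, session_id: str) -> list:
        # Return a copy so callers cannot mutate stored state.
        return list(self._sessions[session_id])

mem = SessionMemory()
mem.add_message("s1", "user", "I'm an Everglade FC fan.")
mem.add_message("s1", "assistant", "Noted!")
print(len(mem.history("s1")))  # -> 2
```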
## Agents (LangGraph)
- Workflow orchestrated in `workflows/base.py` using LangGraph.
- Integrates the LLM, memory, prompt management, and tools.
## LLM (OpenAI)
- Uses OpenAI models (e.g., `gpt-4o-mini`) via LangChain/LangGraph.
- API keys are required. (TODO: Document setup)
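A chat request to `gpt-4o-mini` is assembled as a list of role-tagged messages. The helper below is hypothetical (not from this repo); the commented-out client call uses the standard OpenAI Python SDK shape and needs a valid `OPENAI_API_KEY`:

```python
# Sketch of assembling messages for a chat-completions request.
# build_messages is a hypothetical helper, not part of this repo.

def build_messages(system_prompt: str, history: list, user_msg: str) -> list:
    # System prompt first, then prior turns, then the new user message.
    msgs = [{"role": "system", "content": system_prompt}]
    msgs.extend(history)
    msgs.append({"role": "user", "content": user_msg})
    return msgs

messages = build_messages(
    "You are a Huge League fan assistant.",
    [{"role": "user", "content": "Hi"},
     {"role": "assistant", "content": "Hello!"}],
    "Tell me about Fraser Valley United.",
)

# With an API key configured, the call would look like (not executed here):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)

print(len(messages))  # -> 4
```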
## Development & Contributing
- All development is Dockerized.
- Use Poetry for dependency management (`poetry add <pkg>` inside the container).
- Follow PEP 8; prefer two blank lines between unrelated classes/functions.
- See the `Makefile` for available commands.
### Makefile Commands
The following make commands are available for development, build, and deployment workflows:

| Command | Description |
|---|---|
| `make build` | Build all Docker images using `docker-compose.yaml`. |
| `make build-update` | Remove `poetry.lock`, rebuild the Docker image, and extract a new lock file from the container. Not needed for local development; only use when updating dependencies. |
| `make up` | Start all services using Docker Compose. |
| `make command` | Open an interactive shell inside the running `huge-ifx-api` container. |
| `make command-raw` | Run a bash shell in a new container via Docker Compose (not the running one). |
| `make clean-requirements` | Remove the local `poetry.lock` file. |
| `make extract-lock` | Extract the `poetry.lock` file from a built container to your local directory. Only needed if you have been deleting the lock file, because the build will not have access to the local lock file. |
| `make build-prod` | Build the Docker image for the runtime stage in `api/Dockerfile`, tagged as `huge-ifx-api:prod`. Used for production deploys. |
| `make up-build-prod` | Build and run the production image locally, mapping ports 7860 and 8000, with `.env` and `DEV_MODE=true`. |
| `make push-prod-ghcr` | Tag and push the production image to GitHub Container Registry at `ghcr.io/ylassohugeinc/ifx-huge-league-api:prod`. |
Typical workflow:
- Use `make build` and `make up` for local development.
- Use `make build-prod`, `make push-prod-ghcr`, and `make trigger-build` (in the HuggingFace repo) for production deployment.
## License
(TODO: Add license info)