---
title: Test Creation Agent
emoji: 📝
colorFrom: blue
colorTo: green
sdk: docker
app_port: 7860
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

# Test Creation Agent

This application provides a conversational interface to collect educational test creation parameters. It extracts details like chapters, question counts, difficulty distribution, and test timing from natural language inputs.

## Features

- User-friendly Gradio chat interface
- Intelligent parameter extraction from natural language
- Supports multiple academic subjects and chapters
- Normalizes chapter names to standardized curriculum topics
- Tracks conversation state and guides users through completion

## Architecture

The application consists of two main components:

1. **FastAPI Backend**: Handles the parameter extraction and conversation logic
2. **Gradio Frontend**: Provides the user interface for conversation

## Setup

### Environment Variables

Create a `.env` file from the example:

```bash
cp .env.example .env
```

Edit the `.env` file and add your OpenAI API key:

```
OPENAI_API_KEY=your_openai_api_key_here
```

### Running with Docker

Build and run the Docker container:

```bash
docker build -t test-creation-agent .
docker run -p 7860:7860 -p 8000:8000 --env-file .env test-creation-agent
```

### Running Locally

Install dependencies:

```bash
pip install -r requirements.txt
```

Run the application:

```bash
python app.py
```

This will start both the FastAPI backend and the Gradio frontend. Access the application at http://localhost:7860.

## API Endpoints

The FastAPI backend provides these endpoints:

- `GET /`: Check if the API is running
- `POST /chat`: Send a user message and get a response
  - Request body: `{"message": "string", "session_id": "string"}`
- `GET /session/{session_id}`: Get the current state of a session
- `DELETE /session/{session_id}`: Delete a session
- `POST /reset`: Reset a session to start over
  - Request body: `{"message": "", "session_id": "string"}`
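
As a sketch of how a client might call the `/chat` endpoint using only Python's standard library — the `build_chat_request` helper is an illustration, and the port-8000 base URL is an assumption taken from the `docker run` command above, not part of the project's code:

```python
import json
import urllib.request

API_URL = "http://localhost:8000"  # assumed backend port from the docker run command

def build_chat_request(message: str, session_id: str) -> urllib.request.Request:
    """Build a POST /chat request matching the documented body shape."""
    body = json.dumps({"message": message, "session_id": session_id}).encode("utf-8")
    return urllib.request.Request(
        f"{API_URL}/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With the backend running, send the request and print the agent's reply:
# with urllib.request.urlopen(build_chat_request("Hello", "demo")) as resp:
#     print(resp.read().decode())
```

The same pattern applies to `/reset` (with an empty `message`) and, with `method="GET"` or `method="DELETE"` and no body, to the `/session/{session_id}` endpoints.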

## Deploying to Hugging Face Spaces

1. Create a new Space on Hugging Face using the Docker SDK
2. Link your GitHub repository or upload the files directly
3. Set `OPENAI_API_KEY` in the Space's secrets
4. The application will be accessible at your Space's URL

## Usage Example

Simply open the Gradio interface and start describing your test requirements. For example:

"I need a test on Thermodynamics and Electrostatics, 10 questions each, 60% medium, 20% easy, 20% hard, 90 minutes, on May 15, 2025 at 10 AM"

The agent will extract the parameters, normalize chapter names, and either ask for missing information or confirm when all parameters are collected.
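
As an illustration, the parameters extracted from the prompt above might be represented roughly as follows — this structure and its field names are hypothetical (the backend's actual schema is internal) — together with a sanity check the agent could apply to the difficulty split before confirming:

```python
# Hypothetical representation of the parameters extracted from the
# example prompt above; the backend's real schema may differ.
params = {
    "chapters": ["Thermodynamics", "Electrostatics"],
    "questions_per_chapter": 10,
    "difficulty_percent": {"easy": 20, "medium": 60, "hard": 20},
    "duration_minutes": 90,
    "scheduled_at": "2025-05-15T10:00",
}

# Before confirming, the agent would verify the split covers all questions:
assert sum(params["difficulty_percent"].values()) == 100
```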