---
license: mit
title: Fine-Tuned GPT2 For Azure DevOps Q&A
sdk: gradio
emoji: 
colorFrom: red
colorTo: yellow
short_description: Bot trained on custom AZ DevOps dataset
sdk_version: 5.41.0
---
### NOTE:

The large fine-tuned GPT-2 model weights are hosted on Hugging Face:
https://huggingface.co/heramb04/GPT2-Azure-DevOps

No manual download needed. The model is automatically pulled when `app.py` runs.

If you want to download manually:
```bash
wget https://huggingface.co/heramb04/GPT2-Azure-DevOps/resolve/main/model.safetensors
```
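Equivalently, the weights file can be fetched with the `huggingface_hub` Python client, which caches the download locally. This is a sketch of the standard Hub API, not part of the project's code:

```python
from huggingface_hub import hf_hub_download

# Download the weights file referenced above into the local Hugging Face cache
# and return the path to the cached copy.
path = hf_hub_download(
    repo_id="heramb04/GPT2-Azure-DevOps",
    filename="model.safetensors",
)
print(path)
```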

## About

This project deploys a fine-tuned GPT-2 model for Azure DevOps Q&A as a web app using Hugging Face and Gradio. The model is loaded locally and served through a simple Q&A interface.

## Features

- **Local Model Inference:** Uses a fine-tuned GPT-2 model loaded from local files.
- **Gradio Interface:** A simple web API for text-based question and answer.
- **Easy Deployment:** Run locally and test using a public link via Gradio.

### Prerequisites

- Python 3.8+
- Git

## How to Run Locally

1. Clone this repository and move into it:

   ```bash
   git clone https://github.com/Heramb04/Fine_Tuned_GPT-2.git
   cd Fine_Tuned_GPT-2
   ```

2. Set up a virtual environment:

   On Windows:

   ```bash
   python -m venv venv
   venv\Scripts\activate
   ```

   On macOS/Linux:

   ```bash
   python3 -m venv venv
   source venv/bin/activate
   ```

3. Install the dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Run the application:

   ```bash
   python app.py
   ```