google-labs-jules[bot] committed on
Commit 4ef1cb8 · 1 Parent(s): 533960f

Initialize backend service with FastAPI and distilgpt2

- Created backend/requirements.txt
- Created backend/app.py with FastAPI and text generation pipeline
- Created backend/Dockerfile
- Created .github/workflows/sync_to_hub.yml for HF Space deployment
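The contents of `sync_to_hub.yml` are not shown in this diff. A typical Hugging Face Space sync workflow looks roughly like the sketch below; the `HF_TOKEN` secret name and the `USER`/`SPACE` path segments are placeholders, not values from this commit:

```yaml
name: Sync to Hugging Face Hub
on:
  push:
    branches: [main]
jobs:
  sync-to-hub:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history is required to push to the Space repo
      - name: Push to hub
        env:
          HF_TOKEN: ${{ secrets.HF_TOKEN }}
        run: git push https://USER:$HF_TOKEN@huggingface.co/spaces/USER/SPACE main
```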

Dockerfile ADDED
@@ -0,0 +1,10 @@
+ FROM python:3.9
+
+ WORKDIR /app
+
+ COPY requirements.txt .
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ COPY app.py .
+
+ CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]
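The image can be smoke-tested locally before deployment — a sketch assuming Docker is installed, the Dockerfile lives in `backend/`, and using a made-up image tag:

```shell
# Build the image from the backend directory
docker build -t backend-demo ./backend

# Run it, publishing the port that the CMD line binds uvicorn to
docker run --rm -p 7860:7860 backend-demo
```

Port 7860 matters here: it is the port Hugging Face Spaces expects a Docker Space to listen on.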
__pycache__/app.cpython-312.pyc ADDED
Binary file (1.38 kB).
 
__pycache__/test_app.cpython-312-pytest-8.4.2.pyc ADDED
Binary file (4.17 kB).
 
__pycache__/test_app.cpython-312-pytest-9.0.1.pyc ADDED
Binary file (4.17 kB).
 
app.py ADDED
@@ -0,0 +1,26 @@
+ from fastapi import FastAPI, HTTPException
+ from pydantic import BaseModel
+ from transformers import pipeline
+
+ app = FastAPI()
+
+ # Initialize the text-generation pipeline.
+ # Loading it once at module level means the model is ready at startup
+ # and reused across requests instead of being rebuilt per call.
+ generator = pipeline("text-generation", model="distilgpt2")
+
+ class ChatRequest(BaseModel):
+     text: str
+
+ @app.get("/")
+ def read_root():
+     return {"status": "Cool Shot Systems Backend Online"}
+
+ @app.post("/chat")
+ def chat(request: ChatRequest):
+     try:
+         # Generate a completion for the submitted prompt
+         response = generator(request.text, max_length=100, num_return_sequences=1)
+         return response[0]
+     except Exception as e:
+         raise HTTPException(status_code=500, detail=str(e))
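Once the container is up, the `/chat` endpoint can be exercised with a small stdlib-only client — a sketch assuming the service is reachable on `localhost:7860` (the port from the Dockerfile); the `chat` helper name and base URL are illustrative, not part of the commit:

```python
import json
from urllib import request

def chat(text, base_url="http://localhost:7860"):
    """POST a prompt to the /chat endpoint and return the decoded JSON reply."""
    payload = json.dumps({"text": text}).encode("utf-8")
    req = request.Request(
        f"{base_url}/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (requires the server to be running):
# print(chat("Hello, world")["generated_text"])
```

The response body mirrors what the pipeline returns: `response[0]` is a dict whose `generated_text` key holds the completion.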
requirements.txt ADDED
@@ -0,0 +1,4 @@
+ fastapi
+ uvicorn
+ transformers
+ torch