RiH-137Rishi committed on
Commit b44cc33 · verified · 1 Parent(s): bbe9cec

Upload 3 files

Files changed (3)
  1. Dockerfile +51 -0
  2. app.py +79 -0
  3. requirements.txt +6 -0
Dockerfile ADDED
@@ -0,0 +1,51 @@
+ ## use the official Python 3.9 image
+ FROM python:3.9
+
+ ## set the working directory in the container
+ WORKDIR /app
+
+ ## copy the dependencies file into the image
+ COPY ./requirements.txt /code/requirements.txt
+
+ ## install dependencies from the copied file
+ RUN pip install --no-cache-dir -r /code/requirements.txt
+
+ ## set up a new user named "user" with a home directory
+ RUN useradd -m user
+
+ ## switch to the "user" user
+ USER user
+
+ ENV HOME=/home/user \
+     PATH=/home/user/.local/bin:$PATH
+
+ ## set the working directory to the user's app folder
+ WORKDIR $HOME/app
+
+ ## copy the current directory contents into the container at $HOME/app, setting the owner
+ COPY --chown=user:user . $HOME/app
+
+ ## start the FastAPI app on port 7860
+ CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]
+
app.py ADDED
@@ -0,0 +1,79 @@
+ """
+ LLM + Hugging Face model + deployment using Docker on Hugging Face Spaces
+ """
+
+ from fastapi import FastAPI
+ from transformers import pipeline
+
+ ## create a new FastAPI app
+ app = FastAPI()
+
+ ## initialize the text-generation pipeline,
+ ## loading the model from the Hugging Face model hub
+ pipe = pipeline("text2text-generation", model="google/flan-t5-small")
+
+ @app.get("/")
+ def home():
+     return {"message": "Welcome to the Text Generation API"}
+
+ ## text --> prompt
+ @app.get("/generate")
+ def generate(text: str):
+     output = pipe(text, max_length=50, do_sample=True, temperature=0.7)
+
+     ## return the generated text in a JSON response
+     return {"output": output[0]["generated_text"]}
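
The `/generate` endpoint above takes the prompt as the `text` query parameter of a GET request. A minimal sketch of building such a request URL with the standard library (the host and port are assumptions taken from the Dockerfile's `CMD`; adjust for your actual Space URL):

```python
from urllib.parse import urlencode

# assumed base URL: uvicorn listening on port 7860, as in the Dockerfile's CMD
BASE_URL = "http://localhost:7860"

def generate_url(prompt: str) -> str:
    """Build the GET /generate request URL, URL-encoding the prompt."""
    return f"{BASE_URL}/generate?{urlencode({'text': prompt})}"

print(generate_url("translate English to German: hello"))
# http://localhost:7860/generate?text=translate+English+to+German%3A+hello
```

The resulting URL can then be fetched with `requests.get` (the `requests` package is already listed in requirements.txt) to receive the `{"output": ...}` JSON response.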
requirements.txt ADDED
@@ -0,0 +1,6 @@
+ fastapi
+ requests
+ uvicorn[standard]
+ sentencepiece
+ torch
+ transformers