MdSourav76046 committed on
Commit 06b80ad · verified · 1 Parent(s): 09c91aa

Delete README.md

Files changed (1): README.md +0 -175
README.md DELETED
---
title: Text Correction API
emoji: 🔧
colorFrom: blue
colorTo: purple
sdk: docker
sdk_version: 1.0.0
app_file: app.py
pinned: false
---

# Text Correction API Server

This is the server-side API for text correction using your trained model.

## 📝 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## 🚀 Setup

### 1. Install Dependencies

```bash
pip install -r requirements.txt
```

### 2. Set Model Path

Make sure your trained model is in the `gpu_base_model2` directory, or set the `MODEL_PATH` environment variable:

```bash
export MODEL_PATH="./gpu_base_model2"
```

### 3. Run the Server

#### Local Development:
```bash
python main.py
```

Or using uvicorn directly:
```bash
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```

The API will be available at: `http://localhost:8000`

### 4. Test the API

```bash
# Health check
curl http://localhost:8000/health

# Correct text
curl -X POST http://localhost:8000/correct \
  -H "Content-Type: application/json" \
  -d '{"text": "helo wrld this is a test"}'
```
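
The same calls can be made from Python using only the standard library. This is a sketch for client testing: the URL and JSON field names match the endpoint documentation in this README, but `build_payload` and `correct_text` are hypothetical helpers, not part of this repository.

```python
import json
import urllib.request

API_URL = "http://localhost:8000/correct"  # local dev server from step 3

def build_payload(text):
    """Encode the JSON body expected by POST /correct."""
    return json.dumps({"text": text}).encode("utf-8")

def correct_text(text, url=API_URL):
    """Send text to the API and return the corrected string."""
    req = urllib.request.Request(
        url,
        data=build_payload(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["corrected_text"]
```

With the server from step 3 running, `correct_text("helo wrld this is a test")` should return the corrected string.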

## 📡 API Endpoints

### GET `/health`
Check if the API and model are ready.

**Response:**
```json
{
  "status": "healthy",
  "model_loaded": true,
  "device": "cuda"
}
```

### POST `/correct`
Correct text using the trained model.

**Request:**
```json
{
  "text": "helo wrld this is a test"
}
```

**Response:**
```json
{
  "corrected_text": "hello world this is a test",
  "processing_time": 0.45
}
```

## 🌐 Deployment Options

### Option 1: Hugging Face Spaces (Free) - Recommended

1. **Create a new Space** at https://huggingface.co/new-space
   - Name: `your-username-text-correction`
   - SDK: Docker
   - License: **MIT** (or Apache 2.0)
   - Click "Create Space"

2. **Upload files:**
   - Upload all files from this directory
   - Upload your `gpu_base_model2/` folder

3. **Your API will be live at:**
   ```
   https://your-username-text-correction.hf.space/correct
   ```

### Option 2: Render (Free tier available)

1. Create a new Web Service
2. Connect your GitHub repository
3. Set build command: `pip install -r requirements.txt`
4. Set start command: `uvicorn main:app --host 0.0.0.0 --port $PORT`
5. Deploy

### Option 3: Railway (Free tier available)

1. Create a new project
2. Add a service from GitHub
3. Railway will auto-detect the Python app
4. Set environment variable `MODEL_PATH` if needed
5. Deploy

### Option 4: AWS/GCP/Azure
For production deployments with more control.

## ⚙️ Environment Variables

- `MODEL_PATH`: Path to your trained model (default: `./gpu_base_model2`)
- `PORT`: Server port (default: `8000`)

## 🔒 Security Notes

⚠️ **Important for Production:**
1. Add authentication to your API endpoints
2. Set proper CORS origins (not `*`)
3. Add rate limiting
4. Use HTTPS
5. Keep your API key secure
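
A minimal way to start on point 1 is an API-key check against a request header such as `X-API-Key`. This is only a sketch: the header name, the `API_KEY` environment variable, and the wiring into `main.py` are assumptions, not part of the current code.

```python
import hmac
import os

# Hypothetical shared secret; load it from the environment, never hard-code it.
API_KEY = os.environ.get("API_KEY", "change-me")

def is_authorized(provided_key):
    """Constant-time comparison of a client-supplied X-API-Key value."""
    if not provided_key:
        return False
    return hmac.compare_digest(provided_key, API_KEY)
```

In a request handler, rejecting any request whose header fails `is_authorized` with a 401 keeps unauthenticated calls from ever reaching the model.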

## 🐛 Troubleshooting

### Model not loading
- Check that the `gpu_base_model2` directory exists
- Verify all model files are present
- Check console logs for specific errors

### Out of memory
- Reduce `max_length` in the generate function
- Use smaller batch sizes
- Consider using CPU instead of GPU

### Slow inference
- Use GPU if available
- Reduce the `num_beams` parameter
- Use quantization for faster inference

## 📊 Usage

This API is designed to be called from an iOS app for correcting OCR text. The typical flow is:

1. User takes/selects an image
2. OCR extracts text from the image
3. Extracted text is sent to this API
4. API corrects the text using the trained model
5. Corrected text is returned to the app
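
Because OCR output can be arbitrarily long, step 3 may need to split the text before sending it (the out-of-memory note in Troubleshooting is one reason to keep requests small). Below is a sketch of a whitespace-aware chunker; `chunk_text` and the 400-character default are hypothetical, not part of the API.

```python
def chunk_text(text, max_chars=400):
    """Split OCR output on whitespace so each /correct request stays small."""
    words = text.split()
    chunks, current, length = [], [], 0
    for word in words:
        # Start a new chunk when adding this word would exceed the limit.
        if current and length + len(word) + 1 > max_chars:
            chunks.append(" ".join(current))
            current, length = [], 0
        current.append(word)
        length += len(word) + 1
    if current:
        chunks.append(" ".join(current))
    return chunks
```

Each chunk can then be sent as its own `{"text": ...}` request and the corrected pieces concatenated in order.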

## 🤝 Contributing

This is a private project for text correction. For questions or issues, please contact the project owner.