Ved Gupta committed
Commit ·
470f81c
Parent(s): 11003d0
Document asynchronous processing, concurrency control, `MAX_CONCURRENT_TRANSCRIPTIONS` option, and update `uvicorn` command in README.
README.md
CHANGED
@@ -19,6 +19,8 @@ Key features:
 - User level access with API keys for managing usage
 - Self-hostable code for your own speech transcription service
 - Quantized model optimization for fast and efficient inference
+- **Asynchronous Processing**: Non-blocking transcription for high availability
+- **Concurrency Control**: Built-in request queuing to prevent server overload
 - Open source implementation for customization and transparency
 
 This repository contains code to deploy the API server along with finetuning and quantizing models. Check out the documentation for getting started!
@@ -51,6 +53,7 @@ Copy the example environment file and configure it:
 ```bash
 cp .env.example .env
 # Edit .env with your database credentials and settings
+# Optional: Set MAX_CONCURRENT_TRANSCRIPTIONS (default: 2) in .env to control parallel jobs
 ```
 
 ### 4. Setup Whisper
@@ -68,7 +71,7 @@ To run the project locally (e.g., inside a Conda environment or virtualenv):
 
 ```bash
 # Ensure your environment is active (e.g., conda activate whisper-api)
-uvicorn app.main:app --reload
+uvicorn app.main:app --host 0.0.0.0 --port 7860 --reload
 ```
 
 ### Docker (Production)