---
title: Aura Emotion Detection API
emoji: 🎤
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
---
# 🎤 Aura Emotion Detection API
Real-time emotion detection from audio using a Wav2Vec2 model from Hugging Face.
## 🚀 Features
- **Real-time Emotion Detection**: Uses `superb/wav2vec2-base-superb-er` model
- **Multiple Audio Formats**: Supports WAV, MP3, WebM, and more
- **Fast Processing**: Optimized for real-time analysis
- **REST API**: Easy integration with any frontend
## 📡 API Endpoints
### Health Check
```
GET /health
```
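A client can poll this endpoint before sending audio. A minimal sketch of such a check (the base URL below is a placeholder; substitute your own Space URL):

```python
import requests

BASE_URL = "https://your-username-aura-emotion-api.hf.space"  # placeholder

def is_healthy(base_url: str = BASE_URL) -> bool:
    """Return True if GET /health answers with HTTP 200."""
    try:
        return requests.get(f"{base_url}/health", timeout=5).status_code == 200
    except requests.RequestException:
        # Network errors and timeouts count as unhealthy.
        return False
```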
### Predict Emotion
```
POST /predict
Content-Type: multipart/form-data
Body: audio file (WAV, MP3, WebM, etc.)
```
**Response:**
```json
{
  "emotion": "happy",
  "confidence": 0.85,
  "model": "Wav2Vec2 (Hugging Face)"
}
```
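When consuming this response, it is worth validating the payload before acting on it. A small illustrative helper (not part of the API itself), assuming the three fields shown above:

```python
# Expected fields in a /predict response.
EXPECTED_KEYS = {"emotion", "confidence", "model"}

def parse_prediction(payload: dict) -> tuple[str, float]:
    """Validate a /predict response and return (emotion, confidence)."""
    missing = EXPECTED_KEYS - payload.keys()
    if missing:
        raise ValueError(f"response missing keys: {sorted(missing)}")
    confidence = float(payload["confidence"])
    if not 0.0 <= confidence <= 1.0:
        raise ValueError(f"confidence out of range: {confidence}")
    return payload["emotion"], confidence
```

For example, `parse_prediction({"emotion": "happy", "confidence": 0.85, "model": "Wav2Vec2 (Hugging Face)"})` returns `("happy", 0.85)`.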
## 🎯 Supported Emotions
- `happy` - Joyful, cheerful
- `sad` - Sorrowful, melancholic
- `angry` - Irritated, frustrated
- `calm` - Relaxed, at ease
- `excited` - Energetic, enthusiastic
- `neutral` - No strong emotion
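Note that the underlying `superb/wav2vec2-base-superb-er` checkpoint is trained on four IEMOCAP classes (raw labels along the lines of `neu`, `hap`, `ang`, `sad`), so a service exposing six names presumably maps raw model labels onto its own vocabulary. A purely illustrative sketch of such a mapping (how this API actually derives `calm` and `excited` is not documented here):

```python
# Assumed mapping from raw model labels to the API's emotion names.
# The calm/excited buckets are not covered by the four-class model,
# so this heuristic is illustrative only.
LABEL_MAP = {"neu": "neutral", "hap": "happy", "ang": "angry", "sad": "sad"}

def to_api_emotion(model_label: str) -> str:
    """Translate a raw model label into the API's emotion vocabulary."""
    return LABEL_MAP.get(model_label, "neutral")  # fall back to neutral
```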
## 🛠️ Technology Stack
- **Framework**: FastAPI
- **Model**: Wav2Vec2 (superb/wav2vec2-base-superb-er)
- **Audio Processing**: librosa, soundfile, pydub
- **ML Framework**: PyTorch, Hugging Face Transformers
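A minimal sketch of how this stack could wire together server-side, assuming the standard Transformers audio-classification pipeline (the helper name and response-shaping are illustrative, not the Space's actual code):

```python
def to_response(predictions: list[dict]) -> dict:
    """Convert pipeline output (label/score dicts) into the API's JSON shape."""
    top = max(predictions, key=lambda p: p["score"])
    return {
        "emotion": top["label"],
        "confidence": round(top["score"], 2),
        "model": "Wav2Vec2 (Hugging Face)",
    }

if __name__ == "__main__":
    # Downloads the checkpoint on first run.
    from transformers import pipeline
    classifier = pipeline("audio-classification",
                          model="superb/wav2vec2-base-superb-er")
    print(to_response(classifier("audio.wav")))
```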
## 📝 Usage Example
```python
import requests

# Upload the audio file as multipart/form-data
with open('audio.wav', 'rb') as f:
    files = {'audio': f}
    response = requests.post(
        'https://your-username-aura-emotion-api.hf.space/predict',
        files=files
    )

result = response.json()
print(f"Detected emotion: {result['emotion']}")
```
## 🌐 Frontend Integration
The frontend is deployed on Vercel and connects to this API for real-time emotion detection from microphone input.
## 📄 License
MIT License