# DAiSEE Emotion Detection Model
This model detects four emotion states from the DAiSEE dataset (Boredom, Engagement, Confusion, Frustration) in videos of students in educational settings.
## Architecture

- **Stage 1:** Qwen2.5-VL-7B-Instruct extracts a video embedding.
- **Stage 2:** MLP classifiers (four separate models, one per emotion state) score the embedding.
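The two-stage split can be sketched as follows. This is a hypothetical illustration, not the trained model: the embedding width (3584, Qwen2.5-VL-7B's hidden size), the MLP hidden size, the 4-level intensity output (DAiSEE's label scale), and the random weights are all assumptions standing in for the real Stage 2 heads.

```python
import numpy as np

# Assumed dimensions (not taken from the released weights):
# - EMB_DIM: hidden size of Qwen2.5-VL-7B, used as the video embedding width
# - N_LEVELS: DAiSEE annotates each state at 4 intensity levels (0-3)
EMOTIONS = ["Boredom", "Engagement", "Confusion", "Frustration"]
EMB_DIM, HIDDEN, N_LEVELS = 3584, 256, 4

rng = np.random.default_rng(0)

def make_mlp_head():
    """Randomly initialised 2-layer MLP: embedding -> hidden -> 4 intensity logits."""
    return {
        "W1": rng.normal(0.0, 0.02, (EMB_DIM, HIDDEN)),
        "b1": np.zeros(HIDDEN),
        "W2": rng.normal(0.0, 0.02, (HIDDEN, N_LEVELS)),
        "b2": np.zeros(N_LEVELS),
    }

def forward(head, emb):
    """Forward pass: ReLU hidden layer, then softmax over the 4 intensity levels."""
    h = np.maximum(emb @ head["W1"] + head["b1"], 0.0)
    logits = h @ head["W2"] + head["b2"]
    e = np.exp(logits - logits.max())
    return e / e.sum()

# One independent head per emotion state, as in the Stage 2 description above.
heads = {name: make_mlp_head() for name in EMOTIONS}
video_embedding = rng.normal(size=EMB_DIM)  # stand-in for the Stage 1 output
scores = {name: forward(head, video_embedding) for name, head in heads.items()}
```

Each head returns an independent probability distribution over the four intensity levels, which is why the four states can co-occur (a student can be both engaged and confused).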
## Usage

### Via Inference Endpoint
```python
import base64

import requests

HF_TOKEN = "YOUR_HF_TOKEN"  # a Hugging Face token with access to the endpoint
ENDPOINT_URL = "https://YOUR-ENDPOINT.aws.endpoints.huggingface.cloud"

# Encode the video as base64 so it can travel in a JSON payload
with open("student_video.avi", "rb") as f:
    video_b64 = base64.b64encode(f.read()).decode()

# Send the request to the Inference Endpoint
response = requests.post(
    ENDPOINT_URL,
    headers={"Authorization": f"Bearer {HF_TOKEN}"},
    json={"inputs": video_b64},
)
response.raise_for_status()

predictions = response.json()
print(predictions)
```
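The exact response schema depends on the endpoint's handler. Assuming a hypothetical format that maps each emotion state to four intensity probabilities (an assumption, not the documented contract), the predictions could be summarized like this:

```python
# Assumed response shape: {"Boredom": [p0, p1, p2, p3], ...} with one
# probability per DAiSEE intensity level. Adjust to the handler's real output.
LEVELS = ["very low", "low", "high", "very high"]

def summarize(predictions):
    """Return the most likely intensity level for each emotion state."""
    return {
        emotion: LEVELS[max(range(len(scores)), key=scores.__getitem__)]
        for emotion, scores in predictions.items()
    }

example = {"Boredom": [0.7, 0.2, 0.05, 0.05], "Engagement": [0.1, 0.1, 0.2, 0.6]}
print(summarize(example))  # -> {'Boredom': 'very low', 'Engagement': 'very high'}
```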