Dc-4nderson committed on
Commit a2d2cc6 · verified · 1 Parent(s): 1a134fd

# πŸ§β€β™‚οΈ Confidence Pose Classifier (Body Language) This model predicts the confidence level of a person based on their **body pose** in a single image. It's trained on pose vectors extracted from Zoom-like screenshots using [MediaPipe](https://google.github.io/mediapipe/solutions/pose.html). --- ## πŸ’‘ How It Works - Input: a **pose vector** of 33 keypoints (x, y, z), flattened to a 99-length float list - Output: one of three labels: - `confident` - `not_confident` - `kinda_confident` This is ideal for use cases like: - Analyzing student confidence during virtual learning - Visual AI for coaching and feedback - Real-time body-language evaluation in Zoom or webcam settings --- ## 🧠 Model Details - Trained with `scikit-learn`'s `RandomForestClassifier` - Input features: 99 pose coordinates per image (from MediaPipe) - Label encoding saved via `LabelEncoder` --- ## πŸ›  Usage To use this model in your app: ```python import joblib import numpy as np # Load model + encoder model = joblib.load("confidence_pose_model.pkl") encoder = joblib.load("label_encoder.pkl") # Example input: a 99-length pose vector (from MediaPipe) pose_vector = np.array([...]) # Replace with your vector # Predict label = encoder.inverse_transform(model.predict([pose_vector]))[0] print("Predicted label:", label)

Files changed (1): README.md ADDED (+11 −0)
```yaml
---
datasets:
- Dc-4nderson/confidence-body-image-dataset
language:
- en
pipeline_tag: text-classification
tags:
- image-classification
- body-language
- confidence-detection
---
```