# Sign Language Recognition Model
This model recognizes sign language gestures using landmark data from hand, pose, and face keypoints.
## Model Details
- Model Type: Sign Language Recognition
- Framework: TensorFlow/Keras
- Input: Landmark sequences (x, y, z coordinates)
- Output: Sign language class predictions
- Classes: 60 different signs
- Parameters: 1,763,418
## Model Architecture
- Input Shape: (None, 384, 708)
- Output Shape: (None, 60)
- Max Sequence Length: 384
- Embedding Dimension: 192
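The input shape implies each sample is a sequence of up to 384 frames with 708 landmark features per frame (e.g. flattened x, y, z keypoint coordinates). As a minimal sketch, assuming shorter clips are zero-padded to the maximum sequence length (the actual preprocessing is done by `processor.pkl`):

```python
import numpy as np

MAX_SEQ_LEN = 384   # maximum sequence length from the model config
NUM_FEATURES = 708  # landmark features per frame

def pad_sequence(landmarks: np.ndarray) -> np.ndarray:
    """Zero-pad (or truncate) a (frames, 708) array to (384, 708).

    This is an illustrative assumption; the shipped processor may
    normalize or mask frames differently.
    """
    out = np.zeros((MAX_SEQ_LEN, NUM_FEATURES), dtype=np.float32)
    n = min(len(landmarks), MAX_SEQ_LEN)
    out[:n] = landmarks[:n]
    return out

clip = np.random.rand(120, NUM_FEATURES).astype(np.float32)  # a 120-frame clip
batch = pad_sequence(clip)[None, ...]  # add batch dim -> (1, 384, 708)
```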
## Training Details
- Epochs: 69
- Batch Size: 32
- Learning Rate: 0.0005
- Weight Decay: 0.1
- Best Validation Loss: 3.1850
- Best Validation Accuracy: 0.2555 (25.55%)
## Usage

```python
import tensorflow as tf
import pickle
import numpy as np

# Load the model
model = tf.keras.models.load_model('model.h5')

# Load the processor
with open('processor.pkl', 'rb') as f:
    processor = pickle.load(f)

# Example inference
# your_landmark_data should be preprocessed using the same processor
predictions = model.predict(your_landmark_data)
predicted_classes = np.argmax(predictions, axis=1)
```
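Beyond the single top class, a hedged sketch of ranking the model's (batch, 60) output to get the top-k candidate signs; the mapping from class index to sign name is an assumption here and would come from the processor or `config.json`:

```python
import numpy as np

def top_k_predictions(probs: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k highest-scoring classes per sample,
    sorted from most to least likely."""
    return np.argsort(probs, axis=1)[:, ::-1][:, :k]

# Stand-in for model.predict() output: one sample over 60 classes
probs = np.random.rand(1, 60)
probs /= probs.sum(axis=1, keepdims=True)  # normalize like a softmax output

top5 = top_k_predictions(probs, k=5)  # shape (1, 5)
```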
## Files Description
- `model.h5`: Complete Keras model (recommended for inference)
- `model_weights.h5`: Model weights only
- `processor.pkl`: Data processor for landmark preprocessing
- `config.json`: Model configuration and metadata
- `training_history.json`: Training metrics and history
- `inference_example.py`: Example inference script
- `requirements.txt`: Required dependencies
## Requirements
See `requirements.txt` for the complete list of dependencies.
## Training Notebook
The training notebook will be provided in a future update.