
# Sign Language Recognition Model

This model recognizes sign language gestures using landmark data from hand, pose, and face keypoints.

## Model Details

- Model Type: Sign Language Recognition
- Framework: TensorFlow/Keras
- Input: Landmark sequences (x, y, z coordinates)
- Output: Sign language class predictions
- Classes: 60 different signs
- Parameters: 1,763,418

## Model Architecture

- Input Shape: (None, 384, 708)
- Output Shape: (None, 60)
- Max Sequence Length: 384
- Embedding Dimension: 192
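Since the model expects a fixed-length input of shape (None, 384, 708), variable-length landmark sequences must be padded or truncated to 384 frames before inference. A minimal NumPy sketch of that step (zero-padding at the end is an assumption here; the bundled `processor.pkl` may handle this differently):

```python
import numpy as np

MAX_SEQ_LEN = 384   # max sequence length from the model architecture
NUM_FEATURES = 708  # flattened landmark features per frame

def pad_or_truncate(seq: np.ndarray) -> np.ndarray:
    """Pad with zeros (or truncate) a (frames, 708) sequence to (384, 708)."""
    if seq.shape[0] >= MAX_SEQ_LEN:
        return seq[:MAX_SEQ_LEN]
    padding = np.zeros((MAX_SEQ_LEN - seq.shape[0], NUM_FEATURES), dtype=seq.dtype)
    return np.concatenate([seq, padding], axis=0)

# Example: a 120-frame clip becomes a fixed (384, 708) array
seq = np.random.rand(120, NUM_FEATURES).astype(np.float32)
fixed = pad_or_truncate(seq)
print(fixed.shape)  # (384, 708)
```

Stacking several such arrays with `np.stack` yields the batched (batch, 384, 708) input the model expects.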

## Training Details

- Epochs: 69
- Batch Size: 32
- Learning Rate: 0.0005
- Weight Decay: 0.1
- Best Validation Loss: 3.185
- Best Validation Accuracy: 0.2555 (~25.6%)

## Usage

```python
import tensorflow as tf
import pickle
import numpy as np

# Load the trained model
model = tf.keras.models.load_model('model.h5')

# Load the data processor used during training
with open('processor.pkl', 'rb') as f:
    processor = pickle.load(f)

# Example inference:
# your_landmark_data must be preprocessed with the same processor and
# shaped (batch, 384, 708) to match the model's input.
predictions = model.predict(your_landmark_data)
predicted_classes = np.argmax(predictions, axis=1)
```
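To turn the predicted class indices back into sign names, they must be mapped against the model's label list. Where that list lives is an assumption here (it may be stored in `config.json` or inside the processor); the top-k decoding itself is plain NumPy:

```python
import numpy as np

def top_k_signs(probs: np.ndarray, class_names: list[str], k: int = 3):
    """Return the k most likely (sign, probability) pairs for one prediction row."""
    idx = np.argsort(probs)[::-1][:k]  # indices of the k largest probabilities
    return [(class_names[i], float(probs[i])) for i in idx]

# Toy example with 4 classes instead of the model's 60
names = ["hello", "thanks", "yes", "no"]
probs = np.array([0.1, 0.6, 0.25, 0.05])
print(top_k_signs(probs, names, k=2))  # [('thanks', 0.6), ('yes', 0.25)]
```

Applying this row by row to the `predictions` array above gives ranked sign candidates per input sequence.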

## Files Description

- `model.h5`: Complete Keras model (recommended for inference)
- `model_weights.h5`: Model weights only
- `processor.pkl`: Data processor for landmark preprocessing
- `config.json`: Model configuration and metadata
- `training_history.json`: Training metrics and history
- `inference_example.py`: Example inference script
- `requirements.txt`: Required dependencies

## Requirements

See `requirements.txt` for the complete list of dependencies.

## Training Notebook

The training notebook will be provided in a future update.