---
tags:
- time-series-classification
- human-activity-recognition
- multimodal
- cnn-lstm
- sensor-data
datasets:
- MultiModal_HumanActivity_SensorStream
license: apache-2.0
model-index:
- name: HAR_MultiModal_Classifier
  results:
  - task:
      name: Time Series Classification
      type: time-series-classification
    metrics:
    - type: accuracy
      value: 0.931
      name: Sequence Accuracy
    - type: weighted_f1
      value: 0.925
      name: Weighted F1 Score
---
# HAR_MultiModal_Classifier
## 🏃 Overview
The **HAR_MultiModal_Classifier** is a deep learning model for **Human Activity Recognition (HAR)**. It classifies complex human activities from raw time-series sensor streams, jointly leveraging accelerometer (Acc\_X, Y, Z) and gyroscope (Gyro\_X, Y, Z) channels alongside contextual physiological metrics (Heart\_Rate\_BPM, Calories\_Burned\_kJ, Device\_Location).
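Raw streams must be segmented into fixed-length windows before inference. A minimal sliding-window sketch (the 50-step window and 9-channel layout match the architecture described below; the 50% overlap and `make_windows` helper are illustrative assumptions, not part of the released preprocessing pipeline):

```python
import numpy as np

def make_windows(stream, window=50, step=25):
    """Segment a (T, 9) multimodal sensor stream into overlapping windows.

    Returns an array of shape (n_windows, window, 9) ready for the model.
    """
    n = (len(stream) - window) // step + 1
    return np.stack([stream[i * step : i * step + window] for i in range(n)])

stream = np.random.randn(200, 9)   # T=200 timesteps, 9 channels
batch = make_windows(stream)
print(batch.shape)                 # (7, 50, 9)
```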
## 🧠 Model Architecture
The architecture is a specialized **Convolutional Neural Network (CNN) combined with a Long Short-Term Memory (LSTM)** network, optimized for processing sequential, high-frequency sensor data.
* **Input:** Sequences of 50 timesteps, containing 9 features per step (6 sensor, 3 contextual/physiological).
* **CNN Layer:** Extracts spatial features and localized patterns from the sensor data windows.
* **LSTM Layer:** Captures the temporal dependencies and long-range sequential dynamics inherent in human motion (e.g., the cyclical pattern of "Walking").
* **Classification Head:** A dense layer with Softmax activation outputs the probability distribution over the 6 activity classes.
* **Target Classes:** Walking, Sitting, Running, Lifting\_Heavy, Typing, Climbing\_Stairs.
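The architecture above can be sketched in PyTorch. Layer sizes such as the 64 convolutional filters, kernel size 5, and 100 LSTM units are illustrative assumptions, not confirmed hyperparameters; only the input shape (50 × 9) and the 6-class softmax head come from this card:

```python
import torch
import torch.nn as nn

class HARMultiModalClassifier(nn.Module):
    """CNN-LSTM sketch: 50 timesteps x 9 features -> 6 activity classes."""

    def __init__(self, n_features=9, n_classes=6, conv_channels=64, lstm_hidden=100):
        super().__init__()
        # 1D convolution over the time axis extracts localized motion patterns
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, conv_channels, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # LSTM captures long-range temporal dependencies across the window
        self.lstm = nn.LSTM(conv_channels, lstm_hidden, batch_first=True)
        # Dense classification head; softmax is applied at inference time
        self.head = nn.Linear(lstm_hidden, n_classes)

    def forward(self, x):                      # x: (batch, 50, 9)
        x = self.conv(x.transpose(1, 2))       # -> (batch, conv_channels, 50)
        out, _ = self.lstm(x.transpose(1, 2))  # -> (batch, 50, lstm_hidden)
        return self.head(out[:, -1, :])        # last timestep -> (batch, 6)

model = HARMultiModalClassifier()
logits = model(torch.randn(4, 50, 9))
probs = torch.softmax(logits, dim=-1)          # distribution over the 6 classes
```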
## 🎯 Intended Use
This model is ideal for applications requiring continuous, precise activity monitoring:
1. **Smart Wearable Devices:** Powering real-time activity tracking and fitness coaching.
2. **Health Monitoring:** Detecting falls, anomalous activity, or adherence to prescribed exercise routines.
3. **Contextual Computing:** Providing accurate context for mobile applications and ambient intelligence systems.
4. **Robotics and Automation:** Training robots to understand human motion and collaboration.
## ⚠️ Limitations
1. **Device Dependence:** Performance is highly dependent on sensor quality, sampling rate, and device placement (Wrist, Chest, Back, etc.). Deviations from the `Device_Location` in the training set may reduce accuracy.
2. **Activity Overlap:** The model may confuse activities with similar movement signatures (e.g., fast walking vs. slow jogging), despite multimodal input.
3. **Subject Variance:** The model's accuracy may vary across new subjects due to differences in gait, body mass, and movement style, necessitating fine-tuning for personalized deployment.
---
### MODEL 2: **AspectScorer_ReviewBERT**
This model is a multi-output regression model based on BERT, trained to predict multiple numerical aspect scores from a single raw text review.
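Conceptually, the model attaches a 3-output regression head to BERT's pooled representation. A hedged sketch of such a head follows; the sigmoid mapping and rescaling to the 1–5 rating range are assumptions inferred from the `min_rating`/`max_rating` fields in the config, and `MultiAspectRegressionHead` is a hypothetical name, not the released `BertForMultipleRegression` implementation:

```python
import torch
import torch.nn as nn

class MultiAspectRegressionHead(nn.Module):
    """Maps a pooled BERT embedding (hidden_size=768) to 3 aspect scores in [1, 5]."""

    def __init__(self, hidden_size=768, num_labels=3, min_rating=1.0, max_rating=5.0):
        super().__init__()
        self.dense = nn.Linear(hidden_size, num_labels)
        self.min_rating = min_rating
        self.max_rating = max_rating

    def forward(self, pooled):                 # pooled: (batch, hidden_size)
        # Sigmoid squashes each output to (0, 1); rescale to the rating range
        raw = torch.sigmoid(self.dense(pooled))
        return self.min_rating + raw * (self.max_rating - self.min_rating)

head = MultiAspectRegressionHead()
scores = head(torch.randn(2, 768))             # -> (2, 3) aspect scores
```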
#### config.json
```json
{
  "_name_or_path": "bert-base-uncased",
  "architectures": [
    "BertForMultipleRegression"
  ],
  "hidden_size": 768,
  "model_type": "bert",
  "num_hidden_layers": 12,
  "vocab_size": 30522,
  "problem_type": "multi_output_regression",
  "num_labels": 3,
  "output_labels": ["Aspect_Performance", "Aspect_Price_Value", "Aspect_Aesthetics"],
  "min_rating": 1.0,
  "max_rating": 5.0,
  "transformers_version": "4.35.2"
}
```