---
tags:
- time-series-classification
- human-activity-recognition
- multimodal
- cnn-lstm
- sensor-data
datasets:
- MultiModal_HumanActivity_SensorStream
license: apache-2.0
model-index:
- name: HAR_MultiModal_Classifier
  results:
  - task:
      name: Time Series Classification
      type: time-series-classification
    metrics:
    - type: accuracy
      value: 0.931
      name: Sequence Accuracy
    - type: weighted_f1
      value: 0.925
      name: Weighted F1 Score
---

# HAR_MultiModal_Classifier

## 📊 Overview

The **HAR_MultiModal_Classifier** is a deep learning model designed for **Human Activity Recognition (HAR)**. It classifies complex human activities from raw time-series sensor streams, simultaneously utilizing data from accelerometers (Acc\_X, Y, Z), gyroscopes (Gyro\_X, Y, Z), and contextual physiological metrics (Heart\_Rate\_BPM, Calories\_Burned\_kJ, Device\_Location).
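
Before inference, the raw sensor stream must be segmented into fixed-length windows matching the model's input shape. A minimal sketch of that windowing step is shown below; the function name, the stride of 25 steps, and the column ordering are illustrative assumptions, not part of the released model.

```python
import numpy as np

def make_windows(stream: np.ndarray, window: int = 50, stride: int = 25) -> np.ndarray:
    """Slice a (T, 9) multimodal sensor stream into overlapping
    (window, 9) sequences for the classifier.

    Assumed column order: Acc_X/Y/Z, Gyro_X/Y/Z, Heart_Rate_BPM,
    Calories_Burned_kJ, Device_Location (numerically encoded).
    """
    n = (stream.shape[0] - window) // stride + 1
    return np.stack([stream[i * stride : i * stride + window] for i in range(n)])

# 500 timesteps of 9 features -> a batch of 50-step windows
windows = make_windows(np.zeros((500, 9)))
print(windows.shape)  # (19, 50, 9)
```

An overlapping stride (here 50% overlap) is a common choice in HAR pipelines because it smooths predictions across activity transitions.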

## 🧠 Model Architecture

The architecture is a specialized **Convolutional Neural Network (CNN) combined with a Long Short-Term Memory (LSTM)** network, optimized for processing sequential, high-frequency sensor data.

* **Input:** Sequences of 50 timesteps, each containing 9 features (6 sensor, 3 contextual/physiological).
* **CNN Layer:** Extracts spatial features and localized patterns from the sensor data windows.
* **LSTM Layer:** Captures the temporal dependencies and long-range sequential dynamics inherent in human motion (e.g., the cyclical pattern of "Walking").
* **Classification Head:** A dense layer with Softmax activation outputs the probability distribution over the 6 activity classes.
* **Target Classes:** Walking, Sitting, Running, Lifting\_Heavy, Typing, Climbing\_Stairs.
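
The layers above can be sketched in PyTorch as follows. Only the input shape (50×9) and class count (6) come from this card; the channel width, kernel size, and hidden size are placeholder assumptions, not the trained model's actual hyperparameters.

```python
import torch
import torch.nn as nn

class HARCnnLstm(nn.Module):
    """Minimal CNN-LSTM sketch; layer sizes are illustrative assumptions."""

    def __init__(self, n_features=9, n_classes=6, conv_ch=64, lstm_hidden=128):
        super().__init__()
        # Conv1d over the time axis extracts localized motion patterns
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, conv_ch, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # LSTM captures long-range temporal dynamics across the window
        self.lstm = nn.LSTM(conv_ch, lstm_hidden, batch_first=True)
        # Dense head with Softmax yields a distribution over the 6 classes
        self.head = nn.Linear(lstm_hidden, n_classes)

    def forward(self, x):                 # x: (batch, 50, 9)
        x = self.conv(x.transpose(1, 2))  # -> (batch, conv_ch, 50)
        _, (h, _) = self.lstm(x.transpose(1, 2))
        return torch.softmax(self.head(h[-1]), dim=-1)

model = HARCnnLstm()
probs = model(torch.randn(4, 50, 9))
print(probs.shape)  # torch.Size([4, 6])
```

Each row of `probs` sums to 1, matching the Softmax classification head described above.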

## 🎯 Intended Use

This model is ideal for applications requiring continuous, precise activity monitoring:

1. **Smart Wearable Devices:** Powering real-time activity tracking and fitness coaching.
2. **Health Monitoring:** Detecting falls, anomalous activity, or adherence to prescribed exercise routines.
3. **Contextual Computing:** Providing accurate context for mobile applications and ambient intelligence systems.
4. **Robotics and Automation:** Training robots to understand human motion for safe collaboration.

## ⚠️ Limitations

1. **Device Dependence:** Performance is highly dependent on sensor quality, sampling rate, and device placement (Wrist, Chest, Back, etc.). Deviations from the `Device_Location` values in the training set may reduce accuracy.
2. **Activity Overlap:** The model may confuse activities with similar movement signatures (e.g., fast walking vs. slow jogging), despite multimodal input.
3. **Subject Variance:** The model's accuracy may vary across new subjects due to differences in gait, body mass, and movement style, necessitating fine-tuning for personalized deployment.

---

### MODEL 2: **AspectScorer_ReviewBERT**

This model is a multi-output regression model based on BERT, trained to predict multiple numerical aspect scores from a single raw text review.
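
The config below bounds each of the three aspect scores to the range `[min_rating, max_rating]` = [1.0, 5.0]. One common way to enforce that bound is to squash the raw regression outputs through a scaled sigmoid, sketched here; this mapping is an assumption for illustration (the head may instead be trained to emit scores directly and simply clipped).

```python
import math

MIN_RATING, MAX_RATING = 1.0, 5.0  # from config.json below

def to_rating(logit: float) -> float:
    """Map a raw regression-head output into the [1.0, 5.0] score range
    via a scaled sigmoid (assumed post-processing, not confirmed by the card)."""
    return MIN_RATING + (MAX_RATING - MIN_RATING) / (1.0 + math.exp(-logit))

# One raw output per aspect: Performance, Price_Value, Aesthetics
scores = [round(to_rating(z), 2) for z in (-2.0, 0.0, 3.0)]
print(scores)
```

A raw output of 0.0 maps to the midpoint score of 3.0, and extreme outputs saturate toward 1.0 and 5.0 without ever leaving the valid range.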

#### config.json

```json
{
  "_name_or_path": "bert-base-uncased",
  "architectures": [
    "BertForMultipleRegression"
  ],
  "hidden_size": 768,
  "model_type": "bert",
  "num_hidden_layers": 12,
  "vocab_size": 30522,
  "problem_type": "multi_output_regression",
  "num_labels": 3,
  "output_labels": ["Aspect_Performance", "Aspect_Price_Value", "Aspect_Aesthetics"],
  "min_rating": 1.0,
  "max_rating": 5.0,
  "transformers_version": "4.35.2"
}
```