---
library_name: transformers
license: apache-2.0
base_model: MCG-NJU/videomae-large-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Hibernates-MEA-R2-V0
results: []
---
# Hibernates-MEA-R2-V0
A video classification model fine-tuned from [MCG-NJU/videomae-large-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-large-finetuned-kinetics).
Key results on the evaluation set:
- Best validation loss: 0.4894
- Peak accuracy: 80.43%
## System Overview
A VideoMAE-based transformer fine-tuned for video classification:
- Core: large video transformer (VideoMAE)
- Data handling: sequential frame processing
- Main task: video content classification
- Training length: 50 complete epochs
- Results summary:
  * Peak accuracy: 80.43% (epoch 7)
  * Later epochs largely stabilized between 74% and 78% accuracy
## Applications & Requirements
### Core Applications
- Visual sequence interpretation
- Dynamic content analysis
- Environmental context recognition
- Time-series visual processing
### Technical Considerations
- Fine-tuned for a specific classification task; re-tuning may be needed for other domains
- Compute: a GPU is recommended for training and inference
- Memory: trained with a batch size of 4
- Input: fixed-length frame sequences in the format expected by the model's processor
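The fixed-length frame-sequence input can be illustrated with a minimal uniform frame-sampling sketch (the 16-frame clip length is the usual VideoMAE default and is an assumption here; resizing and normalization are handled separately by the model's processor):

```python
def sample_frame_indices(num_video_frames: int, clip_len: int = 16) -> list[int]:
    """Pick `clip_len` frame indices spread uniformly across a video."""
    if num_video_frames < clip_len:
        # Pad by repeating the last frame when the video is too short.
        return list(range(num_video_frames)) + \
            [num_video_frames - 1] * (clip_len - num_video_frames)
    step = num_video_frames / clip_len
    return [int(i * step) for i in range(clip_len)]

# e.g. a 300-frame video is reduced to 16 evenly spaced frames
indices = sample_frame_indices(300, clip_len=16)
```

The selected frames would then be passed to the image processor as a list of arrays before inference.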
## Development Data
Training run details:
- Steps per epoch: 65
- Total training steps: 3250 (50 epochs)
- Evaluation: validation loss and accuracy recorded each epoch
- Progress:
  * Starting accuracy: 54.35%
  * Final accuracy: 73.91%
  * Best validation loss: 0.4894
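The step counts above follow from the dataset and batch size; a quick arithmetic sketch (the training-set size of 260 clips is a hypothetical value implied by 65 steps/epoch at batch size 4, not stated in the log):

```python
import math

NUM_TRAIN_CLIPS = 260  # hypothetical: implied by 65 steps/epoch at batch size 4
BATCH_SIZE = 4
EPOCHS = 50

# Steps per epoch: one optimizer step per batch
steps_per_epoch = math.ceil(NUM_TRAIN_CLIPS / BATCH_SIZE)
total_steps = steps_per_epoch * EPOCHS
```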
## Implementation Specifications
### Core Parameters
Training used the following configuration:
- Learning rate: 1e-05
- Train batch size: 4
- Eval batch size: 4
- Random seed: 42
- Optimizer: AdamW (`adamw_torch`)
  * Betas: (0.9, 0.999)
  * Epsilon: 1e-08
- LR schedule: linear
- Warmup ratio: 0.1
- Total training steps: 3250
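The linear schedule with a 0.1 warmup ratio reduces to simple arithmetic; a sketch of the per-step learning rate it produces (mirroring what the Trainer computes from these values):

```python
PEAK_LR = 1e-05
TOTAL_STEPS = 3250
WARMUP_STEPS = int(0.1 * TOTAL_STEPS)  # warmup_ratio 0.1 -> 325 steps

def lr_at(step: int) -> float:
    """Linear warmup from 0 to PEAK_LR, then linear decay back to 0."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    return PEAK_LR * (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS)
```

The peak rate is reached at step 325 and decays to zero by the final step 3250.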
### Development Progress
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6186 | 0.02 | 65 | 0.7367 | 0.5435 |
| 0.5974 | 1.02 | 130 | 0.8185 | 0.5435 |
| 0.5491 | 2.02 | 195 | 0.8372 | 0.5435 |
| 0.6156 | 3.02 | 260 | 0.6620 | 0.5870 |
| 0.6255 | 4.02 | 325 | 0.6835 | 0.5435 |
| 0.438 | 5.02 | 390 | 1.2116 | 0.5435 |
| 0.4653 | 6.02 | 455 | 0.6002 | 0.5652 |
| 0.5876 | 7.02 | 520 | 0.4894 | 0.8043 |
| 0.3801 | 8.02 | 585 | 0.8324 | 0.5435 |
| 0.4474 | 9.02 | 650 | 1.1581 | 0.5652 |
| 0.694 | 10.02 | 715 | 0.5354 | 0.7174 |
| 0.4773 | 11.02 | 780 | 0.6181 | 0.6957 |
| 0.6208 | 12.02 | 845 | 0.5677 | 0.7609 |
| 0.344 | 13.02 | 910 | 0.7452 | 0.6087 |
| 0.254 | 14.02 | 975 | 0.6362 | 0.7391 |
| 0.4578 | 15.02 | 1040 | 0.8304 | 0.6957 |
| 0.3954 | 16.02 | 1105 | 0.6049 | 0.7609 |
| 0.248 | 17.02 | 1170 | 0.9506 | 0.6739 |
| 0.1334 | 18.02 | 1235 | 1.1876 | 0.6739 |
| 0.534 | 19.02 | 1300 | 0.6296 | 0.7391 |
| 0.3556 | 20.02 | 1365 | 1.3007 | 0.6957 |
| 0.5439 | 21.02 | 1430 | 1.5066 | 0.6739 |
| 0.4107 | 22.02 | 1495 | 0.9273 | 0.8043 |
| 0.61 | 23.02 | 1560 | 1.0008 | 0.7174 |
| 0.6482 | 24.02 | 1625 | 0.7548 | 0.7609 |
| 0.199 | 25.02 | 1690 | 0.7917 | 0.7826 |
| 0.1185 | 26.02 | 1755 | 0.7529 | 0.7826 |
| 0.3886 | 27.02 | 1820 | 0.8627 | 0.7609 |
| 0.0123 | 28.02 | 1885 | 1.3886 | 0.7174 |
| 0.5328 | 29.02 | 1950 | 1.2803 | 0.6957 |
| 0.2961 | 30.02 | 2015 | 1.4397 | 0.7174 |
| 0.1192 | 31.02 | 2080 | 2.2563 | 0.6304 |
| 0.145 | 32.02 | 2145 | 1.0465 | 0.7609 |
| 0.0924 | 33.02 | 2210 | 0.9859 | 0.7826 |
| 0.1016 | 34.02 | 2275 | 1.0758 | 0.7826 |
| 0.1894 | 35.02 | 2340 | 1.2088 | 0.7609 |
| 0.2657 | 36.02 | 2405 | 1.5409 | 0.7391 |
| 0.1235 | 37.02 | 2470 | 1.2736 | 0.7609 |
| 0.1539 | 38.02 | 2535 | 1.2608 | 0.7609 |
| 0.03 | 39.02 | 2600 | 1.2058 | 0.7609 |
| 0.1447 | 40.02 | 2665 | 1.1072 | 0.7609 |
| 0.0888 | 41.02 | 2730 | 1.1454 | 0.7826 |
| 0.0016 | 42.02 | 2795 | 1.1194 | 0.7826 |
| 0.1489 | 43.02 | 2860 | 1.2170 | 0.7609 |
| 0.0004 | 44.02 | 2925 | 1.1894 | 0.7609 |
| 0.0004 | 45.02 | 2990 | 1.3329 | 0.7391 |
| 0.0014 | 46.02 | 3055 | 1.1887 | 0.7609 |
| 0.1675 | 47.02 | 3120 | 1.2652 | 0.7391 |
| 0.012 | 48.02 | 3185 | 1.3228 | 0.7391 |
| 0.0475 | 49.02 | 3250 | 1.3507 | 0.7391 |
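Picking the reported best checkpoint from a log like the table above amounts to taking the row with the highest validation accuracy (illustrative sketch; `eval_log` abbreviates a few rows from the table):

```python
# (step, validation_loss, accuracy) rows, abbreviated from the log above
eval_log = [
    (65, 0.7367, 0.5435),
    (520, 0.4894, 0.8043),
    (1495, 0.9273, 0.8043),
    (3250, 1.3507, 0.7391),
]

# Highest accuracy wins; ties are broken by the lower validation loss,
# which is why step 520 (epoch 7) is the reported best here.
best = max(eval_log, key=lambda row: (row[2], -row[1]))
```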
### System Versions
- Transformers 4.46.2
- PyTorch 2.0.1+cu117
- Datasets 3.0.1
- Tokenizers 0.20.0