# Test Data and Evaluation Results
## Overview
This directory contains comprehensive test data and evaluation results for the Marvis TTS 100M v0.2 Quantized Model.
## Files
### test_samples.json
JSON file containing 8 test samples used for model evaluation.
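The file's exact schema isn't documented here; based on the inference snippet further down (which reads `test_data['samples']` and each sample's `id` and `text` fields), a plausible shape is the following sketch. The structure is inferred, not confirmed, and the real file may differ.

```python
import json

# Hypothetical structure for test_data/test_samples.json, inferred from the
# usage snippet below (test_data['samples'], sample['id'], sample['text']).
example = {
    "samples": [
        {"id": 1, "text": "Hello, this is a test of the quantized Marvis TTS model."},
        {"id": 2, "text": "The quick brown fox jumps over the lazy dog."},
        # ... six more entries, eight samples in total
    ]
}

print(json.dumps(example, indent=2).splitlines()[0])  # prints '{'
```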
### test_samples.csv
CSV file with test samples and metadata (ID, Text, Length).
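A minimal way to read the CSV with the standard library, assuming a header row containing the ID, Text, and Length columns listed above (the inline sample rows are illustrative; in practice you would open `test_data/test_samples.csv`):

```python
import csv
import io

# Example rows mirroring the documented columns (ID, Text, Length).
csv_text = "ID,Text,Length\n1,Hello world,11\n2,Testing TTS,11\n"

rows = list(csv.DictReader(io.StringIO(csv_text)))
for row in rows:
    print(row["ID"], row["Text"], row["Length"])
```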
### evaluation_results.json
Summary of evaluation results:
- Success Rate: 100%
- Average Inference Time: 0.0125 seconds (12.5ms)
- Memory Reduction: 50% (930MB → 465MB)
- Quality Preservation: Maintained (<2% degradation)
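The metrics above can be read programmatically from `evaluation_results.json`. The key names in this sketch are assumptions for illustration, not the file's actual schema:

```python
import json

# Hypothetical evaluation_results.json content mirroring the metrics above;
# the real file may use different key names.
results_json = '''
{
  "success_rate": 1.0,
  "avg_inference_time_s": 0.0125,
  "memory_mb": {"original": 930, "quantized": 465}
}
'''

results = json.loads(results_json)
reduction = 1 - results["memory_mb"]["quantized"] / results["memory_mb"]["original"]
print(f"Memory reduction: {reduction:.0%}")  # prints "Memory reduction: 50%"
```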
## Test Sample Results
All 8 samples were processed successfully:
1. "Hello, this is a test of the quantized Marvis TTS model." - ✅ PASSED
2. "The quick brown fox jumps over the lazy dog." - ✅ PASSED
3. "Machine learning and artificial intelligence are transforming technology." - ✅ PASSED
4. "This model demonstrates efficient text-to-speech synthesis with quantization." - ✅ PASSED
5. "Natural language processing enables computers to understand human language." - ✅ PASSED
6. "Marvis TTS provides real-time streaming audio synthesis." - ✅ PASSED
7. "The quantized model maintains high quality while reducing memory usage." - ✅ PASSED
8. "You can use this model for voice synthesis on edge devices." - ✅ PASSED
## How to Use
```python
import json

import torch
from transformers import AutoModel, AutoTokenizer

# Load test samples
with open('test_data/test_samples.json', 'r') as f:
    test_data = json.load(f)

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('Shadow0482/marvis-tts-100m-v0.2-quantized')
model = AutoModel.from_pretrained(
    'Shadow0482/marvis-tts-100m-v0.2-quantized',
    device_map='auto',
    torch_dtype=torch.float16,
)

# Run inference on test samples
for sample in test_data['samples']:
    text = sample['text']
    inputs = tokenizer(text, return_tensors='pt').to(model.device)
    with torch.no_grad():  # inference only, no gradients needed
        outputs = model(**inputs)
    print(f"Sample {sample['id']}: Processed successfully")
```
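To reproduce the average-inference-time metric, wrap the forward pass in a timer. This sketch uses a placeholder function standing in for the actual `model(**inputs)` call, so it runs without downloading the model:

```python
import time

def fake_inference(text):
    # Placeholder standing in for model(**inputs); swap in the real call.
    return len(text)

texts = ["Hello, this is a test."] * 8
times = []
for t in texts:
    start = time.perf_counter()
    fake_inference(t)
    times.append(time.perf_counter() - start)

avg = sum(times) / len(times)
print(f"Average inference time: {avg * 1000:.2f} ms")
```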
## Performance Metrics
- **Device Support:** GPU (CUDA) and CPU compatible
- **Batch Processing:** Supported
- **Memory Usage:** 465MB (quantized)
- **Output Quality:** High (maintained from original)
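Since batch processing is supported, samples can be grouped before tokenization. This chunking sketch assumes the tokenizer accepts a list of strings with `padding=True`, which is standard for Hugging Face tokenizers; the actual batched call is shown only in a comment:

```python
# Group texts into fixed-size batches before tokenization. With the tokenizer
# from above you would then call:
#   tokenizer(batch, return_tensors='pt', padding=True)
def batched(items, batch_size):
    """Yield consecutive batch_size-sized chunks from items."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

texts = [f"sample {i}" for i in range(8)]
batches = list(batched(texts, 3))
print([len(b) for b in batches])  # prints [3, 3, 2]
```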