---
title: MMRM
emoji: 🐠
colorFrom: yellow
colorTo: blue
sdk: gradio
sdk_version: 6.5.1
app_file: app.py
pinned: false
license: gpl-3.0
short_description: 'Restoring Ancient Ideograph: A Multimodal Multitask Neural N'
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
## Interactive Demo
This Gradio app demonstrates the restoration capabilities of the MMRM model compared to textual and visual baselines on real-world damaged character data.
### Features
- **Real-world Data**: Select from samples in the `data/real` directory.
- **Model Comparison**:
  - **Zero-shot Baseline**: Pre-trained GuwenBERT (runs out of the box, no fine-tuning required).
- **Textual Baseline**: Fine-tuned RoBERTa.
- **Visual Baseline**: ResNet50.
- **MMRM**: Our proposed Multimodal Multitask Restoring Model.
- **Intermediate Visualization**: Shows the intermediate restored image generated by MMRM.
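
The sample picker described above can be sketched with a small stdlib helper (the `data/real` layout is taken from the Features list; the function name and one-sample-per-file assumption are illustrative, not the app's actual code):

```python
from pathlib import Path

def list_samples(data_dir="data/real"):
    """Return sample names for the demo's dropdown, sorted so the
    ordering is stable across runs. Assumes one sample per file."""
    return sorted(p.name for p in Path(data_dir).iterdir() if p.is_file())
```

A Gradio dropdown can then be populated directly from this list of file names.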
### Running the Demo
1. **Deploy to Hugging Face Spaces**:
- Create a new Space on Hugging Face (SDK: Gradio).
- Upload the contents of this `demo` folder to the Space repository.
- Upload your model checkpoints to the `checkpoints/` folder in the Space.
- `checkpoints/phase2_mmrm_best.pt`
- `checkpoints/phase1_roberta_finetuned.pt`
- `checkpoints/baseline_img.pt`
*Note: Even without checkpoints, the demo will run using the Zero-shot Baseline (downloaded automatically).*
2. **Local Testing**:
- Install requirements: `pip install -r requirements.txt`
- Run: `python app.py` (assuming you are inside the `demo` directory)
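
The checkpoint fallback noted in step 1 can be sketched as a helper that reports which models are servable given the files actually present (the checkpoint file names come from the list above; the helper itself is a hypothetical illustration, not the app's actual code):

```python
from pathlib import Path

# Checkpoint files the demo looks for (from the deployment step above).
CHECKPOINTS = {
    "MMRM": "phase2_mmrm_best.pt",
    "Textual Baseline": "phase1_roberta_finetuned.pt",
    "Visual Baseline": "baseline_img.pt",
}

def available_models(ckpt_dir="checkpoints"):
    """Return the models that can be served: the zero-shot GuwenBERT
    baseline (downloaded automatically, so always available) plus any
    trained model whose checkpoint file exists on disk."""
    ckpt_dir = Path(ckpt_dir)
    models = ["Zero-shot Baseline"]
    models += [name for name, fname in CHECKPOINTS.items()
               if (ckpt_dir / fname).is_file()]
    return models
```

The model-comparison dropdown can be built from this list, so a Space with no uploaded checkpoints still offers the zero-shot baseline.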