---
license: mit
datasets:
- de-Rodrigo/merit
language:
- en
- es
base_model:
- llava-hf/llava-1.5-7b-hf
pipeline_tag: image-text-to-text
---

# DONUT Merit

<a href="https://x.com/nearcyan/status/1706914605262684394">
  <div style="text-align: center;">
    <img alt="DragonHuggingFace" src="https://huggingface.co/de-Rodrigo/donut-merit/resolve/main/assets/dragon_huggingface.png" style="width: 200px;">
  </div>
</a>


## Model Architecture
**This model is a [LLaVA-1.5-7B](https://huggingface.co/llava-hf/llava-1.5-7b-hf) checkpoint fine-tuned on the Merit dataset for form-understanding tasks.**

- Backbone: [Llava](https://huggingface.co/llava-hf/llava-1.5-7b-hf)
- Training Data: [Merit](https://huggingface.co/datasets/de-Rodrigo/merit)

## Example Usage

The snippet below is a sketch of typical inference with the 🤗 Transformers LLaVA API. It assumes this checkpoint follows the standard `llava-1.5` prompt format and loads with `LlavaForConditionalGeneration`; it has not yet been verified against this model.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "de-Rodrigo/donut-merit"

# Load the processor and model (fp16 so the 7B weights fit on a single GPU)
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Standard LLaVA-1.5 prompt template with an <image> placeholder
prompt = "USER: <image>\nExtract the student grades from this form. ASSISTANT:"
image = Image.open("form.png")

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```

**WIP** 🛠️