---
library_name: transformers
tags: []
---

# Model Card for timesformer_GP_scroll1

The grand prize-winning model of the 2023 Vesuvius Challenge.
 



## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

The grand prize-winning model of the 2023 Vesuvius Challenge.
The model uses a small TimeSformer architecture trained on an image segmentation task to detect ink in 3D images.
It takes a 3D image as input and outputs a 2D map of ink detections, roughly 1/16 the size of the input.

- **Developed by:** Youssef Nader, as part of the Grand Prize-winning team
- **Model type:** TimeSformer
- **License:** MIT

### Model Sources 

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/younader/Vesuvius-Grandprize-Winner (archived)
- **Active development:** https://github.com/ScrollPrize/villa



### How to Get Started with the Model
Make sure the dependencies are installed, namely `transformers` and the <a href="https://github.com/lucidrains/TimeSformer-pytorch">timesformer-pytorch</a> package:
```bash
pip install -U transformers timesformer-pytorch
```

Next, you can load the model as follows:

```python
from transformers import AutoModel
model = AutoModel.from_pretrained("YoussefMoNader/timesformer_GP_scroll1", trust_remote_code=True)
```
The model expects an input tensor of shape `(B, 1, 26, 64, 64)`: a batch of single-channel volumes, each made of 26 depth slices of 64×64 pixels.
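
As a hedged sketch (the shapes are taken from the card; the preprocessing shown here is illustrative, not the team's official pipeline), an input batch can be assembled from a stack of depth slices like this:

```python
import torch

# Placeholder slices standing in for 26 grayscale 64x64 crops
# taken from a scroll volume at successive depths.
slices = [torch.zeros(64, 64) for _ in range(26)]

volume = torch.stack(slices)              # (26, 64, 64)
batch = volume.unsqueeze(0).unsqueeze(0)  # (1, 1, 26, 64, 64)

# With `model` loaded as shown above, inference would look roughly like:
# with torch.no_grad():
#     ink_map = model(batch)  # 2D ink map, roughly 1/16 the input size
```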
## Training Details
#### Hardware

The model was trained on 4×H100 GPUs for about 8 hours: 12 epochs in total, with a single epoch taking around 45 minutes using the original training script `train_timesformer_og.py`.

