Update README.md #2
by Chesebrough - opened

README.md CHANGED
@@ -42,11 +42,12 @@ Swin Transformer achieves strong performance on COCO object detection (58.7 box
# Videos

-
+[](https://www.youtube.com/watch?v=UjaeNNFf9sE)
MiDaS Depth Estimation is a machine learning model from Intel Labs for monocular depth estimation. It was trained on up to 12 datasets and covers both indoor and outdoor scenes. Multiple MiDaS models are available, ranging from high-quality depth estimation to lightweight models for mobile downstream tasks (https://github.com/isl-org/MiDaS).

+
## Model description

This MiDaS 3.1 DPT model uses the [SwinV2](https://huggingface.co/docs/transformers/en/model_doc/swinv2) model as its backbone and takes a different approach to vision than BEiT: Swin backbones focus on a hierarchical, shifted-window design.
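For readers who want to exercise this backbone outside the pipeline shown further down, here is a minimal sketch using the generic DPT classes in transformers. The checkpoint name Intel/dpt-swinv2-large-384 and the sample COCO image are taken from the pipeline snippet below; the surrounding code assumes the standard DPT depth-estimation usage pattern rather than anything specific to this model card.

```python
# Minimal sketch: direct use of the DPT + SwinV2 checkpoint without the pipeline.
# Assumes the standard transformers DPT classes; checkpoint name and test image
# are taken from the pipeline example in this README.
import requests
import torch
from PIL import Image
from transformers import DPTImageProcessor, DPTForDepthEstimation

url = "http://images.cocodataset.org/val2017/000000181816.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = DPTImageProcessor.from_pretrained("Intel/dpt-swinv2-large-384")
model = DPTForDepthEstimation.from_pretrained("Intel/dpt-swinv2-large-384")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Resize the predicted depth map back to the input resolution.
depth = torch.nn.functional.interpolate(
    outputs.predicted_depth.unsqueeze(1),
    size=image.size[::-1],
    mode="bicubic",
    align_corners=False,
).squeeze()
```

The SwinV2 backbone produces hierarchical feature maps that the DPT decoder fuses into a dense depth prediction; the interpolation step only brings that prediction back to the input resolution.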
@@ -120,6 +121,7 @@ depth
or one can use the pipeline API:
from transformers import pipeline

+```python
pipe = pipeline(task="depth-estimation", model="Intel/dpt-swinv2-large-384")
result = pipe("http://images.cocodataset.org/val2017/000000181816.jpg")
result["depth"]
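The dictionary returned by the depth-estimation pipeline typically exposes both a visualized depth image and the raw model output; a small follow-on sketch (the output file name is arbitrary):

```python
# Follow-on sketch: inspecting and saving the pipeline output.
# Assumes the usual depth-estimation pipeline keys: "depth" (a PIL image)
# and "predicted_depth" (a torch tensor).
result["depth"].save("depth_000000181816.png")  # save the visualized depth map
print(result["predicted_depth"].shape)          # raw depth tensor from the model
```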