---
license: mit
thumbnail: images/cover.png
tags: ['nuclei', 'light-field-microscopy', 'hylfm', 'image-reconstruction', 'fluorescence-light-microscopy', 'pytorch', 'biology']
language: [en]
library_name: bioimageio
---
# HyLFM-Net-stat

HyLFM-Net trained on static images of arrested medaka hatchling hearts. The network reconstructs a volumetric image from a given light-field.
# Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Bias, Risks, and Limitations](#bias-risks-and-limitations)
- [How to Get Started with the Model](#how-to-get-started-with-the-model)
- [Training Details](#training-details)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
# Model Details
## Model Description
- **model version:** 1.3
- **Additional model documentation:** [package/README.md](package/README.md)
- **Developed by:**
- Wagner, N., Beuttenmueller, F., Norlin, N. et al. Deep learning-enhanced light-field imaging with continuous validation. Nat Methods 18, 557–563 (2021). https://www.doi.org/10.1038/s41592-021-01136-0
- **Shared by:**
- Fynn Beuttenmueller, EMBL Heidelberg, [https://orcid.org/0000-0002-8567-6389](https://orcid.org/0000-0002-8567-6389), [https://github.com/fynnbe](https://github.com/fynnbe)
- **Model type:** HyLFM-Net
- **Modality:** fluorescence microscopy
- **Target structures:** medaka larvae heart
- **Task type:** volume reconstruction
- **License:** [MIT License](https://spdx.org/licenses/MIT.html)
## Model Sources
- **Repository:** [https://github.com/kreshuklab/hylfm-net](https://github.com/kreshuklab/hylfm-net)
- **Paper:** see [**Developed by**](#model-description)
# Uses
## Direct Use
This model is compatible with the bioimageio.spec Python package (version >= 0.5.7.1) and the bioimageio.core Python package, which supports model inference in Python code or via the `bioimageio` CLI.
```python
from bioimageio.core import predict

# run inference; the model is addressed by its bioimage.io resource ID
output_sample = predict(
    "huggingface/thefynnbe/ambitious-sloth/1.3",
    inputs={"lf": "<path or tensor>"},  # light-field input, see Technical Specifications
)
output_tensor = output_sample.members["prediction"]
xarray_dataarray = output_tensor.data          # xarray.DataArray with named axes
numpy_ndarray = output_tensor.data.to_numpy()  # plain NumPy array
```
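The returned prediction keeps singleton batch and channel axes. A minimal sketch of reducing it to a plain z-stack for downstream viewers, using a zero-filled NumPy stand-in for the real output tensor (so it runs without downloading the model):

```python
import numpy as np

# Stand-in for output_sample.members["prediction"].data.to_numpy();
# the real output has shape 1 x 1 x 49 x 244 x 284 (batch, channel, z, y, x).
prediction = np.zeros((1, 1, 49, 244, 284), dtype=np.float32)

# Drop the singleton batch and channel axes to obtain a plain z-stack.
zstack = np.squeeze(prediction, axis=(0, 1))
print(zstack.shape)  # (49, 244, 284)
```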
## Downstream Use
Specific bioimage.io partner tool compatibilities may be reported at [Compatibility Reports](https://bioimage-io.github.io/collection/latest/compatibility/#compatibility-by-resource).
Training (and fine-tuning) code may be available at https://github.com/kreshuklab/hylfm-net.
## Out-of-Scope Use
Out-of-scope uses have not been specified for this model; therefore these typical limitations should be considered:
- *Likely not suitable for diagnostic purposes.*
- *Likely not validated for different imaging modalities than present in the training data.*
- *Should not be used without proper validation on user's specific datasets.*
# Bias, Risks, and Limitations
In general bioimage models may suffer from biases caused by:
- Imaging protocol dependencies
- Use of a specific cell type
- Species-specific training data limitations
Common risks in bioimage analysis include:
- Erroneously assuming generalization to unseen experimental conditions
- Trusting (overconfident) model outputs without validation
- Misinterpretation of results
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# How to Get Started with the Model
You can use "huggingface/thefynnbe/ambitious-sloth/1.3" as the resource identifier to load this model directly from the Hugging Face Hub using bioimageio.spec or bioimageio.core.
See [bioimageio.core documentation: Get started](https://bioimage-io.github.io/core-bioimage-io-python/latest/get-started) for instructions on how to load and run this model using the `bioimageio.core` Python package or the bioimageio CLI.
# Training Details
## Training Data
This model was trained on `10.5281/zenodo.7612115`.
## Training Procedure
### Training Hyperparameters
- **Framework:** Pytorch State Dict
### Speeds, Sizes, Times
- **Model size:** 234.44 MB
# Environmental Impact
- **Hardware Type:** RTX 2080 Ti
- **Hours used:** 10.0
- **Cloud Provider:** EMBL Heidelberg
- **Compute Region:** Germany
- **Carbon Emitted:** 0.54 kg CO2e
# Technical Specifications
## Model Architecture and Objective
- **Architecture:** HyLFM-Net, a convolutional neural network for light-field microscopy volume reconstruction.
- **Input specifications:**
`lf`:
- Axes: `batch, channel, y, x`
- Shape: `1 × 1 × 1235 × 1425`
- Data type: `float32`
- Value unit: arbitrary unit
- Value scale factor: 1.0

- **Output specifications:**
`prediction`: predicted volume of fluorescence signal
- Axes: `batch, channel, z, y, x`
- Shape: `1 × 1 × 49 × 244 × 284`
- Data type: `float32`
- Value unit: arbitrary unit
- Value scale factor: 1.0
  - example: ![prediction sample](images/output_prediction_sample.png)
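The input specification above can be checked before calling `predict`. A hypothetical validation helper (plain NumPy; the function name and behavior are illustrative, not part of bioimageio.core):

```python
import numpy as np

EXPECTED_LF_SHAPE = (1, 1, 1235, 1425)  # batch, channel, y, x

def check_lf(lf: np.ndarray) -> np.ndarray:
    """Validate a light-field tensor against the input spec above."""
    if lf.dtype != np.float32:
        lf = lf.astype(np.float32)  # spec requires float32
    if lf.shape != EXPECTED_LF_SHAPE:
        raise ValueError(f"expected shape {EXPECTED_LF_SHAPE}, got {lf.shape}")
    return lf

# e.g. raw camera data often arrives as uint16 and needs conversion
lf = check_lf(np.zeros(EXPECTED_LF_SHAPE, dtype=np.uint16))
print(lf.dtype, lf.shape)  # float32 (1, 1, 1235, 1425)
```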
## Compute Infrastructure
### Hardware Requirements
- **Storage:** Model size: 234.44 MB
### Software
- **Frameworks:**
  - ONNX: opset version 15
  - Pytorch State Dict: 1.13
  - TorchScript: 1.13
- **Libraries:** None beyond the respective framework library.
- **BioImage.IO partner compatibility:** [Compatibility Reports](https://bioimage-io.github.io/collection/latest/compatibility/#compatibility-by-resource)
---
*This model card was created using the template of the bioimageio.spec Python Package, which in turn is based on the BioImage Model Zoo template, incorporating best practices from the Hugging Face Model Card Template. For more information on contributing models, visit [bioimage.io](https://bioimage.io).*
---
**References:**
- [Hugging Face Model Card Template](https://huggingface.co/docs/hub/en/model-card-annotated)
- [Hugging Face modelcard_template.md](https://github.com/huggingface/huggingface_hub/blob/b9decfdf9b9a162012bc52f260fd64fc37db660e/src/huggingface_hub/templates/modelcard_template.md)
- [BioImage Model Zoo Documentation](https://bioimage.io/docs/)
- [Model Cards for Model Reporting](https://arxiv.org/abs/1810.03993)
- [bioimageio.spec Python Package](https://bioimage-io.github.io/spec-bioimage-io)