# NASA GeneLab VisionTransformer on BPS Microscopy Data
We will include more technical details here soon.
[Sylvain Costes](https://www.nasa.gov/people/sylvain-costes/), NASA Ames Research Center<br>

## General:

This Vision Transformer model has been fine-tuned on BPS Microscopy data. We are currently working on an expansive optimisation and evaluation framework.

The images used are available here:
[Biological and Physical Sciences (BPS) Microscopy Benchmark Training Dataset](https://registry.opendata.aws/bps_microscopy/) or as a Huggingface dataset here:
[kenobi/GeneLab_BPS_BenchmarkData](https://huggingface.co/datasets/kenobi/GeneLab_BPS_BenchmarkData).

This is a Vision Transformer model trained on fluorescence microscopy images of individual nuclei from mouse fibroblast cells.
## ViT base training data (currently being replaced)

The ViT model was pretrained on a dataset consisting of 14 million images and 21k classes ([ImageNet-21k](http://www.image-net.org/)).

More information on the base model used can be found here: [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k).
## How to use this Model

(quick snippets to work on Google Colab)
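Until the fine-tuned GeneLab checkpoint's repo id is published here, the base model named above can stand in. The sketch below shows the usual `transformers` inference loop as it would run in Colab; note that the base checkpoint's classification head is randomly initialised, so its predictions are placeholders, and the random image is a stand-in for a real nucleus image from the dataset.

```python
# Sketch of the standard ViT inference loop in transformers, runnable in
# Colab. The base checkpoint below is a stand-in: its classification head
# is randomly initialised, so swap in the fine-tuned GeneLab repo id once
# it is published to get meaningful predictions.
import numpy as np
import torch
from PIL import Image
from transformers import ViTForImageClassification, ViTImageProcessor

checkpoint = "google/vit-base-patch16-224-in21k"  # base model named above
processor = ViTImageProcessor.from_pretrained(checkpoint)
model = ViTForImageClassification.from_pretrained(checkpoint)
model.eval()

# Random stand-in for a single-nucleus fluorescence microscopy image.
image = Image.fromarray(
    np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```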