---
title: README
emoji: 🚀
colorFrom: blue
colorTo: green
sdk: static
pinned: false

---

Welcome to ESM3Hub! ESM3Hub is a collaborative community that empowers biologists to create and train models without requiring advanced ML or coding expertise. Through ESM3-Play-V3, biologists can share their trained models for others to use directly or retrain further.

When using shared models, please cite the original publications where available.

ColabESM3:
<a href="https://colab.research.google.com/github/westlake-repl/SaprotHub/blob/main/colab/ESM-Play-V3.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" style="max-width: 100%;"></a>

ColabESMC:
<a href="https://colab.research.google.com/github/westlake-repl/SaprotHub/blob/main/colab/ESM-Play-VC.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" style="max-width: 100%;"></a>

For more details about the base model SaProt:
<a href="https://github.com/westlake-repl/SaProt"><img src="https://img.shields.io/github/stars/westlake-repl/SaProt?style=social&label=Code+Stars" style="max-width: 100%;"></a>

<!-- If you want to upload your model or dataset, please ensure that the model card and dataset card (README.md) follow the official format, as in [this model card](https://huggingface.co/SaProtHub/Model-EYFP_100K-650M) and [this dataset card](https://huggingface.co/datasets/SaProtHub/Dataset-Subcellular_Localization-DeepLoc). Your cards should include: -->

<!--
For dataset cards:
- A description of your task
- Splits
- Related paper (Optional)
- Meanings of each label (only for classification tasks)
- Others

For model cards:
- Base model description
- Task type
- Dataset description
- Model input type
- Performance
- LoRA config
- Training config
- Others
-->