lhallee committed · verified
Commit 056ceb5 · 1 Parent(s): 0986e60

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +35 -9
README.md CHANGED
@@ -174,16 +174,42 @@ We look at various ESM models and their throughput on an H100. Adding efficient
  The most gains will be seen with PyTorch > 2.5 on linux machines.
  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62f2bd3bdb7cbd214b658c48/RfLRSchFivdsqJrWMh4bo.png)
 
- ### Citation
- If you use any of this implementation or work please cite it (as well as the ESMC preprint).
 
  ```
- @misc {FastPLMs,
- author = { Hallee, Logan and Bichara, David and Gleghorn, Jason P.},
- title = { FastPLMs: Fast, efficient, protien language model inference from Huggingface AutoModel.},
- year = {2024},
- url = { https://huggingface.co/Synthyra/ESMplusplus_small },
- DOI = { 10.57967/hf/3726 },
- publisher = { Hugging Face }
  }
  ```
 
  The most gains will be seen with PyTorch > 2.5 on linux machines.
  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62f2bd3bdb7cbd214b658c48/RfLRSchFivdsqJrWMh4bo.png)
 
+ ### Citations
+
+ ```bibtex
+ @misc{FastPLMs,
+ author={Hallee, Logan and Bichara, David and Gleghorn, Jason P.},
+ title={FastPLMs: Fast, efficient, protein language model inference from Huggingface AutoModel.},
+ year={2024},
+ url={https://huggingface.co/Synthyra/ESMplusplus_small},
+ DOI={10.57967/hf/3726},
+ publisher={Hugging Face}
+ }
+ ```
 
+ ```bibtex
+ @article{hayes2024simulating,
+ title={Simulating 500 million years of evolution with a language model},
+ author={Hayes, Thomas and Rao, Roshan and Akin, Halil and Sofroniew, Nicholas J and others},
+ journal={bioRxiv},
+ year={2024}
+ }
  ```
+
+ ```bibtex
+ @article{dong2024flexattention,
+ title={Flex Attention: A Programming Model for Generating Optimized Attention Kernels},
+ author={Dong, Juechu and Feng, Boyuan and Guessous, Driss and Liang, Yanbo and He, Horace},
+ journal={arXiv preprint arXiv:2412.05496},
+ year={2024}
+ }
+ ```
+
+ ```bibtex
+ @inproceedings{paszke2019pytorch,
+ title={PyTorch: An Imperative Style, High-Performance Deep Learning Library},
+ author={Paszke, Adam and Gross, Sam and Massa, Francisco and Lerer, Adam and Bradbury, James and Chanan, Gregory and Killeen, Trevor and Lin, Zeming and Gimelshein, Natalia and Antiga, Luca and Desmaison, Alban and K{\"o}pf, Andreas and Yang, Edward and DeVito, Zach and Raison, Martin and Tejani, Alykhan and Chilamkurthy, Sasank and Steiner, Benoit and Fang, Lu and Bai, Junjie and Chintala, Soumith},
+ booktitle={Advances in Neural Information Processing Systems 32},
+ year={2019}
  }
  ```
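
This commit adds four BibTeX entries to the README. As a small, hypothetical sanity check (not part of the repository; the helper names `bibtex_keys` and `braces_balanced` are made up for illustration), one can confirm that such blocks balance their braces and expose distinct citation keys:

```python
import re

def bibtex_keys(text):
    """Extract citation keys, e.g. 'FastPLMs' out of '@misc{FastPLMs,'."""
    return re.findall(r"@\w+\s*\{\s*([^,\s]+)\s*,", text)

def braces_balanced(text):
    """Check that every '{' in the text has a matching '}' (BibTeX requires this)."""
    depth = 0
    for ch in text:
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

# Minimal stand-ins for the four entries this commit adds (fields trimmed).
entries = """
@misc{FastPLMs, year={2024}}
@article{hayes2024simulating, year={2024}}
@article{dong2024flexattention, year={2024}}
@inproceedings{paszke2019pytorch, year={2019}}
"""

print(bibtex_keys(entries))   # the four citation keys, in order
print(braces_balanced(entries))  # True
```

An unbalanced fragment such as `@misc{broken,` fails the brace check, which is the usual cause of BibTeX compile errors after hand-editing entries.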