Update README.md
Use this text2text model to find out what LLM instructions might be able to generate an arbitrary piece of code!
- Check out a [basic demo on Spaces](https://huggingface.co/spaces/pszemraj/generate-instructions)
- An example of how to use instructiongen models in a CLI script can be found [here](https://gist.github.com/pszemraj/8b0213e700763106074d3ac15d041c14)
- You can find other models fine-tuned for instruction generation by [searching for the instructiongen tag](https://huggingface.co/models?other=instructiongen)
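With Transformers installed, the model can be loaded through the `text2text-generation` pipeline. Note the checkpoint id below is a placeholder (substitute this repo's actual Hub id), and the generation parameters are illustrative rather than tuned settings:

```python
from transformers import pipeline

# Placeholder repo id — replace with this model's actual Hub id.
MODEL_ID = "pszemraj/bart-large-instructiongen"

generator = pipeline("text2text-generation", model=MODEL_ID)

code_snippet = "def add(a, b):\n    return a + b"
# Ask the model what instruction could plausibly have produced this code.
result = generator(code_snippet, max_length=96, num_beams=4)
print(result[0]["generated_text"])
```

Beam search (`num_beams=4`) tends to give more coherent instructions than greedy decoding for short inputs; see the linked CLI gist for a fuller example.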
## about
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the `pszemraj/fleece2instructions-codealpaca` dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9222
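Since the evaluation loss is a token-level cross-entropy (natural-log base, as reported by the Transformers trainer), it corresponds to a perplexity of roughly e^0.9222:

```python
import math

eval_loss = 0.9222  # evaluation loss reported above
perplexity = math.exp(eval_loss)  # cross-entropy -> perplexity
print(f"perplexity ≈ {perplexity:.2f}")  # → perplexity ≈ 2.51
```

In other words, on average the model is about as uncertain as choosing uniformly among ~2.5 tokens at each step of the generated instruction.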