| modelId (string, 4–81 chars) | tags (list) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0–59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51–438k chars) |
|---|---|---|---|---|---|---|
BrianTin/MTBERT
|
[
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11
| null |
---
license: cc-by-3.0
language:
- en
tags:
- not-for-all-audiences
---
<style>
.image_body {
background-color: #555575;
padding-left: 10px;
border-radius: 5px;
margin-top: -20px !important;
margin-bottom: 0px !important;
margin-left: 0px;
margin-right: 5px;
text-align: center;
display: block;
}
.image_boddy {
background-color: #555575;
border-radius: 0px 0px 5px 5px;
margin-right: 5px;
display: block
}
img {
  max-width: 90%;
  max-height: 90%;
  display: block;
  margin-left: auto;
  margin-right: auto;
  margin-top: 0px !important;
  margin-bottom: 2px; /* remove extra space below image */
}
details, .details {
background-color: #001e3c; padding-left: 5px; border-radius: 5px; margin-top: 5px; margin-bottom: 5px; padding-bottom: 2px; padding-top: 5px;
}
.image_sources {
text-align: center;
}
hr {
margin-top: 25px;
margin-bottom: 10px;
}
.image-container{
display: block;
}
.image-div{
display: inline-block;
width: 49%;
margin-left: -2px;
}
</style>
<div class="details"><b>Changelog</b>
* Removed BiC and other purpose-built NSFW models to prepare for migration to Kaggle setups.
* Added new models, but haven't added them to the repo itself yet.
</div>
**Reworking the README.md in a separate file right now (solarium/READMEwip.md)**
This is a personal collection of my LoRAs, which I use from a Google Colab notebook. <br>
<br>It's cloud-hosted, meaning I can freely access it and add models to it. <br>
<br>Will upload a Colab file for us soon(tm)
To copy this into a colab notebook:
```
!pip install huggingface_hub
import huggingface_hub
!git clone https://huggingface.co/Solarium/personal-lora.git /content/huggingface
%cd /content/huggingface
!git lfs install
!git lfs pull
```
None of the models here are by me; this is a collection. Images without a source were probably generated by me, or there was no prompt data on CivitAI.
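Once the repo is pulled, a LoRA can be applied with diffusers. Below is a minimal sketch, assuming a diffusers-based pipeline; the base checkpoint and the `example_lora.safetensors` filename are placeholders, not files this repo is confirmed to contain:
```python
# Minimal sketch (assumptions: diffusers pipeline, placeholder filenames).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder base model
    torch_dtype=torch.float16,
).to("cuda")

# /content/huggingface is the clone target from the cell above;
# weight_name is a placeholder for whichever LoRA file you want.
pipe.load_lora_weights("/content/huggingface", weight_name="example_lora.safetensors")

# cross_attention_kwargs scales the LoRA; most entries below recommend 0.5-0.8
image = pipe("1girl, solo", cross_attention_kwargs={"scale": 0.7}).images[0]
image.save("out.png")
```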
<details><summary>Personal Favourite Generations</summary>






[comment]: #"""
</details><br>
<details><summary>Table of Contents</summary>
[Anime Loras](#anime-loras)<br>
* [Genshin](#genshin-impact)<br>
* [Blue Archive](#blue-archive)<br>
* [Azur Lane](#azur-lane)<br>
* [Other Characters](#other-characters)<br>
* [Arknights](#arknights)<br>
[General Purpose](#general-purpose)
</details>
<br>
-----
# List of supported LoRAs.
This is the list of currently supported LoRAs.
## Anime Loras
LoRAs from anime / anime video games.
### Genshin Impact
[/]: #"Raiden"
<details>
<summary><b>Raiden Shogun</b></summary>
<br>
<details class="image_body"><summary>Images</summary>

</details>
<a href="https://civitai.com/models/11896/raiden-shogun-or-realistic-genshin-lora" target="_blank">CivitAI Page</a>
Generates alright images, though it can overfit.
</details>
[comment]: #"Hu-Tao"
<details>
<summary><b>Hu Tao & Ghosts</b></summary>
<br><details class="image_body"><summary class="image"><b>Images</b></summary>
<div class="image-div">
,%20boo%20tao,hat,%20red%20eyes,%20twintails,%20brown%20hair,%20solo,%20symbol-shaped%20pup.png)
</div>
<div class="image-div">

[Img Source](https://civitai.com/gallery/301002?reviewId=51284&infinite=false&returnUrl=%2Fmodels%2F7505%2Fhu-tao-or-genshin-impact)
</div>
</details>
[Source](https://civitai.com/models/7505/hu-tao-or-genshin-impact)
```Trigger Words:
Hu Tao (Genshin Impact)
Boo Tao
Symbol Shaped Pupils
```
Boo Tao triggers ghosts.
**Recommended weight is 0.6-0.8**
</details>
[comment]: #"Yae-Miko"
<details>
<summary><b>Yae Miko (Realistic)</b></summary>
<br><details class="image_body"><summary>Images</summary>
<div class="image-div">

</div>
<div class="image-div">

[Image Source w/prompt](https://civitai.com/gallery/343893?reviewId=58699&infinite=false&returnUrl=%2Fmodels%2F8484%2Fyae-miko-or-realistic-genshin)
</div>
<div class="image-div">

</div>
</details>
[Source](https://civitai.com/models/8484/yae-miko-or-realistic-genshin)
0.45-0.75 weight is best for good images. Higher weight can cause bad faces.
</details>
[comment]: #"Ganyu"
<details>
<summary><b>Ganyu</b></summary>
<br><details class="image_body"><summary>Images</summary>

</details>
[Source](https://civitai.com/models/11814/ganyu-genshin-impact-realistic-anime-lora)
Sometimes it does not give blue hair, and it sometimes overfits since it's not trained on ChilloutMix.
</details>
[comment]: #"Nilou"
<details>
<summary><b>Nilou</b></summary>
<br><details class="image_body"><summary>Images</summary>

</details>
[CivitAI Source](https://civitai.com/models/5367/tsumasaky-nilou-genshin-impact-lora)
Can overfit past strength 1; 0.6-0.8 is possibly better.
Update from testing: this model does not fare well with photorealistic models, even at 0.7 strength.
</details>
[comment]: #"Noelle"
<details><summary><b>Noelle</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/gallery/164493?reviewId=24724&infinite=false&returnUrl=%2Fmodels%2F9071%2Fgenshin-impact-noelle)
</details>
[CivitAI Source](https://civitai.com/models/9071/genshin-impact-noelle)
Possibly overfits at 1?
</details>
[comment]: #"Keqing"
<details><summary><b>Keqing</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Img Source](https://civitai.com/gallery/384474?reviewId=65994&infinite=false&returnUrl=%2Fmodels%2F15699%2Fkeqing-or-genshin-impact-or-3in1-lora-and-locon)

</details>
[CivitAI Source](https://civitai.com/models/15699/keqing-or-genshin-impact-or-3in1-lora-and-locon)
Works pretty well.
0.85-1 weight for official costumes, 0.7-0.85 for custom costumes.
<details><summary>Additional Prompt Info</summary>
For triggering Piercing Thunderbolt (default costume):
`keqing (piercing thunderbolt) (genshin impact), keqing (genshin impact), pantyhose, hair bun, purple hair, gloves, twintails, long hair, purple eyes, diamond-shaped pupils, bare shoulders, hair ornament, black pantyhose, cone hair bun, detached sleeves, dress, jewelry, medium breasts, earrings, bangs, frills, purple dress, black gloves, braid, skirt,`
For triggering Opulent Splendor:
`keqing (opulent splendor) (genshin impact), keqing (genshin impact), official alternate costume, dress, cone hair bun, night, strapless dress, looking at viewer, long hair, cleavage, black dress, hair bun, strapless, bare shoulders, purple hair, bangs, bow, detached collar, purple eyes, diamond-shaped pupils, ribbon, double bun, twintails, two-tone dress, medium breasts, hair ornament, black bow, hair ribbon, blue dress, bowtie, hair between eyes,`
For triggering Lantern Rite:
`keqing (lantern rite) (genshin impact), keqing (genshin impact), hair bun, skirt, scarf, purple sweater, white skirt, purple hair, sweater, twintails, purple eyes, diamond-shaped pupils, hair ornament, bare shoulders, smile, breasts, cone hair bun, long hair, belt, double bun, long sleeves, bangs, bow, hair flower, hair bow, ribbon, hair ribbon, braid, plaid scarf, plaid, off shoulder`
Bonus, for triggering any custom outfit:
`keqing (genshin impact), hair bun, purple hair, twintails, purple eyes, diamond-shaped pupils, hair ornament, cone hair bun, long hair, double bun, bangs, bow, hair flower, hair bow, hair ribbon, [Explain in detail what clothes you would like to trigger......]`
</details>
</details>
[comment]: #"Kokomi"
<details><summary><b>Kokomi</b></summary><br>
<details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/gallery/167455?reviewId=25228&infinite=false&returnUrl=%2Fmodels%2F9854%2Fsangonomiya-kokomi-genshin-impact)
[Yeah](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/cf8495ba-70bc-4b02-37ba-c3e7dc3fa900/width=1024/240690)
[NSFW Image Source](https://civitai.com/gallery/240690?reviewId=39578&infinite=false&returnUrl=%2Fmodels%2F9854%2Fsangonomiya-kokomi-genshin-impact)
[Yeah](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/6b51be02-2af3-4465-6a48-cb4231b5d400/width=1024/240687)
[NSFW Image Source](https://civitai.com/gallery/240687?reviewId=39578&infinite=false&returnUrl=%2Fmodels%2F9854%2Fsangonomiya-kokomi-genshin-impact)
[NSFW Image Source](https://huggingface.co/Solarium/personal-lora/resolve/main/Attachments/4-5-23/00020-950810996.png)

</details>
[CivitAI Source](https://civitai.com/models/9854/sangonomiya-kokomi-genshin-impact)
</details>
[comment]: #"Eula-Realistic"
<details><summary><b>Eula (Realistic)</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/gallery/287129?reviewId=48611&infinite=false&returnUrl=%2Fmodels%2F20644%2Feula-or-realistic-genshin-lora)

[Image Source](https://civitai.com/gallery/302794?reviewId=51628&infinite=false&returnUrl=%2Fmodels%2F20644%2Feula-or-realistic-genshin-lora)
</details>
[CivitAI Source](https://civitai.com/models/20644/eula-or-realistic-genshin-lora)<br>
Recommended weights: 0.5-0.75
</details>
[comment]: #"Faruzan"
<details><summary><b>Faruzan</b></summary>
<br><details class="image_body"><summary>Images</summary>
.png)
[Image Source](https://civitai.com/gallery/111585?modelId=9828&modelVersionId=11675&infinite=false&returnUrl=%2Fmodels%2F9828%2Ffaruzan-genshin-impact)

[Image Source](https://civitai.com/gallery/197353?reviewId=30724&infinite=false&returnUrl=%2Fmodels%2F9828%2Ffaruzan-genshin-impact)
[comment]: ",%20on%20back,%20spread%20pussy,%20show%20pussy,%20detailed%20hand,_masterpiece,%20best%20quality,%20dohnastyle,%201girl,%20solo,%20hair%20ornam.png)"
[hidden Image Source](https://civitai.com/gallery/127701?reviewId=17847&infinite=false&returnUrl=%2Fmodels%2F9828%2Ffaruzan-genshin-impact)
</details>
[CivitAI](https://civitai.com/models/9828/faruzan-genshin-impact)<br>
Trigger Words: 1girl, solo, hair ornament, green hair, twintails, long hair, dress, water
</details>
[comment]: #"Kamisato-Ayaka"
<details><summary><b>Kamisato Ayaka (Springbloom Missive)</b></summary>
<br><details class="image_body"><summary>Images</summary>

[NSFW Image Source](https://civitai.com/gallery/229462?reviewId=37194&infinite=false&returnUrl=%2Fmodels%2F12566%2Fkamisato-ayaka-springbloom-missive-or-genshin-impact-or-3in1-lora)

</details>
[CivitAI Source](https://civitai.com/models/12566/kamisato-ayaka-springbloom-missive-or-genshin-impact-or-3in1-lora)
Recommended: 0.9-1 strength for triggering any official costume.
For changing custom clothes, I suggest 0.7-0.85 weight.
<details><summary>Prompt Information</summary>
For triggering Springbloom Missive:
`kamisato ayaka (springbloom missive), kamisato ayaka, official alternate hairstyle, official alternate costume, blunt bangs, butterfly hair ornament, hair flower, blue dress, grey eyes, light blue hair, mole under eye`
For triggering Heytea (school uniform):
`kamisato ayaka (heytea), kamisato ayaka, official alternate costume, ponytail, serafuku, blunt bangs, hair bow, black bow, hair ribbon, red ribbon, school uniform, sailor shirt, sailor collar, pleated skirt, grey eyes, light blue hair, mole under eye`
For triggering Flawless Radiance (default costume):
`kamisato ayaka (flawless radiance), kamisato ayaka, official costume, ponytail, kote, kusazuri, blunt bangs, hair ribbon, red ribbon, japanese armor, grey eyes, light blue hair, mole under eye`
Bonus, for triggering any custom outfit:
`kamisato ayaka, blunt bangs, hair bow, hair ribbon, red ribbon, light blue hair, grey eyes, mole under eye, ponytail, [Explain in detail what clothes you would like to trigger......]`
</details>
</details>
[comment]:#"lumine-genshin"
<details><summary><b>Lumine</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/455341?period=AllTime&sort=Newest&view=categories&modelVersionId=41292&modelId=35028&postId=135445)

[Image Source](https://civitai.com/images/455340?period=AllTime&sort=Newest&view=categories&modelVersionId=41292&modelId=35028&postId=135445)
</details>
[CivitAI Source](https://civitai.com/models/35028/lumine-genshin-impact-or-character-lora-1200)
Prompts:
luminedef, luminernd
</details>
### Blue Archive
[comment]: #"Toki"
<details><summary><b>Toki (Maid)</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/285011?modelVersionId=25916&prioritizedUserIds=201506&period=AllTime&sort=Most+Reactions&limit=20)


</details>
[CivitAI Source](https://civitai.com/models/17275/toki-bluearchive)<br>
Prompt Data: 1girl, maid, maid headdress, blonde hair, maid apron, blue eyes
</details>
[comment]: #"Hayase-Yuuka"
<details><summary><b>Hayase Yuuka</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/130140?modelVersionId=13456&prioritizedUserIds=165835&period=AllTime&sort=Most+Reactions&limit=20)

[Image Source](https://civitai.com/images/130142?modelVersionId=13456&prioritizedUserIds=165835&period=AllTime&sort=Most+Reactions&limit=20)
</details>
[CivitAI Source](https://civitai.com/models/11366/hayase-yuuka-blue-archiveor)
</details>
[comment]: #"Ajitani-Hifumi"
<details><summary><b>Ajitani Hifumi</b></summary><br>
<details class="image_body"><summary>Images</summary>
,%20(ulzzang-6500_0.2),%201girl,%20[_(detailed%20face_1.2)_0.2],%20hifumi,%20%20long_hair,%20l.png)
[Image Source](https://civitai.com/images/101542?modelId=7050&postId=88572&id=7050&slug=bluearchiveajitanihifumi)

[Image Source](https://civitai.com/images/78405?modelId=7050&postId=60082&id=7050&slug=bluearchiveajitanihifumi)
</details>
[CivitAI Source](https://civitai.com/models/7050/bluearchiveajitanihifumi)
</details>
### Azur Lane
<details><summary><b>Takao</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/156691?modelVersionId=12383&prioritizedUserIds=99603&period=AllTime&sort=Most+Reactions&limit=20)

</details>
[CivitAI Source](https://civitai.com/models/10189/takao-azur-lane-full-lora)
</details>
<details><summary><b>Yet-San (White Dress)</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/329967?modelId=7440&postId=73602&id=7440&slug=yet-san-azur-lane-white-dress)

[Image Source](https://civitai.com/images/142922?modelId=7440&postId=95886&id=7440&slug=yet-san-azur-lane-white-dress)
</details>
[CivitAI Source](https://civitai.com/models/7440/yet-san-azur-lane-white-dress)
</details>
[comment]: #"Le-Malin"
<details><summary><b>Le Malin</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/108857?modelVersionId=11336&prioritizedUserIds=59833&period=AllTime&sort=Most+Reactions&limit=20)
![Yeah]()
[Image Source]()

</details>
[CivitAI Source](https://civitai.com/models/6899/le-malin-azur-lane-all-skins-9mb-update)
<br> Trigger Words: lemalinmuse, lemalindefault, lemalinlapin, lemalinsundress, lemalinswimsuit
</details>
[comment]: #"Sirius-maid"
<details><summary><b>Sirius (Maid)</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/292822?modelId=17340&postId=107772&id=17340&slug=sirius-azur-lane-maid-outfit)

[Image Source](https://civitai.com/images/292827?modelId=17340&postId=107772&id=17340&slug=sirius-azur-lane-maid-outfit)




</details>
[CivitAI Source](https://civitai.com/models/17340/sirius-azur-lane-maid-outfit)
<br>Trigger Words: maidsirius, sirius_(azur_lane)
</details>
[comment]: "#Formidable-swimsuit"
<details><summary><b>Formidable (Swimsuit)</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/355299?modelId=21142&postId=121197&id=21142&slug=formidable-azur-lane-swimsuit)

[Image Source](https://civitai.com/images/285048?modelId=21142&postId=105795&id=21142&slug=formidable-azur-lane-swimsuit)
</details>
[CivitAI Source](https://civitai.com/models/21142/formidable-azur-lane-swimsuit)
<br>Trigger Words: white_single_thighhigh, blue_bikini, formidableswim
</details>
[comment]: #"Perseus"
<details><summary><b>Perseus (Maid)</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/90758?modelId=7659&postId=93143&id=7659&slug=perseus-azur-lane-nurse)
),%20%20,.png)
[Image Source](https://civitai.com/images/86073?modelVersionId=8993&prioritizedUserIds=162367&period=AllTime&sort=Most+Reactions&limit=20)

</details>
[CivitAI Source](https://civitai.com/models/7659/perseus-azur-lane-nurse)
<br>Suggested Prompt: pink hair, pink eyes, pink gloves, twintail, nurse cap, center opening
</details>
[comment]: #"Indomitable-(maid)"
<details><summary><b>Indomitable (Maid)</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/407479?modelVersionId=36555&prioritizedUserIds=162367&period=AllTime&sort=Most+Reactions&limit=20)

[Image Source](https://civitai.com/images/407919?modelId=7425&postId=125410&id=7425&slug=indomitable-azur-lane-maid-costume)
</details>
[CivitAI Source](https://civitai.com/models/7425/indomitable-azur-lane-maid-costume)
<br> Trigger Word: indomitablemaid
</details>
### Arknights
[comment]: #"texas-the-amertosa"
<details><summary><b>Texas: The Amertosa</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/158837?modelId=6779&postId=94383&id=6779&slug=arknights-texas-the-omertosa)

[Image Source](https://civitai.com/images/136776?modelId=6779&postId=94029&id=6779&slug=arknights-texas-the-omertosa)

[Image Source](https://civitai.com/images/75141?modelId=6779&postId=66295&id=6779&slug=arknights-texas-the-omertosa)
</details>
[CivitAI Source](https://civitai.com/models/6779/arknights-texas-the-omertosa)
<br>Weight 0.7-0.85 can achieve good results.
</details>
[comment]: #"Scadi-the-corrupting-heart"
<details><summary><b>Scadi: The Corrupting Heart</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/131462?modelId=6135&postId=93284&id=6135&slug=arknights-skadi-the-corrupting-heart)

[Image Source](https://civitai.com/images/274594?modelId=6135&postId=102723&id=6135&slug=arknights-skadi-the-corrupting-heart)

</details>
[CivitAI Source](https://civitai.com/models/6135/arknights-skadi-the-corrupting-heart)
<br>Weight 0.8-1.0; you can adjust the prompt to restore the character.
Use redhat, redskadi or whitehead, whiteskadi to trigger the two different costumes.
</details>
### Honkai Impact
[comment]: #"bronya-zaychik-(silverwing-n-ex)"
<details><summary><b>Bronya Zaychik (Silverwing n-ex)</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/196033?modelVersionId=18813&prioritizedUserIds=194092&period=AllTime&sort=Most+Reactions&limit=20)

[Image Source](https://civitai.com/images/196089?modelVersionId=18813&prioritizedUserIds=194092&period=AllTime&sort=Most+Reactions&limit=20)
</details>
[CivitAI Source](https://civitai.com/models/15936/bronya-zaychik-silverwing-n-ex-or-honkai-impact-3rd-or-lora-and-locon)<br><br>
For triggering Silverwing N-EX:
`bronya zaychik (silverwing n-ex), bronya zaychik, red pupils, breasts, long hair, dress, grey hair, cleavage, bangs, grey eyes, jewelry, single glove, earrings, white dress, gloves, bare shoulders, sleeveless dress, drill hair, sleeveless, single sleeve, hair between eyes, large breasts, twin drills, white sleeves, hair ornament, single pauldron`
Bonus, for triggering any custom outfit (experimental):
`bronya zaychik, red pupils, breasts, long hair, grey hair, bangs, grey eyes, earrings, drill hair, twin drills, hair between eyes, large breasts, hair ornament, [Explain in detail what clothes you would like to trigger......]`
You may consider removing "red pupils" if you hate it.
Recommended: 0.85-1 strength for triggering any official costume.
For changing custom clothes, I suggest 0.7-0.85 weight.
</details>
### Re-Zero
[comment]:#"Rem"
<details><summary><b>Rem</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/116084?period=AllTime&sort=Newest&modelId=9951&postId=87150)

[Image Source](https://civitai.com/images/113010?period=AllTime&sort=Newest&modelId=9951&postId=64846)
</details>
[CivitAI Source](https://civitai.com/models/9951/rem-rezero)
Trigger Words: rem_re_zero
It is a character LoRA of Rem from Re:Zero, trained on the AbyssOrangeMix2_hard model.
I highly recommend adding "blue hair" to all prompts.
For the version with the horn, add the tag "horn".
</details>
[comment]:#"Ram"
<details><summary><b>Ram</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/281194?period=AllTime&sort=Newest&modelId=10631&postId=104858)

[Image Source](https://civitai.com/images/281194?period=AllTime&sort=Newest&modelId=10631&postId=104858)
</details>
[CivitAI Source](https://civitai.com/models/10631/ram-rezero)
Trigger Words: Ram_ReZero
</details>
[comment]:#"Emilia-Re-zero"
<details><summary><b>Emilia</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/252230?period=AllTime&sort=Newest&modelId=10789&postId=120590)

[Image Source](https://civitai.com/images/314459?period=AllTime&sort=Newest&modelId=10789&postId=117289)
</details>
[CivitAI Source](https://civitai.com/models/10789/emilia-rezero)
Trigger Words: emilia
DO NOT USE Emilia_\(re:zero\); it makes the results a lot worse (mainly just the clothing, so if you're going for a different outfit it may help).
Emilia from Re:Zero. Best weights: 0.6-1.
Some images were of her in her alternate outfits, like her drunk appearance in Memory Snow and her morning appearance in one of the earlier episodes; I might do her swimsuit from Lost in Memories and her cat hood at some point.
Normal outfit: white dress, wide sleeves, frills, pleated skirt, detached sleeves, hair flower, ribbon, hair ribbon, hair ornament
Morning outfit: short dress, red ribbon, zettai ryouiki, white thighhighs,
Drunk outfit: purple and blue dress, drunk, messy_hair.
Base hairstyle: blunt bangs, crown braid, white hair (or purple),
Capable of NSFW.
grey_hair, purple_eyes, thighhighs, hair_ornament, pointy_ears, blush, grey_hair, long_hair, cleavage, flower, x_hair_ornament, hair_flower, wide_sleeves, pleated_skirt, medium_breasts, skirt, bare_shoulders, white_thighhighs
It might have a rainbow highlight in the hair; try putting it in the negative prompt (although it still popped up usually).
</details>
### Vtubers
[comment]:#"amatsuka-uto-vtuber"
<details><summary><b>Amatsuka Uto</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/153774?period=AllTime&sort=Newest&view=categories&modelVersionId=15433&modelId=13102&postId=67666)
</details>
[CivitAI Source](https://civitai.com/models/13102/amatsuka-uto-or-character-lora-218)
No trigger words; it seems to work as long as the LoRA is loaded.
</details>
[comment]: #"chloe-sakamata"
<details><summary><b>Chloe Sakamata </b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/342468?period=AllTime&sort=Newest&modelId=25221&postId=78652)

[Image Source](https://civitai.com/images/498282?period=AllTime&sort=Newest&modelId=25221&postId=145460)
</details>
[CivitAI Source](https://civitai.com/models/25221/sakamata-chloe-hololive)
LoRA weight 0.6 is best.
</details>
[comment]:#"Minato-Aqua"
<details><summary><b>Minato Aqua (Maid) </b></summary>
<br><details class="image_body"><summary>Images</summary>

[NSFW Image Source](https://civitai.com/images/458181?period=AllTime&sort=Newest&modelId=17816&postId=136064)

[Image Source](https://civitai.com/images/222933?period=AllTime&sort=Newest&modelId=17816&postId=74662)
</details>
[CivitAI Source](https://civitai.com/models/17816/minato-aqua-maidver-hololive)
triggers: 1girl, minato_aqua
weight 0.5 is best.
</details>
[comment]:#"kizuna-ai"
<details><summary><b>Kizuna AI </b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/415922?period=AllTime&sort=Newest&modelId=31098&postId=127148)

[Image Source](https://civitai.com/images/428121?period=AllTime&sort=Newest&modelId=31098&postId=129209)
</details>
[CivitAI Source](https://civitai.com/models/31098/kizuna-ai-kizuna-ai-inc-vtuber)
Trigger word: kizuna ai
Weight 0.6 is best.
Sample prompt:
```kizuna ai, long hair, brown hair, multicolored hair, short shorts, floating hair, pink hairband, white shorts, detached sleeves, navel, sailor collar, streaked hair, pink hair, blue eyes, white thighhighs, medium breasts, lace-trimmed sleeves, sleeveless shirt, striped, white sailor collar, bowtie, hair bow, swept bangs, white shirt, lace-trimmed legwear```
</details>
### Other Characters
<details><summary><b>Pyra/Mythra/Pneuma (Xenoblade)</b></summary><br>
<details class="image_body"><summary>Images</summary>
,%201girl,%20absurdres,%20bangs,%20breasts,%20chest%20jewel,%20earrings,%20gem,%20gloves,%20greek%20text,%20green%20eyes,%20green%20hair,.png)
[Image Source](https://civitai.com/images/51349?modelVersionId=6009&prioritizedUserIds=53515&period=AllTime&sort=Most+Reactions&limit=20)

[Image Source](https://civitai.com/images/335208?modelId=5179&postId=112165&id=5179&slug=pyra-mythra-pneuma-xenoblade-lora)
</details>
[CivitAI Source](https://civitai.com/models/5179/pyra-mythra-pneuma-xenoblade-lora)
Triggers are pyra \(xenoblade\), mythra \(xenoblade\), and pneuma \(xenoblade\). As for the weight, I don't think it's wise to go over 0.5 (check the matrix at the end of the gallery); the sweet spot seems to be between 0.4 and 0.6.
</details>
[comment]: #"Aqua"
<details><summary><b>Aqua (Konosuba)</b></summary><br>
<details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/67082?modelId=5902&postId=88616&id=5902&slug=aqua-konosuba-lora)

[Image Source](https://civitai.com/images/140638?modelId=5902&postId=95153&id=5902&slug=aqua-konosuba-lora)
</details>
[CivitAI Source](https://civitai.com/models/5902/aqua-konosuba-lora)<br>
Should work on most models, mainly anime ones. Trigger is aqua \(konosuba\) plus descriptive tags; weight around 0.6.
</details>
[comment]: #"Asuna-SAO"
<details><summary><b>Asuna (Sword Art Online)</b></summary>
<br><details><summary>Images</summary>

[Image Source](https://civitai.com/images/234711?modelId=4718&postId=116554&id=4718&slug=asuna-lora)
![Yeah]()
[Image Source]()
</details>
[CivitAI Source](https://civitai.com/models/4718/asuna-lora)<br>
Made another LoRA. For this one I suggest Abyss Orange Mix or Midnight Mixer (Melt or V2), and a weight of around 0.6. Mainly for models that can't do an accurate Asuna on their own.
<br><br> Trigger words: asuna \(sao\), asuna
</details>
[comment]: #"Quinella"
<details><summary><b>Quinella (Sword Art Online)</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/186996?modelId=12256&postId=99681&id=12256&slug=quinella-sao-or-character-lora-218)
![Yeah]()
[Image Source]()
</details>
[CivitAI Source](https://civitai.com/models/12256/quinella-sao-or-character-lora-218)
</details>
[comment]: #"Nakano-Nino"
<details><summary><b>Nakano Nino (Quintessential Quintuplets)</b></summary>
<br><details class="image_body"><summary>Images</summary>
![Yeah]()
[Image Source]()
![Yeah]()
[Image Source]()
</details>
[CivitAI Source](https://civitai.com/models/23854/nakano-nino-in-the-quintessential-quintuplets-or-realistic-lora)
Weight: 0.9
I recommend using these tags: pleated_skirt, shirt, green_skirt
</details>
[comment]:#"Lacus-Clyne"
<details><summary><b>Lacus Clyne (Gundam Seed)</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/317036?period=AllTime&sort=Newest&view=categories&modelVersionId=20487&modelId=16641&postId=100456)

[Image Source](https://civitai.com/images/216845?period=AllTime&sort=Newest&view=categories&modelVersionId=20487&modelId=16641&postId=83513)
</details>
[CivitAI Source](https://civitai.com/models/16641/lacus-clyne-or-character-lora-736)
LoRA trained on 736 images of Lacus Clyne.
Outfit calls:
rnd1, stri1, purbla1, whi1
stri1 doesn't work too well, though.
</details>
[comment]:#"Rory-mercury"
<details><summary><b>Rory Mercury (Gate)</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/331515?period=AllTime&sort=Newest&view=categories&modelVersionId=29338&modelId=6500&postId=78198)
</details>
[CivitAI Source](https://civitai.com/models/6500/rory-mercury-or-2-outfits-or-character-lora-368)
def1 for the default outfit.
wrg1 for the war dress; it sometimes doesn't generate, but when it does the result is decent.
rnd1 for random; it should help with outfits, but this time it is somewhat baked in, so prompt appropriately to account for it.
</details>
[comment]:#"marie-rose"
<details><summary><b>Marie Rose</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/214163?period=AllTime&sort=Most+Reactions&view=categories&modelVersionId=15742&modelId=13360&postId=111549)

[Image Source](https://civitai.com/images/157870?period=AllTime&sort=Most+Reactions&view=categories&modelVersionId=15742&modelId=13360&postId=68023)
</details>
[CivitAI Source](https://civitai.com/models/13360/marie-rose)
Must use these tags: marie rose, blonde hair, twintails, hair between eyes, sidelocks, black dress
LoRA strength between 0.6-0.8.
</details>
[comment]:#"Rin-Fate"
<details><summary><b>Rin (Fate)</b></summary>
<br><details class="image_body"><summary>Images</summary>
,%201girl,%20tohsaka%20rin,%20solo,%20long%20hair,%20thighhighs,%20skirt,%20blue%20eyes,%20two%20side%20up,.jpeg)
[Image Source](https://civitai.com/images/141166?period=AllTime&sort=Most+Reactions&view=categories&modelVersionId=13941&modelId=11798&postId=95403)

[Image Source](https://civitai.com/images/135406?period=AllTime&sort=Most+Reactions&view=categories&modelVersionId=13941&modelId=11798&postId=65484)
</details>
[CivitAI Source](https://civitai.com/models/11798/tohsaka-rin-fate)
1girl, tohsaka rin, solo, long hair, thighhighs, skirt, blue eyes, black thighhighs, black hair
school uniform
1girl, solo, long hair, skirt, pantyhose, tohsaka rin, shirt, black skirt, white shirt, homurahara academy school uniform, open clothes, ribbon, coat, blue eyes, red coat, brown vest, long sleeves, collared shirt, black pantyhose, school uniform, bangs, vest, brown hair, open coat
</details>
[comment]:#"test"
<details><summary><b>Hestia-(DanMachi)</b></summary>
<br><details class="image_body"><summary>Hestia (DanMachi)</summary>
,((cute)),hotel%20room,professional%20lighting,%20photon%20mapping,%20radiosity,%20physica.jpeg)
[Image Source](https://civitai.com/images/114852?period=AllTime&sort=Most+Reactions&view=categories&modelVersionId=11083&modelId=9336&postId=91862)

[Image Source](https://civitai.com/images/206850?period=AllTime&sort=Most+Reactions&view=categories&modelVersionId=11083&modelId=9336&postId=109349)
</details>
[CivitAI Source](https://civitai.com/models/9336/hestia-danmachi-lora)
</details>
-----
# General-Purpose LoRAs
LoRAs that serve purposes other than portraying characters.
## Concept Loras
Loras that teach concepts.
[comment]: #"Blank-eyes(hypnotized)"
<details><summary><b>Blank Eyes (Hypnotized)</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/108527?modelId=8275&postId=64087&id=8275&slug=empty-eyes-lora-utsurome-hypnosis)

[Image Source](https://civitai.com/images/108534?modelId=8275&postId=64087&id=8275&slug=empty-eyes-lora-utsurome-hypnosis)
</details>
**LoRa strength comparison**

[Source](https://civitai.com/images/108535?modelId=8275&postId=64087&id=8275&slug=empty-eyes-lora-utsurome-hypnosis)
When using this LoRA, results may be better if you redraw only the face or eyes using inpainting.
<br>[CivitAI Source](https://civitai.com/models/8275/empty-eyes-lora-utsurome-hypnosis)
</details>
[comment]: #"tentacle-nsfw"
<details><summary><b>Tentacles (user request)</b></summary>
<br><details class="image_body"><summary>Images (NSFW)</summary>

[Image Source](https://civitai.com/images/154476?modelVersionId=15496&prioritizedUserIds=165596&period=AllTime&sort=Most+Reactions&limit=20)
![Yeah]()
[Image Source]()
</details>
[CivitAI Source](https://civitai.com/models/11886/nsfwtentacles-lora-or)
<br>Trigger: tentacles
</details>
[comment]:#"goth-girl-concept"
<details><summary><b>Goth Girl (outfit+concept)</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/314194?modelVersionId=27954&prioritizedUserIds=147680&period=AllTime&sort=Most+Reactions&limit=20)

[Image Source](https://civitai.com/images/314197?period=AllTime&sort=Newest&modelVersionId=27954&modelId=23410&postId=79275)
</details>
[CivitAI Source](https://civitai.com/models/23410/goth-gals)
Trigger Words: wearing a gothgal outfit, gothgal
Personal notes: Will probably generate eyeliner causing odd eyes?
</details>
## Body/Likeness LoRAs
Loras that strive to copy realistic appearances.
[comment]:#"breast-in-class"
<details><summary><b>BiC</b></summary>
<br><details class="image_body"><summary>Images</summary>

[NSFW Image Source](https://civitai.com/images/436793?modelId=9025&postId=131060&id=9025&slug=breastinclass-better-bodies)

[NSFW Image Source](https://civitai.com/images/383331?modelId=9025&postId=100959&id=9025&slug=breastinclass-better-bodies)
</details>
[CivitAI Source](https://civitai.com/models/9025/breastinclass-better-bodies)<br><br>
This is a LORA that I use in text2img to get better NUDE body types out of generated images.
It's best at low weights, with accompanying keywords, so when models do get naked they're not saggy.
**What it is not**
It probably has no effect on clothed models, and yes, you must include it in your prompt in order for it to work (saddened I must tell you this).
It does not bias towards asians. Chilloutmix just happens to be very good at them. The models used are all white since v1.0.
**Tips:**
* This is a fairly strong LORA best used in conjunction with low weights and other keywords like "naked".
* Very high weights can cause the perspective to shift down as all the images crop out the face.
* Adding "small breast" or "huge breast" can add more size variation, keeping the shape.
* Adding stubborn loras or conflicting terms (e.g. "shirt") to the prompt can burn the image.
</details>
[comment]:#"Sakurai-Ningning"
<details><summary><b>Sakurai Ningning</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/154015?period=AllTime&sort=Most+Reactions&view=categories&modelVersionId=15457&modelId=13119&postId=67157)

[Image Source](https://civitai.com/images/191740?period=AllTime&sort=Most+Reactions&view=categories&modelVersionId=15457&modelId=13119&postId=101565)
</details>
[CivitAI Source](https://civitai.com/models/13119/or-or-ningning-or-lora)
Lovable face.
Good at using bandages.
Cute and easy to use.
This LoRA has a very magical feature that allows your character to use Band-Aids.
The facial likeness is reproduced very well.
Tag: ningning
Trained on ChilloutMix-Ni.
All training pictures are from the internet.
Do not use this model in commercials!
</details>
[comment]:#"Enako"
<details><summary><b>Enako</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/319386?period=AllTime&sort=Most+Reactions&view=categories&modelVersionId=28372&modelId=22034&postId=70522)

[Image Source](https://civitai.com/images/393285?period=AllTime&sort=Most+Reactions&view=categories&modelVersionId=28372&modelId=22034&postId=118940)
</details>
[CivitAI Source](https://civitai.com/models/22034/enako)
Please do not use this model to make real/3D nude pictures, or for pornographic or indecent use.
When used with ChilloutMix, you need to add to the prompt: (round face:1.2), (round chin:1.2); and to the negative prompt: (pointed chin)
</details>
[comment]:#"ad1yn2"
<details><summary><b>ad1yn2</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/247855?period=AllTime&sort=Most+Reactions&view=categories&modelVersionId=22917&modelId=19315&postId=70780)

[Image Source](https://civitai.com/images/261267?period=AllTime&sort=Most+Reactions&view=categories&modelVersionId=22917&modelId=19315&postId=122011)
</details>
[CivitAI Source](https://civitai.com/models/19315/ad1yn2)
It's a little difficult to generate perfect results, but I don't know why. Just try it.
Use weight 0.5-0.8 for better results.
Trigger word: addielyn
</details>
[comment]:#"Rose-Blackpink"
<details><summary><b>Rose (Blackpink)</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/99046?period=AllTime&sort=Most+Reactions&view=categories&modelVersionId=10143&modelId=8600&postId=56846)

[Image Source](https://civitai.com/images/99047?period=AllTime&sort=Most+Reactions&view=categories&modelVersionId=10143&modelId=8600&postId=56846)
</details>
[CivitAI Source](https://civitai.com/models/8600/rose-blackpink)
Another Blackpink LoRA model, of Rosé, trained with v1-5-pruned.ckpt. It works pretty well with any photorealistic model at 768x768. Steps: 25-30, Sampler: DPM++ SDE Karras, CFG scale: 8-10.
</details>
[comment]:#"chinese-girl-likeness"
<details><summary><b>Chinese Girl Likeness</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/325105?period=AllTime&sort=Newest&view=categories&modelVersionId=28818&modelId=24117&postId=80063)

[Image Source](https://civitai.com/images/326685?period=AllTime&sort=Newest&view=categories&modelVersionId=28818&modelId=24117&postId=106688)
</details>
[CivitAI Source](https://civitai.com/models/24117/chinese-girl)
This LoRA was made from 100+ pictures of beautiful girls downloaded from Chinese social media. Recommended weight: <0.8
DPM++ SDE Karras, 25 steps, hires fix: R-ESRGAN, 0.2-0.4 denoising.
Recommended sizes: 512x768 or 768x768
Trigger: chinese-girl-v1.0
</details>
[comment]:#"dress-styles"
<details><summary><b>Dresses</b></summary>
This collapsible holds dress styles.
[comment]:#"???"
<details><summary><b>L-Style 1</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/340823?period=AllTime&sort=Newest&view=categories&modelVersionId=27466&modelId=22998&postId=114274)

[Image Source](https://civitai.com/images/340824?period=AllTime&sort=Newest&view=categories&modelVersionId=27466&modelId=22998&postId=114274)
</details>
[CivitAI Source](https://civitai.com/models/22998/lolita-dress)
</details>
[comment]:#"nine-songs1"
<details><summary><b>Nine-Songs 1??</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/547718?period=AllTime&sort=Newest&view=categories&modelVersionId=50877&modelId=31782&postId=156914)
</details>
[CivitAI Source](https://civitai.com/models/31782?modelVersionId=50877)
</details>
[comment]:#"ninesong-2"
<details><summary><b>Nine-Songs 2</b></summary>
<br><details class="image_body"><summary>Images</summary>
Will add generated images here.
</details>
[CivitAI Source](https://civitai.com/models/31782?modelVersionId=38207)
</details>
[comment]:#"Nine-song3"
<details><summary><b>Nine-Songs 3</b></summary>
<br><details class="image_body"><summary>Images</summary>

[Image Source](https://civitai.com/images/442961?period=AllTime&sort=Most+Reactions&view=categories&modelVersionId=39393&modelId=31782&postId=132369)

[Image Source](https://civitai.com/images/436760?period=AllTime&sort=Most+Reactions&view=categories&modelVersionId=39393&modelId=31782&postId=131049)
</details>
[CivitAI Source](https://civitai.com/models/31782?modelVersionId=39393)
</details>
</details>
</details>
</details>
-----
template for future me:
[comment]:#"test"
<details><summary><b>Character</b></summary>
<br><details class="image_body"><summary>Images</summary>
![Yeah]()
[Image Source]()
![Yeah]()
[Image Source]()
</details>
[CivitAI Source]()
</details>
# Loras I want to add but lazy :(
<details><summary>list of shit I'm too lazy to add, will add, or forgot to remove from this list</summary>
<br>https://civitai.com/models/5977/shinobu-kochou-demon-slayer-lora
<br>https://civitai.com/models/10789/emilia-rezero
<br>https://civitai.com/models/8012/kokona-blue-archive
<br><br>https://civitai.com/models/6586/arknights-surtr
https://civitai.com/models/13334/durandal-or-honkai-impact-3rd-or-4in1-lora
https://civitai.com/models/11736/rita-rossweisse-fallen-rosemary-or-honkai-impact-3rd-or-lora</details>
|
Brinah/1
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: HASAN55/bert-finetuned-squadg
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# HASAN55/bert-finetuned-squadg
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.8903
- Train End Logits Accuracy: 0.1458
- Train Start Logits Accuracy: 0.2292
- Validation Loss: 0.0
- Validation End Logits Accuracy: 0.0
- Validation Start Logits Accuracy: 0.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
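The optimizer dict above matches what the standard `transformers.create_optimizer` Keras helper produces; below is a hedged reconstruction (the card only records the resulting config, so this exact call is an assumption):
```python
# Hedged reconstruction of the recorded optimizer config; not confirmed
# to be the exact call used for this model.
import tensorflow as tf
from transformers import create_optimizer

tf.keras.mixed_precision.set_global_policy("mixed_float16")  # training_precision

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,          # initial_learning_rate
    num_train_steps=18,    # decay_steps of the PolynomialDecay schedule
    num_warmup_steps=0,    # no warmup appears in the recorded config
    weight_decay_rate=0.01,
)
```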
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 4.8903 | 0.1458 | 0.2292 | 0.0 | 0.0 | 0.0 | 0 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.2
|
Broadus20/DialoGPT-small-harrypotter
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9
| null |
---
language:
- en
- he
library_name: fasttext
---
Language detection for English and Hebrew (Romanized Hebrew only; detecting Hebrew script is trivial).
```python
import fasttext as ft
model = ft.load_model("model_lang_detection.bin")
model.predict("tachles")
#(('__label__he',), array([0.92569]))
```
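To compare the confidence assigned to both languages, `predict` accepts fastText's standard top-k argument:
```python
# Request both labels; the printed scores are illustrative, not measured.
labels, scores = model.predict("tachles", k=2)
for label, score in zip(labels, scores):
    print(label, float(score))
```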
|
Brona/model1
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 508.00 +/- 85.94
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga stucksam -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga stucksam -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga stucksam
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
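As an alternative to the Zoo CLI, the checkpoint can be loaded directly with `huggingface_sb3`; the repo id and filename below follow the Zoo's usual naming convention and are assumptions, not values stated by this card:
```python
# Hedged sketch: repo id and filename are assumed from RL Zoo conventions.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="stucksam/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```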
|
Bryan190/Aguy190
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-chemprot
results: []
datasets:
- bigbio/chemprot
language:
- en
widget:
- text: "The use of beta-blockers has emerged as a beneficial <mask> for congestive heart failure. ."
example_title: "The use of beta-blockers"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-chemprot
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8981
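A hedged usage sketch with the widget sentence from this card; the bare model name is assumed as the repo id and may need its owning namespace prepended:
```python
from transformers import pipeline

# Repo id assumed from the model name; adjust the namespace as needed.
fill = pipeline("fill-mask", model="distilroberta-base-finetuned-chemprot")
preds = fill(
    "The use of beta-blockers has emerged as a beneficial <mask> "
    "for congestive heart failure."
)
for pred in preds:
    print(pred["token_str"], round(pred["score"], 3))
```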
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 385 | 1.9792 |
| 2.157 | 2.0 | 770 | 1.9231 |
| 2.0453 | 3.0 | 1155 | 1.8796 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
Brykee/BrykeeBot
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: lowrollr/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Bryson575x/riceboi
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 489.30 +/- 13.57
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
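For orientation, here is a minimal sketch of the REINFORCE update this agent class is based on (PyTorch + Gymnasium); it illustrates the algorithm only and is not the author's implementation:
```python
# Minimal REINFORCE sketch; illustrative only, not the trained agent's code.
import torch
import gymnasium as gym

env = gym.make("CartPole-v1")
policy = torch.nn.Sequential(
    torch.nn.Linear(4, 16), torch.nn.ReLU(),
    torch.nn.Linear(16, 2), torch.nn.Softmax(dim=-1),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards, done = [], [], False
    while not done:
        dist = torch.distributions.Categorical(
            policy(torch.as_tensor(obs, dtype=torch.float32))
        )
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated
    # Discounted returns, computed backwards, then normalized
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + 0.99 * g
        returns.append(g)
    returns = torch.tensor(returns[::-1])
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```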
|
BumBelDumBel/ZORK_AI_FANTASY
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9151612903225806
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7737
- Accuracy: 0.9152
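A hedged usage sketch; the bare model name is assumed as the repo id and may need its owning namespace prepended, and the utterance is an illustrative clinc_oos-style intent query:
```python
from transformers import pipeline

# Repo id assumed from the model name; adjust the namespace as needed.
classify = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-clinc",
)
print(classify("How do I transfer money to my savings account?"))
```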
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2708 | 0.7374 |
| 3.7745 | 2.0 | 636 | 1.8622 | 0.8326 |
| 3.7745 | 3.0 | 954 | 1.1559 | 0.8935 |
| 1.6841 | 4.0 | 1272 | 0.8575 | 0.9094 |
| 0.8993 | 5.0 | 1590 | 0.7737 | 0.9152 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
BunakovD/sd
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
language:
- en
tags:
- pytorch
- text-generation
- causal-lm
- rwkv
license: apache-2.0
datasets:
- the_pile
---
# RWKV-4 "Raven"-series Models
## Model Description
These are RWKV-4-Pile 1.5/3/7/14B models finetuned on Alpaca, CodeAlpaca, Guanaco, GPT4All, ShareGPT and more. **Even the 1.5B model is surprisingly good for its size.**
Gradio Demo: https://huggingface.co/spaces/BlinkDL/Raven-RWKV-7B and https://huggingface.co/spaces/BlinkDL/ChatRWKV-gradio
RWKV models inference: https://github.com/BlinkDL/ChatRWKV (fast CUDA).
Q8_0 models: only for https://github.com/saharNooby/rwkv.cpp (fast CPU).
See https://github.com/BlinkDL/RWKV-LM for details on the RWKV Language Model (100% RNN).
Best prompt format for Raven models (Bob is the user, Alice is the bot; NOTE: no space after the final "Alice:"). You can use \n within the xxxxxxxxxxx sections, but avoid \n\n.
```
Bob: xxxxxxxxxxxxxxxxxx\n\nAlice:
Bob: xxxxxxxxxxxxxxxxxx\n\nAlice: xxxxxxxxxxxxx\n\nBob: xxxxxxxxxxxxxxxx\n\nAlice:
```
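A small sketch of how one might assemble this format programmatically (the `raven_prompt` helper is illustrative, not part of any released tooling):
```python
# Build a Raven-style prompt from prior (user, bot) turns plus a new user message.
def raven_prompt(turns, next_user_msg):
    parts = [f"Bob: {user}\n\nAlice: {bot}" for user, bot in turns]
    parts.append(f"Bob: {next_user_msg}\n\nAlice:")  # note: no space after the final "Alice:"
    return "\n\n".join(parts)

print(raven_prompt([("Hi there!", "Hello! How can I help?")], "Write a haiku about rain"))
```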
New models will be named like Eng99%-Other1%, Eng86%-Chn10%-JpnEspKor2%-Other2%, etc.
Language ratios are determined by the amount of ChatGPT data. Please share more ChatGPT data to increase the ratio of your language.
Old models:
* RWKV-4-Raven-Eng : 99% English + 1% Multilang
* RWKV-4-Raven-EngAndMore : 96% English + 2% Chn Jpn + 2% Multilang (More Jpn than v6 "EngChnJpn")
* RWKV-4-Raven-ChnEng : 49% English + 50% Chinese + 1% Multilang
License: Apache 2.0
|
Buntan/bert-finetuned-ner
|
[
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8
| null |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-polarity-100k-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-polarity-100k-2
This model is a fine-tuned version of [siebert/sentiment-roberta-large-english](https://huggingface.co/siebert/sentiment-roberta-large-english) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2075
- Accuracy: 0.9619
- F1: 0.9619
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.1857 | 1.0 | 20000 | 0.2075 | 0.9619 | 0.9619 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Bwehfuk/Ron
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 226.92 +/- 22.85
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch — the repo id and filename below are placeholders for this model's own:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo_id/filename — point these at this model's repository.
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="<checkpoint>.zip")
model = PPO.load(checkpoint)
```
|
CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 42
| null |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 18
| null |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the pickle-loading helper defined in the course notebook.
model = load_from_hub(repo_id="tommytran/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
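A greedy rollout sketch, assuming the pickle stores the Q-table under a `qtable` key as in the course notebooks (and assuming the older gym reset/step API):
```python
state = env.reset()
done = False
while not done:
    action = model["qtable"][state].argmax()  # greedy action from the Q-table
    state, reward, done, info = env.step(action)
env.close()
```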
|
CAMeL-Lab/bert-base-arabic-camelbert-da-ner
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 42
| null |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 32
| null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 281.24 +/- 15.49
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch — the repo id and filename below are placeholders for this model's own:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo_id/filename — point these at this model's repository.
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="<checkpoint>.zip")
model = PPO.load(checkpoint)
```
|
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-glf
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 54
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
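Since this is a masked-language-model fine-tune, a minimal inference sketch (the repo id is a placeholder):
```python
from transformers import pipeline

# Hypothetical repo id — substitute the actual location of this checkpoint.
fill = pipeline("fill-mask", model="<user>/distilbert-base-uncased-finetuned-imdb")
print(fill("This movie was an absolute [MASK]."))
```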
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-msa
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 27
| null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: multiCorp_5e-05_250
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multiCorp_5e-05_250
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0691
- Precision: 0.6768
- Recall: 0.5971
- F1: 0.6344
- Accuracy: 0.9855
## Model description
More information needed
## Intended uses & limitations
More information needed
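A minimal token-classification sketch (the repo id and example sentence are placeholders):
```python
from transformers import pipeline

# Hypothetical repo id — substitute the actual location of this checkpoint.
ner = pipeline("token-classification",
               model="<user>/multiCorp_5e-05_250",
               aggregation_strategy="simple")
print(ner("The patient was started on metformin for type 2 diabetes."))
```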
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4757 | 0.34 | 50 | 0.1963 | 0.0 | 0.0 | 0.0 | 0.9740 |
| 0.1585 | 0.68 | 100 | 0.1299 | 0.3375 | 0.1049 | 0.1600 | 0.9758 |
| 0.1224 | 1.01 | 150 | 0.1121 | 0.3719 | 0.3094 | 0.3377 | 0.9750 |
| 0.1003 | 1.35 | 200 | 0.0954 | 0.4297 | 0.3167 | 0.3647 | 0.9791 |
| 0.0903 | 1.69 | 250 | 0.0920 | 0.4213 | 0.3063 | 0.3547 | 0.9786 |
| 0.0735 | 2.03 | 300 | 0.0795 | 0.4882 | 0.4575 | 0.4724 | 0.9814 |
| 0.0636 | 2.36 | 350 | 0.0769 | 0.5188 | 0.4718 | 0.4942 | 0.9820 |
| 0.0633 | 2.7 | 400 | 0.0737 | 0.5296 | 0.4926 | 0.5104 | 0.9823 |
| 0.0598 | 3.04 | 450 | 0.0735 | 0.5844 | 0.4320 | 0.4968 | 0.9827 |
| 0.0479 | 3.38 | 500 | 0.0730 | 0.5797 | 0.5264 | 0.5518 | 0.9831 |
| 0.0492 | 3.72 | 550 | 0.0680 | 0.6086 | 0.4978 | 0.5477 | 0.9838 |
| 0.041 | 4.05 | 600 | 0.0672 | 0.6190 | 0.5667 | 0.5917 | 0.9842 |
| 0.0371 | 4.39 | 650 | 0.0672 | 0.6616 | 0.5693 | 0.6120 | 0.9851 |
| 0.0362 | 4.73 | 700 | 0.0665 | 0.6670 | 0.5711 | 0.6153 | 0.9852 |
| 0.0334 | 5.07 | 750 | 0.0700 | 0.6532 | 0.5468 | 0.5953 | 0.9848 |
| 0.0288 | 5.41 | 800 | 0.0670 | 0.6482 | 0.5628 | 0.6025 | 0.9849 |
| 0.0288 | 5.74 | 850 | 0.0698 | 0.6643 | 0.5745 | 0.6162 | 0.9851 |
| 0.0263 | 6.08 | 900 | 0.0717 | 0.6827 | 0.5845 | 0.6298 | 0.9856 |
| 0.0231 | 6.42 | 950 | 0.0712 | 0.6826 | 0.5702 | 0.6213 | 0.9852 |
| 0.0238 | 6.76 | 1000 | 0.0691 | 0.6768 | 0.5971 | 0.6344 | 0.9855 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
CAMeL-Lab/bert-base-arabic-camelbert-da
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 449
| null |
---
tags:
- grounding
- video
- understanding
- multimodal
- MAD
- long video
- moments
- moment retrieval
license: bsd
language:
- en
metrics:
- recall
---
# Guidance-Based Video Grounding
The official implementation of the paper ["Localizing Moments in Long Video Via Multimodal Guidance"](https://arxiv.org/abs/2302.13372). In this repository,
we provide the predicted scores from the Guidance Model on the [MAD Dataset](https://github.com/Soldelli/MAD).
## Citation
If you find this implementation useful in your research, please use the following BibTeX entry for citation:
```
@article{Barrios2023LocalizingMI,
title={Localizing Moments in Long Video Via Multimodal Guidance},
author={Wayner Barrios and Mattia Soldan and Fabian Caba Heilbron and Alberto M. Ceballos-Arroyo and Bernard Ghanem},
journal={ArXiv},
year={2023},
volume={abs/2302.13372}
}
```
## Prediction Zoo
The provided predictions are the scores generated by the Guidance model using sliding windows of 64 and 128 frames. The predictions are stored in a pickle object with the following structure:
```python
In [1]: import pickle
In [2]: with open("guidance_scores_MAD_test_128.pkl",'rb') as f:
...: scores = pickle.load(f)
In [3]: len(scores)
Out[3]: 72044
In [4]: scores[0].keys()
Out[4]: dict_keys(['qid', 'vid', 'windows', 'score'])
```
```python
{ 'qid': '0',
'score': array([1.48404761e-05, 1.40372722e-05, 1.46572347e-05, 1.28814381e-05,
1.34291167e-05, 1.32850864e-05, 1.61252574e-05, 6.24697859e-05,
4.70118430e-05, 1.63803907e-05, 2.77301951e-05, 2.59740209e-05,
9.86061990e-01, 4.11081433e-01, 1.71889886e-02, 1.37453452e-01,
1.75393507e-05, 1.92647931e-05, 5.38236709e-05, 6.90551009e-04,
7.63237834e-01, 9.73204970e-02, 1.73201097e-05, 2.48163269e-05,
5.99260893e-05, 1.84824003e-05, 2.14560350e-05, 1.04043145e-04,
5.24206553e-05, 1.88337926e-05, 1.62523775e-05, 1.23760619e-05,
1.15747998e-05, 1.85713252e-05, 3.93810224e-05, 4.38277610e-04,
4.63226315e-05, 2.76185543e-04, 6.71112502e-05, 2.05889755e-05,
5.27229131e-05, 4.56629896e-05, 2.62997986e-04, 1.23860036e-05,
1.19574897e-05, 1.27713274e-05, 1.34036281e-05, 1.49246125e-05,
1.66437039e-05, 1.32685755e-05, 1.36442995e-05, 1.39407657e-05,
9.44265649e-02, 5.19266985e-02, 3.09179362e-04, 1.66565824e-05,
1.52278981e-05, 1.34415832e-05, 1.16731699e-05, 1.19617898e-05,
1.34421471e-05, 1.35606424e-05, 1.40685788e-05, 1.44712585e-05,
1.49164434e-05, 1.32006107e-05, 1.23232739e-05, 1.22480678e-05,
1.36934423e-05, 8.42598165e-05, 1.90059054e-05, 1.52820303e-05,
1.25335091e-05, 1.30556955e-05, 1.18760063e-05, 1.14885261e-05,
1.17362497e-05, 1.12321404e-05, 1.24243248e-04, 1.45946506e-05,
4.47804232e-05, 1.39249141e-05, 1.34848015e-05, 3.25621368e-05,
1.44184843e-01, 2.68866897e-05, 1.92906227e-05, 1.76019021e-05,
1.58657276e-05, 1.28230713e-05, 1.28012252e-05, 1.29981381e-05,
1.67807830e-05, 1.70492331e-05, 1.40562279e-05, 1.61650114e-05,
1.47591518e-05, 1.63778402e-02, 1.42061428e-04, 6.93475548e-03,
6.02264590e-05, 8.72147648e-05, 9.83794928e-01, 9.91553962e-01,
9.63991106e-01, 8.97689939e-01, 1.28758256e-04, 2.88744595e-05,
1.70378244e-05, 2.29878224e-05, 2.43768354e-05, 1.59022475e-05,
1.30911794e-05, 1.81753130e-05, 2.05728411e-05, 1.25869919e-05,
1.25580364e-05, 1.16062802e-05, 1.37536981e-05, 1.34730390e-05,
1.40373795e-05, 1.33059066e-05, 1.30285189e-05, 1.37811385e-05,
2.23064744e-05, 1.44057722e-05, 1.42116378e-05, 1.93661017e-05,
1.58555758e-05, 1.43071402e-05, 1.38224150e-05, 1.28803194e-05,
1.20950817e-05, 1.41009232e-05, 1.45958602e-05, 1.23285527e-05,
1.38767664e-05, 1.59005958e-05, 1.49218240e-05, 1.21883040e-05,
1.24096860e-05, 1.63976423e-04, 3.71323113e-05, 1.49581110e-05,
1.28865731e-05, 8.20189889e-05, 1.94104978e-05, 1.45575204e-05,
1.19119395e-05, 1.17359577e-05, 1.33997301e-05, 1.31552797e-05,
1.29547625e-05, 1.46081702e-05, 1.37864763e-05, 2.89076870e-05,
2.40834688e-05, 2.44160365e-05, 3.74382762e-05, 4.72434871e-02,
1.53820711e-05, 1.25494762e-05, 1.16858791e-05, 1.33582507e-05,
6.86281201e-05, 1.72452001e-05, 1.32617952e-05, 1.24350836e-05,
1.32563446e-05, 1.50281312e-05, 2.07685662e-05, 3.12883203e-05,
5.31642836e-05, 7.05183193e-05, 1.51949525e-05, 1.41901855e-05,
1.51822069e-05, 3.32951342e-04, 8.94680124e-05, 1.65749607e-05,
2.18829446e-05, 2.16037024e-05, 1.89978218e-05, 4.97834710e-03,
2.03153506e-01, 1.54585496e-03, 1.23195614e-05, 1.28703259e-05,
1.51874347e-05, 1.30843009e-05, 1.32952518e-05, 1.83968314e-05,
3.42841486e-05, 9.24622072e-05, 1.33280428e-05, 1.38418063e-05,
1.52235261e-05, 1.41796754e-05, 1.46450093e-05, 2.20195379e-05,
1.83107302e-04, 1.82420099e-05, 1.50840988e-05, 1.33859876e-05,
1.51073200e-05, 1.47391929e-05, 1.49910848e-05, 1.53916826e-05,
1.31657725e-05, 1.38312898e-05, 1.90024621e-05, 1.58155744e-05,
1.31786610e-05, 1.57141967e-05, 1.65828824e-05, 1.46924167e-05,
1.38433634e-05, 5.21887268e-05, 2.85502132e-02, 2.30753481e-01,
7.06195598e-04, 1.50714346e-04, 1.27303065e-03, 1.33986650e-02,
7.64285505e-04, 2.07327234e-04, 6.83149046e-05, 3.26294066e-05,
3.00217052e-05, 3.59058060e-04, 1.75943842e-05, 4.50351909e-05,
6.54372343e-05, 7.06970895e-05, 3.67312983e-04, 1.05719395e-01,
4.43235294e-05, 2.82063011e-05, 7.51458792e-05, 1.61291231e-04,
4.26617444e-05, 8.98458238e-05, 5.37320266e-05, 7.81280905e-05,
4.74652685e-02, 6.73964678e-04, 7.80265400e-05, 2.98924297e-05,
4.71418061e-05, 9.99735785e-05, 5.41929447e-04, 8.76590490e-01,
7.32870936e-01, 9.47873652e-01, 9.83479261e-01, 9.41197515e-01,
3.02340268e-05, 5.52863061e-01, 4.90591303e-02, 5.52392844e-03,
1.66527767e-04, 6.01128559e-05, 2.75078182e-05, 5.36037696e-05,
2.72706511e-05, 5.20218709e-05, 1.74067172e-04, 9.59624112e-01,
9.92105484e-01, 6.41801059e-01, 7.50956178e-01, 1.66324535e-05,
1.36247700e-05, 1.38954510e-05, 1.32978639e-05, 2.76602568e-05,
8.64359558e-01, 2.82314628e-01, 6.86250278e-04, 1.61339794e-05,
1.76240802e-01, 6.14342950e-02, 1.79430062e-05, 1.85770459e-05,
2.49132900e-05, 4.90641105e-05, 1.38329369e-05, 1.35371911e-05,
1.19879533e-05, 1.28572465e-05, 1.49452917e-05, 1.34064794e-05,
1.20641280e-05, 1.38642654e-05, 1.28597740e-05, 1.21135636e-05,
1.19547185e-05, 1.27106450e-05, 1.24800990e-05, 1.45651029e-05,
1.51306494e-05, 1.31757206e-05, 1.44625528e-05, 2.93072371e-05,
1.55961770e-05, 1.38226005e-05, 2.85501122e-01, 9.54893649e-01,
4.26807284e-01, 7.88133383e-01, 1.15605462e-05, 1.27675758e-05,
1.74503912e-05, 1.22338257e-04, 4.07951375e-05, 6.67655331e-05,
2.63322181e-05, 6.43799603e-01, 9.40359533e-01, 8.85976017e-01,
4.58170444e-01, 1.68637175e-03, 5.94505800e-05, 9.05500948e-01,
3.18567127e-01, 4.67336411e-03, 2.84927974e-05, 3.81192891e-03,
4.18508105e-04, 6.88799983e-03, 9.18629944e-01, 8.45510900e-01,
1.88187569e-01, 1.15205767e-02, 6.14926934e-01, 9.16110933e-01,
3.21912378e-01, 9.68408361e-02, 2.36877706e-03, 3.30457231e-04,
9.32341874e-01, 6.69624686e-01, 3.61131132e-02, 4.71764088e-01,
3.23702669e-04, 5.40765934e-04, 2.96235172e-04, 1.00755557e-01,
2.59187482e-02, 9.91479377e-04, 5.00017107e-02, 9.33302939e-03,
8.73835742e-01, 9.06303883e-01, 1.98892485e-02, 2.06603622e-03,
2.67300452e-03, 1.63171062e-05, 4.14947972e-05, 2.11949199e-02,
5.66720143e-02, 6.37245998e-02, 3.02139521e-01, 4.86139301e-03,
6.51149167e-05, 8.24632589e-05, 2.42551632e-05, 2.16892213e-01,
9.93161321e-01, 9.07774687e-01, 9.85157251e-01, 7.91489899e-01,
6.24064269e-05, 2.82448274e-03, 6.10993884e-05, 4.63459146e-05,
6.72110255e-05, 2.53440558e-05, 2.50527592e-05, 4.85404918e-04,
7.80891351e-05, 4.56315975e-05, 1.90765320e-04, 8.94685328e-01,
9.85134244e-01, 9.36044097e-01, 1.42211165e-05, 1.49489415e-05,
1.69001578e-05, 1.66201044e-05, 2.41175085e-01, 5.41068694e-05,
1.77346919e-05, 3.90491296e-05, 2.48894852e-04, 1.45345357e-05,
1.64555768e-05, 1.53538731e-05, 1.38164451e-05, 1.68559291e-05,
3.19991705e-05, 2.60154466e-05, 1.41664159e-05, 1.22337908e-04,
4.30386774e-02, 3.52067378e-04, 2.77736799e-05, 1.43605203e-05,
1.33721569e-05, 1.43800498e-05, 1.23751524e-05, 2.31819286e-05,
9.83208010e-05, 2.08199883e-04, 3.14763274e-05, 3.47468827e-04,
1.10434856e-04, 3.18150487e-05, 1.72609471e-05, 2.70375167e-05,
1.67231119e-05, 1.80254483e-05, 2.09855771e-05, 1.66565824e-05,
1.64901703e-05, 3.01825115e-04, 7.29017615e-01, 1.12410297e-03,
6.18876831e-04, 2.08720026e-04, 2.29539564e-05, 1.47635437e-05,
4.10786743e-05, 2.57481169e-02, 8.77772836e-05, 4.92439649e-05,
9.44633852e-04, 2.61720526e-03, 8.41950595e-01, 8.63339067e-01,
5.76047751e-05, 8.71496499e-01, 9.07008648e-01, 8.54207218e-01,
3.62060557e-04, 7.98364286e-04, 7.50755966e-02, 3.81207588e-04,
6.62766863e-03, 1.50808028e-03, 5.67528963e-01, 7.69607246e-01,
4.62092081e-04, 1.82087897e-04, 9.24605787e-01, 9.67480242e-01,
3.22210602e-03, 3.38318609e-02, 7.42516349e-05, 2.80661490e-02,
7.69108906e-03, 8.99414954e-05, 5.23393810e-01, 7.17914104e-01,
5.11704478e-04, 2.06177612e-03, 7.79069304e-01, 1.28432157e-05,
1.51723981e-01, 7.02154310e-03, 9.71324384e-01, 8.30839634e-01,
6.24295863e-05, 1.97836489e-05, 1.80826428e-05, 1.67380622e-05,
1.57646009e-05, 5.96713580e-05, 7.05929342e-05, 2.16401986e-05,
1.69063496e-05, 1.36657072e-05, 1.44965925e-05, 2.01106413e-05,
1.66287136e-05, 1.51022632e-05, 1.20727018e-05, 1.36815515e-05,
1.57434170e-05, 3.38077080e-03, 1.93943546e-04, 1.50704973e-05,
1.36058252e-05, 1.23554828e-05, 1.20090635e-05, 1.20484674e-05,
1.15330831e-05, 1.24158278e-05, 1.21374187e-05, 1.20495934e-05,
1.25650204e-05, 1.16137307e-05, 1.18198168e-05, 1.15763123e-05,
1.24373146e-05, 1.25643137e-05, 1.48531772e-05, 1.28844113e-05,
1.19790957e-05, 1.42352001e-05, 2.61451223e-05, 1.16819347e-05],
dtype=float32),
'vid': '3001_21_JUMP_STREET',
'windows': array([[ 0, 128],
[ 64, 192],
[ 128, 256],
...,
[32576, 32704],
[32640, 32768],
[32704, 32832]])}
```
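As a usage sketch, the per-window scores can be ranked to propose candidate moments for a query (`top_k` is an illustrative parameter):
```python
import numpy as np

entry = scores[0]  # one query, loaded as above
top_k = 5
best = np.argsort(entry["score"])[::-1][:top_k]  # highest-scoring windows first
for rank, i in enumerate(best, 1):
    start, end = entry["windows"][i]
    print(f"{rank}. frames {start}-{end}  score={entry['score'][i]:.4f}")
```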
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-ner
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1,860
| null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 297.65 +/- 17.76
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch — the repo id and filename below are placeholders for this model's own:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo_id/filename — point these at this model's repository.
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="<checkpoint>.zip")
model = PPO.load(checkpoint)
```
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 71
| null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: lowrollr/UnrealMadrid-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-half
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 16
| null |
---
license: mit
datasets:
- wikitext
language:
- en
---
# DistilBERT Masked Language Model
This repository demonstrates how to use the Hugging Face Transformers library and TensorFlow to solve a masked language modeling task using DistilBERT. Specifically, we use the pretrained "distilbert-base-cased" model to predict a missing word in a sentence from the "wikitext-2-raw-v1" dataset.
## 1. Problem Statement
The goal of this project is to predict a missing word in a sentence using the pretrained "distilbert-base-cased" model. The model takes a sentence with a masked token and outputs the most probable word to fill in the masked token.
## 2. Requirements
- Python 3.7+
- TensorFlow 2.0+
- Hugging Face Transformers
- Hugging Face Datasets library
## 3. Algorithmic Approach
1. Import the necessary libraries and modules
2. Load the pretrained tokenizer and model
3. Load the "wikitext-2-raw-v1" dataset and extract the eleventh example from the train split
4. Preprocess the text
5. Predict the masked token
6. Find the most probable token
7. Decode the most probable token
8. Output the result
## 4. Usage
Run the provided Python script to perform masked language modeling with DistilBERT on the given dataset. The script outputs the most probable predicted token for the masked position in the sentence.
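For illustration, a minimal sketch of the core prediction step (the sentence here is a stand-in, not the dataset example the script uses):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
model = TFAutoModelForMaskedLM.from_pretrained("distilbert-base-cased")

text = f"The capital of France is {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="tf")
logits = model(**inputs).logits

# Locate the [MASK] position and take the highest-scoring vocabulary entry.
mask_index = int(tf.where(inputs["input_ids"][0] == tokenizer.mask_token_id)[0, 0])
predicted_id = int(tf.argmax(logits[0, mask_index]))
print(tokenizer.decode([predicted_id]))
```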
## 5. License
This project is licensed under the MIT License. See the LICENSE file for more information.
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-ner
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 229
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wiki_auto
model-index:
- name: t5-small-finetuned-text-simplification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-text-simplification
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wiki_auto dataset.
It achieves the following results on the evaluation set:
- eval_loss: 4.9058
- eval_sari: 57.2104
- eval_runtime: 907.1945
- eval_samples_per_second: 80.742
- eval_steps_per_second: 5.047
- epoch: 3.0
- step: 70089
## Model description
More information needed
## Intended uses & limitations
More information needed
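A minimal inference sketch (the repo id is a placeholder, and whether a task prefix is needed depends on how the training inputs were formatted):
```python
from transformers import pipeline

# Hypothetical repo id — substitute the actual location of this checkpoint.
simplify = pipeline("text2text-generation",
                    model="<user>/t5-small-finetuned-text-simplification")
print(simplify("The committee unanimously ratified the proposed amendment.")[0]["generated_text"])
```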
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-poetry
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 25
| null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: multiCorp_cut_5e-05_250
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multiCorp_cut_5e-05_250
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0722
- Precision: 0.7331
- Recall: 0.6101
- F1: 0.6660
- Accuracy: 0.9851
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2942 | 0.66 | 50 | 0.0969 | 0.4120 | 0.1031 | 0.1649 | 0.9742 |
| 0.0781 | 1.32 | 100 | 0.0720 | 0.4497 | 0.2933 | 0.3551 | 0.9779 |
| 0.0673 | 1.97 | 150 | 0.0632 | 0.5067 | 0.4227 | 0.4609 | 0.9794 |
| 0.046 | 2.63 | 200 | 0.0617 | 0.5822 | 0.4780 | 0.5250 | 0.9814 |
| 0.0426 | 3.29 | 250 | 0.0607 | 0.6815 | 0.5033 | 0.5790 | 0.9833 |
| 0.0365 | 3.95 | 300 | 0.0596 | 0.6222 | 0.5558 | 0.5871 | 0.9823 |
| 0.026 | 4.61 | 350 | 0.0632 | 0.6341 | 0.6026 | 0.6180 | 0.9829 |
| 0.0224 | 5.26 | 400 | 0.0700 | 0.6951 | 0.5576 | 0.6188 | 0.9838 |
| 0.0197 | 5.92 | 450 | 0.0621 | 0.6834 | 0.5989 | 0.6384 | 0.9840 |
| 0.0147 | 6.58 | 500 | 0.0676 | 0.7181 | 0.6064 | 0.6575 | 0.9845 |
| 0.0119 | 7.24 | 550 | 0.0689 | 0.6965 | 0.6560 | 0.6757 | 0.9846 |
| 0.0106 | 7.89 | 600 | 0.0722 | 0.7331 | 0.6101 | 0.6660 | 0.9851 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
CLTL/MedRoBERTa.nl
|
[
"pytorch",
"roberta",
"fill-mask",
"nl",
"transformers",
"license:mit",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2,988
| null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 245.58 +/- 39.32
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch — the repo id and filename below are placeholders for this model's own:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo_id/filename — point these at this model's repository.
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="<checkpoint>.zip")
model = PPO.load(checkpoint)
```
|
CLTL/gm-ner-xlmrbase
|
[
"pytorch",
"tf",
"xlm-roberta",
"token-classification",
"nl",
"transformers",
"dighum",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: dreams
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dreams
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8850
- Rouge1: 0.1304
- Rouge2: 0.0158
- Rougel: 0.0974
- Rougelsum: 0.0973
- Gen Len: 18.8978
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 93 | 4.0104 | 0.1127 | 0.0106 | 0.0844 | 0.0843 | 18.9624 |
| No log | 2.0 | 186 | 3.9313 | 0.1217 | 0.0125 | 0.093 | 0.0927 | 18.9543 |
| No log | 3.0 | 279 | 3.9028 | 0.127 | 0.0135 | 0.0964 | 0.096 | 18.9355 |
| No log | 4.0 | 372 | 3.8892 | 0.13 | 0.0159 | 0.0978 | 0.0976 | 18.8871 |
| No log | 5.0 | 465 | 3.8850 | 0.1304 | 0.0158 | 0.0974 | 0.0973 | 18.8978 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
CLTL/icf-levels-adm
|
[
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 33
| null |
---
license: apache-2.0
---
Experimental pre-training on instruction datasets.
https://wandb.ai/open-assistant/supervised-finetuning/runs/ys9rt5ue
Checkpoint: 3500 steps
oasst dataset config used:
```
pretrain:
use_custom_sampler: true
sort_by_length: false
datasets:
- joke
- webgpt:
val_split: 0.1
- gpt4all:
val_split: 0.01
- alpaca:
val_split: 0.025
- code_alpaca:
val_split: 0.05
- minimath
- humaneval_mbpp_codegen_qa
- humaneval_mbpp_testgen_qa
- grade_school_math_instructions
- recipes
- cmu_wiki_qa
#- youtube_subs_howto100m # uses incompatible column names
#- ubuntu_dialogue_qa # fails to load
- oa_wiki_qa_bart_10000row
- prosocial_dialogue:
fraction: 0.1
- explain_prosocial:
fraction: 0.05
```
pythia parameters:
```
pythia-12b:
dtype: fp16
log_dir: "pythia_log_12b"
learning_rate: 6e-6
model_name: EleutherAI/pythia-12b-deduped
output_dir: pythia_model_12b
weight_decay: 0.0
max_length: 2048
use_flash_attention: true
#deepspeed_config: configs/zero3_config.json
warmup_steps: 50
gradient_checkpointing: true
gradient_accumulation_steps: 8
per_device_train_batch_size: 2
per_device_eval_batch_size: 5
eval_steps: 200
save_steps: 500
num_train_epochs: 2
save_total_limit: 2
```
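A minimal generation sketch for the resulting checkpoint (the repo id is a placeholder):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id — substitute the actual location of this checkpoint.
tok = AutoTokenizer.from_pretrained("<user>/<repo>")
model = AutoModelForCausalLM.from_pretrained("<user>/<repo>")
inputs = tok("Explain gradient checkpointing in one sentence.", return_tensors="pt")
print(tok.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```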
|
CLTL/icf-levels-att
|
[
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 32
| null |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.68 +/- 0.34
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch — the repo id and filename below are placeholders for this model's own:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholder repo_id/filename — point these at this model's repository.
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="<checkpoint>.zip")
model = A2C.load(checkpoint)
```
|
CLTL/icf-levels-etn
|
[
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 31
| null |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
CLTL/icf-levels-fac
|
[
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 32
| null |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 60.30 +/- 38.07
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
CLTL/icf-levels-stm
|
[
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 32
| null |
---
pipeline_tag: text-generation
library_name: transformers
language:
- en
---
|
CM-CA/DialoGPT-small-cartman
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the pickle-loading helper defined in the course notebook.
model = load_from_hub(repo_id="amannlp/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
CNT-UPenn/Bio_ClinicalBERT_for_seizureFreedom_classification
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 28
| null |
---
license: creativeml-openrail-m
language:
- en
tags:
- riffusion
- music
- diffusion
---
|
CSResearcher/TestModel
|
[
"license:mit"
] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the pickle-loading helper defined in the course notebook.
model = load_from_hub(repo_id="amannlp/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
CSZay/bart
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
language:
- ja
- en
metrics:
- character
pipeline_tag: text-to-speech
tags:
- music
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CZWin32768/xlm-align
|
[
"pytorch",
"xlm-roberta",
"fill-mask",
"arxiv:2106.06381",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6
| null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.75 +/- 19.17
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
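Until the author fills in the code above, a minimal loading sketch might look as follows; the repo id and filename are placeholders, not confirmed by this card, and the snippet is written against the classic `gym` API that stable-baselines3 used at the time:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo id / filename -- substitute the actual Hub entry for this model
checkpoint = load_from_hub(repo_id="user/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
```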
|
Calamarii/calamari
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 245.55 +/- 19.68
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Cameron/BERT-jigsaw-severetoxic
|
[
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 30
| null |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### TLoZMixedTest01 Dreambooth model trained by MasterElli with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Cameron/BERT-mdgender-convai-ternary
|
[
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 38
| null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: pregonas/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Camzure/MaamiBot
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: feratur/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Canyonevo/DialoGPT-medium-KingHenry
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: creativeml-openrail-m
---
# BBS_OPM_V1.2
BBS_OPM is a Stable Diffusion model merged from the models listed below and fine-tuned to work better with watercolor, dry pastels, oil pastels and NSFW art; it is also very capable of photorealistic renderings! I am continually working to update and improve this model and will release new versions as I tune them and learn more about creating and fine-tuning SD models. I will expand this description as I can. I still have a lot to learn and more to share!
# Models
- BASE: URPM
- .03: Analog Madness
- .04: OpenJourney
# Examples (NSFW)


|
CapitainData/wav2vec2-large-xlsr-turkish-demo-colab
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
---
# ONNX version of sentence-transformers/all-mpnet-base-v2
This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. The ONNX version of this model is made for the [Metarank](https://github.com/metarank/metarank) re-ranker
to do semantic similarity.
Check out the [main Metarank docs](https://docs.metarank.ai) on how to configure it.
TLDR:
```yaml
- type: field_match
name: title_query_match
rankingField: ranking.query
itemField: item.title
distance: cos
method:
type: bert
model: metarank/all-mpnet-base-v2
```
## Building the model
```shell
$> pip install -r requirements.txt
$> python convert.py
============= Diagnostic Run torch.onnx.export version 2.0.0+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
```
## License
Apache 2.0
|
Capreolus/bert-base-msmarco
|
[
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"arxiv:2008.09093",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 238
| null |
|
Capreolus/birch-bert-large-mb
|
[
"pytorch",
"tf",
"jax",
"bert",
"next-sentence-prediction",
"transformers"
] | null |
{
"architectures": [
"BertForNextSentencePrediction"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1
| null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: clemdev2000/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Captain-1337/CrudeBERT
|
[
"pytorch",
"bert",
"text-classification",
"arxiv:1908.10063",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 28
| null |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="tenich/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Carlork314/Carlos
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.76
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="tenich/q-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Carlork314/Xd
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: cc
datasets:
- monmamo/delphine-fairheart
language:
- en
---
Delphine Fairheart character generation model.
|
CarlosTron/Yo
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1211.06 +/- 479.41
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
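As with the other SB3 cards, here is a minimal loading sketch under an assumed repo id/filename; note that `AntBulletEnv-v0` is only registered with gym once `pybullet_envs` is imported:
```python
import gym
import pybullet_envs  # noqa: F401 -- importing registers AntBulletEnv-v0 with gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholder repo id / filename -- substitute the actual Hub entry for this model
checkpoint = load_from_hub(repo_id="user/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
env = gym.make("AntBulletEnv-v0")
```
A2C agents for this environment are commonly trained with `VecNormalize`; if that applies here, the saved normalization statistics must be loaded as well to reproduce the reported reward.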
|
dccuchile/albert-base-spanish-finetuned-mldoc
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 34
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: caramel-t0-samsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: test
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 47.6161
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# caramel-t0-samsum
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3721
- Rouge1: 47.6161
- Rouge2: 23.8204
- Rougel: 40.1348
- Rougelsum: 43.758
- Gen Len: 17.2759
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4403 | 1.0 | 1842 | 1.3822 | 47.2645 | 23.8282 | 39.7516 | 43.5543 | 17.0256 |
| 1.3572 | 2.0 | 3684 | 1.3747 | 47.5267 | 23.6332 | 39.8462 | 43.6527 | 17.4347 |
| 1.2822 | 3.0 | 5526 | 1.3721 | 47.6161 | 23.8204 | 40.1348 | 43.758 | 17.2759 |
| 1.2375 | 4.0 | 7368 | 1.3764 | 47.7445 | 24.1348 | 40.1867 | 43.9317 | 17.2943 |
| 1.1935 | 5.0 | 9210 | 1.3781 | 47.5911 | 23.7657 | 39.8614 | 43.7314 | 17.3077 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
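For completeness, a minimal inference sketch for this kind of flan-t5 summarization fine-tune; the repo id below is a placeholder and the exact prompt format is an assumption:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "user/caramel-t0-samsum"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

dialogue = "Amanda: I baked cookies. Do you want some?\nJerry: Sure!"
inputs = tokenizer("summarize: " + dialogue, return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```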
|
dccuchile/albert-base-spanish-finetuned-ner
|
[
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 14
| null |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 614.00 +/- 210.94
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ksmcg -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ksmcg -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ksmcg
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
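Once downloaded into `logs/`, the checkpoint is an ordinary SB3 zip and can also be loaded directly in Python; the path below is illustrative and depends on the local run folder created by the RL Zoo:
```python
from stable_baselines3 import DQN

# Illustrative path -- adjust to wherever rl_zoo3.load_from_hub placed the zip
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")
```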
|
dccuchile/albert-base-spanish-finetuned-pawsx
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 25
| null |
---
tags:
- stable_diffusion
- checkpoint
---
The source of the models is listed below. Please check the original licenses at the source.
https://civitai.com/models/1274
|
dccuchile/albert-base-spanish-finetuned-qa-mlqa
|
[
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"AlbertForQuestionAnswering"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3
| null |
---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
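For reference, a minimal sketch of computing these scores with that metric; the prediction and reference dicts follow the `squad` metric's expected format, and the ids/values are illustrative:
```python
from datasets import load_metric

squad_metric = load_metric("squad")
predictions = [{"id": "1", "prediction_text": "Denver Broncos"}]
references = [{"id": "1", "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]
print(squad_metric.compute(predictions=predictions, references=references))
# -> {'exact_match': 100.0, 'f1': 100.0}
```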
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
dccuchile/albert-large-spanish-finetuned-mldoc
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 27
| null |
import pandas as pd
import streamlit as st
import plotly.express as px
from plotly import graph_objs as go
st.title("Demand Trend Analysis")
df = pd.read_csv("data/cleaned_data.csv",parse_dates=['Order Date'],index_col='Order Date')
df_train = df.index< '2018-01-01'
df_test = df.index>= '2018-01-01'
df_train = df[df_train]
df_test = df[df_test]
time_pred = ["Past","Future"]
#display the years of data as a slider 2015-2017 for past and 2018 for future
k = st.sidebar.selectbox("Time",time_pred)
if k == "Past":
n_years = st.sidebar.slider("Years of data", 2015, 2016, 2017)
periods = 12*n_years
else:
n_years = st.sidebar.slider("Years of data", 2018,2019)
periods = 12
@st.cache_data
def load_data():
data = df.copy()
return data
data_load_state = st.text("Loading data...")
data = load_data()
data_load_state.text("Loading data...done!")
st.subheader("Raw data")
st.write(data.head())
def plot_raw_data_year(input:str):
if input == "Past":
df_yearly= df_train.groupby(pd.Grouper(freq='Y'))['Sales'].sum()
df_yearly = pd.DataFrame(df_yearly)
else:
df_yearly = df_test.groupby(pd.Grouper(freq='Y'))['Sales'].sum()
df_yearly = pd.DataFrame(df_yearly)
fig = go.Figure()
fig.add_trace(go.Bar(x=df_yearly.index, y=df_yearly.Sales,name='Yearly Sales' ,))
fig.update_layout(title_text='Yearly Sales',plot_bgcolor='white',xaxis_rangeslider_visible=True)
st.plotly_chart(fig)
plot_raw_data_year(k)
def plot_raw_data_month(input:str):
if input == "Past":
df_monthly= df_train.groupby(pd.Grouper(freq='M'))['Sales'].sum()
df_monthly = pd.DataFrame(df_monthly)
else:
df_monthly = df_test.groupby(pd.Grouper(freq='M'))['Sales'].sum()
df_monthly = pd.DataFrame(df_monthly)
fig = go.Figure()
fig.add_trace(go.Scatter(x=df_monthly.index, y=df_monthly.Sales,name='Monthly Sales' ))
fig.update_layout(title_text= 'Monthly Sales',plot_bgcolor='white',xaxis_rangeslider_visible=True)
st.plotly_chart(fig)
plot_raw_data_month(k)
def plot_raw_data_day(input:str):
if input == "Past":
df_daily= df_train.groupby(pd.Grouper(freq='D'))['Sales'].sum()
df_daily = pd.DataFrame(df_daily)
else:
df_daily = df_test.groupby(pd.Grouper(freq='D'))['Sales'].sum()
df_daily = pd.DataFrame(df_daily)
fig = go.Figure()
fig.add_trace(go.Scatter(x=df_daily.index, y=df_daily.Sales,name='Daily Sales' ))
fig.update_layout(title_text= 'Daily Sales',plot_bgcolor='white',xaxis_rangeslider_visible=True)
st.plotly_chart(fig)
plot_raw_data_day(k)
def plot_raw_yearly_sales_by_segment(input:str):
if input == "Past":
df_yearly_segment = df_train.groupby([pd.Grouper(freq='Y'), 'Segment'])['Sales'].sum().reset_index()
df_yearly_segment = pd.DataFrame(df_yearly_segment)
else:
df_yearly_segment = df_test.groupby([pd.Grouper(freq='Y'), 'Segment'])['Sales'].sum().reset_index()
df_yearly_segment = pd.DataFrame(df_yearly_segment)
color_scale = px.colors.sequential.Viridis
# create a dictionary that maps each unique value in the Segment column to a color from the color scheme
color_map = {segment: color_scale[i % len(color_scale)] for i, segment in enumerate(df_yearly_segment['Segment'].unique())}
# use the color_map dictionary to map the Segment values to colors
colors = df_yearly_segment['Segment'].map(color_map)
# create the plot using plotly.graph_objects
fig = go.Figure(data=go.Bar(x=df_yearly_segment['Order Date'], y=df_yearly_segment['Sales'], marker={'color': colors},hovertext=df_yearly_segment['Segment']))
fig.update_layout(title_text='Yearly Sales by Segment', plot_bgcolor='white')
st.plotly_chart(fig)
plot_raw_yearly_sales_by_segment(k)
def plot_raw_yearly_sales_by_region(input:str):
if input == "Past":
df_yearly_segment = df_train.groupby([pd.Grouper(freq='Y'), 'Region'])['Sales'].sum().reset_index()
df_yearly_segment = pd.DataFrame(df_yearly_segment)
else:
df_yearly_segment = df_test.groupby([pd.Grouper(freq='Y'), 'Region'])['Sales'].sum().reset_index()
df_yearly_segment = pd.DataFrame(df_yearly_segment)
color_scale = px.colors.sequential.Viridis
# create a dictionary that maps each unique value in the Segment column to a color from the color scheme
color_map = {segment: color_scale[i % len(color_scale)] for i, segment in enumerate(df_yearly_segment['Region'].unique())}
# use the color_map dictionary to map the Segment values to colors
colors = df_yearly_segment['Region'].map(color_map)
# create the plot using plotly.graph_objects
fig = go.Figure(data=go.Bar(x=df_yearly_segment['Order Date'], y=df_yearly_segment['Sales'], marker={'color': colors},hovertext=df_yearly_segment['Region']))
fig.update_layout(title_text='Yearly Sales by Region', plot_bgcolor='white')
st.plotly_chart(fig)
plot_raw_yearly_sales_by_region(k)
def plot_raw_yearly_sales_by_Category(input:str):
if input == "Past":
df_yearly_segment = df_train.groupby([pd.Grouper(freq='Y'), 'Category'])['Sales'].sum().reset_index()
else:
df_yearly_segment = df_test.groupby([pd.Grouper(freq='Y'), 'Category'])['Sales'].sum().reset_index()
df_yearly_segment = pd.DataFrame(df_yearly_segment)
color_scale = px.colors.sequential.Viridis
# create a dictionary that maps each unique value in the Segment column to a color from the color scheme
color_map = {segment: color_scale[i % len(color_scale)] for i, segment in enumerate(df_yearly_segment['Category'].unique())}
# use the color_map dictionary to map the Segment values to colors
colors = df_yearly_segment['Category'].map(color_map)
# create the plot using plotly.graph_objects
fig = go.Figure(data=go.Bar(x=df_yearly_segment['Order Date'], y=df_yearly_segment['Sales'], marker={'color': colors},hovertext=df_yearly_segment['Category']))
fig.update_layout(title_text='Yearly Sales by Category', plot_bgcolor='white')
st.plotly_chart(fig)
plot_raw_yearly_sales_by_Category(k)
def plot_raw_yearly_sales_by_State(input:str, number:int):
if input == "Past":
df_yearly_state = df_train.groupby([pd.Grouper(freq='Y'), 'State'])['Sales'].sum().reset_index()
else:
df_yearly_state = df_test.groupby([pd.Grouper(freq='Y'), 'State'])['Sales'].sum().reset_index()
df_yearly_state = pd.DataFrame(df_yearly_state)
color_scale = px.colors.sequential.Viridis
topN_states = df_yearly_state.groupby('State').sum().sort_values('Sales', ascending=False).head(number).index.tolist()
top_states_df = df_yearly_state[df_yearly_state['State'].isin(topN_states)]
# create a dictionary that maps each unique value in the State column to a color from the color scheme
color_map = {state: color_scale[i % len(color_scale)] for i, state in enumerate(top_states_df['State'].unique())}
# use the color_map dictionary to map the State values to colors
colors = top_states_df['State'].map(color_map)
# create the plot using plotly.graph_objects
fig = go.Figure(data=go.Bar(x=top_states_df['Order Date'], y=top_states_df['Sales'], marker={'color': colors},hovertext=top_states_df['State']))
fig.update_layout(title_text=f'Top {number} states with highest sales', plot_bgcolor='white')
st.plotly_chart(fig)
# initialize Streamlit slider for selecting number of subcategories to display
number_st = st.slider('Select the number of States', 1, 10, 3)
plot_raw_yearly_sales_by_State(k,number_st)
def plot_raw_yearly_sales_by_Sub_Cat(input:str, number:int):
if input == "Past":
df_yearly_state = df_train.groupby([pd.Grouper(freq='Y'), 'Sub-Category'])['Sales'].sum().reset_index()
else:
df_yearly_state = df_test.groupby([pd.Grouper(freq='Y'), 'Sub-Category'])['Sales'].sum().reset_index()
df_yearly_state = pd.DataFrame(df_yearly_state)
color_scale = px.colors.sequential.Viridis
topN_states = df_yearly_state.groupby('Sub-Category').sum().sort_values('Sales', ascending=False).head(number).index.tolist()
top_states_df = df_yearly_state[df_yearly_state['Sub-Category'].isin(topN_states)]
# create a dictionary that maps each unique value in the State column to a color from the color scheme
color_map = {state: color_scale[i % len(color_scale)] for i, state in enumerate(top_states_df['Sub-Category'].unique())}
# use the color_map dictionary to map the State values to colors
colors = top_states_df['Sub-Category'].map(color_map)
# create the plot using plotly.graph_objects
    fig = go.Figure(data=go.Bar(x=top_states_df['Order Date'], y=top_states_df['Sales'], marker={'color': colors},hovertext=top_states_df['Sub-Category']))
fig.update_layout(title_text=f'Top {number} sub categories with highest sales', plot_bgcolor='white')
st.plotly_chart(fig)
# initialize Streamlit slider for selecting number of subcategories to display
number_sub_cat = st.slider('Select the number of Sub-Category', 1, 10, 3)
plot_raw_yearly_sales_by_Sub_Cat(k,number_sub_cat)
def plot_raw_yearly_sales_by_Product(input:str,number:int):
if input == "Past":
df_yearly_product = df_train.groupby([pd.Grouper(freq='Y'), 'Product Name'])['Sales'].sum().reset_index()
else:
df_yearly_product = df_test.groupby([pd.Grouper(freq='Y'), 'Product Name'])['Sales'].sum().reset_index()
df_yearly_product = pd.DataFrame(df_yearly_product)
color_scale = px.colors.sequential.Viridis
topN_products = df_yearly_product.groupby('Product Name').sum().sort_values('Sales', ascending=False).head(number).index.tolist()
top_product_df = df_yearly_product[df_yearly_product['Product Name'].isin(topN_products)]
# create a dictionary that maps each unique value in the Product Name column to a color from the color scheme
color_map = {product: color_scale[i % len(color_scale)] for i, product in enumerate(top_product_df['Product Name'].unique())}
# use the color_map dictionary to map the Product Name values to colors
colors = top_product_df['Product Name'].map(color_map)
# create the plot using plotly.graph_objects
fig = go.Figure(data=go.Bar(x=top_product_df['Order Date'], y=top_product_df['Sales'], marker={'color': colors},hovertext=top_product_df['Product Name']))
fig.update_layout(title_text=f'Top {number} best-selling products', plot_bgcolor='white')
st.plotly_chart(fig)
# initialize Streamlit slider for selecting number of products to display
number_p = st.slider('Select the number of products to display', 1, 10, 3)
plot_raw_yearly_sales_by_Product(k,number_p)
def plot_raw_yearly_sales_by_City(input:str, number:int):
if input == "Past":
df_yearly_state = df_train.groupby([pd.Grouper(freq='Y'), 'City'])['Sales'].sum().reset_index()
else:
df_yearly_state = df_test.groupby([pd.Grouper(freq='Y'), 'City'])['Sales'].sum().reset_index()
df_yearly_state = pd.DataFrame(df_yearly_state)
color_scale = px.colors.sequential.Viridis
topN_states = df_yearly_state.groupby('City').sum().sort_values('Sales', ascending=False).head(number).index.tolist()
top_states_df = df_yearly_state[df_yearly_state['City'].isin(topN_states)]
# create a dictionary that maps each unique value in the State column to a color from the color scheme
color_map = {state: color_scale[i % len(color_scale)] for i, state in enumerate(top_states_df['City'].unique())}
# use the color_map dictionary to map the State values to colors
colors = top_states_df['City'].map(color_map)
# create the plot using plotly.graph_objects
    fig = go.Figure(data=go.Bar(x=top_states_df['Order Date'], y=top_states_df['Sales'], marker={'color': colors},hovertext=top_states_df['City']))
    fig.update_layout(title_text=f'Top {number} cities with highest sales', plot_bgcolor='white')
st.plotly_chart(fig)
# initialize Streamlit slider for selecting number of subcategories to display
number_city = st.slider('Select the number of Cities', 1, 10, 3)
plot_raw_yearly_sales_by_City(k,number_city)
|
dccuchile/albert-large-spanish-finetuned-pawsx
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 25
| null |
Example prompts
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
List five advantages of using solar energy.
### Response:
1. Solar energy is renewable and sustainable.
2. Low ongoing costs and no fuel expenses.
3. Generates no pollution or noise.
4. Easily scalable for different types of applications.
5. Solar energy is widely available in most regions.
```
***
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Generate an example of a travel destination in your favorite country.
### Response:
My favorite travel destination in Australia is Fraser Island, located on the eastern coast.
```
|
dccuchile/albert-large-spanish-finetuned-xnli
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 29
| null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.30 +/- 20.93
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
dccuchile/albert-tiny-spanish-finetuned-pawsx
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 29
| null |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 4.16 +/- 0.59
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r artbreguez/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
dccuchile/albert-tiny-spanish-finetuned-xnli
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 31
| null |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="mraabs/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
dccuchile/albert-xlarge-spanish-finetuned-mldoc
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 26
| null |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: qlearning-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="mraabs/qlearning-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
dccuchile/albert-xlarge-spanish-finetuned-pawsx
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 24
| null |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="zhuohao/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
dccuchile/albert-xlarge-spanish-finetuned-qa-mlqa
|
[
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"AlbertForQuestionAnswering"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7
| null |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

model = load_from_hub(repo_id="zhuohao/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
dccuchile/albert-xxlarge-spanish-finetuned-pawsx
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 26
| null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_modified_for_t5_qg
model-index:
- name: t5-end2end-questions-generation_V1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation_V1
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad_modified_for_t5_qg dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5671
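A hedged inference sketch: the checkpoint path is a placeholder, and the `generate questions:` prefix follows the common end-to-end question-generation convention for this dataset rather than anything stated in this card.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder repo id -- substitute the actual location of this checkpoint
checkpoint = "your-username/t5-end2end-questions-generation_V1"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

context = "generate questions: The Eiffel Tower was completed in 1889 in Paris."
inputs = tokenizer(context, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```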
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.787 | 0.34 | 100 | 1.7580 |
| 1.8565 | 0.68 | 200 | 1.6773 |
| 1.7969 | 1.02 | 300 | 1.6396 |
| 1.6961 | 1.35 | 400 | 1.6202 |
| 1.6777 | 1.69 | 500 | 1.6101 |
| 1.6584 | 2.03 | 600 | 1.5956 |
| 1.6032 | 2.37 | 700 | 1.5908 |
| 1.5995 | 2.71 | 800 | 1.5870 |
| 1.5876 | 3.05 | 900 | 1.5834 |
| 1.5467 | 3.39 | 1000 | 1.5820 |
| 1.5486 | 3.73 | 1100 | 1.5696 |
| 1.5313 | 4.06 | 1200 | 1.5761 |
| 1.514 | 4.4 | 1300 | 1.5713 |
| 1.5024 | 4.74 | 1400 | 1.5727 |
| 1.4984 | 5.08 | 1500 | 1.5722 |
| 1.4769 | 5.42 | 1600 | 1.5700 |
| 1.4652 | 5.76 | 1700 | 1.5670 |
| 1.4835 | 6.1 | 1800 | 1.5684 |
| 1.4589 | 6.44 | 1900 | 1.5711 |
| 1.459 | 6.77 | 2000 | 1.5671 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
dccuchile/albert-base-spanish
|
[
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
] | null |
{
"architectures": [
"AlbertForPreTraining"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 586
| 2023-04-02T00:30:28Z
|
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
dccuchile/albert-tiny-spanish
|
[
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
] | null |
{
"architectures": [
"AlbertForPreTraining"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 393
| 2023-04-02T00:33:40Z
|
---
datasets:
- vldsavelyev/guitar_tab
pipeline_tag: text-generation
tags:
- music
---
Trained on [guitar_tab](https://huggingface.co/datasets/vldsavelyev/guitar_tab), a dataset of tabs in alphaTex format, filtered down to 4-string bass tracks.
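A minimal generation sketch, assuming the model exposes the standard `transformers` causal-LM interface; the repo id and the alphaTex seed string are both placeholders.
```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual model repository
generator = pipeline("text-generation", model="vldsavelyev/guitar-tab-bass")
# Seed with the opening of an alphaTex score and let the model continue it
seed = "\\title \"Bass line\" ."
print(generator(seed, max_length=200)[0]["generated_text"])
```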
|
dccuchile/albert-xxlarge-spanish
|
[
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
] | null |
{
"architectures": [
"AlbertForPreTraining"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 42
| 2023-04-02T00:37:07Z
|
---
tags:
- stable_diffusion
- checkpoint
---
The source of the models is listed below. Please check the original licenses from the source.
https://civitai.com/models/4201
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-ner
|
[
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 81
| 2023-04-02T00:46:37Z
|
```
pip install transformers
python generate_text.py
```
`generate_text.py`:
```python
from transformers import pipeline

# Load the pre-trained GPT-2 model
generator = pipeline('text-generation', model='gpt2')

# Generate some text based on a given prompt
prompt = "The meaning of life is"
generated_text = generator(prompt, max_length=50, num_return_sequences=1)

# Print the generated text
print(generated_text[0]['generated_text'])
```
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-pawsx
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 25
| 2023-04-02T00:49:17Z
|
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: ibadrehman/aivsai-soccertwos
3. Select your *.nn / *.onnx file
4. Click on "Watch the agent play" 👀
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-xnli
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 28
| null |
# Tako (octopus) park
The "octopus mountain" is a piece of outdoor playground equipment found in Japanese parks, made of concrete and artificial stone and built mainly around a slide, with a structure like a maze and a secret base. It is also known by various other names such as "octopus slide".
## Trigger Word
```
tkprk
```
## Sample
<img src="https://huggingface.co/ymmttks/TakoPark/resolve/main/samples/00071-3750984426.png" width="600">
<img src="https://huggingface.co/ymmttks/TakoPark/resolve/main/samples/00089-36269299.png" width="600">
<img src="https://huggingface.co/ymmttks/TakoPark/resolve/main/samples/00090-2398259706.png" width="600">
<img src="https://huggingface.co/ymmttks/TakoPark/resolve/main/samples/00089-36269298.png" width="600">
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-mldoc
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 39
| 2023-04-02T01:14:13Z
|
---
tags:
- generated_from_trainer
datasets:
- afrispeech-200
metrics:
- wer
model-index:
- name: whisper-small-hi-2400_500_135
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: afrispeech-200
type: afrispeech-200
config: hausa
split: train
args: hausa
metrics:
- name: Wer
type: wer
value: 0.31118587047939444
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-hi-2400_500_135
This model is a fine-tuned version of [saif-daoud/whisper-small-hi-2400_500_134](https://huggingface.co/saif-daoud/whisper-small-hi-2400_500_134) on the afrispeech-200 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7561
- Wer: 0.3112
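A minimal transcription sketch using the `transformers` ASR pipeline. The repo id is inferred from the model name above and should be verified before use.
```python
from transformers import pipeline

# Repo id inferred from this card's model name -- verify before use
asr = pipeline(
    "automatic-speech-recognition",
    model="saif-daoud/whisper-small-hi-2400_500_135",
)
# Transcribe a local audio file (e.g. a 16 kHz WAV)
print(asr("sample.wav")["text"])
```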
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 350
- training_steps: 1386
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6189 | 0.5 | 693 | 0.7623 | 0.3172 |
| 0.609 | 1.5 | 1386 | 0.7561 | 0.3112 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-pos
|
[
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5
| 2023-04-02T01:27:02Z
|
---
tags:
- stable_diffusion
- checkpoint
---
The source of the models is listed below. Please check the original licenses from the source.
https://civitai.com/models/9942
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-xnli
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 36
| 2023-04-02T01:30:03Z
|
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
dccuchile/distilbert-base-spanish-uncased-finetuned-ner
|
[
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 28
| null |
Access to model haiyan1/mask is restricted and you are not in the authorized list. Visit https://huggingface.co/haiyan1/mask to ask for access.
|
CennetOguz/distilbert-base-uncased-finetuned-recipe-accelerate-1
|
[
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1
| null |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: legal_long_legal_ver2_test_sm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legal_long_legal_ver2_test_sm
This model is a fine-tuned version of [kiddothe2b/legal-longformer-base](https://huggingface.co/kiddothe2b/legal-longformer-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7379
- Accuracy: 0.5519
- Precision: 0.5139
- Recall: 0.5663
- F1: 0.5388
- D-index: 1.2690
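A hedged inference sketch for this classifier; the checkpoint path is a placeholder, and the label mapping is not documented in this card.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder path -- substitute the actual fine-tuned checkpoint
checkpoint = "path/to/legal_long_legal_ver2_test_sm"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

text = "The parties agree to binding arbitration for any dispute arising hereunder."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```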
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | D-index |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| No log | 0.98 | 26 | 0.6957 | 0.4599 | 0.44 | 0.6173 | 0.5138 | 1.1026 |
| No log | 2.0 | 53 | 0.6960 | 0.4528 | 0.4262 | 0.5306 | 0.4727 | 1.0831 |
| No log | 2.98 | 79 | 0.6978 | 0.4481 | 0.4312 | 0.6071 | 0.5042 | 1.0794 |
| No log | 4.0 | 106 | 0.7232 | 0.4788 | 0.4622 | 0.7806 | 0.5806 | 1.1493 |
| No log | 4.98 | 132 | 0.7340 | 0.5189 | 0.4828 | 0.5714 | 0.5234 | 1.2095 |
| No log | 6.0 | 159 | 0.8623 | 0.5425 | 0.5049 | 0.5255 | 0.515 | 1.2493 |
| No log | 6.98 | 185 | 1.2325 | 0.5448 | 0.5116 | 0.3367 | 0.4062 | 1.2412 |
| No log | 8.0 | 212 | 1.4773 | 0.5165 | 0.4717 | 0.3827 | 0.4225 | 1.1925 |
| No log | 8.98 | 238 | 1.6199 | 0.5330 | 0.4941 | 0.4286 | 0.4590 | 1.2258 |
| No log | 10.0 | 265 | 1.8976 | 0.5259 | 0.4900 | 0.6276 | 0.5503 | 1.2261 |
| No log | 10.98 | 291 | 2.1687 | 0.4953 | 0.4622 | 0.5612 | 0.5069 | 1.1653 |
| No log | 12.0 | 318 | 2.3087 | 0.4882 | 0.4578 | 0.5816 | 0.5124 | 1.1535 |
| No log | 12.98 | 344 | 2.5168 | 0.4953 | 0.4667 | 0.6429 | 0.5408 | 1.1708 |
| No log | 14.0 | 371 | 2.5389 | 0.5142 | 0.4788 | 0.5765 | 0.5231 | 1.2012 |
| No log | 14.98 | 397 | 2.4224 | 0.5330 | 0.4957 | 0.5918 | 0.5395 | 1.2366 |
| No log | 16.0 | 424 | 2.6391 | 0.5212 | 0.4852 | 0.5867 | 0.5312 | 1.2148 |
| No log | 16.98 | 450 | 2.7235 | 0.5307 | 0.4932 | 0.5510 | 0.5205 | 1.2297 |
| No log | 18.0 | 477 | 2.7272 | 0.5425 | 0.5045 | 0.5765 | 0.5381 | 1.2527 |
| 0.2333 | 18.98 | 503 | 2.7222 | 0.5495 | 0.5117 | 0.5561 | 0.5330 | 1.2641 |
| 0.2333 | 19.62 | 520 | 2.7379 | 0.5519 | 0.5139 | 0.5663 | 0.5388 | 1.2690 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
CennetOguz/distilbert-base-uncased-finetuned-recipe
|
[
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2
| null |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Certified-Zoomer/DialoGPT-small-rick
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.26 +/- 7.07
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r naeisher/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it previously concluded at.
|
Chaewon/mnmt_decoder_en_gpt2
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-02T02:09:16Z
|
---
license: openrail
library_name: diffusers
pipeline_tag: text-to-image
---
|
Chaima/TunBerto
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
duplicated_from: Lucetepolis/FuzzyHazel
---
# FuzzyHazel, FuzzyAlmond
HazyAbyss - <a href="https://huggingface.co/KMAZ/TestSamples/">Download</a><br/>
OctaFuzz - <a href="https://huggingface.co/Lucetepolis/OctaFuzz">Download</a><br/>
MareAcernis - <a href="https://huggingface.co/Lucetepolis/MareAcernis">Download</a><br/>
RefSlaveV2 - <a href="https://huggingface.co/Dorshu/refslaveV2_v2">Download</a><br/>
dlfmaanjffhgkwl v2 - <a href="https://civitai.com/models/9815/dlfmaanjffhgkwl-mix">Download</a><br/>
Guardian Tales 三七-SAL-独轮车 | Chibi Style Lora 52 - <a href="https://civitai.com/models/14274/guardian-tales-sal-or-chibi-style-lora-52">Download</a><br/>
Komowata Haruka (こもわた遙華) Chibi Art Style LoRA - <a href="https://civitai.com/models/9922/komowata-haruka-chibi-art-style-lora">Download</a><br/>
Terada Tera (寺田てら) Art Style LoRA - <a href="https://civitai.com/models/15446/terada-tera-art-style-lora">Download</a><br/>
Yaro Artstyle LoRA - <a href="https://civitai.com/models/8112/yaro-artstyle-lora">Download</a><br/>
EasyNegative and pastelmix-lora seem to work well with the models.
EasyNegative - <a href="https://huggingface.co/datasets/gsdf/EasyNegative">Download</a><br/>
pastelmix-lora - <a href="https://huggingface.co/andite/pastel-mix">Download</a>
# Formula
```
MBW
HazyAbyss.safetensors [d7b0072ef7]
octafuzz.safetensors [364bdf849d]
0000.safetensors
base_alpha=1
Weight_values=1,1,0,0,0,0.5,1,1,0.5,0,0,0,1,0,0,0,0.5,1,1,0.5,0,0,0,1,1
MBW
0000.safetensors [360691971b]
mareacernis.safetensors [fbc82b317d]
0001.safetensors
base_alpha=0
Weight_values=0.5,0,0,0,0,0,0,0,0.5,0.5,0,0,0.25,0.5,0.5,0.5,0.25,0.25,0.25,0.25,0.5,0.5,0.5,0,0
MBW
0001.safetensors [ac67bd1235]
refslavev2.safetensors [cce9a2d200]
0002.safetensors
base_alpha=0
Weight_values=0,0.5,1,1,0.5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1
MBW
0002.safetensors [cc5331b8ae]
dlf.safetensors [d596b45d6b]
FuzzyHazel.safetensors
base_alpha=0
Weight_values=0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0
SuperMerger LoRA Merge
model_0 : FuzzyHazel.safetensors
model_Out : FuzzyAlmond.safetensors
LoRa : lora:guardiantales:0.25, lora:komowata:0.25, lora:terada:0.25, lora:yaro:0.25
```
# Samples
All of the images use the following negatives/settings. EXIF data is preserved.
```
Negative prompt: (worst quality, low quality:1.4), EasyNegative, bad anatomy, bad hands, error, missing fingers, extra digit, fewer digits
Steps: 28, Sampler: DPM++ 2M Karras, CFG scale: 7, Size: 768x512, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires upscale: 1.5, Hires steps: 14, Hires upscaler: Latent (nearest-exact)
```
# FuzzyHazel












# FuzzyAlmond












|
Chakita/Friends
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8
| 2023-04-02T02:34:05Z
|
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.80 +/- 0.79
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
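A hedged loading sketch to fill in the TODO above; the repo id and filename are placeholders (SB3 Hub uploads typically store the policy as a `.zip`).
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Placeholder repo id and filename -- substitute the actual values for this model
checkpoint = load_from_hub(
    repo_id="user/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)
```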
|
Champion/test_upload_vox2_wavlm_epoch8
|
[
"sidekit",
"audio"
] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Youssefk/my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Youssefk/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5098
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 624, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 3.3988 | 0 |
| 1.7637 | 1 |
| 1.5098 | 2 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.2
|
Chan/distilroberta-base-finetuned-wikitext2
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
Access to model LMFResearchSociety/Tiantianquan is restricted and you are not in the authorized list. Visit https://huggingface.co/LMFResearchSociety/Tiantianquan to ask for access.
|
Chertilasus/main
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
Access to model degaga/documentai-1 is restricted and you are not in the authorized list. Visit https://huggingface.co/degaga/documentai-1 to ask for access.
|
Chikita1/www_stash_stock
|
[
"license:bsd-3-clause-clear"
] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
Access to model degaga/documentai-2 is restricted and you are not in the authorized list. Visit https://huggingface.co/degaga/documentai-2 to ask for access.
|
Chinat/test-classifier
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-02T04:25:39Z
|
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### protoparailaranulaularacorpinho Dreambooth model trained by Fred99774 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Chinmay/mlindia
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-02T04:27:31Z
|
---
license: openrail
---
Kumamon is the official character of Kumamoto Prefecture in Japan. The trigger is "kumamon", and a weight of around 0.8 seems to work well.
```
Example)
prompt: kumamon, arms up, standing, <lora:kumamon:0.8>
Negative prompt: EasyNegative, badhandv4
```
<img width="480" src=https://huggingface.co/p-light/kumamon/resolve/main/kumamon.png>
|
ChoboAvenger/DialoGPT-small-DocBot
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: unknown
---
# GPT4 X Alpaca (fine-tuned natively) 13B model download for Alpaca.cpp, Llama.cpp, and Dalai
All credits go to chavinlo for creating the dataset and training/fine-tuning the model:
https://huggingface.co/chavinlo/gpt4-x-alpaca
|
ChoboAvenger/DialoGPT-small-joshua
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-02T04:38:59Z
|
---
license: cc
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
"Three Kingdoms" is a historical simulation game series developed by Japan Koei Co., Ltd., released in 1985. It is the first part of the "Three Kingdoms" series of game works. The essence of the game series is the detailed textual research on the history of the Three Kingdoms, and the vivid portraits of the characters, which perfectly integrate the huge political and military structure of the Three Kingdoms era into the SLG game mode.
I processed the vertical drawing of the characters and made this LORA
TAG:san13style,armor,hanfu
### Results
youtube:https://youtu.be/RXQJMT95t1E






|
ChrisP/xlm-roberta-base-finetuned-marc-en
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: mit
datasets:
- magicgh/alpaca-cleaned-random-25
---
# LLaMa-7B LoRA Alpaca-25
This repo contains a low-rank adaptation (LoRA) finetuned model of LLaMa-7B on 25% of the Alpaca cleaned dataset.
This version of the weights was trained with the following hyperparameters:
* Epochs: 3
* Cutoff length: 512
* Learning rate: 3e-4
* Lora r: 8
* Lora alpha: 16
* Lora dropout: 0.05
* Lora target modules: q_proj, k_proj
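A hedged sketch for attaching a LoRA adapter like this one to the base model with `peft`; the base and adapter repo ids are assumptions, and access to LLaMA weights is required.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "path/to/llama-7b-hf"              # assumed local or Hub copy of the base weights
adapter_id = "user/llama-7b-lora-alpaca-25"  # placeholder adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attaches the LoRA weights

prompt = "### Instruction:\nName three uses of low-rank adaptation.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```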
|
ChrisVCB/DialoGPT-medium-cmjs
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7
| 2023-04-02T04:48:27Z
|
---
license: mit
datasets:
- magicgh/alpaca-cleaned-random-50
---
# LLaMa-7B LoRA Alpaca-50
This repo contains a low-rank adaptation (LoRA) finetuned model of LLaMa-7B on 50% of the Alpaca cleaned dataset.
This version of the weights was trained with the following hyperparameters:
* Epochs: 3
* Cutoff length: 512
* Learning rate: 3e-4
* Lora r: 8
* Lora alpha: 16
* Lora dropout: 0.05
* Lora target modules: q_proj, k_proj
|
ChristianOrr/madnet_keras
|
[
"tensorboard",
"dataset:flyingthings-3d",
"dataset:kitti",
"arxiv:1810.05424",
"vision",
"deep-stereo",
"depth-estimation",
"Tensorflow2",
"Keras",
"license:apache-2.0"
] |
depth-estimation
|
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-02T04:53:51Z
|
---
language:
- en
license: apache-2.0
datasets:
- glue
metrics:
- accuracy
model-index:
- name: t5-base-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5634
---
# T5-base-finetuned-wnli
<!-- Provide a quick summary of what the model is/does. -->
This model is T5 fine-tuned on the GLUE WNLI dataset. It achieves the following results on the validation set:
- Accuracy: 0.5634
## Model Details
T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format.
## Training procedure
### Tokenization
Since T5 is a text-to-text model, the labels of the dataset are converted as follows:
For each example, a sentence is formed as **"wnli sentence1: " + wnli_sent1 + "sentence 2: " + wnli_sent2** and fed to the tokenizer to get the **input_ids** and **attention_mask**.
For each label, the target text is chosen as **"entailment"** if the label is 1, else **"not_entailment"**, and it is tokenized to get its own **input_ids** and **attention_mask**.
During training, the label input_ids equal to the **pad** token are replaced with -100 so that no loss is computed for them. These input ids are then given as the labels, and the labels' attention_mask is given as the decoder attention mask; a minimal sketch of this preprocessing follows below.
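A minimal preprocessing sketch of the scheme described above. The dataset column names follow the standard GLUE WNLI layout, and the pad-token replacement is shown inline for clarity; treat the details as illustrative.
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-base")

def preprocess(example):
    # Build the text-to-text input from the two WNLI sentences
    text = "wnli sentence1: " + example["sentence1"] + "sentence 2: " + example["sentence2"]
    model_inputs = tokenizer(text, truncation=True)

    # Map the integer label to its text form and tokenize it
    target = "entailment" if example["label"] == 1 else "not_entailment"
    labels = tokenizer(target, truncation=True)

    # Replace pad tokens in the label ids with -100 so the loss ignores them
    model_inputs["labels"] = [
        tok if tok != tokenizer.pad_token_id else -100 for tok in labels["input_ids"]
    ]
    model_inputs["decoder_attention_mask"] = labels["attention_mask"]
    return model_inputs
```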
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-4
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: epsilon=1e-08
- num_epochs: 3.0
### Training results
|Epoch | Training Loss | Validation Accuracy |
|:----:|:-------------:|:-------------------:|
| 1 | 0.1502 | 0.4930 |
| 2 | 0.1331 | 0.5634 |
| 3 | 0.1355 | 0.4225 |
|
ChukSamuels/DialoGPT-small-Dr.FauciBot
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 13
| 2023-04-02T04:58:54Z
|
---
license: gpl-3.0
datasets:
- ag_news
language:
- en
---
|
Chun/w-zh2en-mto
|
[
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MBartForConditionalGeneration"
],
"model_type": "mbart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7
| 2023-04-02T05:31:01Z
|
# *This is a 4-bit quantized ggml file for use with llama.cpp on CPU*
# GPT4 x Alpaca
(https://huggingface.co/chavinlo/gpt4-x-alpaca)
As a base model we used: https://huggingface.co/chavinlo/alpaca-13b
Finetuned on GPT4's responses for 3 epochs.
NO LORA
|
Chungu424/repo
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
datasets:
- fka/awesome-chatgpt-prompts
language:
- en
metrics:
- accuracy
pipeline_tag: text-to-image
tags:
- art
---
|
Chungu424/repodata
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-02T05:38:12Z
|
# *This is a 4-bit quantized ggml file for use with llama.cpp on CPU*
# ToolPaca
(https://huggingface.co/chavinlo/toolpaca)
Based on https://huggingface.co/chavinlo/gpt4-x-alpaca
"toolformer"
Sample:
```json
{
"instruction": "toolformer: enabled\ntoolformer access: python\nA Python shell. Use this to execute python commands. Input should be a valid python command or script. If you expect output it should be printed out. Useful for all code, as well as math calculations.\npython(codetoexecute)\nFind the greatest common divisor of the given pair of integers.",
"input": "48, 36",
"response": "The greatest common divisor is python('import math; math.gcd(48, 36)')."
}
```
dataset: https://cdn.discordapp.com/attachments/1088641238485442661/1090460649596919878/toolformer-similarity-0.9-dataset.json
NO LORA
|
Ci/Pai
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-02T05:42:39Z
|
---
license: apache-2.0
datasets:
- EleutherAI/the_pile
language:
- en
---
This is a tokenizer for the Parva models, based on the GPT-NeoX tokenizer.
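A minimal loading sketch, assuming the tokenizer is published as a `transformers`-compatible repo (the repo id below is a placeholder):
```python
from transformers import AutoTokenizer

# Placeholder repo id; substitute wherever the Parva tokenizer is published.
tokenizer = AutoTokenizer.from_pretrained("org/parva-tokenizer")
print(tokenizer("Hello world")["input_ids"])
```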
|
Cilan/dalle-knockoff
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: flax-community-thainews-20230402
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flax-community-thainews-20230402
This model is a fine-tuned version of [flax-community/gpt2-base-thai](https://huggingface.co/flax-community/gpt2-base-thai) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.5598
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 4 | 11.0275 |
| No log | 2.0 | 8 | 10.6600 |
| No log | 3.0 | 12 | 10.5598 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
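A minimal generation sketch, assuming the fine-tuned checkpoint is published on the Hub (the model id below is a placeholder):
```python
from transformers import pipeline

# Placeholder model id for this fine-tuned checkpoint.
generator = pipeline("text-generation", model="user/flax-community-thainews-20230402")
print(generator("ข่าววันนี้", max_length=50)[0]["generated_text"])
```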
|
Cinnamon/electra-small-japanese-generator
|
[
"pytorch",
"electra",
"fill-mask",
"ja",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"ElectraForMaskedLM"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 19
| null |
---
language:
- en
license: apache-2.0
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: t5-base-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5869
---
# T5-base-finetuned-cola
<!-- Provide a quick summary of what the model is/does. -->
This model is T5 fine-tuned on the GLUE CoLA dataset. It achieves the following results on the validation set:
- Matthews Correlation Coefficient: 0.5869
## Model Details
T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format.
## Training procedure
### Tokenization
Since T5 is a text-to-text model, the labels of the dataset are converted as follows:
For each example, the input is formed as **"cola sentence: " + cola_sent** and fed to the tokenizer to get the **input_ids** and **attention_mask**.
Each label is rendered as **"acceptable"** if it is 1 and **"unacceptable"** otherwise, then tokenized to get its own **input_ids** and **attention_mask**.
During training, any **pad** token ids among these label **input_ids** are replaced with -100 so that no loss is computed for them. The resulting ids are passed as the model's labels, and the labels' **attention_mask** is passed as the decoder attention mask.
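A sketch of this preprocessing, assuming the standard `t5-base` tokenizer; the padding and truncation lengths are illustrative, since the card does not state them:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-base")

def preprocess(example):
    # "cola sentence: " prefix as described above.
    inputs = tokenizer("cola sentence: " + example["sentence"],
                       padding="max_length", truncation=True, max_length=64)
    target = "acceptable" if example["label"] == 1 else "unacceptable"
    labels = tokenizer(target, padding="max_length", truncation=True, max_length=4)
    # Pad token ids in the labels become -100 so no loss is computed for them.
    inputs["labels"] = [t if t != tokenizer.pad_token_id else -100
                        for t in labels["input_ids"]]
    # The labels' attention mask is passed as the decoder attention mask.
    inputs["decoder_attention_mask"] = labels["attention_mask"]
    return inputs
```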
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-4
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: epsilon=1e-08
- num_epochs: 3.0
### Training results
|Epoch | Training Loss | Validation Matthews Correlation Coefficient |
|:----:|:-------------:|:-------------------:|
| 1 | 0.2471 | 0.4577 |
| 2 | 0.1633 | 0.5869 |
| 3 | 0.0933 | 0.5855 |
|
Ciruzzo/DialoGPT-medium-harrypotter
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| 2023-04-02T05:49:22Z
|
---
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- language-and-voice-lab/samromur_asr
metrics:
- wer
model-index:
- name: Whisper tiny Icelandic
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: language-and-voice-lab/samromur_asr
type: language-and-voice-lab/samromur_asr
config: samromur_asr
split: validation
args: samromur_asr
metrics:
- name: Wer
type: wer
value: 53.125
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper tiny Icelandic
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the language-and-voice-lab/samromur_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8061
- Wer: 53.125
## Model description
More information needed
## Intended uses & limitations
More information needed
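A minimal transcription sketch, assuming the checkpoint is published on the Hub (the model id and audio file below are placeholders):
```python
from transformers import pipeline

# Placeholder model id for this fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="user/whisper-tiny-icelandic")
print(asr("clip.wav")["text"])
```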
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.4531 | 0.25 | 250 | 1.5444 | 88.0 |
| 0.8932 | 0.5 | 500 | 1.0078 | 62.0 |
| 0.7225 | 0.75 | 750 | 0.8466 | 53.625 |
| 0.6828 | 1.0 | 1000 | 0.8061 | 53.125 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.10.0
- Tokenizers 0.13.2
|
Ciruzzo/DialoGPT-small-harrypotter
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9
| null |
---
model-index:
- name: ninja/nemo
results:
- task:
type: automatic-speech-recognition
dataset:
name: Librispeech (clean)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- type: wer
value: 8.1
name: WER
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Clarianliz30/Caitlyn
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0
| null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.69 +/- 21.83
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders, not this model's actual files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo id and filename; substitute this checkpoint's actual ones.
checkpoint = load_from_hub(repo_id="user/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
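To reproduce a mean-reward estimate like the one reported above, a hedged evaluation sketch reusing `model` from the snippet above:
```python
import gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")  # requires gym[box2d]
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```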
|
ClaudeCOULOMBE/RickBot
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9
| 2023-04-02T06:04:34Z
|
---
model-index:
- name: ninja/ninja
results:
- task:
type: automatic-speech-recognition
dataset:
name: Librispeech (clean)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- type: wer
value: 8.1
name: WER
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CleveGreen/FieldClassifier_v2
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 46
| null |
---
language:
- es
- en
datasets:
- yahma/alpaca-cleaned
---
# Arepaca V1
<div style="text-align:center;width:350px;height:350px;">
<img src="https://huggingface.co/hackathon-somos-nlp-2023/dolly-ArepacaV1/resolve/main/arepaca.jpeg" alt="Arepaca logo">
</div>
## Citation
```
@misc{hackathon-somos-nlp-2023,
  author = { {Edison Bejarano, Leonardo Bolaños, Alberto Ceballos, Santiago Pineda, Nicolay Potes} },
  title = { Arepaca },
  year = 2023,
  url = { https://huggingface.co/hackathon-somos-nlp-2023/dolly-ArepacaV1 },
  publisher = { Hugging Face }
}
```
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CleveGreen/FieldClassifier_v2_gpt
|
[
"pytorch",
"gpt2",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"GPT2ForSequenceClassification"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 26
| 2023-04-02T06:10:07Z
|
---
datasets:
- argilla/alpaca_data_cleaned_validations
language:
- en
- es
epochs:
- 808
Training loss:
- 0.96
CO2 Emission Related to Experiments:
- Experiments were conducted using Google Cloud Platform in region europe-west1, which has a carbon efficiency of 0.27 kgCO$_2$eq/kWh. A cumulative 7 hours of computation was performed on hardware of type A100 PCIe 40/80GB (TDP of 250W).
- Total emissions are estimated to be 0.47 kgCO$_2$eq, of which 100 percent was directly offset by the cloud provider.
---
|