id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
quoctruong242/finetunedmodel | quoctruong242 | 2023-11-29T08:09:47Z | 34 | 0 | null | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] | 2023-11-29T08:09:47Z | 2023-11-29T05:07:32.000Z | null | null |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: photo of a <Truong> person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - quoctruong242/finetunedmodel
These are Custom Diffusion adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on the prompt `photo of a <Truong> person` using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). Some example images are shown below.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
| null | diffusers | text-to-image | null | null | null | null | null | null | null | null | null | quoctruong242/finetunedmodel | [
-0.4530602991580963,
-0.6736394762992859,
0.3268406093120575,
0.22725334763526917,
-0.24498575925827026,
0.08327654749155045,
0.4848344326019287,
0.14656639099121094,
0.5970904231071472,
0.488606721162796,
-0.7113788723945618,
-0.3046334683895111,
-0.6055021286010742,
-0.2574799358844757,
... |
e-n-v-y/envy-fantasy-architectural-flourishes-xl-01 | e-n-v-y | 2023-11-29T06:42:11Z | 33 | 1 | null | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"city",
"architecture",
"buildings",
"fantasy",
"building",
"fancy",
"flourishes",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"has_space",
"region:us"
] | 2023-11-29T06:42:11Z | 2023-11-29T06:42:09.000Z | null | null | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Sell&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- city
- architecture
- buildings
- fantasy
- building
- fancy
- flourishes
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt:
widget:
- text: '8k, highly detailed, toy shop in a empty fantasy vertical city at the end of time, masterpiece'
output:
url: >-
3969267.jpeg
- text: '8k, highly detailed, makoto shinkai promotional movie still, whimsical fantasy sky city at the end of the universe'
output:
url: >-
3969254.jpeg
- text: '8k, highly detailed, amazing,indescribable fantasy megacity outside of time'
output:
url: >-
3969259.jpeg
- text: '8k, highly detailed, a wonderous fantasy city edge of the multiverse, masterpiece, by Feng Zhu'
output:
url: >-
3969260.jpeg
- text: '8k, highly detailed, a amazing,whimsical fantasy vertical arcology outside of the multiverse, masterpiece'
output:
url: >-
3969261.jpeg
- text: '8k, highly detailed, Hidden courtyard with a fountain in a indescribable fantasy mountainside megacity outside of time, masterpiece'
output:
url: >-
3969262.jpeg
- text: '8k, highly detailed, illegal street race in a great,techno-optimist fantasy sprawling megastructure at the end of the multiverse, masterpiece'
output:
url: >-
3969263.jpeg
- text: '8k, highly detailed, living,amazing fantasy cloud city at the beginning of time'
output:
url: >-
3969264.jpeg
- text: '8k, highly detailed, submarine dock in a Hallowed fantasy topia beyond the beginning of reality, masterpiece'
output:
url: >-
3969265.jpeg
- text: '8k, highly detailed, cargo depot in a impossible,cheerful fantasy hillside metropolis edge of time, masterpiece, by Aleksei Liakh'
output:
url: >-
3969266.jpeg
---
# Envy Fantasy Architectural Flourishes XL 01
<Gallery />
## Model description
<p>The fantasy city looked upon the scifi city with all of its fancy <a rel="ugc" href="https://civitai.com/models/210396">greebles </a>and cried, "Architect! I want to be fancy too! Give me greebles so that I may be fancy like my scifi brethren!"</p><p>"No," replied the Architect, "for greebles are thematically inappropriate for fantasy, and would make you look stupid."</p><p>"So then you intend to leave me greeble-less? Have I been betrayed by my own creator?" asked the fantasy city.</p><p>"No, my dear fantasy city," said the Architect. "Rejoice, because for you, I have flourishes, which will make you look just as fantasy as a greebled up scifi city, but in a thematically appropriate way!"</p><p>And the fantasy city rejoiced, and was fancy.</p><p>Anyway, this model adds details to fantasy cities. Don't tell the fantasy city, but it also looks pretty cool in sci-fi settings!</p>
## Download model
Weights for this model are available in Safetensors format.
[Download](/e-n-v-y/envy-fantasy-architectural-flourishes-xl-01/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('e-n-v-y/envy-fantasy-architectural-flourishes-xl-01', weight_name='EnvyFlourishesXL01.safetensors')
image = pipeline('8k, highly detailed, cargo depot in a impossible,cheerful fantasy hillside metropolis edge of time, masterpiece, by Aleksei Liakh').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
| null | diffusers | text-to-image | null | null | null | null | null | null | null | null | null | e-n-v-y/envy-fantasy-architectural-flourishes-xl-01 | [
-0.5496753454208374,
-0.6812938451766968,
0.30679571628570557,
0.4475618004798889,
-0.04780839756131172,
-0.05093790218234062,
0.2488950937986374,
-0.5134046673774719,
0.8733710050582886,
0.35618066787719727,
-0.4694853127002716,
-0.5314562916755676,
-0.3736359179019928,
-0.106301210820674... |
tkcho/cp-commerce-clf-kr-sku-brand-384d0df541580abdb6962aafc27d7c56 | tkcho | 2023-11-29T22:53:43Z | 33 | 0 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"endpoints_compatible",
"region:us"
] | 2023-11-29T22:53:43Z | 2023-11-29T11:40:02.000Z | null | null | Entry not found | null | transformers | text-classification | null | null | null | null | null | null | null | null | null | tkcho/cp-commerce-clf-kr-sku-brand-384d0df541580abdb6962aafc27d7c56 | [
-0.3227650225162506,
-0.22568444907665253,
0.8622258901596069,
0.43461504578590393,
-0.5282988548278809,
0.7012965679168701,
0.7915717959403992,
0.0761863961815834,
0.7746025919914246,
0.2563222050666809,
-0.7852813005447388,
-0.22573848068714142,
-0.910447895526886,
0.5715667009353638,
... |
mlinmg/SG-Raccoon-Yi-55B-200k | mlinmg | 2023-11-29T10:53:06Z | 32 | 4 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | 2023-11-29T10:53:06Z | 2023-11-28T00:05:36.000Z | null | null | ---
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
language:
- en
pipeline_tag: conversational
---
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/644ba0c76ebb3ebf7264dbe9/PWn9I-0XH7kSP_YXcyxIg.png" width="400"/>
</p>
---
# SG Raccoon 55B
The first 55B auto-regressive causal LM, created by merging two finetuned [Yi 34b](https://huggingface.co/01-ai/Yi-34B) models with *200K context* into one.
# Prompting Format
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
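As a minimal sketch (the helper name and exact whitespace are my own assumptions, not part of the model), the format above can be assembled like this:

```python
def build_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the SYSTEM/USER/ASSISTANT format above.

    The exact spacing is an assumption; adjust to match your frontend.
    """
    return f"SYSTEM: {system}\nUSER: {user}\nASSISTANT:"


prompt = build_prompt("You are a helpful assistant.", "Summarize Moby-Dick in one line.")
```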
# Merge process
The models used in the merge are [Tess-M-v1.3](https://huggingface.co/migtissera/Tess-M-v1.3/) and [Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B).
The layer ranges used are as follows:
```yaml
- model: migtissera/Tess-M-v1.3
layer_range: [0, 14]
- model: NousResearch/Nous-Capybara-34B
layer_range: [7, 21]
- model: migtissera/Tess-M-v1.3
layer_range: [15, 29]
- model: NousResearch/Nous-Capybara-34B
layer_range: [22, 36]
- model: migtissera/Tess-M-v1.3
layer_range: [30, 44]
- model: NousResearch/Nous-Capybara-34B
layer_range: [37, 51]
- model: migtissera/Tess-M-v1.3
layer_range: [45, 59]
```
# Tips
As this is a Yi-based model, try disabling the BOS token and/or running a lower temperature with MinP (and no other samplers) if the output doesn't seem right. Yi tends to run "hot" by default.
Sometimes the model "spells out" the stop token as `</s>` like Capybara, so you may need to add `</s>` as an additional stopping condition.
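If your frontend cannot register extra stop strings, a minimal post-processing sketch (the helper name is my own, not part of the model) is to cut the output at the literal token:

```python
def truncate_at_stop(text: str, stop: str = "</s>") -> str:
    """Cut generated text at the first spelled-out stop token, if present."""
    idx = text.find(stop)
    return text if idx == -1 else text[:idx]
```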
# Benchmarks
Coming soon.
# Acknowledgements
- Special thanks to [MSS](https://milanosamplesale.com/) for sponsoring this project
- [@chargoddard](https://huggingface.co/chargoddard) for developing the framework used to merge the model - [mergekit](https://github.com/cg123/mergekit).
- Great thanks to [@Undi95](https://huggingface.co/Undi95) for helping figure out model merge options
- Also credits to the [01-ai](https://huggingface.co/01-ai) team for their amazing models
- This merged model is inspired by [Goliath 120B](https://huggingface.co/alpindale/goliath-120b) | null | transformers | conversational | null | null | null | null | null | null | null | null | null | mlinmg/SG-Raccoon-Yi-55B-200k | [
-0.6374897956848145,
-0.5243647694587708,
0.007590248715132475,
0.22599348425865173,
-0.03597906231880188,
-0.3869396150112152,
0.0029712256509810686,
-0.5208534598350525,
0.21562792360782623,
0.2638888657093048,
-0.7879306674003601,
-0.3560778498649597,
-0.45529061555862427,
-0.1434889286... |
Ja3ck/llama-2-13b-DPO-Y24-v2 | Ja3ck | 2023-11-29T06:49:10Z | 32 | 0 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T06:49:10Z | 2023-11-29T06:36:55.000Z | null | null | ---
license: apache-2.0
language:
- ko
---
## Prompt Template
```
### 질문: {instruction}
### 답변: {output}
``` | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | Ja3ck/llama-2-13b-DPO-Y24-v2 | [
-0.04997878149151802,
-0.1832696497440338,
0.4979184865951538,
0.5446438789367676,
-0.6531704068183899,
-0.024528229609131813,
0.11793479323387146,
0.9445185661315918,
0.35888832807540894,
0.6858898997306824,
-0.7869235277175903,
-0.9631636738777161,
-0.6446100473403931,
0.0574704334139823... |
worde-byte/finetunemistral | worde-byte | 2023-11-29T16:22:03Z | 31 | 0 | null | [
"transformers",
"mistral",
"text-generation",
"generated_from_trainer",
"base_model:caisarl76/Mistral-7B-OpenOrca-Guanaco-accu16",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T16:22:03Z | 2023-11-15T18:18:53.000Z | null | null | ---
license: llama2
base_model: caisarl76/Mistral-7B-OpenOrca-Guanaco-accu16
tags:
- generated_from_trainer
model-index:
- name: finetunemistral
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetunemistral
This model is a fine-tuned version of [caisarl76/Mistral-7B-OpenOrca-Guanaco-accu16](https://huggingface.co/caisarl76/Mistral-7B-OpenOrca-Guanaco-accu16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 8000
### Training results
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
| null | transformers | text-generation | null | null | null | null | null | null | null | null | null | worde-byte/finetunemistral | [
-0.5109860301017761,
-0.6129318475723267,
0.03832206875085831,
0.1706814169883728,
-0.4440242052078247,
-0.46512094140052795,
-0.21196986734867096,
-0.33869680762290955,
0.1472984105348587,
0.4069335460662842,
-0.6482294797897339,
-0.6344974040985107,
-0.695138692855835,
-0.090949244797229... |
osunlp/TableLlama | osunlp | 2023-11-29T05:50:30Z | 31 | 6 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:osunlp/TableInstruct",
"arxiv:2311.09206",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T05:50:30Z | 2023-11-20T01:50:22.000Z | null | null | ---
license: mit
language:
- en
datasets:
- osunlp/TableInstruct
---
# TableLlama: Towards Open Large Generalist Models for Tables
Project Page: [https://osu-nlp-group.github.io/TableLlama/](https://osu-nlp-group.github.io/TableLlama/)
Paper: [https://arxiv.org/abs/2311.09206](https://arxiv.org/abs/2311.09206)
Dataset: [https://huggingface.co/datasets/osunlp/TableInstruct/](https://huggingface.co/datasets/osunlp/TableInstruct/)
Code: [https://osu-nlp-group.github.io/TableLlama/](https://osu-nlp-group.github.io/TableLlama/)
## Introduction
We introduce TableLlama, an open-source large generalist model specifically tailored for various table-based tasks. The TableLlama model is trained on the 🤗 [TableInstruct Dataset](https://huggingface.co/datasets/osunlp/TableInstruct), a meticulously curated instruction-tuning dataset for tables. TableLlama is tuned on 2.6 million table-based task examples and can handle up to 8K context!
## Model
[TableLlama-7B](https://huggingface.co/osunlp/TableLlama/)
## Data
The models are trained on the 🤗 [TableInstruct Dataset](https://huggingface.co/datasets/osunlp/TableInstruct), which includes a comprehensive table-based instruction tuning dataset that covers a variety of real-world tables and realistic tasks. We include 14 datasets of 11 tasks in total. Check out the dataset card for more details.
## Training Procedure
The models are fine-tuned on the TableInstruct dataset using the fully fine-tuned version of LongLoRA (7B) as the base model, which replaces the vanilla attention mechanism of the original Llama-2 (7B) with shift short attention. Training takes 9 days on a cluster of 48 A100 (80GB) GPUs. Check out our paper for more details.
## Evaluation
The models are evaluated on 8 in-domain datasets of 8 tasks and 6 out-of-domain datasets of 4 tasks.
## Usage
You can use the models through Hugging Face's Transformers library.
Check our Github repo for more advanced use: [https://osu-nlp-group.github.io/TableLlama/](https://osu-nlp-group.github.io/TableLlama/)
## Prompt Format
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that
appropriately completes the request.
### Instruction:
{instruction}
### Input:
{input}
### Question:
{question}
### Response:
```
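A small helper (my own sketch, not from the TableLlama repository; the exact blank-line placement is an assumption) that fills this template:

```python
TABLELLAMA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input that provides "
    "further context. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Question:\n{question}\n\n"
    "### Response:\n"
)


def build_tablellama_prompt(instruction: str, table_input: str, question: str) -> str:
    """Fill the prompt template above with a task description, a table, and a question."""
    return TABLELLAMA_TEMPLATE.format(
        instruction=instruction, input=table_input, question=question
    )
```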
## Limitations
We've tried our best to build table generalist models. However, we acknowledge that the models' performance may vary based on the complexity and specifics of the table tasks and datasets. Still, not all table-based tasks can be covered comprehensively.
## Citation
If you use the models, data, or code from this project, please cite the original paper:
```
@misc{zhang2023tablellama,
title={TableLlama: Towards Open Large Generalist Models for Tables},
author={Tianshu Zhang and Xiang Yue and Yifei Li and Huan Sun},
year={2023},
eprint={2311.09206},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | osunlp/TableLlama | [
-0.2587430775165558,
-0.8784648180007935,
0.5571652054786682,
0.22404880821704865,
-0.16252848505973816,
-0.5799095630645752,
-0.40065136551856995,
-0.6412376165390015,
0.11233386397361755,
0.6652319431304932,
-0.41280597448349,
-0.9215830564498901,
-0.3879101872444153,
-0.2101177126169204... |
Norod78/claymationx-sdxl-lora | Norod78 | 2023-11-29T05:22:50Z | 30 | 1 | null | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"claymation",
"christmas",
"style",
"sdxl style lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"has_space",
"region:us"
] | 2023-11-29T05:22:50Z | 2023-11-29T05:22:48.000Z | null | null | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Rent&allowDerivatives=True&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- claymation
- christmas
- style
- sdxl style lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: ClaymationX
widget:
- text: 'Harry potter and the holiday spirit ClaymationX '
output:
url: >-
3974791.jpeg
- text: 'Rick and Morty ClaymationX '
output:
url: >-
3974823.jpeg
- text: 'Cyberman santa ClaymationX '
output:
url: >-
3974831.jpeg
- text: 'Snoop Dogg ClaymationX '
output:
url: >-
3974835.jpeg
- text: 'Godzilla vs. Santa ClaymationX '
output:
url: >-
3974837.jpeg
- text: 'The girl with a pearl earring by vermeer ClaymationX '
output:
url: >-
3974905.jpeg
- text: 'Zombie Santa riding with a demon reindeer ClaymationX '
output:
url: >-
3974868.jpeg
- text: 'An evil Rudolph spider ClaymationX '
output:
url: >-
3974883.jpeg
- text: 'A socially awkward (potato) ClaymationX '
output:
url: >-
3974929.jpeg
- text: 'Marge Simpson kissing Santa ClaymationX '
output:
url: >-
3974948.jpeg
---
# ClaymationX - SDXL LoRA
<Gallery />
([CivitAI](https://civitai.com/models/214257))
## Model description
<p>SDXL LoRA trained upon images from various old "Christmas claymation specials"</p><p>Use the trigger word ClaymationX in your prompts</p>
## Trigger words
You should use `ClaymationX` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Norod78/claymationx-sdxl-lora/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Norod78/claymationx-sdxl-lora', weight_name='SDXL-ClaymationX-Lora-000002.safetensors')
image = pipeline('Marge Simpson kissing Santa ClaymationX ').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
| null | diffusers | text-to-image | null | null | null | null | null | null | null | null | null | Norod78/claymationx-sdxl-lora | [
-0.26799276471138,
-0.3307422995567322,
0.15829335153102875,
0.5564613938331604,
-0.4568796455860138,
-0.25237876176834106,
0.15246783196926117,
-0.19914937019348145,
0.3651140332221985,
0.6035707592964172,
-0.4801284670829773,
-0.3645104169845581,
-0.8156205415725708,
-0.30897873640060425... |
David394/adapter_cache | David394 | 2023-11-29T22:32:17Z | 29 | 0 | null | [
"region:us"
] | 2023-11-29T22:32:17Z | 2023-11-02T22:14:18.000Z | null | null | Entry not found | null | null | null | null | null | null | null | null | null | null | null | null | David394/adapter_cache | [
-0.3227650225162506,
-0.22568444907665253,
0.8622258901596069,
0.43461504578590393,
-0.5282988548278809,
0.7012965679168701,
0.7915717959403992,
0.0761863961815834,
0.7746025919914246,
0.2563222050666809,
-0.7852813005447388,
-0.22573848068714142,
-0.910447895526886,
0.5715667009353638,
... |
robookwus/climate-change-belief | robookwus | 2023-11-30T00:14:53Z | 28 | 2 | null | [
"transformers",
"pytorch",
"albert",
"text-classification",
"license:mit",
"endpoints_compatible",
"region:us"
] | 2023-11-30T00:14:53Z | 2023-07-04T08:33:40.000Z | null | null | ---
license: mit
widget:
- text: "human-made climate change is a hoax spread by liberal media!"
---
Classifying whether a person believes in human-made climate change or not. Neutral or factual news are also options.
Albert model trained with data from: https://www.kaggle.com/datasets/edqian/twitter-climate-change-sentiment-dataset
It works pretty poorly right now, but I will re-train it very soon with more data and a larger language model.
--- | null | transformers | text-classification | null | null | null | null | null | null | null | null | null | robookwus/climate-change-belief | [
-0.4749729633331299,
-0.4023696482181549,
0.3617079555988312,
0.13154329359531403,
-0.14740702509880066,
0.42877882719039917,
-0.20287157595157623,
-0.29408594965934753,
0.42185133695602417,
0.34333908557891846,
-0.6353164911270142,
-0.6488810777664185,
-0.48130857944488525,
0.061645962297... |
jebcarter/psyonic-cetacean-20B | jebcarter | 2023-11-29T14:20:38Z | 28 | 3 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"storywriting",
"text adventure",
"not-for-all-audiences",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | 2023-11-29T14:20:38Z | 2023-11-28T09:34:48.000Z | null | null | ---
license: other
license_name: microsoft-research-license
tags:
- storywriting
- text adventure
- not-for-all-audiences
---

---
Presenting the FP16 files for Psyonic-Cetacean-20B! This is an experimental Llama2-based stack merge based on the models and recipe below:
- [KoboldAI/PsyFighter-2-13b](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2-GGUF)
- [microsoft/Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b)
```yaml
slices:
- sources:
- model: Orca2flat
layer_range: [0, 16]
- sources:
- model: LLaMA2-13B-Psyfighter2 (FP16 not yet available)
layer_range: [8, 24]
- sources:
- model: Orca2flat
layer_range: [17, 32]
- sources:
- model: LLaMA2-13B-Psyfighter2 (FP16 not yet available)
layer_range: [25, 40]
merge_method: passthrough
dtype: float16
```
Note: while we did run an inverted merge, the output was not satisfactory and will not be released.
We first flattened the additional ChatML vocabulary tokens out of Orca-2-13B, then performed a stack merge with Psyfighter-2-13B. The results surprised us with their vividness, freshness of prose, obedience to instruction prompting, and formatting cohesion.
This model is focused on storywriting and text adventure, with a side order of Assistant and Chat functionality. Like its ancestor Psyfighter-2 this model will function better if you let it improvise and riff on your concepts rather than feeding it an excess of detail.
Additionally, either the removal of the ChatML vocab or the stack merging process itself has resulted in not only an uncensored model but an actively anti-censored model, so please be aware that this model can and will kill you during adventures or output NSFW material if prompted accordingly.
During testing, the model exhibited an especially strong affinity for science fiction and space opera writing, while handling fantasy elements quite well and horror elements slightly less so. Refer to the Psyfighter-2 model card for best prompting practices.
Despite that, we have tested the model out to 16,000 context via RoPE scaling, and the model does not drive towards NSFW on its own. It will follow your tone and style very well.
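For illustration, reaching ~16K tokens from Llama-2's native 4096-token window corresponds to a linear RoPE scaling factor of 4; in `transformers`-style configs this is expressed as a fragment like the following (the exact factor is my assumption, not a setting shipped with this model):

```python
# Hypothetical config fragment: linear RoPE scaling from Llama-2's 4096 base window.
base_context = 4096
rope_scaling = {"type": "linear", "factor": 4.0}
effective_context = int(base_context * rope_scaling["factor"])  # 16384 >= 16000
```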
Please enjoy, and if you encounter anything exciting or weird, please reach out to me at jebcarter@pm.me.
Special thanks as always to the KoboldAI crew who provided the mergebox, testing, and feedback on this model, and to gelukuMLG for the model mascot! | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | jebcarter/psyonic-cetacean-20B | [
-0.3952639400959015,
-0.46627843379974365,
0.3339875340461731,
0.30336514115333557,
-0.5123506784439087,
0.15554013848304749,
0.03567962348461151,
-0.8627769947052002,
0.33500415086746216,
0.7304853796958923,
-0.6355882883071899,
-0.3413965702056885,
-0.6416654586791992,
-0.053896773606538... |
Norod78/SDXL-JojosoStyle-Lora-v2 | Norod78 | 2023-11-29T09:23:49Z | 28 | 0 | null | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"style",
"jojo's bizarre adventure",
"jojo",
"stone ocean",
"sdxl style lora",
"jojo's bizarre adventure stone ocean",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"has_space",
"regio... | 2023-11-29T09:23:49Z | 2023-11-29T09:23:45.000Z | null | null | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Rent&allowDerivatives=True&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- style
- jojo's bizarre adventure
- jojo
- stone ocean
- sdxl style lora
- jojo's bizarre adventure stone ocean
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Jojosostyle
widget:
- text: 'Spiderman kissing a mutant alien JojosoStyle '
output:
url: >-
4002501.jpeg
- text: 'American gothic by Grant Wood JojosoStyle '
output:
url: >-
4002504.jpeg
- text: 'Insane zombie santa JojosoStyle '
output:
url: >-
4002511.jpeg
- text: 'The cookie monster JojosoStyle '
output:
url: >-
4002518.jpeg
- text: 'The Eiffel tower JojosoStyle , Very detailed, clean, high quality, sharp image'
output:
url: >-
4002520.jpeg
- text: 'A magical landscape JojosoStyle '
output:
url: >-
4002508.jpeg
- text: 'The girl with a pearl earring by vermeer Jojosostyle , Very detailed, clean, high quality, sharp image'
output:
url: >-
4002519.jpeg
- text: 'Pikachu blasting electricity Jojosostyle , Very detailed, clean, high quality, sharp image'
output:
url: >-
4002523.jpeg
- text: 'Snoop Dogg JojosoStyle '
output:
url: >-
4002507.jpeg
- text: 'Mr. Potato head Jojosostyle , Very detailed, clean, high quality, sharp image'
output:
url: >-
4002521.jpeg
---
# JoJo Stone-Ocean Style XL
<Gallery />
([CivitAI](https://civitai.com/models/129551))
## Model description
<p>A LoRA model for SDXL trained upon images from <em>JoJo's Bizarre Adventure: Stone Ocean</em></p>
## Trigger words
You should use `Jojosostyle` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Norod78/jojo-stone-ocean-style-xl/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Norod78/jojo-stone-ocean-style-xl', weight_name='SDXL-JojosoStyle-Lora-v2-r16.safetensors')
image = pipeline('Mr. Potato head Jojosostyle , Very detailed, clean, high quality, sharp image').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
| null | diffusers | text-to-image | null | null | null | null | null | null | null | null | null | Norod78/SDXL-JojosoStyle-Lora-v2 | [
-0.3386639654636383,
-0.49379056692123413,
0.49425050616264343,
0.29732567071914673,
-0.6728349328041077,
-0.2886463403701782,
0.1880815625190735,
-0.4260307848453522,
0.4695417881011963,
0.8494126200675964,
-0.618101179599762,
-0.4478650391101837,
-0.750730574131012,
-0.1636180877685547,
... |
nathanrchn/phi | nathanrchn | 2023-11-30T00:38:29Z | 27 | 0 | null | [
"transformers",
"pytorch",
"phi",
"text-generation",
"custom_code",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-30T00:38:29Z | 2023-11-26T11:22:01.000Z | null | null | ---
license: other
license_name: microsoft-research-license
license_link: LICENSE
---
| null | transformers | text-generation | null | null | null | null | null | null | null | null | null | nathanrchn/phi | [
-0.12853312492370605,
-0.18616832792758942,
0.6529129147529602,
0.494362473487854,
-0.19319364428520203,
0.23607414960861206,
0.36071962118148804,
0.05056367814540863,
0.5793655514717102,
0.7400145530700684,
-0.6508100032806396,
-0.237839937210083,
-0.7102250456809998,
-0.0478254035115242,... |
credentek/TenaliAI-FinTech-v1 | credentek | 2023-11-29T17:47:45Z | 27 | 0 | null | [
"transformers",
"pytorch",
"joblib",
"bert",
"text-classification",
"generated_from_trainer",
"banking",
"finance",
"internet banking",
"mobile banking",
"Natural User Experience",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2023-11-29T17:47:45Z | 2023-11-26T15:46:33.000Z | null | null | ---
tags:
- generated_from_trainer
- banking
- finance
- internet banking
- mobile banking
- Natural User Experience
model-index:
- name: TenaliAI-FinTech-v1
results: []
widget:
- text: Can you pls tell me what is the latest balance in my account number 1001
example_title: Balance Enquiry
- text: I want to send money abroad. What is the process
example_title: Outward Remittance
- text: Pay 100 INR and recharge my mobile phone
example_title: Utility Bill Payment
- text: What is the outstanding EMI on my loan
example_title: Loan Details
license: apache-2.0
language:
- en
metrics:
- accuracy
---
# TenaliAI-FinTech-v1
This model was trained from scratch on a banking dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1350
## Model description
This project is integral to the development of a Natural User Experience within the Banking, Financial Services and Insurance (BFSI) industry.
The TenaliAI-FinTech model is specifically designed to tackle the intricate task of deciphering the intent behind customer queries in the BFSI sector.
The underlying technology behind TenaliAI-FinTech employs advanced natural language processing and machine learning algorithms. These technologies enhance the model's ability to accurately classify and understand the diverse range of customer queries. By leveraging sophisticated classification techniques, the model ensures a more precise interpretation of user intent, regardless of whether the query originates from the bank's net banking portal, mobile banking portal, or other communication channels.
Furthermore, the model excels in query tokenization, making it proficient in breaking down customer queries into meaningful components. This capability not only streamlines the processing of customer requests but also enables a more efficient and targeted response.
Ultimately, the technology powering TenaliAI-FinTech contributes to an enhanced customer service experience by providing quicker and more accurate responses to inquiries across multiple banking platforms.
## Intended uses & limitations
This model is meant to generate "Intent" for a given customer query on bank's netbanking portal or mobile banking. Following is the list of intents :
<pre>
{
'add_beneficiary': 0,
'balance_enquiry': 1,
'beneficiary_details': 2,
'bill_payment': 3,
'block_card': 4,
'bulk_payments': 5,
'bulk_payments_status': 6,
'change_contact_info': 7,
'debit_card_details': 8,
'delete_beneficiary': 9,
'fd_details': 10,
'fd_rate': 11,
'fd_rate_large_amount': 12,
'funds_transfer_other_bank': 13,
'funds_transfer_own_account': 14,
'funds_transfer_status': 15,
'funds_transfer_third_party': 16,
'gst_payment': 17,
'investment_details': 18,
'list_accounts': 19,
'list_beneficiary': 20,
'list_billers': 21,
'list_fd': 22,
'list_investments': 23,
'list_loans': 24,
'loan_details': 25,
'nrv_details': 26,
'open_account': 27,
'pending_authorization': 28,
'pin_change': 29,
'raise_request': 30,
'request_status': 31,
'saving_interest_rate': 32,
'send_money_abroad': 33,
'ss_fd_rate': 34,
'transaction_history': 35,
'transaction_limit': 36,
'update_beneficiary': 37}
</pre>
How to use:
1. Type a query such as
   - "Tell me my last 10 transactions"
   - "I am senior citizen. What is FD rates"
   - "I want to send money to my brother"
   - "I want Fixed Deposit rate for 2 Crore INR"
   - "What is the outstanding EMI on my loan"
   - "How many active loans do I have ?"
   - "I want to add a new beneficiary"
2. The engine will infer the "intent" behind the query and return a score for each of LABEL_0 to LABEL_37.
3. The LABEL with the maximum score (which will be at the top of the result) is the identified "intent".
4. Use the mapping table above to convert the LABEL to its intent code. So, for example, LABEL_34 means "Senior Citizen Fixed Deposit Rate" (`ss_fd_rate`), and so on.
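The LABEL-to-intent conversion described above can be sketched in plain Python. The mapping is copied from the intent table on this card; the helper function itself is illustrative, not part of the model:

```python
# Intent-to-id mapping, copied verbatim from the table above.
INTENT2ID = {
    'add_beneficiary': 0, 'balance_enquiry': 1, 'beneficiary_details': 2,
    'bill_payment': 3, 'block_card': 4, 'bulk_payments': 5,
    'bulk_payments_status': 6, 'change_contact_info': 7, 'debit_card_details': 8,
    'delete_beneficiary': 9, 'fd_details': 10, 'fd_rate': 11,
    'fd_rate_large_amount': 12, 'funds_transfer_other_bank': 13,
    'funds_transfer_own_account': 14, 'funds_transfer_status': 15,
    'funds_transfer_third_party': 16, 'gst_payment': 17, 'investment_details': 18,
    'list_accounts': 19, 'list_beneficiary': 20, 'list_billers': 21,
    'list_fd': 22, 'list_investments': 23, 'list_loans': 24, 'loan_details': 25,
    'nrv_details': 26, 'open_account': 27, 'pending_authorization': 28,
    'pin_change': 29, 'raise_request': 30, 'request_status': 31,
    'saving_interest_rate': 32, 'send_money_abroad': 33, 'ss_fd_rate': 34,
    'transaction_history': 35, 'transaction_limit': 36, 'update_beneficiary': 37,
}

# Inverted mapping: numeric id -> intent code.
ID2INTENT = {v: k for k, v in INTENT2ID.items()}

def label_to_intent(label: str) -> str:
    """Convert a classifier output label such as 'LABEL_34' to its intent code."""
    return ID2INTENT[int(label.rsplit('_', 1)[-1])]

print(label_to_intent('LABEL_34'))  # ss_fd_rate
```

Passing the top-scoring LABEL returned by the model through `label_to_intent` yields the intent code to act on.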
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 226 | 2.2719 |
| No log | 2.0 | 452 | 0.8482 |
| 2.2725 | 3.0 | 678 | 0.2721 |
| 2.2725 | 4.0 | 904 | 0.1631 |
| 0.2506 | 5.0 | 1130 | 0.1350 |
| 0.2506 | 6.0 | 1356 | 0.1625 |
| 0.0232 | 7.0 | 1582 | 0.1650 |
| 0.0232 | 8.0 | 1808 | 0.1676 |
| 0.0095 | 9.0 | 2034 | 0.1739 |
| 0.0095 | 10.0 | 2260 | 0.1745 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
| null | transformers | text-classification | null | null | null | null | null | null | null | null | null | credentek/TenaliAI-FinTech-v1 | [
-0.4468163549900055,
-0.8597155809402466,
0.20401044189929962,
0.30278652906417847,
-0.1400981843471527,
-0.23098224401474,
0.03650938346982002,
-0.248493030667305,
0.299697607755661,
0.6794642806053162,
-0.7234090566635132,
-0.8011550307273865,
-0.4113604426383972,
-0.15408384799957275,
... |
TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-AWQ | TheBloke | 2023-11-30T00:07:05Z | 27 | 0 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"base_model:harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | 2023-11-30T00:07:05Z | 2023-11-29T15:09:34.000Z | null | null | ---
base_model: harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k
datasets:
- WizardLM/WizardLM_evol_instruct_V2_196k
inference: false
language:
- en
library_name: transformers
license: apache-2.0
model_creator: L
model_name: Open Llama 3B V2 Wizard Evol Instuct V2 196K
model_type: llama
prompt_template: '### HUMAN:
{prompt}
### RESPONSE:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Open Llama 3B V2 Wizard Evol Instuct V2 196K - AWQ
- Model creator: [L](https://huggingface.co/harborwater)
- Original model: [Open Llama 3B V2 Wizard Evol Instuct V2 196K](https://huggingface.co/harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k)
<!-- description start -->
## Description
This repo contains AWQ model files for [L's Open Llama 3B V2 Wizard Evol Instuct V2 196K](https://huggingface.co/harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-GGUF)
* [L's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Human-Response
```
### HUMAN:
{prompt}
### RESPONSE:
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release GEMM models only; this repo uses group_size 64 (see the table below). The addition of other group sizes, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-AWQ/tree/main) | 4 | 64 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 2.15 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `open-llama-3b-v2-wizard-evol-instuct-v2-196k-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
# Note: a plain (non-f) string — {prompt} is substituted via .format() below.
prompt_template='''### HUMAN:
{prompt}
### RESPONSE:
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm start -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''### HUMAN:
{prompt}
### RESPONSE:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
prompt_template=f'''### HUMAN:
{prompt}
### RESPONSE:
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: L's Open Llama 3B V2 Wizard Evol Instuct V2 196K
Trained on 1 epoch of the WizardLM_evol_instruct_v2_196k dataset
Link to [GGUF](https://huggingface.co/maddes8cht/harborwater-open-llama-3b-v2-wizard-evol-instuct-v2-196k-gguf) formats.
Prompt template:
```
### HUMAN:
{prompt}
### RESPONSE:
<leave a newline for the model to answer>
```
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_harborwater__open-llama-3b-v2-wizard-evol-instuct-v2-196k)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 36.33 |
| ARC (25-shot) | 41.81 |
| HellaSwag (10-shot) | 73.01 |
| MMLU (5-shot) | 26.36 |
| TruthfulQA (0-shot) | 38.99 |
| Winogrande (5-shot) | 66.69 |
| GSM8K (5-shot) | 1.9 |
| DROP (3-shot) | 5.57 |
| null | transformers | text-generation | null | null | null | null | null | null | null | null | null | TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-AWQ | [
-0.5087504386901855,
-0.8611383438110352,
0.4072374701499939,
0.12354212254285812,
-0.1798827052116394,
-0.14074105024337769,
0.14396657049655914,
-0.5021577477455139,
-0.005078999325633049,
0.47051241993904114,
-0.7097125053405762,
-0.5722185373306274,
-0.2843641936779022,
0.0097526786848... |
chargoddard/loyal-piano-m7 | chargoddard | 2023-11-29T09:13:30Z | 26 | 0 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:pankajmathur/orca_mini_v1_dataset",
"dataset:openai/summarize_from_feedback",
"dataset:PygmalionAI/PIPPA",
"dataset:chargoddard/rpguild",
"dataset:lemonilia/LimaRP",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"t... | 2023-11-29T09:13:30Z | 2023-11-24T00:44:39.000Z | null | null | ---
license: cc-by-nc-4.0
datasets:
- pankajmathur/orca_mini_v1_dataset
- openai/summarize_from_feedback
- PygmalionAI/PIPPA
- chargoddard/rpguild
- lemonilia/LimaRP
language:
- en
tags:
- mistral
---
Experimenting with dataset ratios. Intended to be a roleplay-focused model with some smarts and good long-context recall.
Not recommended for use yet! WIP, untested, questionable quality. This place is not a place of honor. No highly esteemed deed is commemorated here.
Uses alpaca prompt format. | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | chargoddard/loyal-piano-m7 | [
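For reference, a minimal sketch of the Alpaca prompt format mentioned above. The exact preamble wording is an assumption here and should be verified against the training configuration:

```python
# Standard Alpaca instruction template (assumed wording; verify before relying on it).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Fill the Alpaca template with a user instruction."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

print(build_prompt("Describe the room in two sentences."))
```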
-0.37209638953208923,
-0.7540711164474487,
0.09833019226789474,
0.8110049962997437,
-0.4985116124153137,
-0.5406299829483032,
0.3345330059528351,
-0.37080249190330505,
0.5679618716239929,
0.5026284456253052,
-0.7602537870407104,
-0.1466938555240631,
-0.4657362103462219,
-0.0333266146481037... |
uchicago-dsi/cgfp-classifier-dev | uchicago-dsi | 2023-11-29T02:56:32Z | 26 | 0 | null | [
"transformers",
"pytorch",
"distilbert",
"endpoints_compatible",
"region:us"
] | 2023-11-29T02:56:32Z | 2023-11-28T00:55:30.000Z | null | null | Entry not found | null | transformers | null | null | null | null | null | null | null | null | null | null | uchicago-dsi/cgfp-classifier-dev | [
-0.3227645754814148,
-0.22568444907665253,
0.862226128578186,
0.43461522459983826,
-0.5282993912696838,
0.7012966871261597,
0.7915722131729126,
0.07618582993745804,
0.7746022939682007,
0.25632214546203613,
-0.7852814793586731,
-0.22573833167552948,
-0.910447895526886,
0.5715672373771667,
... |
davidkim205/komt-mistral-7b-v1-dpo | davidkim205 | 2023-11-29T09:45:13Z | 26 | 0 | null | [
"peft",
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"llama-2-chat",
"text-generation",
"en",
"ko",
"arxiv:2308.06502",
"arxiv:2308.06259",
"region:us"
] | 2023-11-29T09:45:13Z | 2023-11-29T09:21:52.000Z | null | null | ---
language:
- en
- ko
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- llama-2-chat
library_name: peft
---
# komt : korean multi task instruction tuning model

Recently, due to the success of ChatGPT, numerous large language models have emerged in an attempt to catch up with ChatGPT's capabilities.
However, when it comes to Korean language performance, it has been observed that many models still struggle to provide accurate answers or generate Korean text effectively.
This study addresses these challenges by introducing a multi-task instruction technique that leverages supervised datasets from various tasks to create training data for Large Language Models (LLMs).
## Model Details
* **Model Developers** : davidkim(changyeon kim)
* **Repository** : https://github.com/davidkim205/komt
* **Model Architecture** : The komt-mistral-7b-v1-dpo is a fine-tuned version of komt-mistral-7b-v1 (original model: Mistral-7B-Instruct-v0.1).
## Dataset
* maywell/ko_Ultrafeedback_binarized
- https://huggingface.co/datasets/maywell/ko_Ultrafeedback_binarized
## Hardware and Software
- nvidia driver : 535.54.03
- CUDA Version: 12.2
## Training
Refer https://github.com/davidkim205/komt
## Prompt template: Mistral
```
<s>[INST] {prompt} [/INST]</s>
```
## Usage
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel, PeftConfig
from transformers import TextStreamer, GenerationConfig
model='davidkim205/komt-mistral-7b-v1'
peft_model_name = 'davidkim205/komt-mistral-7b-v1-dpo'
config = PeftConfig.from_pretrained(peft_model_name)
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
config.base_model_name_or_path =model
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(model, peft_model_name)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
streamer = TextStreamer(tokenizer)
def gen(x):
generation_config = GenerationConfig(
temperature=0.8,
top_p=0.8,
top_k=100,
max_new_tokens=1024,
early_stopping=True,
do_sample=True,
)
q = f"[INST]{x} [/INST]"
gened = model.generate(
**tokenizer(
q,
return_tensors='pt',
return_token_type_ids=False
).to('cuda'),
generation_config=generation_config,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
streamer=streamer,
)
result_str = tokenizer.decode(gened[0])
start_tag = f"[/INST]"
start_index = result_str.find(start_tag)
if start_index != -1:
result_str = result_str[start_index + len(start_tag):].strip()
return result_str
result = gen('제주도를 1박2일로 혼자 여행하려고 하는데 여행 코스를 만들어줘')
print('##########')
print(result)
```
output
```
제주도 1박2일 1인 여행 코스
제주도는 한국에서 가장 먼 섬인 동남아시아 최대 섬으로, 멋진 해변, 아름다운 자연경관, 절경 납땜 절벽, 한국 최대 규모의 복합리조트 등 다양한 관광 명소가 풍부하게 있어 1박2일로 혼자 여행하시는 여러분들을 위해 아래와 같은 코스를 제안해 드리겠습니다.
▷ 코스 1 : 성산일출봉, 용눈이절벽, 성산일출봉 야간 경관 관람
- 코스 설명 : 제주 동남쪽 해안의 명소인 성산일출봉, 용눈이절벽, 성산일출봉 야간 경관 관람 순으로 구성된 코스입니다. 아침에 일찍 일어나 일출봉에 도착하여 일출을 감상하고, 아침 식사를 하고 절벽 등반을 즐기며 휴식을 취합니다. 오후에는 일출봉 야간 경관 관람을 즐기며 휴식과 휴식을 취합니다.
▷ 코스 2 : 한라산, 한라산 케이블카, 오미자 바위, 신라 이젠
- 코스 설명 : 제주 남부의 명소인 한라산, 한라산 케이블카, 오미자 바위, 신라 이젠 순으로 구성된 코스입니다. 아침에 일찍 일어나 한라산 케이블카를 타고 높은 고지에 위치한 한라산 정상으로 올라가서 탐험을 즐기며 아침 식사를 합니다. 오후에는 오미자 바위를 찾아 휴식과 휴식을 취하고, 일출봉 야간 경관 관람을 즐기며 휴식을 취합니다.
▷ 코스 3 : 대하늘길, 삼거리, 곰돌라비, 칠동굴, 광안절, 칠금절, 해넘이길, 바다지상 길
- 코스 설명 : 제주 서부의 명소인 대하늘길, 삼거리, 곰돌라비, 칠동굴, 광안절, 칠금절, 해넘이길, 바다지상 길 순으로 구성된 코스입니다. 아침에 일찍 일어나 대하늘길에서 탐험을 즐기며 아침 식사를 합니다. 오후에는 삼거리를 찾아 휴식과 휴식을 취하고, 일출봉 야간 경관 관람을 즐기며 휴식을 취합니다.
```
## Evaluation
For objective model evaluation, we initially used EleutherAI's lm-evaluation-harness but obtained unsatisfactory results. Consequently, we conducted evaluations using ChatGPT, a widely used model, as described in [Self-Alignment with Instruction Backtranslation](https://arxiv.org/pdf/2308.06502.pdf) and [Three Ways of Using Large Language Models to Evaluate Chat](https://arxiv.org/pdf/2308.06259.pdf) .
| model | score | average(0~5) | percentage |
|------------------------------------------|---------| ------------ |------------|
| gpt-3.5-turbo(close) | 147 | 3.97 | 79.45% |
| naver Cue(close) | 140 | 3.78 | 75.67% |
| clova X(close) | 136 | 3.67 | 73.51% |
| WizardLM-13B-V1.2(open) | 96 | 2.59 | 51.89% |
| Llama-2-7b-chat-hf(open) | 67 | 1.81 | 36.21% |
| Llama-2-13b-chat-hf(open) | 73 | 1.91 | 38.37% |
| nlpai-lab/kullm-polyglot-12.8b-v2(open) | 70 | 1.89 | 37.83% |
| kfkas/Llama-2-ko-7b-Chat(open) | 96 | 2.59 | 51.89% |
| beomi/KoAlpaca-Polyglot-12.8B(open) | 100 | 2.70 | 54.05% |
| **komt-llama2-7b-v1 (open)(ours)** | **117** | **3.16** | **63.24%** |
| **komt-llama2-13b-v1 (open)(ours)** | **129** | **3.48** | **69.72%** |
| **komt-llama-30b-v1 (open)(ours)** | **129** | **3.16** | **63.24%** |
| **komt-mistral-7b-v1 (open)(ours)** | **131** | **3.54** | **70.81%** |
| **komt-mistral-7b-v1-dpo (open)(ours)** | **142** | **3.83** | **76.75%** |
| null | peft | text-generation | null | null | null | null | null | null | null | null | null | davidkim205/komt-mistral-7b-v1-dpo | [
-0.5173847675323486,
-0.6716235280036926,
0.3068894147872925,
0.32528024911880493,
-0.37523791193962097,
0.11760324984788895,
-0.0006419040728360415,
-0.2473812997341156,
0.2583005130290985,
0.28503450751304626,
-0.4429546296596527,
-0.6037126779556274,
-0.6251783967018127,
0.0152743440121... |
botdevringring/fr-naxai-ai-sentiment-classification-104129112023 | botdevringring | 2023-11-29T11:01:06Z | 26 | 0 | null | [
"transformers",
"safetensors",
"camembert",
"text-classification",
"endpoints_compatible",
"region:us"
] | 2023-11-29T11:01:06Z | 2023-11-29T11:00:36.000Z | null | null | Entry not found | null | transformers | text-classification | null | null | null | null | null | null | null | null | null | botdevringring/fr-naxai-ai-sentiment-classification-104129112023 | [
-0.3227645754814148,
-0.22568444907665253,
0.862226128578186,
0.43461522459983826,
-0.5282993912696838,
0.7012966871261597,
0.7915722131729126,
0.07618582993745804,
0.7746022939682007,
0.25632214546203613,
-0.7852814793586731,
-0.22573833167552948,
-0.910447895526886,
0.5715672373771667,
... |
mooneyko/salmonn_vicuna | mooneyko | 2023-11-29T13:50:30Z | 26 | 0 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T13:50:30Z | 2023-11-29T13:32:01.000Z | null | null | Entry not found | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | mooneyko/salmonn_vicuna | [
-0.3227645754814148,
-0.22568444907665253,
0.862226128578186,
0.43461522459983826,
-0.5282993912696838,
0.7012966871261597,
0.7915722131729126,
0.07618582993745804,
0.7746022939682007,
0.25632214546203613,
-0.7852814793586731,
-0.22573833167552948,
-0.910447895526886,
0.5715672373771667,
... |
FDSRashid/bert-base-arabic-camelbert-mix-Hadith-SentenceTransformer | FDSRashid | 2023-11-29T20:02:19Z | 25 | 0 | null | [
"sentence-transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"Hadith",
"Islam",
"Arabic",
"ar",
"dataset:FDSRashid/hadith_info",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | 2023-11-29T20:02:19Z | 2023-11-25T17:52:37.000Z | null | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- Hadith
- Islam
- Arabic
language:
- ar
license: apache-2.0
datasets:
- FDSRashid/hadith_info
library_name: sentence-transformers
---
# bert-base-arabic-camelbert-mix-Hadith-SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
This model originates from the [Camel-Bert-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix) model. It was then trained on the Jawami' Kalim dataset,
specifically a dataset of 440,000 matns and their corresponding taraf labels.
A shared taraf label indicates that two hadiths concern the same underlying report and are therefore more semantically similar.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["أنا أحب القراءة والكتابة.", "الطيور تحلق في السماء."]
model = SentenceTransformer('FDSRashid/bert-base-arabic-camelbert-mix-Hadith-SentenceTransformer')
embeddings = model.encode(sentences)
print(embeddings)
```
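The embeddings produced above can be compared with cosine similarity for semantic search or clustering. A minimal pure-Python sketch of the comparison step (the vectors here are illustrative placeholders, not real model output):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# In real usage: emb = model.encode(sentences); cosine_similarity(emb[0], emb[1])
print(cosine_similarity([1.0, 0.0, 1.0], [1.0, 0.0, 0.0]))  # ≈ 0.707
```

With this model, sentence pairs sharing a taraf tend to score higher than unrelated pairs.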
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["أنا أحب القراءة والكتابة.", "الطيور تحلق في السماء."]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('FDSRashid/bert-base-arabic-camelbert-mix-Hadith-SentenceTransformer')
model = AutoModel.from_pretrained('FDSRashid/bert-base-arabic-camelbert-mix-Hadith-SentenceTransformer')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
The dataset was split into 70% training, 20% evaluation, and 10% test.
These are the evaluation results on the evaluation split during training:
| epoch | steps | cossim_accuracy | cossim_accuracy_threshold | cossim_f1 | cossim_precision | cossim_recall | cossim_f1_threshold | cossim_ap | manhattan_accuracy | manhattan_accuracy_threshold | manhattan_f1 | manhattan_precision | manhattan_recall | manhattan_f1_threshold | manhattan_ap | euclidean_accuracy | euclidean_accuracy_threshold | euclidean_f1 | euclidean_precision | euclidean_recall | euclidean_f1_threshold | euclidean_ap | dot_accuracy | dot_accuracy_threshold | dot_f1 | dot_precision | dot_recall | dot_f1_threshold | dot_ap |
|-------|-------|--------------------|---------------------------|--------------------|--------------------|--------------------|---------------------|--------------------|--------------------|------------------------------|--------------------|---------------------|--------------------|------------------------|--------------------|--------------------|------------------------------|--------------------|---------------------|--------------------|------------------------|--------------------|--------------------|------------------------|--------------------|--------------------|--------------------|--------------------|--------------------|
| 0 | 10000 | 0.8728666666666667 | 0.6338587403297424 | 0.8668005818383322 | 0.9019749171111432 | 0.8342666666666667 | 0.6194518804550171 | 0.9483563546636091 | 0.8703666666666666 | 416.7969665527344 | 0.8633023707195343 | 0.8990759457118106 | 0.8302666666666667 | 426.06341552734375 | 0.946564696923889 | 0.8713 | 18.880647659301758 | 0.8630773047885727 | 0.9214410050707637 | 0.8116666666666666 | 18.8944034576416 | 0.9467242181115024 | 0.8712 | 308.05712890625 | 0.866759976786263 | 0.8881970195200448 | 0.8463333333333334 | 299.0450439453125 | 0.9474384884425655 |
| 0 | 20000 | 0.8555333333333334 | 0.6027015447616577 | 0.8485685671858273 | 0.8802751307587591 | 0.8190666666666667 | 0.5819894671440125 | 0.9382839456565798 | 0.8571 | 450.9693603515625 | 0.8507174392935982 | 0.8813607775871927 | 0.8221333333333334 | 465.85040283203125 | 0.9394524040574381 | 0.8573333333333333 | 20.690792083740234 | 0.8515483008127293 | 0.8767742391074077 | 0.8277333333333333 | 21.226715087890625 | 0.9397257185375895 | 0.8499333333333333 | 317.83416748046875 | 0.8428292783081819 | 0.8658510862687196 | 0.821 | 305.69891357421875 | 0.9336705412623151 |
| 0 | -1 | 0.8131666666666667 | 0.8046249151229858 | 0.8039135474193768 | 0.8225896469931631 | 0.7860666666666667 | 0.7788962125778198 | 0.9073381295114933 | 0.8133 | 279.29443359375 | 0.8024854843638586 | 0.8177288768943326 | 0.7878 | 299.95355224609375 | 0.9076141670047525 | 0.8137666666666666 | 12.680887222290039 | 0.8030188159073679 | 0.831145669853046 | 0.7767333333333334 | 13.423564910888672 | 0.908097788519589 | 0.7950666666666667 | 329.95880126953125 | 0.792532369768142 | 0.7954866008462623 | 0.7896 | 319.1595764160156 | 0.8766171432744164 |
| 1 | 10000 | 0.8704 | 0.5879600644111633 | 0.8643306379155435 | 0.8972592911465059 | 0.8337333333333333 | 0.5705364942550659 | 0.9481486962395269 | 0.8717333333333334 | 448.628662109375 | 0.8654767692200847 | 0.9031743730975503 | 0.8308 | 461.86663818359375 | 0.9485930662647721 | 0.8721 | 20.382905960083008 | 0.8652842625652292 | 0.8919875424688561 | 0.8401333333333333 | 21.20406723022461 | 0.9487145501542519 | 0.8658 | 301.5247802734375 | 0.8614328905798824 | 0.8782295490752926 | 0.8452666666666667 | 286.892333984375 | 0.9451528510703204 |
| 1 | 20000 | 0.8317333333333333 | 0.6251842975616455 | 0.821581680666411 | 0.8608574976261778 | 0.7857333333333333 | 0.5989522933959961 | 0.920848793533412 | 0.8329 | 440.8127136230469 | 0.8214248205846316 | 0.8648728345005529 | 0.7821333333333333 | 458.9129333496094 | 0.9211033233990977 | 0.833 | 20.02983856201172 | 0.8226101647191328 | 0.8713092752758723 | 0.7790666666666667 | 20.677509307861328 | 0.921443016958735 | 0.8273333333333334 | 338.71099853515625 | 0.8187660668380462 | 0.8310216973358967 | 0.8068666666666666 | 314.1599426269531 | 0.91760440907271 |
| 1 | -1 | 0.8621 | 0.5237299203872681 | 0.857287415252808 | 0.8676179422407319 | 0.8472 | 0.5012189149856567 | 0.941961664843638 | 0.8618 | 470.2433776855469 | 0.8565625968125021 | 0.8854885773254573 | 0.8294666666666667 | 492.0546875 | 0.9420193677891553 | 0.8622666666666666 | 21.432126998901367 | 0.8576286170900065 | 0.8752168783399265 | 0.8407333333333333 | 22.565732955932617 | 0.9422644261124031 | 0.8592666666666666 | 274.02484130859375 | 0.8548321876263648 | 0.8642681929681112 | 0.8456 | 259.56317138671875 | 0.9391585989673044 |
| 2 | 10000 | 0.8663 | 0.5136758089065552 | 0.8616880247459126 | 0.8790484777030307 | 0.845 | 0.49445080757141113 | 0.9469624462478431 | 0.866 | 503.0935363769531 | 0.8615415848954455 | 0.8747423388759104 | 0.8487333333333333 | 516.02685546875 | 0.9467461388100301 | 0.8666666666666667 | 22.72525405883789 | 0.8618953670317708 | 0.879913876927351 | 0.8446 | 23.27953338623047 | 0.9469225809206141 | 0.8651666666666666 | 279.4963073730469 | 0.8594855743059335 | 0.8776403557531962 | 0.8420666666666666 | 266.14874267578125 | 0.9457994682589912 |
| 2 | 20000 | 0.8697666666666667 | 0.4708051085472107 | 0.8649066236732501 | 0.8715813701597617 | 0.8583333333333333 | 0.43661609292030334 | 0.9481260914870877 | 0.8695333333333334 | 511.45037841796875 | 0.8662394466483573 | 0.881391016352722 | 0.8516 | 529.918701171875 | 0.9484912245868902 | 0.8702 | 23.73163604736328 | 0.8664075286415711 | 0.8867252931323283 | 0.847 | 23.867982864379883 | 0.9486623944573455 | 0.8663666666666666 | 254.00552368164062 | 0.8632751969931753 | 0.8539560367882069 | 0.8728 | 217.83688354492188 | 0.9460184254496492 |
| 2 | -1 | 0.8763666666666666 | 0.5737866163253784 | 0.8699281080133262 | 0.9177210506844247 | 0.8268666666666666 | 0.573677659034729 | 0.9497147516342264 | 0.876 | 455.7176208496094 | 0.8696920245824797 | 0.9074704731541192 | 0.8349333333333333 | 462.76019287109375 | 0.9498843371363953 | 0.8765666666666667 | 20.760818481445312 | 0.8700485963010872 | 0.9147246930824083 | 0.8295333333333333 | 20.835704803466797 | 0.9501883105142883 | 0.8725333333333334 | 291.6016845703125 | 0.867386965797261 | 0.8963163134689234 | 0.8402666666666667 | 282.53021240234375 | 0.9477247560163963 |
| 3 | 10000 | 0.8738 | 0.4645261764526367 | 0.867854694996573 | 0.8929478138222849 | 0.8441333333333333 | 0.43856877088546753 | 0.9499914809467622 | 0.8739666666666667 | 519.2559814453125 | 0.8685179450122157 | 0.8975179574710191 | 0.8413333333333334 | 532.0695190429688 | 0.9503123108549347 | 0.8740666666666667 | 23.487689971923828 | 0.8690694359600897 | 0.8851099128992119 | 0.8536 | 24.41732406616211 | 0.9505990592434965 | 0.8713333333333333 | 244.84371948242188 | 0.8665936922761169 | 0.8899114795559927 | 0.8444666666666667 | 227.79989624023438 | 0.9483931825595524 |
| 3 | 20000 | 0.8640666666666666 | 0.5242444276809692 | 0.8590921372601998 | 0.8706188389923329 | 0.8478666666666667 | 0.4847835898399353 | 0.9454944707941928 | 0.8650666666666667 | 488.2347412109375 | 0.8594171800373781 | 0.893623146682021 | 0.8277333333333333 | 503.6361389160156 | 0.9461044960836433 | 0.8654 | 22.266395568847656 | 0.8593275302952722 | 0.8805176635187129 | 0.8391333333333333 | 23.126968383789062 | 0.9464438737608736 | 0.8619 | 277.2056884765625 | 0.8563088124184353 | 0.8830570902394107 | 0.8311333333333333 | 264.5849609375 | 0.9432445123143585 |
| 3 | -1 | 0.8716666666666667 | 0.5407305955886841 | 0.8678071539657854 | 0.8803676773219921 | 0.8556 | 0.51105135679245 | 0.9493445753733256 | 0.8719 | 492.2160339355469 | 0.8674134068585215 | 0.8978951123795933 | 0.8389333333333333 | 494.53570556640625 | 0.9495194790821182 | 0.8725333333333334 | 22.456180572509766 | 0.8685710361875988 | 0.8954410307234886 | 0.8432666666666667 | 22.497486114501953 | 0.9498165332328881 | 0.8691333333333333 | 292.5062255859375 | 0.8656898656898657 | 0.8811101905550953 | 0.8508 | 274.416748046875 | 0.947514396076124 |
| 4 | 10000 | 0.8815 | 0.4313759207725525 | 0.8775440279586103 | 0.9027209925278443 | 0.8537333333333333 | 0.4166678786277771 | 0.9536710584553546 | 0.8807666666666667 | 534.939453125 | 0.877129120879121 | 0.9044617563739377 | 0.8514 | 541.7369384765625 | 0.9534178913858894 | 0.8816 | 24.161209106445312 | 0.8772407224735536 | 0.8955483019654142 | 0.8596666666666667 | 24.785259246826172 | 0.9537450035197841 | 0.8795333333333333 | 223.683349609375 | 0.8760977602287426 | 0.8950479899846988 | 0.8579333333333333 | 212.19989013671875 | 0.9523493221802699 |
| 4 | 20000 | 0.8812 | 0.46411147713661194 | 0.8769748591839538 | 0.9044346840464721 | 0.8511333333333333 | 0.4527707099914551 | 0.9551917727852052 | 0.8818666666666667 | 527.262939453125 | 0.8777930089102124 | 0.9031734837799718 | 0.8538 | 533.6163330078125 | 0.9555440868076188 | 0.8824 | 23.85757827758789 | 0.8779288202746088 | 0.9001260680767614 | 0.8568 | 24.25102424621582 | 0.9558555626510571 | 0.8783 | 247.68724060058594 | 0.8754503518636991 | 0.8844139057078713 | 0.8666666666666667 | 227.70411682128906 | 0.9536525635749343 |
| 4 | -1 | 0.8879666666666667 | 0.47767937183380127 | 0.8839801816680429 | 0.91339590443686 | 0.8564 | 0.4662420153617859 | 0.9590957348342734 | 0.8888333333333334 | 512.8856201171875 | 0.8845685691100892 | 0.89743413830955 | 0.8720666666666667 | 531.912841796875 | 0.9595899160552916 | 0.8893666666666666 | 23.593456268310547 | 0.8854051819184123 | 0.9162150598973189 | 0.8566 | 23.685091018676758 | 0.959826440573303 | 0.8856666666666667 | 252.97689819335938 | 0.8812133138805008 | 0.8972569612381677 | 0.8657333333333334 | 237.81539916992188 | 0.9577627894713584 |
| 5 | 10000 | 0.8947 | 0.40381842851638794 | 0.8908447072575377 | 0.9197132107616952 | 0.8637333333333334 | 0.38857945799827576 | 0.9618600742013084 | 0.8943333333333333 | 550.8440551757812 | 0.890025399876433 | 0.9172916371869252 | 0.8643333333333333 | 560.9615478515625 | 0.9620877205287793 | 0.8947333333333334 | 24.977214813232422 | 0.8906960716747071 | 0.9218259629101284 | 0.8616 | 25.30794906616211 | 0.9623942423400306 | 0.8932 | 211.89276123046875 | 0.889308521595825 | 0.916820048138185 | 0.8634 | 204.16372680664062 | 0.9610285801747338 |
| 5 | 20000 | 0.8964 | 0.3912210464477539 | 0.8929762436827449 | 0.9219138212536381 | 0.8658 | 0.3875443637371063 | 0.9629615906217442 | 0.8968 | 555.8740844726562 | 0.8930133259873971 | 0.9235097215298056 | 0.8644666666666667 | 559.9774169921875 | 0.9629684445349712 | 0.8975666666666666 | 25.311023712158203 | 0.8939796446437812 | 0.9264211655345013 | 0.8637333333333334 | 25.312477111816406 | 0.963306892486033 | 0.8951 | 204.28839111328125 | 0.8921444463460552 | 0.9167780513542033 | 0.8688 | 201.62872314453125 | 0.962254825252064 |
| 5 | -1 | 0.8983333333333333 | 0.38216596841812134 | 0.8950169992101377 | 0.9229407181811743 | 0.8687333333333334 | 0.3772333264350891 | 0.9641863680656955 | 0.8990333333333334 | 556.871826171875 | 0.8946417402182222 | 0.934315575555233 | 0.8582 | 559.1519775390625 | 0.9641750192817846 | 0.8992666666666667 | 25.24359893798828 | 0.8954340570484809 | 0.9252648794709522 | 0.8674666666666667 | 25.564838409423828 | 0.9644819002460969 | 0.8976333333333333 | 203.59861755371094 | 0.8943946341797275 | 0.9188581071579244 | 0.8712 | 196.3087158203125 | 0.9634877497584766 |
A second evaluator tracked triplet accuracy (cosine, Manhattan, Euclidean distance) on the eval data:
| epoch | steps | accuracy_cosinus | accuracy_manhattan | accuracy_euclidean |
|-------|-------|------------------|--------------------|--------------------|
| 0 | 10000 | 0.9314 | 0.9287 | 0.9293 |
| 0 | 20000 | 0.9296 | 0.9286 | 0.9291 |
| 0 | -1 | 0.8959 | 0.8949 | 0.8956 |
| 1 | 10000 | 0.9311 | 0.9305 | 0.9311 |
| 1 | 20000 | 0.9046 | 0.9029 | 0.9036 |
| 1 | -1 | 0.9333 | 0.9317 | 0.9324 |
| 2 | 10000 | 0.9328 | 0.9311 | 0.933 |
| 2 | 20000 | 0.9366 | 0.938 | 0.9373 |
| 2 | -1 | 0.9401 | 0.9396 | 0.9397 |
| 3 | 10000 | 0.9399 | 0.9406 | 0.9404 |
| 3 | 20000 | 0.9363 | 0.9365 | 0.9355 |
| 3 | -1 | 0.9388 | 0.9379 | 0.9388 |
| 4 | 10000 | 0.945 | 0.9443 | 0.9443 |
| 4 | 20000 | 0.9471 | 0.9463 | 0.9467 |
| 4 | -1 | 0.9505 | 0.9496 | 0.9505 |
| 5 | 10000 | 0.956 | 0.9546 | 0.9553 |
| 5 | 20000 | 0.9575 | 0.9561 | 0.9562 |
| 5 | -1 | 0.9577 | 0.956 | 0.9577 |
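The cossim_accuracy columns in the first table come from thresholding pairwise cosine similarity and counting correct same/different predictions. A minimal sketch of that evaluation on toy vectors (not the evaluator's actual implementation):

```python
import math

def cosine(a, b):
    # cosine similarity between two dense vectors
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def accuracy_at_threshold(pairs, labels, threshold):
    # pairs: (emb_a, emb_b) tuples; labels: 1 = same class, 0 = different
    correct = sum(
        int((cosine(a, b) >= threshold) == bool(y))
        for (a, b), y in zip(pairs, labels)
    )
    return correct / len(labels)

pairs = [([1.0, 0.0], [1.0, 0.0]),
         ([1.0, 0.0], [0.0, 1.0]),
         ([1.0, 1.0], [1.0, 0.0])]
labels = [1, 0, 0]
print(accuracy_at_threshold(pairs, labels, 0.5))  # 2/3: the third pair is a false positive
```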
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 26442 with parameters:
```
{'batch_size': 12, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.BatchHardTripletLoss.BatchHardTripletLoss`
Parameters of the fit()-Method:
```
{
"epochs": 6,
"evaluation_steps": 10000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
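BatchHardTripletLoss selects, for each anchor in the batch, the hardest positive (farthest same-label example) and the hardest negative (closest different-label example). A plain-Python sketch of that selection over a precomputed distance matrix (toy distances and margin, not the library code):

```python
def batch_hard_triplet_loss(dist, labels, margin=1.0):
    # dist[i][j]: pairwise distance between embeddings i and j; labels[i]: class id
    n = len(labels)
    losses = []
    for a in range(n):
        pos = [dist[a][j] for j in range(n) if labels[j] == labels[a] and j != a]
        neg = [dist[a][j] for j in range(n) if labels[j] != labels[a]]
        if not pos or not neg:
            continue  # anchor has no valid triplet in this batch
        # hardest positive minus hardest negative, hinged at zero
        losses.append(max(max(pos) - min(neg) + margin, 0.0))
    return sum(losses) / len(losses)

dist = [[0, 1, 4, 5],
        [1, 0, 3, 6],
        [4, 3, 0, 2],
        [5, 6, 2, 0]]
labels = [0, 0, 1, 1]
print(batch_hard_triplet_loss(dist, labels, margin=5.0))  # 2.75
```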
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | null | sentence-transformers | sentence-similarity | null | null | null | null | null | null | null | null | null | FDSRashid/bert-base-arabic-camelbert-mix-Hadith-SentenceTransformer | [
-0.7018061280250549,
-0.7833217978477478,
0.21962334215641022,
0.20353417098522186,
-0.1546626091003418,
0.17115236818790436,
-0.0264617707580328,
-0.17189450562000275,
0.7478770613670349,
0.2883723974227905,
-0.5706145167350769,
-0.508316159248352,
-0.8093780875205994,
0.07311587780714035... |
justswim/lnmdlsktchfsh-model | justswim | 2023-11-29T01:57:42Z | 25 | 0 | null | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"has_space",
"region:us"
] | 2023-11-29T01:57:42Z | 2023-11-29T00:06:21.000Z | null | null |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: lnmdlsktchfsh
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
| null | diffusers | text-to-image | null | null | null | null | null | null | null | null | null | justswim/lnmdlsktchfsh-model | [
0.0732579454779625,
-0.1783585548400879,
0.23538058996200562,
0.13522785902023315,
-0.5473270416259766,
1.007525086402893,
0.19510556757450104,
-0.20362026989459991,
0.5356338024139404,
-0.0032065643463283777,
-0.5399638414382935,
-0.0445144847035408,
-0.9003117084503174,
0.304425328969955... |
justswim/lnmdlsktchfsh-512 | justswim | 2023-11-29T04:57:59Z | 25 | 0 | null | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | 2023-11-29T04:57:59Z | 2023-11-29T02:47:38.000Z | null | null |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: lnmdlsktchfsh-512
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
| null | diffusers | text-to-image | null | null | null | null | null | null | null | null | null | justswim/lnmdlsktchfsh-512 | [
0.0732579454779625,
-0.1783585548400879,
0.23538058996200562,
0.13522785902023315,
-0.5473270416259766,
1.007525086402893,
0.19510556757450104,
-0.20362026989459991,
0.5356338024139404,
-0.0032065643463283777,
-0.5399638414382935,
-0.0445144847035408,
-0.9003117084503174,
0.304425328969955... |
bonur/t5-base-tr | bonur | 2023-11-29T13:07:09Z | 25 | 0 | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"mt5",
"tr",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T13:07:09Z | 2023-11-29T08:40:44.000Z | null | null | ---
language:
- tr
- en
tags:
- mt5
- t5
widget:
- text: "Bu hafta hasta olduğum için <extra_id_0> gittim. Midem ağrıyordu ondan dolayı şu an <extra_id_1>."
- example_title: "Turkish Example 1"
- text: "Bu gece kar yağacakmış. Yarın yollarda <extra_id_0> olabilir."
- example_title: "Turkish Example 2"
- text: "I bought two tickets for NBA match. Do you like <extra_id_0> ?"
- example_title: "English Example 2"
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Please check the **google/mt5-base** model. This model is a pruned version of mt5-base that works only in Turkish and English.
| null | transformers | text2text-generation | null | null | null | null | null | null | null | null | null | bonur/t5-base-tr | [
-0.30868083238601685,
-0.5401190519332886,
0.284615159034729,
-0.006955712102353573,
-1.1432422399520874,
-0.14071397483348846,
0.4101482629776001,
-0.08728514611721039,
0.450607031583786,
0.7859230041503906,
-0.9917059540748596,
-0.9371877908706665,
-0.40299129486083984,
0.032003581523895... |
DingDongBell/ddbldh | DingDongBell | 2023-11-29T09:14:55Z | 25 | 0 | null | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"has_space",
"region:us"
] | 2023-11-29T09:14:55Z | 2023-11-29T09:14:36.000Z | null | null | ---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: ddbldh
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - ddbldh
These are LoRA adaptation weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "ddbldh" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
Test prompt: portrait photo of ddbldh




| null | diffusers | text-to-image | null | null | null | null | null | null | null | null | null | DingDongBell/ddbldh | [
-0.25555419921875,
-0.7326301336288452,
0.3228636384010315,
0.37367621064186096,
-0.4899201989173889,
-0.04909594729542732,
0.5139604806900024,
-0.08181221783161163,
0.7657551765441895,
0.47868767380714417,
-0.7544310092926025,
-0.3816400468349457,
-0.4435083270072937,
-0.22482730448246002... |
tizayi/ppo-Huggy | tizayi | 2023-11-29T12:48:01Z | 25 | 0 | null | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | 2023-11-29T12:48:01Z | 2023-11-29T12:47:55.000Z | null | null | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: tizayi/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
| null | ml-agents | reinforcement-learning | null | null | null | null | null | null | null | null | null | tizayi/ppo-Huggy | [
-0.5884556770324707,
-0.6476648449897766,
0.24089157581329346,
0.049668096005916595,
-0.21827270090579987,
0.22934263944625854,
0.18566106259822845,
-0.3111298382282257,
0.5818743109703064,
0.47873222827911377,
-0.6827530860900879,
-0.6513739228248596,
-0.41881874203681946,
-0.250845909118... |
flozi00/mistral-sauerkraut-her0-lora | flozi00 | 2023-11-29T14:15:49Z | 25 | 0 | null | [
"peft",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"region:us"
] | 2023-11-29T14:15:49Z | 2023-11-29T14:15:17.000Z | null | null | ---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.1
| null | peft | null | null | null | null | null | null | null | null | null | null | flozi00/mistral-sauerkraut-her0-lora | [
-0.6006837487220764,
-0.5601157546043396,
0.4423378109931946,
0.1037740632891655,
-0.22056613862514496,
-0.3067236542701721,
0.12250815331935883,
-0.5630480051040649,
0.07925711572170258,
0.6820694804191589,
-0.7475411295890808,
-0.6544148921966553,
-0.5481134653091431,
-0.1328515112400055... |
EuropeanParliament/eubert_embedding_v1 | EuropeanParliament | 2023-11-29T17:10:48Z | 25 | 0 | null | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"license:eupl-1.1",
"endpoints_compatible",
"region:us"
] | 2023-11-29T17:10:48Z | 2023-11-29T17:01:38.000Z | null | null | ---
license: eupl-1.1
---
# EUBERT Embedding v1
Based on the masked language model EUBERT, this sentence transformer allows computing embeddings for various EU documents in 24 languages.
👷♂️ Work in progress | null | transformers | feature-extraction | null | null | null | null | null | null | null | null | null | EuropeanParliament/eubert_embedding_v1 | [
-0.22613929212093353,
-0.4255329668521881,
0.31357645988464355,
0.49856629967689514,
-0.028129560872912407,
0.2033693641424179,
0.2454812377691269,
-0.23237237334251404,
0.4604179859161377,
0.9757435917854309,
-0.5492499470710754,
-0.4201558828353882,
-0.6119995713233948,
-0.00644026789814... |
duckduck2/whisper-small-hi | duckduck2 | 2023-11-29T17:03:42Z | 25 | 0 | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2023-11-29T17:03:42Z | 2023-11-29T17:01:52.000Z | null | null | ---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
model-index:
- name: whisper-small-hi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-hi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3774
- Cer: -0.6347
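CER (character error rate) is conventionally the character-level Levenshtein distance divided by the reference length, so it is non-negative; a minimal sketch of that standard definition (this is not necessarily the metric implementation behind the numbers above):

```python
def cer(reference, hypothesis):
    # character error rate = Levenshtein edit distance / reference length
    r, h = list(reference), list(hypothesis)
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i  # delete all reference chars
    for j in range(len(h) + 1):
        d[0][j] = j  # insert all hypothesis chars
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / max(len(r), 1)

print(cer("abcd", "abce"))  # 0.25: one substitution over four reference characters
```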
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
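With a linear scheduler, 500 warmup steps and 4000 training steps, the learning rate ramps up linearly and then decays linearly to zero. A sketch of that schedule (assumed to match transformers' `get_linear_schedule_with_warmup`; verify against the library if exact behavior matters):

```python
def linear_warmup_lr(step, base_lr, warmup_steps, total_steps):
    # linear ramp-up during warmup, then linear decay to zero
    if step < warmup_steps:
        return base_lr * step / max(warmup_steps, 1)
    return base_lr * max(0.0, (total_steps - step) / max(total_steps - warmup_steps, 1))

base_lr, warmup, total = 1e-05, 500, 4000
print(linear_warmup_lr(250, base_lr, warmup, total))   # 5e-06 (halfway through warmup)
print(linear_warmup_lr(2250, base_lr, warmup, total))  # 5e-06 (halfway through decay)
```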
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.4418 | 1.27 | 1000 | 0.5122 | -0.2642 |
| 0.1525 | 2.54 | 2000 | 0.4072 | -0.5033 |
| 0.113 | 3.81 | 3000 | 0.3782 | -0.4981 |
| 0.0232 | 5.08 | 4000 | 0.3774 | -0.6347 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.7
- Tokenizers 0.14.1
| null | transformers | automatic-speech-recognition | null | null | null | null | null | null | null | null | null | duckduck2/whisper-small-hi | [
-0.47946006059646606,
-0.6051755547523499,
0.13137057423591614,
0.024606453254818916,
-0.34001725912094116,
-0.7107070088386536,
-0.26585832238197327,
-0.33829450607299805,
0.29771411418914795,
0.3001539409160614,
-0.7755558490753174,
-0.5473257899284363,
-0.6300868988037109,
-0.2615960538... |
AlanRobotics/lab4_code | AlanRobotics | 2023-11-29T13:10:40Z | 24 | 0 | null | [
"peft",
"arxiv:1910.09700",
"base_model:huggyllama/Llama-7b",
"region:us"
] | 2023-11-29T13:10:40Z | 2023-11-28T00:19:27.000Z | null | null | ---
library_name: peft
base_model: huggyllama/Llama-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
| null | peft | null | null | null | null | null | null | null | null | null | null | AlanRobotics/lab4_code | [
-0.5839648842811584,
-0.5444982647895813,
0.4422629177570343,
0.10047031193971634,
-0.21837837994098663,
-0.29282113909721375,
0.1171051636338234,
-0.560427188873291,
0.08466268330812454,
0.6898143887519836,
-0.749681830406189,
-0.6463689804077148,
-0.5569517612457275,
-0.12423540651798248... |
Ja3ck/llama-2-13b-instruct-Y24-v2 | Ja3ck | 2023-11-29T06:28:41Z | 24 | 0 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T06:28:41Z | 2023-11-29T06:18:43.000Z | null | null | ---
license: apache-2.0
language:
- ko
pipeline_tag: text-generation
---
## Prompt Template
```
### 질문: {instruction}
### 답변: {output}
``` | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | Ja3ck/llama-2-13b-instruct-Y24-v2 | [
-0.04997900128364563,
-0.18326972424983978,
0.4979185163974762,
0.5446441173553467,
-0.653170645236969,
-0.024528279900550842,
0.11793452501296997,
0.9445188641548157,
0.35888832807540894,
0.6858900785446167,
-0.7869232296943665,
-0.9631636738777161,
-0.6446099281311035,
0.0574705526232719... |
zhengxuanzenwu/gpt2-5token-solver | zhengxuanzenwu | 2023-11-29T09:53:46Z | 24 | 0 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T09:53:46Z | 2023-11-29T09:53:30.000Z | null | null | Entry not found | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | zhengxuanzenwu/gpt2-5token-solver | [
-0.3227650821208954,
-0.22568479180335999,
0.8622263669967651,
0.4346153140068054,
-0.5282987952232361,
0.7012966871261597,
0.7915722727775574,
0.07618651539087296,
0.7746027112007141,
0.2563222348690033,
-0.7852821350097656,
-0.225738525390625,
-0.910447895526886,
0.5715667009353638,
-0... |
flozi00/mistral-openchat_3.5-lora | flozi00 | 2023-11-29T11:30:10Z | 24 | 0 | null | [
"peft",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"region:us"
] | 2023-11-29T11:30:10Z | 2023-11-29T11:29:39.000Z | null | null | ---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.1
| null | peft | null | null | null | null | null | null | null | null | null | null | flozi00/mistral-openchat_3.5-lora | [
-0.6006837487220764,
-0.5601157546043396,
0.4423378109931946,
0.1037740632891655,
-0.22056613862514496,
-0.3067236542701721,
0.12250815331935883,
-0.5630480051040649,
0.07925711572170258,
0.6820694804191589,
-0.7475411295890808,
-0.6544148921966553,
-0.5481134653091431,
-0.1328515112400055... |
metame/faster-distil-whisper-medium.en | metame | 2023-11-29T16:20:39Z | 23 | 0 | null | [
"transformers",
"license:mit",
"endpoints_compatible",
"region:us"
] | 2023-11-29T16:20:39Z | 2023-11-11T09:06:07.000Z | null | null | ---
license: mit
---
Please refer to [faster-distil-whisper](https://github.com/metame-none/faster-distil-whisper) for more details. | null | transformers | null | null | null | null | null | null | null | null | null | null | metame/faster-distil-whisper-medium.en | [
-0.23246625065803528,
-0.8098845481872559,
0.8060644268989563,
0.8322966694831848,
-0.4009571969509125,
0.17268450558185577,
0.03303892910480499,
-0.2777956426143646,
0.6468537449836731,
0.4293636977672577,
-0.892362117767334,
-0.5273257493972778,
-0.7262541651725769,
-0.06460852921009064,... |
SidXXD/path-to-save-model | SidXXD | 2023-11-29T17:18:17Z | 23 | 0 | null | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] | 2023-11-29T17:18:17Z | 2023-11-28T09:26:39.000Z | null | null |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: photo of a <new1> cat
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/path-to-save-model
These are Custom Diffusion adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on photo of a <new1> cat using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images below.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
| null | diffusers | text-to-image | null | null | null | null | null | null | null | null | null | SidXXD/path-to-save-model | [
-0.5578861832618713,
-0.5000231266021729,
0.485263854265213,
0.2548658847808838,
-0.2108985334634781,
0.01325393095612526,
0.5487146377563477,
0.26293322443962097,
0.7745240926742554,
0.6719549298286438,
-0.6916535496711731,
-0.21944497525691986,
-0.4724518358707428,
-0.06190501153469086,
... |
GiuliaMP/ppo-Huggy | GiuliaMP | 2023-11-29T14:29:44Z | 23 | 0 | null | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | 2023-11-29T14:29:44Z | 2023-11-29T14:29:38.000Z | null | null | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: GiuliaMP/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
| null | ml-agents | reinforcement-learning | null | null | null | null | null | null | null | null | null | GiuliaMP/ppo-Huggy | [
-0.5968357920646667,
-0.6554551720619202,
0.23691655695438385,
0.043884601444005966,
-0.22600652277469635,
0.22932174801826477,
0.18279527127742767,
-0.3262617588043213,
0.5915223956108093,
0.47630569338798523,
-0.6762183904647827,
-0.646571695804596,
-0.42778632044792175,
-0.2646219432353... |
ceadar-ie/FinanceConnect-13B | ceadar-ie | 2023-11-29T22:55:47Z | 22 | 0 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"summarization",
"classification",
"translation",
"NLP",
"finance",
"domain specific llm",
"en",
"dataset:FinTalk-19k",
"doi:10.57967/hf/1405",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-gener... | 2023-11-29T22:55:47Z | 2023-11-28T14:26:49.000Z | null | null | ---
language: en
datasets:
- FinTalk-19k
tags:
- summarization
- classification
- translation
- NLP
- finance
- domain specific llm
license: apache-2.0
pipeline_tag: text-generation
---
# Finance Connect
FinanceConnect is a state-of-the-art, open-source chat model tailored for finance and economic discussions. Built on the robust Llama2-13B architecture, this model has been fine-tuned on a combination of FinTalk-19k and Alpaca datasets, making it a valuable resource for finance professionals, researchers, and enthusiasts.
## Model Details
- Architecture: Llama2-13B
- Training Dataset: [FinTalk-19k](https://huggingface.co/datasets/ceadar-ie/FinTalk-19k), [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca)
## Dataset Utilized: FinTalk-19k and Alpaca
Drawing strength from the FinTalk-19k and Alpaca datasets, curated collections focused on financial knowledge, this model provides insights and information related to the finance industry. For a deeper dive into the datasets, visit: [FinTalk-19k](https://huggingface.co/datasets/ceadar-ie/FinTalk-19k), [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca)
## Model Specification
- **Developed by:** CeADAR Connect Group
- **Model type:** Large Language Model
- **Language(s):** en
- **Finetuned from model:** Llama2-13B
## Key Features and Functionalities
- **Domain Specialization:** The FinanceConnect model is specialized in finance conversations, serving as a resource for finance professionals, researchers, and enthusiasts.
- **Model API Accessibility:** Offers a straightforward Python integration for generating financial content insights.
- **Performance Optimisation:** Efficient performance across both CPU and GPU platforms.
- **Data Representation:** Utilises a combination of comprehensive finance datasets, enabling content generation to professional standards.
## Model Usage
Experience the capabilities of the FinanceConnect model through a well-structured Python interface. To kick-start your exploration, follow the steps and snippets given below:
### Prerequisites
#### 1. Ensure required packages are available
```python
import torch
from typing import Any, Dict
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    HfArgumentParser,
    TrainingArguments,
    PreTrainedTokenizerFast,
    pipeline,
    logging,
)
import time
```
#### 2. Initiate the model and tokenizer
```python
model_name = "ceadar-ie/FinanceConnect-13B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, load_in_8bit = True, device_map = "auto", trust_remote_code=True)
```
#### 3. Create a function for generating text
```python
def generate_text(input_text):
    tokenizer.padding_side = "right"
    num_words_input = len(input_text.split())
    start_time = time.time()
    pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=1000)
    # Wrap the instruction in the Llama-2 chat format before generation.
    prompt = f"<s>[INST] {input_text} [/INST]"
    # prompt = input_text
    result = pipe(prompt)
    # Strip the echoed prompt so only the model's continuation remains.
    generated_text = result[0]['generated_text'].replace(prompt, "").strip()
    end_time = time.time()
    time_taken = end_time - start_time
    num_words_generated = len(generated_text.split())
    print("Generated Output:")
    print(generated_text)
    print('=' * 60)
    print("\nNumber of Words in Input Text:", num_words_input)
    print("Number of Words in Generated Output:", num_words_generated)
    print("Time Taken to Generate Output: {:.2f} seconds".format(time_taken))
```
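The `[INST]` wrapping and prompt-stripping done inside `generate_text` can be shown in isolation; this is pure string handling taken from the function above, with no model required:

```python
def build_prompt(input_text: str) -> str:
    """Wrap a user instruction in the Llama-2 chat format used by generate_text."""
    return f"<s>[INST] {input_text} [/INST]"

def strip_prompt(generated_text: str, prompt: str) -> str:
    """Remove the echoed prompt from the pipeline output, keeping only the continuation."""
    return generated_text.replace(prompt, "").strip()

p = build_prompt("Explain quantitative easing.")
print(p)  # <s>[INST] Explain quantitative easing. [/INST]
print(strip_prompt(p + " QE is a monetary policy tool.", p))  # QE is a monetary policy tool.
```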
## Example Evaluation and Use
### Example Prompt 1:
List in detail ten key factors influencing the current state of the global economy.
### Generated Output:
The current state of the global economy is influenced by several factors, including:
1. Globalization: The increasing interconnectedness of economies through trade, investment, and the movement of people.
2. Technological advancements: The rapid pace of technological innovation and its impact on industries and jobs.
3. Political factors: Government policies and decisions that affect economic growth and stability.
4. Demographics: Changes in population demographics, such as aging populations and youth bulges, can impact economic growth and stability.
5. Environmental factors: Climate change, natural disasters, and other environmental factors can disrupt economic activity and impact growth.
6. International trade: The rise of protectionism and trade tensions between countries can impact economic growth and stability.
7. Financial factors: The performance of financial markets and the availability of credit can impact economic growth and stability.
8. Infrastructure: The availability and quality of infrastructure, such as transportation and communication networks, can impact economic growth and stability.
9. Education: The availability and quality of education can impact economic growth and stability by influencing the supply of skilled workers and the ability of countries to adapt to changing economic conditions.
10. Health: The availability and quality of healthcare can impact economic growth and stability by influencing the productivity of the workforce and the ability of countries to attract and retain talent.
### Example Prompt 2:
Explain the concept of quantitative easing and its impact on financial markets.
### Generated Output:
Quantitative easing is a monetary policy tool used by central banks to stimulate economic growth and combat inflation. It involves the purchase of assets such as government bonds, mortgage-backed securities, and corporate bonds by a central bank. This increases the amount of money in circulation, lowers interest rates, and encourages banks to lend more, which can boost economic growth. However, quantitative easing can also lead to inflation if it is not implemented carefully, as it can cause prices to rise more than wages can keep up with.
### Example Prompt 3:
I've recently started working and want to manage my finances. Give me a detailed answer in 10 points on how do I set up a budget?
### Generated Output:
Setting up a budget can help manage finances and save money. Here are some tips:
1. Determine your income and expenses.
2. Create a budget that includes all necessary expenses, such as rent, utilities, and groceries.
3. Set aside a portion of your income for savings and investments.
4. Monitor your spending regularly and adjust your budget as needed.
5. Use cash or debit cards instead of credit cards to avoid overspending.
6. Cut back on unnecessary expenses, such as eating out or subscriptions.
7. Consider using a budgeting app or tool to help track your spending.
8. Prioritize your spending based on your financial goals.
9. Review your budget regularly and make adjustments as needed.
10. Be flexible and willing to make changes to your budget as your financial situation changes.
## Training Details
### Training Hyperparameters
- per_device_train_batch_size = 10
- gradient_accumulation_steps = 4
- optim = "paged_adamw_32bit"
- learning_rate = 2e-4
- max_grad_norm = 0.3
- warmup_ratio = 0.03
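With gradient accumulation, the optimizer steps on an effective batch larger than the per-device batch. From the values above (the single-device assumption here is ours; the card does not state the hardware count):

```python
per_device_train_batch_size = 10
gradient_accumulation_steps = 4
n_devices = 1  # assumption — not stated in the card

# Gradients are accumulated over 4 micro-batches of 10 before each optimizer step.
effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps * n_devices
print(effective_batch_size)  # 40
```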
## Licensing
The FinanceConnect model, developed by CeADAR Connect Group, combines the licensing frameworks of Llama2, FinTalk-19k and Alpaca. Under Meta's terms, users are granted a non-exclusive, worldwide, non-transferable, royalty-free limited license for the use and modification of Llama Materials, inclusive of the Llama2 model and its associated documentation. When redistributing, the provided Agreement and a specific attribution notice must be included. Further, in alignment with the FinTalk-19k dataset's (Apache 2.0) licensing and the Alpaca dataset's (cc-by-nc-4.0) licensing, the model is distributed under the umbrella of all three licenses.
## Model Limitations
### Out-of-Scope Use
FinanceConnect is specifically tailored for financial discussions and knowledge. It is not optimized for:
- General conversations.
- Domain-specific tasks outside financial tasks.
- Direct interfacing with physical devices or applications.
### Bias, Risks, and Limitations
- Dataset Biases: The FinTalk-19k and Alpaca datasets may contain inherent biases that influence the model's outputs.
- Over-reliance: The model is an aid, not a replacement for human expertise. Decisions should be made with careful consideration.
- Content Understanding: The model lacks human-like understanding and cannot judge the veracity of knowledge.
- Language Limitations: The model's primary language is English. Performance may decrease with other languages.
- Knowledge Cut-off: The model may not be aware of events or trends post its last training update.
## Citation
```
@misc {ceadar_2023,
author = { {CeADAR} },
title = { FinanceConnect-13B (Revision 5f7841d) },
year = 2023,
url = { https://huggingface.co/ceadar-ie/FinanceConnect-13B },
doi = { 10.57967/hf/1405 },
publisher = { Hugging Face }
}
```
## Contact
For any further inquiries or feedback concerning FinanceConnect, please forward your communications to ahtsham.zafar@ucd.ie
| null | transformers | text-generation | null | null | null | null | null | null | null | null | null | ceadar-ie/FinanceConnect-13B | [
-0.1984514445066452,
-0.8903122544288635,
0.16640396416187286,
0.4635560214519501,
-0.21258118748664856,
0.08368036150932312,
0.04718947783112526,
-0.630060076713562,
0.030849523842334747,
0.4377363324165344,
-0.6574718356132507,
-0.19316883385181427,
-0.5110417008399963,
-0.02478989399969... |
deepseek-ai/deepseek-llm-67b-base | deepseek-ai | 2023-11-29T11:41:38Z | 22 | 15 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T11:41:38Z | 2023-11-29T03:23:47.000Z | null | null | ---
license: other
license_name: deepseek
license_link: LICENSE
---
<p align="center">
<img width="500px" alt="DeepSeek Chat" src="https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/images/logo.png?raw=true">
</p>
<p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://chat.deepseek.com/">[🤖 Chat with DeepSeek LLM]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/images/qr.jpeg">[Wechat(微信)]</a> </p>
<hr>
### 1. Introduction of Deepseek LLM
Introducing DeepSeek LLM, an advanced language model comprising 67 billion parameters. It has been trained from scratch on a vast dataset of 2 trillion tokens in both English and Chinese. In order to foster research, we have made DeepSeek LLM 7B/67B Base and DeepSeek LLM 7B/67B Chat open source for the research community.
### 2. Model Summary
`deepseek-llm-67b-base` is a 67B parameter model with Grouped-Query Attention trained on 2 trillion tokens from scratch.
- **Home Page:** [DeepSeek](https://deepseek.com/)
- **Repository:** [deepseek-ai/deepseek-LLM](https://github.com/deepseek-ai/deepseek-LLM)
- **Chat With DeepSeek LLM:** [DeepSeek-LLM](https://chat.deepseek.com/)
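The summary above notes the model uses Grouped-Query Attention. Conceptually, GQA lets a group of consecutive query heads share a single key/value head, shrinking the KV cache. A minimal sketch of that head mapping (the head counts below are illustrative, not DeepSeek's actual configuration):

```python
def kv_head_for(query_head: int, n_query_heads: int, n_kv_heads: int) -> int:
    """Map a query head to the KV head its group shares under grouped-query attention."""
    assert n_query_heads % n_kv_heads == 0, "query heads must divide evenly into KV groups"
    group_size = n_query_heads // n_kv_heads
    return query_head // group_size

# With 8 query heads grouped onto 2 KV heads, heads 0-3 share KV head 0:
print([kv_head_for(h, 8, 2) for h in range(8)])  # [0, 0, 0, 0, 1, 1, 1, 1]
```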
### 3. How to Use
Here are some examples of how to use our model.
#### Text Completion
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
model_name = "deepseek-ai/deepseek-llm-67b-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
model.generation_config = GenerationConfig.from_pretrained(model_name)
model.generation_config.pad_token_id = model.generation_config.eos_token_id
text = "An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs.to(model.device), max_new_tokens=100)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
### 4. License
This code repository is licensed under the MIT License. The use of DeepSeek LLM models is subject to the Model License. DeepSeek LLM supports commercial use.
See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-LLM/blob/main/LICENSE-MODEL) for more details.
### 5. Contact
If you have any questions, please raise an issue or contact us at [service@deepseek.com](mailto:service@deepseek.com).
| null | transformers | text-generation | null | null | null | null | null | null | null | null | null | deepseek-ai/deepseek-llm-67b-base | [
-0.3270307779312134,
-0.7512065768241882,
0.4040359854698181,
0.34164008498191833,
-0.4763199985027313,
-0.15435774624347687,
-0.30858132243156433,
-0.4932252764701843,
0.06378762423992157,
0.2726799249649048,
-0.6778574585914612,
-0.790531575679779,
-0.6184454560279846,
-0.139318689703941... |
pandaresiddhi/lora-sparsegpt-2-4 | pandaresiddhi | 2023-11-29T05:26:54Z | 22 | 0 | null | [
"peft",
"region:us"
] | 2023-11-29T05:26:54Z | 2023-11-29T05:25:51.000Z | null | null | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
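Expressed in code, the settings above correspond to the keyword arguments of `transformers.BitsAndBytesConfig`. A plain-Python mirror (shown as a dict so it runs without `bitsandbytes` installed; in practice these values would be passed to the config class):

```python
# Mirror of the quantization settings listed above; in a real training script these
# would be keyword arguments to transformers.BitsAndBytesConfig.
quantization_config = {
    "load_in_8bit": True,
    "load_in_4bit": False,
    "llm_int8_threshold": 6.0,
    "llm_int8_skip_modules": None,
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
    "bnb_4bit_quant_type": "fp4",
    "bnb_4bit_use_double_quant": False,
    "bnb_4bit_compute_dtype": "float32",
}

# 8-bit and 4-bit loading are mutually exclusive modes.
assert not (quantization_config["load_in_8bit"] and quantization_config["load_in_4bit"])
```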
### Framework versions
- PEFT 0.5.0
| null | peft | null | null | null | null | null | null | null | null | null | null | pandaresiddhi/lora-sparsegpt-2-4 | [
-0.6008351445198059,
-0.7704287171363831,
0.4562922418117523,
0.4660435914993286,
-0.560562252998352,
0.10317724198102951,
0.15182171761989594,
-0.1810612678527832,
-0.16928145289421082,
0.43691903352737427,
-0.5656930804252625,
-0.1273678094148636,
-0.4861360788345337,
0.10606901347637177... |
Gbssreejith/new-donut | Gbssreejith | 2023-11-29T07:15:34Z | 22 | 0 | null | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"endpoints_compatible",
"region:us"
] | 2023-11-29T07:15:34Z | 2023-11-29T06:33:23.000Z | null | null | Entry not found | null | transformers | null | null | null | null | null | null | null | null | null | null | Gbssreejith/new-donut | [
-0.32276451587677,
-0.2256847620010376,
0.8622261881828308,
0.43461543321609497,
-0.5282991528511047,
0.7012973427772522,
0.7915714979171753,
0.07618623226881027,
0.7746027708053589,
0.25632160902023315,
-0.7852810025215149,
-0.22573824226856232,
-0.9104477763175964,
0.5715674161911011,
... |
meghwork1/RESDSQL-Demo | meghwork1 | 2023-11-29T16:09:12Z | 22 | 0 | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T16:09:12Z | 2023-11-29T16:03:09.000Z | null | null | ---
license: apache-2.0
---
| null | transformers | text2text-generation | null | null | null | null | null | null | null | null | null | meghwork1/RESDSQL-Demo | [
-0.12853312492370605,
-0.18616832792758942,
0.6529129147529602,
0.494362473487854,
-0.19319364428520203,
0.23607414960861206,
0.36071962118148804,
0.05056367814540863,
0.5793655514717102,
0.7400145530700684,
-0.6508100032806396,
-0.237839937210083,
-0.7102250456809998,
-0.0478254035115242,... |
ptamm/layoutlm_trained | ptamm | 2023-11-29T17:05:55Z | 21 | 0 | null | [
"transformers",
"pytorch",
"safetensors",
"layoutlmv3",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-29T17:05:55Z | 2023-11-21T09:19:05.000Z | null | null | Entry not found | null | transformers | token-classification | null | null | null | null | null | null | null | null | null | ptamm/layoutlm_trained | [
-0.32276463508605957,
-0.2256849706172943,
0.8622266054153442,
0.4346153736114502,
-0.5282987952232361,
0.7012974619865417,
0.7915722131729126,
0.07618652284145355,
0.7746030688285828,
0.2563217282295227,
-0.7852814793586731,
-0.22573867440223694,
-0.9104479551315308,
0.571567177772522,
... |
BEE-spoke-data/smol_llama-101M-midjourney-messages | BEE-spoke-data | 2023-11-29T20:26:35Z | 21 | 1 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"art",
"text2image",
"prompt",
"prompt generator",
"diffusion util",
"dataset:pszemraj/midjourney-messages-cleaned",
"base_model:BEE-spoke-data/smol_llama-101M-GQA",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatib... | 2023-11-29T20:26:35Z | 2023-11-24T22:49:24.000Z | null | null | ---
license: apache-2.0
base_model: BEE-spoke-data/smol_llama-101M-GQA
tags:
- art
- text2image
- prompt
- prompt generator
- diffusion util
metrics:
- accuracy
inference:
parameters:
max_new_tokens: 64
do_sample: true
temperature: 0.8
repetition_penalty: 1.15
no_repeat_ngram_size: 4
eta_cutoff: 0.001
renormalize_logits: true
widget:
- text: avocado chair
example_title: avocado chair
- text: A mysterious potato
example_title: potato
pipeline_tag: text-generation
datasets:
- pszemraj/midjourney-messages-cleaned
---
# smol_llama-101M-midjourney-messages
Given a 'partial prompt' for a text2image model, this generates additional relevant text to include in a full prompt.

## Model description
This model is a fine-tuned version of [BEE-spoke-data/smol_llama-101M-GQA](https://huggingface.co/BEE-spoke-data/smol_llama-101M-GQA) on the `pszemraj/midjourney-messages-cleaned` dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8431
- Accuracy: 0.4682
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 4
- eval_batch_size: 4
- seed: 17056
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1.0 | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | BEE-spoke-data/smol_llama-101M-midjourney-messages | [
-0.2887355387210846,
-0.6501184701919556,
0.7408512234687805,
0.34943199157714844,
-0.6436055898666382,
-0.21596933901309967,
-0.23285888135433197,
-0.2857157588005066,
0.1736571341753006,
0.39088964462280273,
-1.0541507005691528,
-0.4890481233596802,
-0.8349170684814453,
0.174817070364952... |
makhataei/qa-persian-mdeberta-v3-base-squad2 | makhataei | 2023-11-30T01:02:45Z | 21 | 0 | null | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"question-answering",
"generated_from_trainer",
"dataset:pquad",
"base_model:makhataei/qa-persian-mdeberta-v3-base-squad2",
"license:mit",
"endpoints_compatible",
"region:us"
] | 2023-11-30T01:02:45Z | 2023-11-26T06:17:48.000Z | null | null | ---
license: mit
base_model: makhataei/qa-persian-mdeberta-v3-base-squad2
tags:
- generated_from_trainer
datasets:
- pquad
model-index:
- name: qa-persian-mdeberta-v3-base-squad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qa-persian-mdeberta-v3-base-squad2
This model is a fine-tuned version of [makhataei/qa-persian-mdeberta-v3-base-squad2](https://huggingface.co/makhataei/qa-persian-mdeberta-v3-base-squad2) on the pquad dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2905
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.25e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.2324 | 0.04 | 500 | 1.4461 |
| 0.2076 | 0.08 | 1000 | 1.5599 |
| 0.2298 | 0.12 | 1500 | 1.6634 |
| 0.2049 | 0.16 | 2000 | 1.7076 |
| 0.201 | 0.19 | 2500 | 1.7011 |
| 0.1981 | 0.23 | 3000 | 1.6738 |
| 0.1588 | 0.27 | 3500 | 1.7657 |
| 0.1836 | 0.31 | 4000 | 1.7728 |
| 0.1958 | 0.35 | 4500 | 1.6861 |
| 0.162 | 0.39 | 5000 | 1.7768 |
| 0.1811 | 0.43 | 5500 | 1.7534 |
| 0.1775 | 0.47 | 6000 | 1.7344 |
| 0.1806 | 0.51 | 6500 | 1.7266 |
| 0.1566 | 0.55 | 7000 | 1.8093 |
| 0.1517 | 0.58 | 7500 | 1.7544 |
| 0.1146 | 0.62 | 8000 | 1.9351 |
| 0.154 | 0.66 | 8500 | 1.8271 |
| 0.323 | 0.7 | 9000 | 1.4894 |
| 0.2732 | 0.74 | 9500 | 1.4975 |
| 0.2902 | 0.78 | 10000 | 1.5645 |
| 0.2561 | 0.82 | 10500 | 1.5566 |
| 0.2754 | 0.86 | 11000 | 1.4860 |
| 0.5959 | 0.9 | 11500 | 1.1121 |
| 0.5385 | 0.94 | 12000 | 1.1161 |
| 0.5452 | 0.97 | 12500 | 1.0867 |
| 0.4369 | 1.01 | 13000 | 1.2922 |
| 0.3144 | 1.05 | 13500 | 1.3008 |
| 0.3284 | 1.09 | 14000 | 1.4088 |
| 0.292 | 1.13 | 14500 | 1.4120 |
| 0.3237 | 1.17 | 15000 | 1.3833 |
| 0.3077 | 1.21 | 15500 | 1.3974 |
| 0.3051 | 1.25 | 16000 | 1.5286 |
| 0.3015 | 1.29 | 16500 | 1.4756 |
| 0.3496 | 1.32 | 17000 | 1.4013 |
| 0.3178 | 1.36 | 17500 | 1.3949 |
| 0.3188 | 1.4 | 18000 | 1.3854 |
| 0.3176 | 1.44 | 18500 | 1.4037 |
| 0.3291 | 1.48 | 19000 | 1.3074 |
| 0.3241 | 1.52 | 19500 | 1.4160 |
| 0.3164 | 1.56 | 20000 | 1.4171 |
| 0.3118 | 1.6 | 20500 | 1.4151 |
| 0.3429 | 1.64 | 21000 | 1.4271 |
| 0.2833 | 1.68 | 21500 | 1.4760 |
| 0.3184 | 1.71 | 22000 | 1.3960 |
| 0.2887 | 1.75 | 22500 | 1.4839 |
| 0.31 | 1.79 | 23000 | 1.4136 |
| 0.3282 | 1.83 | 23500 | 1.3990 |
| 0.3153 | 1.87 | 24000 | 1.4032 |
| 0.2832 | 1.91 | 24500 | 1.4633 |
| 0.3439 | 1.95 | 25000 | 1.3783 |
| 0.3133 | 1.99 | 25500 | 1.4371 |
| 0.2562 | 2.03 | 26000 | 1.5103 |
| 0.2338 | 2.06 | 26500 | 1.6106 |
| 0.2464 | 2.1 | 27000 | 1.6430 |
| 0.2187 | 2.14 | 27500 | 1.6828 |
| 0.2353 | 2.18 | 28000 | 1.6362 |
| 0.2726 | 2.22 | 28500 | 1.5727 |
| 0.2491 | 2.26 | 29000 | 1.5545 |
| 0.2743 | 2.3 | 29500 | 1.5949 |
| 0.2419 | 2.34 | 30000 | 1.6422 |
| 0.2661 | 2.38 | 30500 | 1.5882 |
| 0.2105 | 2.42 | 31000 | 1.6584 |
| 0.2323 | 2.45 | 31500 | 1.6550 |
| 0.2778 | 2.49 | 32000 | 1.5761 |
| 0.2411 | 2.53 | 32500 | 1.6776 |
| 0.2552 | 2.57 | 33000 | 1.6707 |
| 0.2468 | 2.61 | 33500 | 1.5738 |
| 0.2398 | 2.65 | 34000 | 1.6479 |
| 0.2318 | 2.69 | 34500 | 1.6217 |
| 0.2093 | 2.73 | 35000 | 1.7018 |
| 0.2344 | 2.77 | 35500 | 1.6763 |
| 0.2243 | 2.81 | 36000 | 1.6870 |
| 0.1943 | 2.84 | 36500 | 1.6926 |
| 0.221 | 2.88 | 37000 | 1.6862 |
| 0.2256 | 2.92 | 37500 | 1.7141 |
| 0.3765 | 2.96 | 38000 | 1.5414 |
| 0.3601 | 3.0 | 38500 | 1.4698 |
| 0.2237 | 3.04 | 39000 | 1.7001 |
| 0.2426 | 3.08 | 39500 | 1.6693 |
| 0.2216 | 3.12 | 40000 | 1.7385 |
| 0.2417 | 3.16 | 40500 | 1.6941 |
| 0.2604 | 3.19 | 41000 | 1.6964 |
| 0.2762 | 3.23 | 41500 | 1.6379 |
| 0.2399 | 3.27 | 42000 | 1.6806 |
| 0.2249 | 3.31 | 42500 | 1.7414 |
| 0.2582 | 3.35 | 43000 | 1.6874 |
| 0.2524 | 3.39 | 43500 | 1.6648 |
| 0.2359 | 3.43 | 44000 | 1.7382 |
| 0.2729 | 3.47 | 44500 | 1.6762 |
| 0.2729 | 3.51 | 45000 | 1.6736 |
| 0.2478 | 3.55 | 45500 | 1.7487 |
| 0.2557 | 3.58 | 46000 | 1.6379 |
| 0.2486 | 3.62 | 46500 | 1.6746 |
| 0.2541 | 3.66 | 47000 | 1.6942 |
| 0.2613 | 3.7 | 47500 | 1.6501 |
| 0.2552 | 3.74 | 48000 | 1.6790 |
| 0.2692 | 3.78 | 48500 | 1.6246 |
| 0.2769 | 3.82 | 49000 | 1.6306 |
| 0.2542 | 3.86 | 49500 | 1.6412 |
| 0.2477 | 3.9 | 50000 | 1.6786 |
| 0.2686 | 3.94 | 50500 | 1.6677 |
| 0.2324 | 3.97 | 51000 | 1.7063 |
| 0.2509 | 4.01 | 51500 | 1.6490 |
| 0.1966 | 4.05 | 52000 | 1.8161 |
| 0.227 | 4.09 | 52500 | 1.7389 |
| 0.1881 | 4.13 | 53000 | 1.8164 |
| 0.2244 | 4.17 | 53500 | 1.7851 |
| 0.2068 | 4.21 | 54000 | 1.8039 |
| 0.2094 | 4.25 | 54500 | 1.8641 |
| 0.1783 | 4.29 | 55000 | 1.8781 |
| 0.1916 | 4.32 | 55500 | 1.8887 |
| 0.2221 | 4.36 | 56000 | 1.8061 |
| 0.2238 | 4.4 | 56500 | 1.7892 |
| 0.1996 | 4.44 | 57000 | 1.8320 |
| 0.2074 | 4.48 | 57500 | 1.8944 |
| 0.2401 | 4.52 | 58000 | 1.7803 |
| 0.2174 | 4.56 | 58500 | 1.8466 |
| 0.2258 | 4.6 | 59000 | 1.8607 |
| 0.223 | 4.64 | 59500 | 1.7695 |
| 0.185 | 4.68 | 60000 | 1.8845 |
| 0.2464 | 4.71 | 60500 | 1.8049 |
| 0.2223 | 4.75 | 61000 | 1.8136 |
| 0.2192 | 4.79 | 61500 | 1.7870 |
| 0.2191 | 4.83 | 62000 | 1.7845 |
| 0.2471 | 4.87 | 62500 | 1.7158 |
| 0.2085 | 4.91 | 63000 | 1.7816 |
| 0.2316 | 4.95 | 63500 | 1.7406 |
| 0.2449 | 4.99 | 64000 | 1.7465 |
| 0.196 | 5.03 | 64500 | 1.8431 |
| 0.1851 | 5.07 | 65000 | 1.8751 |
| 0.1393 | 5.1 | 65500 | 1.9697 |
| 0.1752 | 5.14 | 66000 | 1.9985 |
| 0.1438 | 5.18 | 66500 | 2.0071 |
| 0.2112 | 5.22 | 67000 | 1.9434 |
| 0.1715 | 5.26 | 67500 | 1.9735 |
| 0.1982 | 5.3 | 68000 | 1.9319 |
| 0.1768 | 5.34 | 68500 | 1.9622 |
| 0.1872 | 5.38 | 69000 | 1.8810 |
| 0.2059 | 5.42 | 69500 | 1.8445 |
| 0.1903 | 5.45 | 70000 | 1.8744 |
| 0.1835 | 5.49 | 70500 | 1.9283 |
| 0.1843 | 5.53 | 71000 | 1.9938 |
| 0.1727 | 5.57 | 71500 | 1.9865 |
| 0.1994 | 5.61 | 72000 | 1.9390 |
| 0.172 | 5.65 | 72500 | 2.0077 |
| 0.163 | 5.69 | 73000 | 1.9794 |
| 0.196 | 5.73 | 73500 | 1.9307 |
| 0.183 | 5.77 | 74000 | 1.9463 |
| 0.1764 | 5.81 | 74500 | 1.9981 |
| 0.1951 | 5.84 | 75000 | 1.9378 |
| 0.2014 | 5.88 | 75500 | 1.9199 |
| 0.1766 | 5.92 | 76000 | 1.9824 |
| 0.1996 | 5.96 | 76500 | 1.9309 |
| 0.1919 | 6.0 | 77000 | 1.9458 |
| 0.1664 | 6.04 | 77500 | 2.0603 |
| 0.1517 | 6.08 | 78000 | 2.0253 |
| 0.1461 | 6.12 | 78500 | 2.1310 |
| 0.1379 | 6.16 | 79000 | 2.1506 |
| 0.1532 | 6.19 | 79500 | 2.0715 |
| 0.1546 | 6.23 | 80000 | 2.1345 |
| 0.156 | 6.27 | 80500 | 2.1732 |
| 0.1648 | 6.31 | 81000 | 2.1075 |
| 0.1494 | 6.35 | 81500 | 2.1547 |
| 0.1741 | 6.39 | 82000 | 2.0228 |
| 0.1391 | 6.43 | 82500 | 2.0426 |
| 0.1541 | 6.47 | 83000 | 2.0919 |
| 0.1609 | 6.51 | 83500 | 2.1206 |
| 0.159 | 6.55 | 84000 | 2.0798 |
| 0.153 | 6.58 | 84500 | 2.1216 |
| 0.1822 | 6.62 | 85000 | 2.1276 |
| 0.1466 | 6.66 | 85500 | 2.1533 |
| 0.1583 | 6.7 | 86000 | 2.1250 |
| 0.2012 | 6.74 | 86500 | 2.0619 |
| 0.1501 | 6.78 | 87000 | 2.0804 |
| 0.1748 | 6.82 | 87500 | 2.0684 |
| 0.1571 | 6.86 | 88000 | 2.0902 |
| 0.169 | 6.9 | 88500 | 2.0587 |
| 0.183 | 6.94 | 89000 | 2.0435 |
| 0.1891 | 6.97 | 89500 | 1.9954 |
| 0.1647 | 7.01 | 90000 | 2.0333 |
| 0.1511 | 7.05 | 90500 | 2.0657 |
| 0.1345 | 7.09 | 91000 | 2.1329 |
| 0.1394 | 7.13 | 91500 | 2.1481 |
| 0.133 | 7.17 | 92000 | 2.1518 |
| 0.1508 | 7.21 | 92500 | 2.1051 |
| 0.1493 | 7.25 | 93000 | 2.1017 |
| 0.148 | 7.29 | 93500 | 2.0833 |
| 0.1416 | 7.32 | 94000 | 2.1286 |
| 0.1185 | 7.36 | 94500 | 2.1419 |
| 0.1274 | 7.4 | 95000 | 2.1302 |
| 0.1326 | 7.44 | 95500 | 2.1720 |
| 0.1378 | 7.48 | 96000 | 2.1826 |
| 0.1117 | 7.52 | 96500 | 2.2190 |
| 0.1454 | 7.56 | 97000 | 2.1884 |
| 0.1288 | 7.6 | 97500 | 2.2184 |
| 0.1605 | 7.64 | 98000 | 2.1831 |
| 0.1492 | 7.68 | 98500 | 2.1518 |
| 0.1573 | 7.71 | 99000 | 2.1452 |
| 0.1496 | 7.75 | 99500 | 2.1474 |
| 0.1382 | 7.79 | 100000 | 2.1298 |
| 0.1368 | 7.83 | 100500 | 2.1231 |
| 0.1699 | 7.87 | 101000 | 2.0813 |
| 0.153 | 7.91 | 101500 | 2.1481 |
| 0.1412 | 7.95 | 102000 | 2.1022 |
| 0.1663 | 7.99 | 102500 | 2.0768 |
| 0.1055 | 8.03 | 103000 | 2.1489 |
| 0.1165 | 8.07 | 103500 | 2.1983 |
| 0.1165 | 8.1 | 104000 | 2.2075 |
| 0.1172 | 8.14 | 104500 | 2.1885 |
| 0.1222 | 8.18 | 105000 | 2.1968 |
| 0.1407 | 8.22 | 105500 | 2.2263 |
| 0.1048 | 8.26 | 106000 | 2.2442 |
| 0.1293 | 8.3 | 106500 | 2.2103 |
| 0.0964 | 8.34 | 107000 | 2.2572 |
| 0.1516 | 8.38 | 107500 | 2.2265 |
| 0.1415 | 8.42 | 108000 | 2.2039 |
| 0.1135 | 8.45 | 108500 | 2.2160 |
| 0.1431 | 8.49 | 109000 | 2.2018 |
| 0.1161 | 8.53 | 109500 | 2.2555 |
| 0.1705 | 8.57 | 110000 | 2.2277 |
| 0.1299 | 8.61 | 110500 | 2.2269 |
| 0.1354 | 8.65 | 111000 | 2.1957 |
| 0.0906 | 8.69 | 111500 | 2.2220 |
| 0.1186 | 8.73 | 112000 | 2.2277 |
| 0.1482 | 8.77 | 112500 | 2.1811 |
| 0.1628 | 8.81 | 113000 | 2.1620 |
| 0.1141 | 8.84 | 113500 | 2.1916 |
| 0.0998 | 8.88 | 114000 | 2.2243 |
| 0.1227 | 8.92 | 114500 | 2.2303 |
| 0.1434 | 8.96 | 115000 | 2.2154 |
| 0.1358 | 9.0 | 115500 | 2.1964 |
| 0.1263 | 9.04 | 116000 | 2.2122 |
| 0.0955 | 9.08 | 116500 | 2.2367 |
| 0.1016 | 9.12 | 117000 | 2.2425 |
| 0.1106 | 9.16 | 117500 | 2.2399 |
| 0.1081 | 9.2 | 118000 | 2.2621 |
| 0.1318 | 9.23 | 118500 | 2.2402 |
| 0.1174 | 9.27 | 119000 | 2.2364 |
| 0.1071 | 9.31 | 119500 | 2.2163 |
| 0.1049 | 9.35 | 120000 | 2.2512 |
| 0.1289 | 9.39 | 120500 | 2.2354 |
| 0.1214 | 9.43 | 121000 | 2.2384 |
| 0.1149 | 9.47 | 121500 | 2.2346 |
| 0.0977 | 9.51 | 122000 | 2.2553 |
| 0.1088 | 9.55 | 122500 | 2.2676 |
| 0.101 | 9.58 | 123000 | 2.2732 |
| 0.1135 | 9.62 | 123500 | 2.2706 |
| 0.1168 | 9.66 | 124000 | 2.2768 |
| 0.1164 | 9.7 | 124500 | 2.2803 |
| 0.113 | 9.74 | 125000 | 2.2813 |
| 0.0944 | 9.78 | 125500 | 2.2862 |
| 0.1189 | 9.82 | 126000 | 2.2904 |
| 0.1059 | 9.86 | 126500 | 2.2905 |
| 0.1108 | 9.9 | 127000 | 2.2920 |
| 0.1195 | 9.94 | 127500 | 2.2911 |
| 0.1009 | 9.97 | 128000 | 2.2905 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
| null | transformers | question-answering | null | null | null | null | null | null | null | null | null | makhataei/qa-persian-mdeberta-v3-base-squad2 | [
-0.7539421319961548,
-0.49935561418533325,
0.38764020800590515,
0.20490722358226776,
-0.007207682356238365,
0.07451572269201279,
0.12721875309944153,
0.06683778762817383,
0.7666441202163696,
0.4211624264717102,
-0.6325256824493408,
-0.6646121740341187,
-0.6299108862876892,
-0.1708624511957... |
xiaotinghe/XTBert | xiaotinghe | 2023-11-29T20:20:01Z | 21 | 0 | null | [
"transformers",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-29T20:20:01Z | 2023-11-27T19:02:09.000Z | null | null | Entry not found | null | transformers | fill-mask | null | null | null | null | null | null | null | null | null | xiaotinghe/XTBert | [
-0.3227648138999939,
-0.22568409144878387,
0.8622261881828308,
0.43461495637893677,
-0.5282989740371704,
0.7012965083122253,
0.7915717959403992,
0.07618632167577744,
0.7746028304100037,
0.2563219666481018,
-0.7852813601493835,
-0.22573833167552948,
-0.9104479551315308,
0.5715669393539429,
... |
KaeriJenti/kaori-70b-v1 | KaeriJenti | 2023-11-29T08:55:13Z | 21 | 0 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:llama2",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T08:55:13Z | 2023-11-29T02:23:55.000Z | null | null | ---
license: llama2
---
<h1>kaori-70b-v1 Model Card</h1>
<h3>Datasets:</h3>
- Open-Platypus
- dolphin
- OpenOrca
This model was fine-tuned by Kaeri and Jenti.
<h3>Framework:</h3>
- https://github.com/hiyouga/LLaMA-Efficient-Tuning
<h3>Parameters:</h3>
- Finetune_Type : QLoRA
- GPUs : A100x4(80GB)
- Epochs : 1
- Batchsize : 8
| null | transformers | text-generation | null | null | null | null | null | null | null | null | null | KaeriJenti/kaori-70b-v1 | [
-0.859072208404541,
-0.7233968377113342,
0.521796703338623,
0.15780891478061676,
-0.738695502281189,
-0.41694337129592896,
0.06475537270307541,
-0.21130365133285522,
0.08480048179626465,
0.5017474889755249,
-0.7464640736579895,
-0.5009706616401672,
-0.16968375444412231,
-0.1166386753320694... |
deepseek-ai/deepseek-llm-7b-base | deepseek-ai | 2023-11-29T11:42:00Z | 21 | 5 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T11:42:00Z | 2023-11-29T03:08:22.000Z | null | null | ---
license: other
license_name: deepseek
license_link: LICENSE
---
<p align="center">
<img width="500px" alt="DeepSeek Chat" src="https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/images/logo.png?raw=true">
</p>
<p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://chat.deepseek.com/">[🤖 Chat with DeepSeek LLM]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/images/qr.jpeg">[Wechat(微信)]</a> </p>
<hr>
### 1. Introduction of Deepseek LLM
Introducing DeepSeek LLM, an advanced language model comprising 7 billion parameters. It has been trained from scratch on a vast dataset of 2 trillion tokens in both English and Chinese. In order to foster research, we have made DeepSeek LLM 7B/67B Base and DeepSeek LLM 7B/67B Chat open source for the research community.
### 2. Model Summary
`deepseek-llm-7b-base` is a 7B parameter model with Multi-Head Attention trained on 2 trillion tokens from scratch.
- **Home Page:** [DeepSeek](https://deepseek.com/)
- **Repository:** [deepseek-ai/deepseek-LLM](https://github.com/deepseek-ai/deepseek-LLM)
- **Chat With DeepSeek LLM:** [DeepSeek-LLM](https://chat.deepseek.com/)
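As a refresher on the attention mechanism the summary refers to, here is a minimal single-query scaled dot-product attention in pure Python (illustrative only; real multi-head attention batches this over many heads and applies learned query/key/value projections):

```python
import math

def scaled_dot_product_attention(q, keys, values):
    """Toy single-query attention: softmax(q . k / sqrt(d)) weighted sum of values."""
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    # Numerically stable softmax over the scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Output is the attention-weighted average of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

out = scaled_dot_product_attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 2.0], [3.0, 4.0]])
print(out)  # ~ [1.66, 2.66]: the query attends more to the first key/value pair
```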
### 3. How to Use
Here are some examples of how to use our model.
#### Text Completion
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
model_name = "deepseek-ai/deepseek-llm-7b-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
model.generation_config = GenerationConfig.from_pretrained(model_name)
model.generation_config.pad_token_id = model.generation_config.eos_token_id
text = "An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs.to(model.device), max_new_tokens=100)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
### 4. License
This code repository is licensed under the MIT License. The use of DeepSeek LLM models is subject to the Model License. DeepSeek LLM supports commercial use.
See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-LLM/blob/main/LICENSE-MODEL) for more details.
### 5. Contact
If you have any questions, please raise an issue or contact us at [service@deepseek.com](mailto:service@deepseek.com).
| null | transformers | text-generation | null | null | null | null | null | null | null | null | null | deepseek-ai/deepseek-llm-7b-base | [
-0.33617398142814636,
-0.7593737244606018,
0.41076579689979553,
0.3567091226577759,
-0.47349923849105835,
-0.14259636402130127,
-0.3197845220565796,
-0.519167959690094,
0.06090303137898445,
0.2819039821624756,
-0.6908672451972961,
-0.7530676126480103,
-0.6484923958778381,
-0.12303605675697... |
digiplay/MengX_Mix_Real_v3 | digiplay | 2023-11-29T18:45:08Z | 21 | 1 | null | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us",
"has_space"
] | 2023-11-29T18:45:08Z | 2023-11-29T16:57:16.000Z | null | null | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/142467?modelVersionId=238630
Sample image I generated with AUTOMATIC1111:

| null | diffusers | text-to-image | null | null | null | null | null | null | null | null | null | digiplay/MengX_Mix_Real_v3 | [
-0.6235388517379761,
-0.11623475700616837,
0.44390273094177246,
0.33901944756507874,
-0.44051697850227356,
-0.2662602663040161,
0.7372536659240723,
-0.24802063405513763,
0.6938766837120056,
0.4410604238510132,
-0.8308306336402893,
-0.3743895888328552,
-0.3254507780075073,
0.113238915801048... |
JLenScott/dogbooth | JLenScott | 2023-11-29T02:17:10Z | 20 | 0 | null | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | 2023-11-29T02:17:10Z | 2023-11-29T02:01:32.000Z | null | null |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of [v]dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - JLenScott/dogbooth
This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of [v]dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
| null | diffusers | text-to-image | null | null | null | null | null | null | null | null | null | JLenScott/dogbooth | [
-0.11688967049121857,
-0.46575766801834106,
0.5217038989067078,
-0.007809889502823353,
-0.36243146657943726,
0.3142928183078766,
0.22608841955661774,
-0.3274184763431549,
0.6127873659133911,
0.35018670558929443,
-0.4667939245700836,
-0.3715245723724365,
-0.5847378373146057,
-0.229460820555... |
e-n-v-y/envy-stylized-xl-01 | e-n-v-y | 2023-11-29T02:46:44Z | 20 | 0 | null | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"anime",
"city",
"scenery",
"style",
"flat colors",
"woman",
"man",
"outline",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"has_space",
"region:us"
] | 2023-11-29T02:46:44Z | 2023-11-29T02:46:43.000Z | null | null | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Sell&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- anime
- city
- scenery
- style
- flat colors
- woman
- man
- outline
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt:
widget:
- text: 'scifi, solarpunk, alien, chemist, full body, wearing radiant aqua work clothes'
output:
url: >-
4010441.jpeg
- text: 'gargantuan fantasy cloud city beyond the end of time'
output:
url: >-
4010431.jpeg
- text: 'noon, scenery, in a elite Gnome Hills'
output:
url: >-
4010383.jpeg
- text: 'noon, scenery, "at the Unfathomable Aeon"'
output:
url: >-
4010392.jpeg
- text: 'high-end dance club in a pristine fantasy cloud arcology edge of the universe, masterpiece'
output:
url: >-
4010404.jpeg
- text: 'scifi, scrapcore, assassin, full body, wearing skin-tight iridescent light crimson power armor'
output:
url: >-
4010407.jpeg
- text: 'noon, architecture, rusted Baddie smithy'
output:
url: >-
4010413.jpeg
- text: 'morning, scenery, "at the Supernatural Megalopolis"'
output:
url: >-
4010415.jpeg
- text: 'Seagrass Bed'
output:
url: >-
4010416.jpeg
- text: 'warmly lit interior, in a sticky Scottish highland castle ruins'
output:
url: >-
4010417.jpeg
---
# Envy Stylized XL 01
<Gallery />
## Model description
<p>A retro, vaporwave-ish, cel-shaded anime style.</p>
## Download model
Weights for this model are available in Safetensors format.
[Download](/e-n-v-y/envy-stylized-xl-01/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('e-n-v-y/envy-stylized-xl-01', weight_name='EnvyStylizedXL01.safetensors')
image = pipeline('warmly lit interior, in a sticky Scottish highland castle ruins').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
| null | diffusers | text-to-image | null | null | null | null | null | null | null | null | null | e-n-v-y/envy-stylized-xl-01 | [
-0.39373841881752014,
-0.5937250256538391,
0.15524867177009583,
0.38695228099823,
-0.3680151104927063,
-0.2601347267627716,
0.17347943782806396,
-0.24584704637527466,
0.7773153781890869,
0.5285664200782776,
-0.3742464482784271,
-0.33911797404289246,
-0.6921035051345825,
-0.1729655414819717... |
hanifabdlh/sample-setfit-model-example | hanifabdlh | 2023-11-29T09:27:56Z | 20 | 0 | null | [
"sentence-transformers",
"safetensors",
"albert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-11-29T09:27:56Z | 2023-11-29T09:27:45.000Z | null | null | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# hanifabdlh/sample-setfit-model-example
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
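Step 1 above operates on sentence pairs: examples sharing a label become positive pairs, examples with different labels become negative pairs. A toy sketch of that pair construction (illustrative only; the actual sampling strategy in `setfit` differs):

```python
from itertools import combinations

# A few-shot labeled set: (sentence, label)
examples = [("i loved the spiderman movie!", 1),
            ("best film of the year", 1),
            ("pineapple on pizza is the worst", 0),
            ("a dull, lifeless plot", 0)]

# Same label -> similarity target 1.0 (positive pair); different -> 0.0 (negative pair)
pairs = [(a, b, 1.0 if la == lb else 0.0)
         for (a, la), (b, lb) in combinations(examples, 2)]

positives = sum(1 for _, _, y in pairs if y == 1.0)
print(len(pairs), positives)  # 6 pairs total, 2 of them positive
```

The Sentence Transformer is then fine-tuned so that positive pairs embed close together and negative pairs far apart, which is what makes the downstream classification head work from so few examples.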
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("hanifabdlh/sample-setfit-model-example")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| null | sentence-transformers | text-classification | null | null | null | null | null | null | null | null | null | hanifabdlh/sample-setfit-model-example | [
-0.11131517589092255,
-0.7436107397079468,
0.3068840503692627,
-0.22496958076953888,
-0.1663748025894165,
-0.23915062844753265,
-0.2521701753139496,
-0.05279189348220825,
-0.03652983158826828,
0.49154162406921387,
-0.5852976441383362,
-0.28787535429000854,
-0.42334163188934326,
0.133141756... |
Bpole/lora_mistral_saiga_sberhack_v1.0 | Bpole | 2023-11-29T08:55:47Z | 19 | 0 | null | [
"peft",
"arxiv:1910.09700",
"base_model:Bpole/lora_sberhack_v1.0",
"region:us"
] | 2023-11-29T08:55:47Z | 2023-11-29T08:55:44.000Z | null | null | ---
library_name: peft
base_model: Bpole/lora_sberhack_v1.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.2
| null | peft | null | null | null | null | null | null | null | null | null | null | Bpole/lora_mistral_saiga_sberhack_v1.0 | [
-0.5850827097892761,
-0.5651814937591553,
0.39757731556892395,
0.08102090656757355,
-0.25711995363235474,
-0.2935924232006073,
0.0612250454723835,
-0.5389978289604187,
0.04234451428055763,
0.6078991889953613,
-0.7226898670196533,
-0.6308227181434631,
-0.5532264709472656,
-0.096566930413246... |
yuexishuihan/sd-class-butterflies-32 | yuexishuihan | 2023-11-29T11:32:01Z | 19 | 0 | null | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | 2023-11-29T11:32:01Z | 2023-11-29T11:28:01.000Z | null | null | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('yuexishuihan/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
| null | diffusers | unconditional-image-generation | null | null | null | null | null | null | null | null | null | yuexishuihan/sd-class-butterflies-32 | [
-0.37260720133781433,
-0.6375502347946167,
0.3012542128562927,
0.4329323172569275,
-0.5278863310813904,
-0.39413201808929443,
0.5436517596244812,
0.13259653747081757,
0.21354970335960388,
0.4653075039386749,
-0.652407705783844,
-0.3716704845428467,
-0.5138044357299805,
-0.1974392980337143,... |
Herocat/opt-125m-finetuned-wikitext2 | Herocat | 2023-11-29T15:29:05Z | 18 | 0 | null | [
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"base_model:facebook/opt-125m",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T15:29:05Z | 2023-11-03T00:43:40.000Z | null | null | ---
license: other
base_model: facebook/opt-125m
tags:
- generated_from_trainer
model-index:
- name: opt-125m-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-125m-finetuned-wikitext2
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1709
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
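As an illustration of the linear scheduler configured above (a sketch, not code from this card; the 3552-step total is taken from the results table below), the learning rate decays from 2e-05 to 0 over training:

```python
# Illustrative sketch of the Hugging Face Trainer's "linear" schedule
# with no warmup: the learning rate falls linearly from its initial
# value to zero over the total number of optimizer steps.
def linear_lr(step, initial_lr=2e-05, total_steps=3552):
    """Learning rate at a given optimizer step under linear decay."""
    step = min(step, total_steps)
    return initial_lr * (1 - step / total_steps)

print(linear_lr(0))     # start of training: 2e-05
print(linear_lr(1776))  # halfway through: 1e-05
print(linear_lr(3552))  # end of training: 0.0
```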
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2192 | 1.0 | 1184 | 3.1833 |
| 3.024 | 2.0 | 2368 | 3.1701 |
| 2.9101 | 3.0 | 3552 | 3.1709 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
| null | transformers | text-generation | null | null | null | null | null | null | null | null | null | Herocat/opt-125m-finetuned-wikitext2 | [
-0.5350219011306763,
-0.7495762705802917,
0.11831279844045639,
0.17893783748149872,
-0.3351476788520813,
-0.4305497109889984,
-0.1700790524482727,
-0.22014069557189941,
0.25614413619041443,
0.38650432229042053,
-1.032137393951416,
-0.49252092838287354,
-0.4847288131713867,
-0.1294856220483... |
bluestarburst/AnimateDiff-SceneFusion | bluestarburst | 2023-11-30T00:01:50Z | 18 | 0 | null | [
"diffusers",
"arxiv:2307.04725",
"endpoints_compatible",
"region:us"
] | 2023-11-30T00:01:50Z | 2023-11-24T16:49:23.000Z | null | null | # AnimateDiff
This repository is the official implementation of [AnimateDiff](https://arxiv.org/abs/2307.04725).
**[AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning](https://arxiv.org/abs/2307.04725)**
</br>
Yuwei Guo,
Ceyuan Yang*,
Anyi Rao,
Yaohui Wang,
Yu Qiao,
Dahua Lin,
Bo Dai
<p style="font-size: 0.8em; margin-top: -1em">*Corresponding Author</p>
<!-- [Arxiv Report](https://arxiv.org/abs/2307.04725) | [Project Page](https://animatediff.github.io/) -->
[](https://arxiv.org/abs/2307.04725)
[](https://animatediff.github.io/)
[](https://openxlab.org.cn/apps/detail/Masbfca/AnimateDiff)
[](https://huggingface.co/spaces/guoyww/AnimateDiff)
## Next
A version with better controllability and quality is coming soon. Stay tuned.
## Features
- **[2023/11/10]** Release the Motion Module (beta version) on SDXL, available at [Google Drive](https://drive.google.com/file/d/1EK_D9hDOPfJdK4z8YDB8JYvPracNx2SX/view?usp=share_link) / [HuggingFace](https://huggingface.co/guoyww/animatediff/blob/main/mm_sdxl_v10_beta.ckpt) / [CivitAI](https://civitai.com/models/108836/animatediff-motion-modules). High-resolution videos (i.e., 1024x1024x16 frames with various aspect ratios) can be produced **with or without** personalized models. Inference usually requires ~13 GB of VRAM and tuned hyperparameters (e.g., the number of sampling steps), depending on the chosen personalized model. Check out the [sdxl](https://github.com/guoyww/AnimateDiff/tree/sdxl) branch for more details on inference. More checkpoints with better quality will be available soon. Stay tuned. The examples below are manually downsampled for fast loading.
<table class="center">
<tr style="line-height: 0">
<td width=50% style="border: none; text-align: center">Original SDXL</td>
<td width=30% style="border: none; text-align: center">Personalized SDXL</td>
<td width=20% style="border: none; text-align: center">Personalized SDXL</td>
</tr>
<tr>
<td width=50% style="border: none"><img src="__assets__/animations/motion_xl/01.gif"></td>
<td width=30% style="border: none"><img src="__assets__/animations/motion_xl/02.gif"></td>
<td width=20% style="border: none"><img src="__assets__/animations/motion_xl/03.gif"></td>
</tr>
</table>
- **[2023/09/25]** Release **MotionLoRA** and its model zoo, **enabling camera movement controls**! Please download the MotionLoRA models (**74 MB per model**, available at [Google Drive](https://drive.google.com/drive/folders/1EqLC65eR1-W-sGD0Im7fkED6c8GkiNFI?usp=sharing) / [HuggingFace](https://huggingface.co/guoyww/animatediff) / [CivitAI](https://civitai.com/models/108836/animatediff-motion-modules) ) and save them to the `models/MotionLoRA` folder. Example:
```
python -m scripts.animate --config configs/prompts/v2/5-RealisticVision-MotionLoRA.yaml
```
<table class="center">
<tr style="line-height: 0">
<td colspan="2" style="border: none; text-align: center">Zoom In</td>
<td colspan="2" style="border: none; text-align: center">Zoom Out</td>
<td colspan="2" style="border: none; text-align: center">Zoom Pan Left</td>
<td colspan="2" style="border: none; text-align: center">Zoom Pan Right</td>
</tr>
<tr>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_01/01.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_02/02.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_01/02.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_02/01.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_01/03.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_02/04.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_01/04.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_02/03.gif"></td>
</tr>
<tr style="line-height: 0">
<td colspan="2" style="border: none; text-align: center">Tilt Up</td>
<td colspan="2" style="border: none; text-align: center">Tilt Down</td>
<td colspan="2" style="border: none; text-align: center">Rolling Anti-Clockwise</td>
<td colspan="2" style="border: none; text-align: center">Rolling Clockwise</td>
</tr>
<tr>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_01/05.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_02/05.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_01/06.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_02/06.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_01/07.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_02/07.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_01/08.gif"></td>
<td style="border: none"><img src="__assets__/animations/motion_lora/model_02/08.gif"></td>
</tr>
</table>
- **[2023/09/10]** New Motion Module release! `mm_sd_v15_v2.ckpt` was trained on larger resolution & batch size, and gains noticeable quality improvements. Check it out at [Google Drive](https://drive.google.com/drive/folders/1EqLC65eR1-W-sGD0Im7fkED6c8GkiNFI?usp=sharing) / [HuggingFace](https://huggingface.co/guoyww/animatediff) / [CivitAI](https://civitai.com/models/108836/animatediff-motion-modules) and use it with `configs/inference/inference-v2.yaml`. Example:
```
python -m scripts.animate --config configs/prompts/v2/5-RealisticVision.yaml
```
Here is a qualitative comparison between `mm_sd_v15.ckpt` (left) and `mm_sd_v15_v2.ckpt` (right):
<table class="center">
<tr>
<td><img src="__assets__/animations/compare/old_0.gif"></td>
<td><img src="__assets__/animations/compare/new_0.gif"></td>
<td><img src="__assets__/animations/compare/old_1.gif"></td>
<td><img src="__assets__/animations/compare/new_1.gif"></td>
<td><img src="__assets__/animations/compare/old_2.gif"></td>
<td><img src="__assets__/animations/compare/new_2.gif"></td>
<td><img src="__assets__/animations/compare/old_3.gif"></td>
<td><img src="__assets__/animations/compare/new_3.gif"></td>
</tr>
</table>
- GPU memory optimization: ~12 GB of VRAM is enough for inference
## Quick Demo
User interfaces developed by the community:
- A1111 Extension [sd-webui-animatediff](https://github.com/continue-revolution/sd-webui-animatediff) (by [@continue-revolution](https://github.com/continue-revolution))
- ComfyUI Extension [ComfyUI-AnimateDiff-Evolved](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved) (by [@Kosinkadink](https://github.com/Kosinkadink))
- Google Colab: [Colab](https://colab.research.google.com/github/camenduru/AnimateDiff-colab/blob/main/AnimateDiff_colab.ipynb) (by [@camenduru](https://github.com/camenduru))
We also provide a Gradio demo to make AnimateDiff easier to use. To launch it, run the following commands:
```
conda activate animatediff
python app.py
```
By default, the demo will run at `localhost:7860`.
<br><img src="__assets__/figs/gradio.jpg" style="width: 50em; margin-top: 1em">
## Model Zoo
<details open>
<summary>Motion Modules</summary>
| Name | Parameter | Storage Space |
|----------------------|-----------|---------------|
| mm_sd_v14.ckpt | 417 M | 1.6 GB |
| mm_sd_v15.ckpt | 417 M | 1.6 GB |
| mm_sd_v15_v2.ckpt | 453 M | 1.7 GB |
</details>
<details open>
<summary>MotionLoRAs</summary>
| Name | Parameter | Storage Space |
|--------------------------------------|-----------|---------------|
| v2_lora_ZoomIn.ckpt | 19 M | 74 MB |
| v2_lora_ZoomOut.ckpt | 19 M | 74 MB |
| v2_lora_PanLeft.ckpt | 19 M | 74 MB |
| v2_lora_PanRight.ckpt | 19 M | 74 MB |
| v2_lora_TiltUp.ckpt | 19 M | 74 MB |
| v2_lora_TiltDown.ckpt | 19 M | 74 MB |
| v2_lora_RollingClockwise.ckpt | 19 M | 74 MB |
| v2_lora_RollingAnticlockwise.ckpt | 19 M | 74 MB |
</details>
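As a rough sanity check on the tables above (an illustrative sketch written for this card, not code from the repository), the listed storage sizes are roughly what one would expect when each parameter is stored as a 32-bit float:

```python
def fp32_checkpoint_gb(n_params):
    """Approximate checkpoint size in GB, assuming 4 bytes (fp32) per parameter."""
    return n_params * 4 / 1e9

# 417 M parameters -> ~1.67 GB, in line with the ~1.6 GB listed for mm_sd_v14/v15
print(fp32_checkpoint_gb(417e6))
# 19 M parameters -> ~0.076 GB (~76 MB), close to the 74 MB listed per MotionLoRA
print(fp32_checkpoint_gb(19e6))
```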
## Common Issues
<details>
<summary>Installation</summary>
Please ensure that [xformers](https://github.com/facebookresearch/xformers) is installed; it is used to reduce inference memory.
</details>
<details>
<summary>Various resolution or number of frames</summary>
Currently, we recommend generating animations with 16 frames at 512 resolution, which matches our training settings. Note that other resolutions or frame counts may affect the quality to varying degrees.
</details>
<details>
<summary>How to use it without any coding</summary>
1) Get LoRA models: train a LoRA model with [A1111](https://github.com/continue-revolution/sd-webui-animatediff) based on a collection of your own favorite images (e.g., tutorials [English](https://www.youtube.com/watch?v=mfaqqL5yOO4), [Japanese](https://www.youtube.com/watch?v=N1tXVR9lplM), [Chinese](https://www.bilibili.com/video/BV1fs4y1x7p2/))
or download LoRA models from [Civitai](https://civitai.com/).
2) Animate LoRA models using the Gradio interface or A1111
(e.g., tutorials [English](https://github.com/continue-revolution/sd-webui-animatediff), [Japanese](https://www.youtube.com/watch?v=zss3xbtvOWw), [Chinese](https://941ai.com/sd-animatediff-webui-1203.html))
3) Be creative together with other techniques, such as super-resolution, frame interpolation, music generation, etc.
</details>
<details>
<summary>Animating a given image</summary>
We agree that animating a given image is an appealing feature, which we will try to support officially in the future. For now, you may enjoy the related efforts from [talesofai](https://github.com/talesofai/AnimateDiff).
</details>
<details>
<summary>Contributions from community</summary>
Contributions are always welcome! The <code>dev</code> branch is for community contributions; we would like to keep the main branch aligned with the original technical report :)
</details>
## Training and inference
Please refer to [ANIMATEDIFF](./__assets__/docs/animatediff.md) for the detailed setup.
## Gallery
We collect several generated results in [GALLERY](./__assets__/docs/gallery.md).
## BibTeX
```
@article{guo2023animatediff,
title={AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning},
author={Guo, Yuwei and Yang, Ceyuan and Rao, Anyi and Wang, Yaohui and Qiao, Yu and Lin, Dahua and Dai, Bo},
journal={arXiv preprint arXiv:2307.04725},
year={2023}
}
```
## Disclaimer
This project is released for academic use. We disclaim responsibility for user-generated content. Users are solely liable for their actions. The project contributors are not legally affiliated with, nor accountable for, users' behaviors. Use the generative model responsibly, adhering to ethical and legal standards.
## Contact Us
**Yuwei Guo**: [guoyuwei@pjlab.org.cn](mailto:guoyuwei@pjlab.org.cn)
**Ceyuan Yang**: [yangceyuan@pjlab.org.cn](mailto:yangceyuan@pjlab.org.cn)
**Bo Dai**: [daibo@pjlab.org.cn](mailto:daibo@pjlab.org.cn)
## Acknowledgements
Codebase built upon [Tune-a-Video](https://github.com/showlab/Tune-A-Video).
| null | diffusers | null | null | null | null | null | null | null | null | null | null | bluestarburst/AnimateDiff-SceneFusion | [
-0.6127331852912903,
-0.5921269655227661,
0.11976095288991928,
0.2414504736661911,
-0.2551180124282837,
-0.28388145565986633,
0.1290029138326645,
-0.39234843850135803,
0.6898066997528076,
0.14488419890403748,
-1.0069559812545776,
-0.47625595331192017,
-0.49989330768585205,
-0.1585357636213... |
sangmin6600/t5-v1_1-xl-ko-chat | sangmin6600 | 2023-11-29T07:24:49Z | 18 | 0 | null | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T07:24:49Z | 2023-11-29T04:22:37.000Z | null | null | ---
license: apache-2.0
language:
- ko
pipeline_tag: text2text-generation
--- | null | transformers | text2text-generation | null | null | null | null | null | null | null | null | null | sangmin6600/t5-v1_1-xl-ko-chat | [
-0.1285332441329956,
-0.18616808950901031,
0.6529132127761841,
0.49436241388320923,
-0.19319361448287964,
0.23607464134693146,
0.3607200086116791,
0.0505637601017952,
0.579365611076355,
0.7400144338607788,
-0.6508100032806396,
-0.23784014582633972,
-0.710224986076355,
-0.047825563699007034... |
rntc/pubmedbert-bigbio_blurb-ncbi_disease | rntc | 2023-11-29T13:32:36Z | 18 | 0 | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-29T13:32:36Z | 2023-11-29T12:22:57.000Z | null | null | Entry not found | null | transformers | token-classification | null | null | null | null | null | null | null | null | null | rntc/pubmedbert-bigbio_blurb-ncbi_disease | [
-0.3227648437023163,
-0.2256842851638794,
0.8622258305549622,
0.4346150755882263,
-0.5282991528511047,
0.7012966275215149,
0.7915719151496887,
0.07618607580661774,
0.774602472782135,
0.25632160902023315,
-0.7852813005447388,
-0.22573809325695038,
-0.910448431968689,
0.571567177772522,
-0... |
softwareweaver/Sdxl-Turbo-Olive-Onnx | softwareweaver | 2023-11-29T16:16:21Z | 18 | 1 | null | [
"diffusers",
"onnx",
"text-to-image",
"en",
"license:cc-by-nc-4.0",
"diffusers:ORTStableDiffusionXLPipeline",
"region:us"
] | 2023-11-29T16:16:21Z | 2023-11-29T16:10:38.000Z | null | null | ---
license: cc-by-nc-4.0
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
---
Olive-optimized DirectML ONNX model for https://huggingface.co/stabilityai/sdxl-turbo
Licensed for non-commercial use only. Contact Stability AI for a commercial license!
-0.21637007594108582,
-0.545552670955658,
0.2133236676454544,
0.7488566637039185,
-0.533270537853241,
-0.3286939263343811,
0.35043877363204956,
-0.4530637562274933,
0.13219895958900452,
0.7898251414299011,
-0.4241744577884674,
-0.5815874338150024,
-0.28612780570983887,
-0.03679909929633140... |
SebastianMoncaleano/cammel_model_nov | SebastianMoncaleano | 2023-11-29T16:31:27Z | 18 | 0 | null | [
"transformers",
"safetensors",
"opt",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T16:31:27Z | 2023-11-29T16:30:59.000Z | null | null | Entry not found | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | SebastianMoncaleano/cammel_model_nov | [
-0.3227648437023163,
-0.2256842851638794,
0.8622258305549622,
0.4346150755882263,
-0.5282991528511047,
0.7012966275215149,
0.7915719151496887,
0.07618607580661774,
0.774602472782135,
0.25632160902023315,
-0.7852813005447388,
-0.22573809325695038,
-0.910448431968689,
0.571567177772522,
-0... |
yentinglin/Taiwan-LLaMa-v0.0 | yentinglin | 2023-11-29T05:59:39Z | 17 | 1 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"dataset:yentinglin/traditional_mandarin_instructions",
"license:llama2",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T05:59:39Z | 2023-08-10T05:29:14.000Z | null | null | ---
license: llama2
datasets:
- yentinglin/traditional_mandarin_instructions
language:
- zh
widget:
- text: "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: 你好,請問你可以幫我寫一封推薦信嗎? ASSISTANT:"
library_name: transformers
pipeline_tag: text-generation
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/CmusIT5OlSXvFrbTJ7l-C.png" alt="Taiwan LLM Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# 🌟 Checkout [Taiwan-LLM Demo Chat-UI](http://www.twllm.com) 🌟
# Model Card for Taiwan LLM 13B v0.0 chat
Taiwan LLM is an advanced language model tailored for Traditional Chinese, focusing on the linguistic and cultural contexts of Taiwan.
Developed from a large base model, it's enriched with diverse Taiwanese textual sources and refined through Supervised Fine-Tuning.
This model excels in language understanding and generation, aligning closely with Taiwan's cultural nuances.
It demonstrates improved performance on various benchmarks like TC-Eval, showcasing its contextual comprehension and cultural relevance.
For detailed insights into Taiwan LLM's development and features, refer to our [technical report](https://github.com/MiuLab/Taiwan-LLaMa/blob/main/twllm_paper.pdf).
## Model description
- **Model type:** A 13B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily Traditional Chinese (zh-tw)
- **Finetuned from model:** [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/MiuLab/Taiwan-LLaMa
- **Demo:** https://twllm.com/
## Performance

## Intended uses
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# pip install transformers>=4.34
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="yentinglin/Taiwan-LLaMa-v0.0", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "你是一個人工智慧助理",
},
{"role": "user", "content": "東北季風如何影響台灣氣候?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
### Training hyperparameters



The following hyperparameters were used during training:
- learning_rate: 5e-05
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5.0
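As an illustration of the cosine schedule with warmup configured above (a sketch under assumptions, not code from this card; the total step count below is a hypothetical placeholder), the learning rate warms up linearly over the first 3% of steps and then follows a cosine decay:

```python
import math

def cosine_lr_with_warmup(step, total_steps, peak_lr=5e-05, warmup_ratio=0.03):
    """Cosine decay with linear warmup over the first warmup_ratio of steps."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1 + math.cos(math.pi * progress))

total = 1000  # hypothetical step count, for illustration only
print(cosine_lr_with_warmup(0, total))      # warmup starts from 0.0
print(cosine_lr_with_warmup(30, total))     # end of warmup: peak 5e-05
print(cosine_lr_with_warmup(total, total))  # fully decayed: ~0.0
```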
## Citation
If you find Taiwan LLM is useful in your work, please cite it with:
```
@inproceedings{lin-chen-2023-llm,
title = "{LLM}-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models",
author = "Lin, Yen-Ting and Chen, Yun-Nung",
booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlp4convai-1.5",
pages = "47--58"
}
@misc{taiwanllama,
author={Lin, Yen-Ting and Chen, Yun-Nung},
title={Language Models for Taiwanese Culture},
year={2023},
url={https://github.com/MiuLab/Taiwan-LLaMa},
note={Code and models available at https://github.com/MiuLab/Taiwan-LLaMa},
}
```
| null | transformers | text-generation | null | null | null | null | null | null | null | null | null | yentinglin/Taiwan-LLaMa-v0.0 | [
-0.41253864765167236,
-0.9836713075637817,
0.30311304330825806,
0.5143293142318726,
-0.47272923588752747,
0.09536104649305344,
-0.45265939831733704,
-0.5621394515037537,
0.4416308104991913,
0.3735349774360657,
-0.49108704924583435,
-0.6793140769004822,
-0.5370416641235352,
0.12042467296123... |
zaanind/llma-2-7b-tuned | zaanind | 2023-11-29T14:20:00Z | 17 | 0 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T14:20:00Z | 2023-11-16T16:51:25.000Z | null | null | Entry not found | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | zaanind/llma-2-7b-tuned | [
-0.3227650821208954,
-0.22568479180335999,
0.8622263669967651,
0.4346153140068054,
-0.5282987952232361,
0.7012966871261597,
0.7915722727775574,
0.07618651539087296,
0.7746027112007141,
0.2563222348690033,
-0.7852821350097656,
-0.225738525390625,
-0.910447895526886,
0.5715667009353638,
-0... |
NghiemAbe/QAvietnamese | NghiemAbe | 2023-11-29T16:07:36Z | 16 | 0 | null | [
"peft",
"text-generation",
"vi",
"region:us"
] | 2023-11-29T16:07:36Z | 2023-09-06T05:08:19.000Z | null | null | ---
library_name: peft
language:
- vi
pipeline_tag: text-generation
---
## Information
- Pretrained base model: VietAI/gpt-neo-1.3B-vietnamese-news
## Training procedure
- batch_size: 128
- micro_batch_size: 2
- num_epochs: 1
- learning_rate: 3e-4
- cutoff_len: 256
- val_set_size: 0
### LoRA hyperparameters
- lora_r: 6
- lora_alpha: 12
- lora_dropout: 0.05
- lora_target_modules: "q_proj k_proj v_proj"
### Framework versions
- transformers 4.32.1
- pytorch
- PEFT 0.5.0
## How to use
```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

BASE_MODEL = "VietAI/gpt-neo-1.3B-vietnamese-news"
PEFT_WEIGHTS = "NghiemAbe/QAvietnamese"
# Load the base model, then attach the LoRA adapter weights on top of it
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, PEFT_WEIGHTS, torch_dtype=torch.bfloat16)
``` | null | peft | text-generation | null | null | null | null | null | null | null | null | null | NghiemAbe/QAvietnamese | [
-0.22088640928268433,
-0.6667260527610779,
0.23570764064788818,
0.3640420436859131,
-0.4638478755950928,
-0.2506643235683441,
-0.006552760023623705,
0.154534250497818,
0.039515215903520584,
0.592153787612915,
-0.4654366374015808,
-0.4415033161640167,
-0.64552903175354,
-0.05792364478111267... |
StarkWizard/Mistral-7b-instruct-cairo-PEFT | StarkWizard | 2023-11-29T10:04:44Z | 16 | 0 | null | [
"peft",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"region:us"
] | 2023-11-29T10:04:44Z | 2023-10-23T22:55:03.000Z | null | null | ---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
| null | peft | null | null | null | null | null | null | null | null | null | null | StarkWizard/Mistral-7b-instruct-cairo-PEFT | [
-0.5839648842811584,
-0.5444982647895813,
0.4422629177570343,
0.10047031193971634,
-0.21837837994098663,
-0.29282113909721375,
0.1171051636338234,
-0.560427188873291,
0.08466268330812454,
0.6898143887519836,
-0.749681830406189,
-0.6463689804077148,
-0.5569517612457275,
-0.12423540651798248... |
Joetib/pythia-410m-finetuned-1000-steps-qa | Joetib | 2023-11-29T18:49:56Z | 16 | 0 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/pythia-410m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T18:49:56Z | 2023-11-03T20:41:59.000Z | null | null | ---
license: apache-2.0
base_model: EleutherAI/pythia-410m
tags:
- generated_from_trainer
model-index:
- name: pythia-410m-finetuned-1000-steps-qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythia-410m-finetuned-1000-steps-qa
This model is a fine-tuned version of [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6646
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5589 | 24.0 | 120 | 3.0738 |
| 0.0825 | 48.0 | 240 | 3.2161 |
| 0.0719 | 72.0 | 360 | 3.4322 |
| 0.0672 | 96.0 | 480 | 3.5203 |
| 0.0643 | 120.0 | 600 | 3.5671 |
| 0.0633 | 144.0 | 720 | 3.6143 |
| 0.0623 | 168.0 | 840 | 3.6509 |
| 0.0611 | 192.0 | 960 | 3.6646 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
| null | transformers | text-generation | null | null | null | null | null | null | null | null | null | Joetib/pythia-410m-finetuned-1000-steps-qa | [
-0.4855503737926483,
-0.520466148853302,
0.07735028862953186,
0.03060346655547619,
-0.32096508145332336,
-0.44082391262054443,
-0.055029094219207764,
-0.07496730983257294,
0.18689623475074768,
0.36218470335006714,
-0.7864567637443542,
-0.5729644894599915,
-0.5583630800247192,
-0.1389307528... |
Dave12121/Fsentiment | Dave12121 | 2023-11-30T00:51:43Z | 16 | 1 | null | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"en",
"dataset:financial_phrasebank",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2023-11-30T00:51:43Z | 2023-11-26T22:13:24.000Z | null | null | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Dave12121/Fsentiment
results: []
datasets:
- financial_phrasebank
language:
- en
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Dave12121/Fsentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [financial phrasebank sentences all agree](https://huggingface.co/datasets/financial_phrasebank/viewer/sentences_allagree) dataset.
It achieves the following results on the evaluation set [financial phrasebank sentences 75%](https://huggingface.co/datasets/financial_phrasebank/viewer/sentences_75agree):
- Train Loss: 0.0517
- Validation Loss: 0.2117
- Train Accuracy: 0.9384
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
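Until this section is filled in, a minimal usage sketch (this is an assumption, not part of the original card): the checkpoint can be loaded through a `text-classification` pipeline, which downloads the model from the Hub.

```python
from transformers import pipeline

# Loads the checkpoint from the Hub; the label names (e.g. negative /
# neutral / positive for financial_phrasebank) depend on the uploaded
# config and may appear as generic LABEL_0 / LABEL_1 / LABEL_2.
classifier = pipeline("text-classification", model="Dave12121/Fsentiment")
print(classifier("Operating profit rose clearly compared with the previous year."))
```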
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 705, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.5028 | 0.3128 | 0.8904 | 0 |
| 0.1137 | 0.2117 | 0.9375 | 1 |
| 0.0517 | 0.2117 | 0.9384 | 2 |
### Framework versions
- Transformers 4.35.0
- TensorFlow 2.11.1
- Datasets 2.14.6
- Tokenizers 0.14.1 | null | transformers | text-classification | null | null | null | null | null | null | null | null | null | Dave12121/Fsentiment | [
-0.5239611268043518,
-0.6768238544464111,
0.22980357706546783,
0.14880482852458954,
-0.3492935597896576,
-0.20488348603248596,
-0.18329356610774994,
-0.16519804298877716,
0.086197629570961,
0.10682183504104614,
-0.6759312748908997,
-0.7142374515533447,
-0.8048564791679382,
-0.1621762067079... |
Jungwonchang/whisper-large-v2-LoRA-SPGIspeech-xs | Jungwonchang | 2023-11-29T05:34:56Z | 16 | 0 | null | [
"peft",
"arxiv:1910.09700",
"base_model:openai/whisper-large-v2",
"model-index",
"region:us"
] | 2023-11-29T05:34:56Z | 2023-11-27T13:11:26.000Z | null | null | ---
library_name: peft
base_model: openai/whisper-large-v2
model-index:
- name: Jungwonchang/whisper-large-v2-LoRA-SPGIspeech-xs
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Test set for spgispeech
type: kensho/spgispeech
config: S
split: test
metrics:
- type: wer
value: 6.72
name: WER
- type: cer
value: 1.99
name: CER
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
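This section is blank in the card; below is a minimal loading sketch with PEFT, assuming the adapter targets the `openai/whisper-large-v2` base model listed above. Running inference on audio additionally needs a `WhisperProcessor` and a 16 kHz waveform.

```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Load the base model, then attach this repo's LoRA adapter on top of it.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
model = PeftModel.from_pretrained(base, "Jungwonchang/whisper-large-v2-LoRA-SPGIspeech-xs")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
model.eval()
```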
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
| null | peft | null | null | null | null | null | null | null | null | null | null | Jungwonchang/whisper-large-v2-LoRA-SPGIspeech-xs | [
-0.5839648842811584,
-0.544498085975647,
0.4422629177570343,
0.10047036409378052,
-0.21837827563285828,
-0.2928210496902466,
0.11710502207279205,
-0.560427188873291,
0.0846625566482544,
0.689814567565918,
-0.7496819496154785,
-0.6463689804077148,
-0.5569517016410828,
-0.12423544377088547,
... |
jagadeesr/text_classifier | jagadeesr | 2023-11-29T03:24:00Z | 16 | 0 | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2023-11-29T03:24:00Z | 2023-11-29T03:23:38.000Z | null | null | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: text_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5131
- Accuracy: 0.6982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 96 | 1.2437 | 0.6404 |
| No log | 2.0 | 192 | 1.0802 | 0.6798 |
| No log | 3.0 | 288 | 1.0126 | 0.6772 |
| No log | 4.0 | 384 | 1.0858 | 0.6745 |
| No log | 5.0 | 480 | 1.2306 | 0.6850 |
| 0.7338 | 6.0 | 576 | 1.3097 | 0.6824 |
| 0.7338 | 7.0 | 672 | 1.3371 | 0.6850 |
| 0.7338 | 8.0 | 768 | 1.4222 | 0.7034 |
| 0.7338 | 9.0 | 864 | 1.3962 | 0.7060 |
| 0.7338 | 10.0 | 960 | 1.4529 | 0.6955 |
| 0.0955 | 11.0 | 1056 | 1.4697 | 0.6955 |
| 0.0955 | 12.0 | 1152 | 1.4944 | 0.7008 |
| 0.0955 | 13.0 | 1248 | 1.5037 | 0.6982 |
| 0.0955 | 14.0 | 1344 | 1.5129 | 0.7008 |
| 0.0955 | 15.0 | 1440 | 1.5131 | 0.6982 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
| null | transformers | text-classification | null | null | null | null | null | null | null | null | null | jagadeesr/text_classifier | [
-0.5172120332717896,
-0.5600986480712891,
0.07989117503166199,
0.07901155948638916,
-0.16811084747314453,
-0.26247891783714294,
-0.048005811870098114,
-0.13800764083862305,
0.14846999943256378,
0.22237657010555267,
-0.6213005781173706,
-0.7713426947593689,
-0.814103901386261,
-0.2004886865... |
evan6007/experiments | evan6007 | 2023-11-29T17:24:27Z | 16 | 0 | null | [
"peft",
"arxiv:1910.09700",
"base_model:baffo32/decapoda-research-llama-7B-hf",
"region:us"
] | 2023-11-29T17:24:27Z | 2023-11-29T17:24:20.000Z | null | null | ---
library_name: peft
base_model: baffo32/decapoda-research-llama-7B-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
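The card leaves this blank; here is a minimal sketch, assuming the adapter is applied to the base model named above and loaded in 8-bit to mirror the `bitsandbytes` config listed at the end of this card (requires a GPU with `bitsandbytes` installed).

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model in 8-bit, then attach this repo's adapter weights.
base = AutoModelForCausalLM.from_pretrained(
    "baffo32/decapoda-research-llama-7B-hf",
    load_in_8bit=True,   # mirrors the quantization config reported in this card
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "evan6007/experiments")
tokenizer = AutoTokenizer.from_pretrained("baffo32/decapoda-research-llama-7B-hf")
```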
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.3.dev0 | null | peft | null | null | null | null | null | null | null | null | null | null | evan6007/experiments | [
-0.574804425239563,
-0.5590018033981323,
0.40296828746795654,
0.07961388677358627,
-0.2534928023815155,
-0.27700263261795044,
0.060468919575214386,
-0.5367451906204224,
0.04952648654580116,
0.6133862733840942,
-0.7236800193786621,
-0.6278332471847534,
-0.5595568418502808,
-0.08562324941158... |
tsuyuan/speech-chatgpt-base-nar-se-sk | tsuyuan | 2023-11-30T00:14:19Z | 15 | 0 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-30T00:14:19Z | 2023-11-19T20:53:43.000Z | null | null | Entry not found | null | transformers | text2text-generation | null | null | null | null | null | null | null | null | null | tsuyuan/speech-chatgpt-base-nar-se-sk | [
-0.32276451587677,
-0.2256847620010376,
0.8622261881828308,
0.43461543321609497,
-0.5282991528511047,
0.7012973427772522,
0.7915714979171753,
0.07618623226881027,
0.7746027708053589,
0.25632160902023315,
-0.7852810025215149,
-0.22573824226856232,
-0.9104477763175964,
0.5715674161911011,
... |
ilsilfverskiold/tech-keywords-extractor | ilsilfverskiold | 2023-11-29T15:17:47Z | 15 | 0 | null | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-29T15:17:47Z | 2023-11-23T19:19:22.000Z | null | null | ---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
model-index:
- name: tech-keyword-extractor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tech-keyword-extractor
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on a private dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8795
## Model description
This model extracts tech terms, tools, and company names from texts so they can easily be aggregated. It was trained to extract tech terms, tools, languages, and platforms, but may be used on other kinds of text.
## Intended uses & limitations
Use to extract keywords from texts.
Example text: "If a task raises an exception, or a worker process dies, Celery will by default lose the job. So if you happen to reboot or redeploy, any running jobs with be lost to the sands of time."
Output: "Celery, Exception Handling, Worker Process"
Example text: "Spin 2.0 – open-source tool for building and running WebAssembly applications -"
Output: "Spin 2.0, WebAssembly, Open Source"
Example text: "Do you think that low-code and no-code is a threat for developers in the long term?"
Output: "Low Code, No Code, Developers"
Example text: "I'm reaching out for some guidance on choosing the right no-code or low-code platform for my web app development projects. As a proficient back-end developer with a strong grasp of AWS, I have always struggled with front-end development"
Output: "No Code, Low Code, Web App Development, AWS"
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5095 | 0.44 | 50 | 1.1766 |
| 1.1875 | 0.89 | 100 | 0.9652 |
| 1.0428 | 1.33 | 150 | 1.0587 |
| 0.9392 | 1.78 | 200 | 0.8968 |
| 0.786 | 2.22 | 250 | 1.0131 |
| 0.8503 | 2.67 | 300 | 0.8795 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
| null | transformers | text2text-generation | null | null | null | null | null | null | null | null | null | ilsilfverskiold/tech-keywords-extractor | [
-0.4808235466480255,
-0.7023410201072693,
0.09809715300798416,
0.19337286055088043,
-0.20840871334075928,
-0.10678200423717499,
-0.2367297112941742,
-0.37715139985084534,
0.13678114116191864,
0.13642098009586334,
-0.6149066686630249,
-0.6623948216438293,
-0.5692183971405029,
-0.21132977306... |
Starbourne/cogvlm-chat-hf | Starbourne | 2023-11-29T01:55:23Z | 15 | 0 | null | [
"transformers",
"safetensors",
"text-generation",
"custom_code",
"en",
"arxiv:2311.03079",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2023-11-29T01:55:23Z | 2023-11-28T14:01:18.000Z | null | null | ---
license: apache-2.0
language:
- en
---
# CogVLM
**CogVLM** is a powerful **open-source visual language model** (**VLM**). CogVLM-17B has 10 billion vision parameters and 7 billion language parameters. CogVLM-17B achieves state-of-the-art performance on 10 classic cross-modal benchmarks, including NoCaps, Flicker30k captioning, RefCOCO, RefCOCO+, RefCOCOg, Visual7W, GQA, ScienceQA, VizWiz VQA and TDIUC, and ranks 2nd on VQAv2, OKVQA, TextVQA, COCO captioning, etc., **surpassing or matching PaLI-X 55B**. CogVLM can also [chat with you](http://36.103.203.44:7861/) about images.
<div align="center">
<img src="https://github.com/THUDM/CogVLM/raw/main/assets/metrics-min.png" alt="img" style="zoom: 50%;" />
</div>
# Quickstart

Hardware requirements

About 40 GB of GPU VRAM is needed for inference. If no single GPU has more than 40 GB of VRAM, use the `accelerate` library to dispatch the model across multiple GPUs with smaller VRAM.
Install dependencies

```bash
pip install torch==2.1.0 transformers==4.35.0 accelerate==0.24.1 sentencepiece==0.1.99 einops==0.7.0 xformers==0.0.22.post7 triton==2.1.0
```
Code example
```python
import torch
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained('lmsys/vicuna-7b-v1.5')
model = AutoModelForCausalLM.from_pretrained(
'THUDM/cogvlm-chat-hf',
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
trust_remote_code=True
).to('cuda').eval()
# chat example
query = 'Describe this image'
image = Image.open(requests.get('https://github.com/THUDM/CogVLM/blob/main/examples/1.png?raw=true', stream=True).raw).convert('RGB')
inputs = model.build_conversation_input_ids(tokenizer, query=query, history=[], images=[image]) # chat mode
inputs = {
'input_ids': inputs['input_ids'].unsqueeze(0).to('cuda'),
'token_type_ids': inputs['token_type_ids'].unsqueeze(0).to('cuda'),
'attention_mask': inputs['attention_mask'].unsqueeze(0).to('cuda'),
'images': [[inputs['images'][0].to('cuda').to(torch.bfloat16)]],
}
gen_kwargs = {"max_length": 2048, "do_sample": False}
with torch.no_grad():
outputs = model.generate(**inputs, **gen_kwargs)
outputs = outputs[:, inputs['input_ids'].shape[1]:]
print(tokenizer.decode(outputs[0]))
# This image captures a moment from a basketball game. Two players are prominently featured: one wearing a yellow jersey with the number
# 24 and the word 'Lakers' written on it, and the other wearing a navy blue jersey with the word 'Washington' and the number 34. The player
# in yellow is holding a basketball and appears to be dribbling it, while the player in navy blue is reaching out with his arm, possibly
# trying to block or defend. The background shows a filled stadium with spectators, indicating that this is a professional game.</s>
# vqa example
query = 'How many houses are there in this cartoon?'
image = Image.open(requests.get('https://github.com/THUDM/CogVLM/blob/main/examples/3.jpg?raw=true', stream=True).raw).convert('RGB')
inputs = model.build_conversation_input_ids(tokenizer, query=query, history=[], images=[image], template_version='vqa') # vqa mode
inputs = {
'input_ids': inputs['input_ids'].unsqueeze(0).to('cuda'),
'token_type_ids': inputs['token_type_ids'].unsqueeze(0).to('cuda'),
'attention_mask': inputs['attention_mask'].unsqueeze(0).to('cuda'),
'images': [[inputs['images'][0].to('cuda').to(torch.bfloat16)]],
}
gen_kwargs = {"max_length": 2048, "do_sample": False}
with torch.no_grad():
outputs = model.generate(**inputs, **gen_kwargs)
outputs = outputs[:, inputs['input_ids'].shape[1]:]
print(tokenizer.decode(outputs[0]))
# 4</s>
```
When a single GPU does not have enough VRAM, the model can be split across several GPUs with smaller VRAM. The example below assumes two 24 GB GPUs and 16 GB of CPU memory; adjust the arguments of `infer_auto_device_map` to match your own configuration. Note that the GPU memory limits below are set slightly under the actual capacity to reserve headroom for intermediate states during inference.
```python
import torch
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, LlamaTokenizer
from accelerate import init_empty_weights, infer_auto_device_map, load_checkpoint_and_dispatch
tokenizer = LlamaTokenizer.from_pretrained('lmsys/vicuna-7b-v1.5')
with init_empty_weights():
model = AutoModelForCausalLM.from_pretrained(
'THUDM/cogvlm-chat-hf',
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
trust_remote_code=True,
)
device_map = infer_auto_device_map(model, max_memory={0:'20GiB',1:'20GiB','cpu':'16GiB'}, no_split_module_classes=['CogVLMDecoderLayer'])
model = load_checkpoint_and_dispatch(
model,
'local/path/to/hf/version/chat/model', # typical, '~/.cache/huggingface/hub/models--THUDM--cogvlm-chat-hf/snapshots/balabala'
device_map=device_map,
)
model = model.eval()
# check device for weights if u want to
for n, p in model.named_parameters():
print(f"{n}: {p.device}")
# chat example
query = 'Describe this image'
image = Image.open(requests.get('https://github.com/THUDM/CogVLM/blob/main/examples/1.png?raw=true', stream=True).raw).convert('RGB')
inputs = model.build_conversation_input_ids(tokenizer, query=query, history=[], images=[image]) # chat mode
inputs = {
'input_ids': inputs['input_ids'].unsqueeze(0).to('cuda'),
'token_type_ids': inputs['token_type_ids'].unsqueeze(0).to('cuda'),
'attention_mask': inputs['attention_mask'].unsqueeze(0).to('cuda'),
'images': [[inputs['images'][0].to('cuda').to(torch.bfloat16)]],
}
gen_kwargs = {"max_length": 2048, "do_sample": False}
with torch.no_grad():
outputs = model.generate(**inputs, **gen_kwargs)
outputs = outputs[:, inputs['input_ids'].shape[1]:]
print(tokenizer.decode(outputs[0]))
```
# Method

The CogVLM model comprises four fundamental components: a vision transformer (ViT) encoder, an MLP adapter, a pretrained large language model (GPT), and a **visual expert module**. See the [Paper](https://github.com/THUDM/CogVLM/blob/main/assets/cogvlm-paper.pdf) for more details.
<div align="center">
<img src="https://github.com/THUDM/CogVLM/raw/main/assets/method-min.png" style="zoom:50%;" />
</div>
# License

The code in this repository is open source under the [Apache-2.0 license](https://github.com/THUDM/CogVLM/raw/main/LICENSE), while the use of the CogVLM model weights must comply with the [Model License](https://github.com/THUDM/CogVLM/raw/main/MODEL_LICENSE).
# Citation

If you find our work helpful, please consider citing the following paper:
```
@article{wang2023cogvlm,
title={CogVLM: Visual Expert for Pretrained Language Models},
author={Weihan Wang and Qingsong Lv and Wenmeng Yu and Wenyi Hong and Ji Qi and Yan Wang and Junhui Ji and Zhuoyi Yang and Lei Zhao and Xixuan Song and Jiazheng Xu and Bin Xu and Juanzi Li and Yuxiao Dong and Ming Ding and Jie Tang},
year={2023},
eprint={2311.03079},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | Starbourne/cogvlm-chat-hf | [
-0.4759187400341034,
-0.9477226734161377,
0.1703292429447174,
0.21534305810928345,
-0.38769349455833435,
-0.08281803131103516,
-0.2094995081424713,
-0.4503667652606964,
-0.032587338238954544,
0.2644384801387787,
-0.4577181041240692,
-0.6473731994628906,
-0.4955289363861084,
-0.340844810009... |
damerajee/codellama2-finetuned-alpaca-18k-fin | damerajee | 2023-11-29T07:52:45Z | 15 | 0 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"en",
"dataset:iamtarun/python_code_instructions_18k_alpaca",
"base_model:codellama/CodeLlama-7b-hf",
"license:llama2",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T07:52:45Z | 2023-11-28T16:01:57.000Z | null | null | ---
license: llama2
base_model: codellama/CodeLlama-7b-hf
tags:
- generated_from_trainer
model-index:
- name: codellama2-finetuned-codex-py
results: []
datasets:
- iamtarun/python_code_instructions_18k_alpaca
language:
- en
pipeline_tag: text-generation
---
# codellama2-finetuned-codex-py
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the [iamtarun/python_code_instructions_18k_alpaca](https://huggingface.co/datasets/iamtarun/python_code_instructions_18k_alpaca) dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Example Use Cases:
```
from transformers import AutoTokenizer
from transformers import pipeline
import torch
tokenizer = AutoTokenizer.from_pretrained("damerajee/codellama2-finetuned-alpaca-18k-fin")
pipe = pipeline(
"text-generation",
model="damerajee/codellama2-finetuned-alpaca-18k-fin",
torch_dtype=torch.float16,
device_map="auto",
)
text = "write a function that takes in print out each individual characters in a string"
sequences = pipe(
text,
do_sample=True,
temperature=0.1,
top_p=0.7,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_length=70,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training and evaluation data
| Step | Training Loss |
|------|---------------|
| 10 | 0.792200 |
| 20 | 0.416100 |
| 30 | 0.348600 |
| 40 | 0.323200 |
| 50 | 0.316300 |
| 60 | 0.317500 |
| 70 | 0.333600 |
| 80 | 0.329500 |
| 90 | 0.333400 |
| 100 | 0.309900 |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0 | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | damerajee/codellama2-finetuned-alpaca-18k-fin | [
-0.4178982675075531,
-0.6037363409996033,
0.016736136749386787,
0.3278909921646118,
-0.2402438074350357,
-0.39255738258361816,
-0.3467060327529907,
-0.12023647874593735,
0.1293467879295349,
0.39890971779823303,
-0.49790212512016296,
-0.6032649278640747,
-0.7147784233093262,
-0.031842291355... |
tsavage68/Llama2_SFT_chat | tsavage68 | 2023-11-29T07:43:15Z | 15 | 0 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T07:43:15Z | 2023-11-28T16:09:23.000Z | null | null | Entry not found | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | tsavage68/Llama2_SFT_chat | [
-0.32276490330696106,
-0.22568461298942566,
0.862226128578186,
0.43461498618125916,
-0.5282989740371704,
0.7012966871261597,
0.7915717363357544,
0.07618622481822968,
0.7746026515960693,
0.25632232427597046,
-0.785281777381897,
-0.22573840618133545,
-0.9104479551315308,
0.5715670585632324,
... |
athirdpath/BigLlama-20b | athirdpath | 2023-11-29T02:35:13Z | 15 | 0 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama2",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T02:35:13Z | 2023-11-29T02:27:09.000Z | null | null | ---
license: llama2
---
I'm going to compare DARE merges using this (mostly vanilla, alpaca-tinted) 20b model vs using Harmonia.
slices:
  - sources:
      - model: athirdpath/alpaca-2-13b-english_full-model
        layer_range: [0, 16]
  - sources:
      - model: TheBloke/Llama-2-13B-fp16
        layer_range: [8, 24]
  - sources:
      - model: athirdpath/alpaca-2-13b-english_full-model
        layer_range: [17, 32]
  - sources:
      - model: TheBloke/Llama-2-13B-fp16
        layer_range: [25, 40]
merge_method: passthrough
dtype: float16 | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | athirdpath/BigLlama-20b | [
-0.6225364208221436,
-0.7531182765960693,
0.614471435546875,
0.5724249482154846,
-0.4172067642211914,
-0.13447365164756775,
-0.09864168614149094,
-0.7234538793563843,
0.06845567375421524,
0.4946865141391754,
-0.5306392312049866,
-0.5866242051124573,
-0.2661523222923279,
-0.0818305090069770... |
sh-zheng/vit-base-patch16-224-in21k-fintuned-SurfaceRoughness | sh-zheng | 2023-11-29T03:53:34Z | 15 | 0 | null | [
"transformers",
"pytorch",
"vit",
"image-classification",
"en",
"dataset:sh-zheng/SurfaceRoughness",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-29T03:53:34Z | 2023-11-29T03:34:16.000Z | null | null | ---
license: mit
datasets:
- sh-zheng/SurfaceRoughness
language:
- en
metrics:
- accuracy
pipeline_tag: image-classification
---
## Vision Transformer (Fine-Tuned model)
Refer to https://huggingface.co/google/vit-base-patch16-224 for model details and usage instructions.
## Model Description
Predicts the surface roughness category from snips taken from the Google Maps aerial view. There are 3 categories: surface roughness B, surface roughness C, and surface roughness D, as defined in ASCE 7-16 Section 26.7.2. | null | transformers | image-classification | null | null | null | null | null | null | null | null | null | sh-zheng/vit-base-patch16-224-in21k-fintuned-SurfaceRoughness | [
-0.48148688673973083,
-0.6047345995903015,
0.42627596855163574,
0.3311009407043457,
-0.3319338262081146,
-0.09793377667665482,
0.30968979001045227,
-0.2952723503112793,
-0.059653446078300476,
0.9452136158943176,
-0.6933643221855164,
-0.6078299880027771,
-0.7283816337585449,
-0.167171835899... |
lora-library/ahmeng | lora-library | 2023-11-29T08:26:32Z | 15 | 0 | null | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"has_space",
"region:us"
] | 2023-11-29T08:26:32Z | 2023-11-29T08:26:13.000Z | null | null | ---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: a photo of ahmeng
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - ahmeng
These are LoRA adaptation weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "a photo of ahmeng" using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
Test prompt: ahmeng




| null | diffusers | text-to-image | null | null | null | null | null | null | null | null | null | lora-library/ahmeng | [
-0.2554784119129181,
-0.7466234564781189,
0.2715367078781128,
0.17539119720458984,
-0.5034016370773315,
-0.09336201101541519,
0.46387019753456116,
-0.15277868509292603,
0.8797029852867126,
0.6009414792060852,
-0.7066906690597534,
-0.448635458946228,
-0.5325735807418823,
-0.1915680915117263... |
sangmin6600/t5-v1_1-base-ko-chat | sangmin6600 | 2023-11-29T07:24:18Z | 14 | 0 | null | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T07:24:18Z | 2023-11-28T11:41:26.000Z | null | null | ---
license: apache-2.0
language:
- ko
pipeline_tag: text2text-generation
--- | null | transformers | text2text-generation | null | null | null | null | null | null | null | null | null | sangmin6600/t5-v1_1-base-ko-chat | [
-0.12853386998176575,
-0.18616794049739838,
0.6529127359390259,
0.4943622946739197,
-0.19319306313991547,
0.2360745519399643,
0.36072012782096863,
0.05056336894631386,
0.579365611076355,
0.740013837814331,
-0.6508102416992188,
-0.23784014582633972,
-0.7102251052856445,
-0.04782590642571449... |
utkarsh-mujumdar/whisper-finetune | utkarsh-mujumdar | 2023-11-29T05:17:57Z | 14 | 0 | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2023-11-29T05:17:57Z | 2023-11-29T00:21:15.000Z | null | null | ---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Small Accent Finetuned - Utkarsh Mujumdar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Accent Finetuned - Utkarsh Mujumdar
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0132
- Wer: 9.6377
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 400
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0466 | 10.0 | 100 | 0.0429 | 75.7609 |
| 0.0001 | 20.0 | 200 | 0.0134 | 9.8551 |
| 0.0 | 30.0 | 300 | 0.0133 | 9.8913 |
| 0.0 | 40.0 | 400 | 0.0132 | 9.6377 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
| null | transformers | automatic-speech-recognition | null | null | null | null | null | null | null | null | null | utkarsh-mujumdar/whisper-finetune | [
-0.463049054145813,
-0.5727716684341431,
0.026470012962818146,
0.10039088129997253,
-0.25748729705810547,
-0.5624840259552002,
-0.4140026271343231,
-0.3248502016067505,
0.2234252691268921,
0.3286452889442444,
-0.7299878597259521,
-0.5963518023490906,
-0.5695212483406067,
-0.184484228491783... |
Shamik/whisper-base.en-finetuned-gtzan | Shamik | 2023-11-29T07:55:43Z | 14 | 0 | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:openai/whisper-base.en",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | 2023-11-29T07:55:43Z | 2023-11-29T07:55:38.000Z | null | null | ---
license: apache-2.0
base_model: openai/whisper-base.en
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: whisper-base.en-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.92
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base.en-finetuned-gtzan
This model is a fine-tuned version of [openai/whisper-base.en](https://huggingface.co/openai/whisper-base.en) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3412
- Accuracy: 0.92
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5696 | 0.99 | 56 | 1.3573 | 0.62 |
| 0.9913 | 2.0 | 113 | 0.7820 | 0.77 |
| 0.4771 | 2.99 | 169 | 0.4873 | 0.84 |
| 0.4411 | 4.0 | 226 | 0.3367 | 0.91 |
| 0.1615 | 4.99 | 282 | 0.3412 | 0.92 |
| 0.1339 | 6.0 | 339 | 0.4125 | 0.91 |
| 0.0331 | 6.99 | 395 | 0.4773 | 0.89 |
| 0.0382 | 8.0 | 452 | 0.4282 | 0.88 |
| 0.049 | 8.99 | 508 | 0.4634 | 0.9 |
| 0.0312 | 9.91 | 560 | 0.4444 | 0.9 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
| null | transformers | audio-classification | null | null | null | null | null | null | null | null | null | Shamik/whisper-base.en-finetuned-gtzan | [
-0.4594447612762451,
-0.5131576657295227,
0.1598840206861496,
0.12190098315477371,
-0.2712500989437103,
-0.39375364780426025,
-0.19455374777317047,
-0.3166835904121399,
0.2951214909553528,
0.3825701177120209,
-0.8882489800453186,
-0.8182507157325745,
-0.675658643245697,
-0.3883257806301117... |
SonishMaharjan/whisper-large-v3-ne2.0-colab | SonishMaharjan | 2023-11-29T08:04:59Z | 14 | 0 | null | [
"peft",
"arxiv:1910.09700",
"base_model:openai/whisper-large-v3",
"region:us"
] | 2023-11-29T08:04:59Z | 2023-11-29T08:04:54.000Z | null | null | ---
library_name: peft
base_model: openai/whisper-large-v3
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.3.dev0
| null | peft | null | null | null | null | null | null | null | null | null | null | SonishMaharjan/whisper-large-v3-ne2.0-colab | [
-0.574804425239563,
-0.5590018033981323,
0.40296828746795654,
0.07961388677358627,
-0.2534928023815155,
-0.27700263261795044,
0.060468919575214386,
-0.5367451906204224,
0.04952648654580116,
0.6133862733840942,
-0.7236800193786621,
-0.6278332471847534,
-0.5595568418502808,
-0.08562324941158... |
steve-cse/mello-lite | steve-cse | 2023-11-29T13:13:51Z | 14 | 0 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T13:13:51Z | 2023-11-29T09:34:48.000Z | null | null | Entry not found | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | steve-cse/mello-lite | [
-0.3227650821208954,
-0.22568479180335999,
0.8622263669967651,
0.4346153140068054,
-0.5282987952232361,
0.7012966871261597,
0.7915722727775574,
0.07618651539087296,
0.7746027112007141,
0.2563222348690033,
-0.7852821350097656,
-0.225738525390625,
-0.910447895526886,
0.5715667009353638,
-0... |
Wu2940/AlpacaTest1 | Wu2940 | 2023-11-29T14:23:07Z | 14 | 0 | null | [
"peft",
"arxiv:1910.09700",
"base_model:baffo32/decapoda-research-llama-7B-hf",
"region:us"
] | 2023-11-29T14:23:07Z | 2023-11-29T10:02:08.000Z | null | null | ---
library_name: peft
base_model: baffo32/decapoda-research-llama-7B-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
### Framework versions
- PEFT 0.6.3.dev0
| null | peft | null | null | null | null | null | null | null | null | null | null | Wu2940/AlpacaTest1 | [
-0.5842753052711487,
-0.5491565465927124,
0.42491504549980164,
0.07196325808763504,
-0.24534834921360016,
-0.2985903024673462,
0.08938688039779663,
-0.551741898059845,
0.04892130568623543,
0.6491771340370178,
-0.7157074213027954,
-0.6185835599899292,
-0.565116286277771,
-0.1107997819781303... |
Ainura/wh | Ainura | 2023-11-29T12:34:10Z | 14 | 0 | null | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | 2023-11-29T12:34:10Z | 2023-11-29T12:01:51.000Z | null | null | Entry not found | null | transformers | automatic-speech-recognition | null | null | null | null | null | null | null | null | null | Ainura/wh | [
-0.3227650821208954,
-0.22568479180335999,
0.8622263669967651,
0.4346153140068054,
-0.5282987952232361,
0.7012966871261597,
0.7915722727775574,
0.07618651539087296,
0.7746027112007141,
0.2563222348690033,
-0.7852821350097656,
-0.225738525390625,
-0.910447895526886,
0.5715667009353638,
-0... |
zuu/youtube-content-summarization-bart | zuu | 2023-11-29T15:39:17Z | 14 | 0 | null | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-29T15:39:17Z | 2023-11-29T15:37:24.000Z | null | null | ---
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
model-index:
- name: youtube-content-summarization-bart
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# youtube-content-summarization-bart
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1
- Datasets 2.15.0
- Tokenizers 0.15.0
| null | transformers | text2text-generation | null | null | null | null | null | null | null | null | null | zuu/youtube-content-summarization-bart | [
-0.5807212591171265,
-0.962052583694458,
0.13988663256168365,
0.1619572639465332,
-0.43308958411216736,
-0.3332001864910126,
-0.3103545904159546,
-0.2689344584941864,
0.4008079171180725,
0.46690815687179565,
-0.7132384181022644,
-0.5718120336532593,
-0.5988108515739441,
-0.0162273719906806... |
genejalston/bart-qanda | genejalston | 2023-11-30T01:26:32Z | 13 | 0 | null | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-30T01:26:32Z | 2023-11-26T16:04:29.000Z | null | null | ---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
model-index:
- name: bart-qanda
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-qanda
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2199 | 1.0 | 2500 | 1.8120 |
| 0.1398 | 2.0 | 5000 | 2.2787 |
| 0.0713 | 3.0 | 7500 | 3.9625 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
| null | transformers | text2text-generation | null | null | null | null | null | null | null | null | null | genejalston/bart-qanda | [
-0.455985963344574,
-0.8275231122970581,
0.2762099802494049,
0.21407441794872284,
-0.390726238489151,
-0.25020426511764526,
-0.17746074497699738,
-0.2636200487613678,
0.29635030031204224,
0.4250377416610718,
-0.8986268639564514,
-0.5882084369659424,
-0.5595003962516785,
-0.176558718085289,... |
FDSRashid/bert-base-arabic-camelbert-ca-Hadith-SentenceTransformer | FDSRashid | 2023-11-29T20:05:10Z | 13 | 0 | null | [
"sentence-transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"Hadith",
"Islam",
"Arabic",
"ar",
"dataset:FDSRashid/hadith_info",
"endpoints_compatible",
"region:us"
] | 2023-11-29T20:05:10Z | 2023-11-27T15:58:36.000Z | null | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- Hadith
- Islam
- Arabic
datasets:
- FDSRashid/hadith_info
language:
- ar
library_name: sentence-transformers
---
# bert-base-arabic-camelbert-ca-Hadith-SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
This model originates from the [Camel-Bert Classical Arabic](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca) model. It was then trained on the Jawami' Kalim dataset,
specifically a dataset of 440,000 matns and their corresponding taraf labels.
Taraf labels indicate that two hadiths are about the same report and, as such, are more semantically similar.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["أنا أحب القراءة والكتابة.", "الطيور تحلق في السماء."]
model = SentenceTransformer('FDSRashid/bert-base-arabic-camelbert-ca-Hadith-SentenceTransformer')
embeddings = model.encode(sentences)
print(embeddings)
```
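Since taraf-similar matns should map to nearby vectors, the embeddings returned above can be compared with cosine similarity. A minimal sketch using plain NumPy (the toy vectors `e1` and `e2` below stand in for real model output; with `sentence-transformers` installed, `util.cos_sim` does the same job):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors: dot(a, b) / (|a| * |b|).
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for two sentence embeddings (real ones are 768-dimensional).
e1 = [0.2, 0.5, -0.1]
e2 = [0.1, 0.4, -0.2]
print(round(cosine_similarity(e1, e2), 4))  # prints 0.9562
```

Scores close to 1 suggest the two matns belong to the same taraf; lower scores suggest unrelated reports.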
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["أنا أحب القراءة والكتابة.", "الطيور تحلق في السماء."]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('FDSRashid/bert-base-arabic-camelbert-ca-Hadith-SentenceTransformer')
model = AutoModel.from_pretrained('FDSRashid/bert-base-arabic-camelbert-ca-Hadith-SentenceTransformer')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
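Once sentence embeddings are computed (with either API above), matns from the same taraf should score higher under cosine similarity. Below is a minimal, dependency-free sketch of the metric itself; the vectors are small stand-ins rather than real 768-dimensional model output:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the L2 norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Stand-in 4-dimensional "embeddings" (real ones are 768-dimensional).
emb1 = [0.1, 0.3, -0.2, 0.5]
emb2 = [0.2, 0.1, -0.1, 0.4]
print(cosine_similarity(emb1, emb2))
```

In practice you would pass the rows of the `embeddings` array returned by `model.encode(...)` (or `sentence_embeddings` above) to a vectorized implementation such as `sentence_transformers.util.cos_sim`.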
## Evaluation Results
The dataset was split into 70% training, 20% evaluation, and 10% test.
Validation Results during Training:
Binary Classification Evaluation:
| epoch | steps | cossim_accuracy | cossim_accuracy_threshold | cossim_f1 | cossim_precision | cossim_recall | cossim_f1_threshold | cossim_ap | manhattan_accuracy | manhattan_accuracy_threshold | manhattan_f1 | manhattan_precision | manhattan_recall | manhattan_f1_threshold | manhattan_ap | euclidean_accuracy | euclidean_accuracy_threshold | euclidean_f1 | euclidean_precision | euclidean_recall | euclidean_f1_threshold | euclidean_ap | dot_accuracy | dot_accuracy_threshold | dot_f1 | dot_precision | dot_recall | dot_f1_threshold | dot_ap |
|-------|-------|--------------------|---------------------------|--------------------|--------------------|--------------------|---------------------|--------------------|--------------------|------------------------------|--------------------|---------------------|--------------------|------------------------|--------------------|--------------------|------------------------------|--------------------|---------------------|--------------------|------------------------|--------------------|--------------------|------------------------|--------------------|--------------------|--------------------|--------------------|--------------------|
| 0 | 10000 | 0.8185333333333333 | 0.6753207445144653 | 0.8097997356917759 | 0.8234442836468886 | 0.7966 | 0.6260570287704468 | 0.907338523341563 | 0.8213666666666667 | 396.5802001953125 | 0.8116847826086956 | 0.8274238227146814 | 0.7965333333333333 | 418.51788330078125 | 0.9098047632355247 | 0.8212 | 17.995182037353516 | 0.811710049873608 | 0.8323525290738405 | 0.7920666666666667 | 18.89975357055664 | 0.9095716833901042 | 0.8100666666666667 | 321.6893310546875 | 0.8031089864841627 | 0.8294380905022377 | 0.7784 | 310.6561279296875 | 0.8981838497410884 |
| 0 | 20000 | 0.8920666666666667 | 0.6414158940315247 | 0.887929557679789 | 0.91337139634877 | 0.8638666666666667 | 0.6253253817558289 | 0.9595904386736971 | 0.8921333333333333 | 389.108154296875 | 0.8876199649134876 | 0.9169213275531235 | 0.8601333333333333 | 395.12353515625 | 0.9596683723090225 | 0.8919333333333334 | 17.55472183227539 | 0.8878217308688893 | 0.9214055216923629 | 0.8566 | 17.803363800048828 | 0.9598077915006282 | 0.8889666666666667 | 276.0915222167969 | 0.8847497879558948 | 0.9007944732297064 | 0.8692666666666666 | 265.98468017578125 | 0.9563410477156602 |
| 0 | -1 | 0.8799666666666667 | 0.572989821434021 | 0.8746392182772891 | 0.9141527949407574 | 0.8384 | 0.568950355052948 | 0.9513399306713622 | 0.8794666666666666 | 449.26116943359375 | 0.8741721854304636 | 0.9056603773584906 | 0.8448 | 460.2845458984375 | 0.95130764507131 | 0.8799333333333333 | 20.54857063293457 | 0.8742647567642378 | 0.9087901021435765 | 0.8422666666666667 | 20.775745391845703 | 0.9513172577664986 | 0.8799333333333333 | 287.732177734375 | 0.874394620396502 | 0.9158455587183417 | 0.8365333333333334 | 282.2729187011719 | 0.9510810202807323 |
| 1 | 10000 | 0.8699 | 0.5817208886146545 | 0.866977313696813 | 0.8850079855565586 | 0.8496666666666667 | 0.5325590372085571 | 0.9490862915889822 | 0.8709 | 453.5019836425781 | 0.8675715209034934 | 0.8857400847398763 | 0.8501333333333333 | 467.4560546875 | 0.9495175299306309 | 0.8706666666666667 | 20.513057708740234 | 0.8678797736437261 | 0.8825029288126249 | 0.8537333333333333 | 21.26980972290039 | 0.9496943434749116 | 0.8686666666666667 | 263.48236083984375 | 0.8655246830321014 | 0.8756823144104804 | 0.8556 | 250.86581420898438 | 0.9473003412899217 |
| 1 | 20000 | 0.8732333333333333 | 0.5751998424530029 | 0.8677392964275975 | 0.8878348214285714 | 0.8485333333333334 | 0.5290995240211487 | 0.9497394180226975 | 0.8747666666666667 | 442.58648681640625 | 0.8688535864108383 | 0.8971736969180514 | 0.8422666666666667 | 451.35986328125 | 0.9506097634836377 | 0.8755 | 19.94643783569336 | 0.8696621386276558 | 0.9105762217359592 | 0.8322666666666667 | 20.156265258789062 | 0.9508577461307928 | 0.8694666666666667 | 252.27151489257812 | 0.8638988177069014 | 0.8916004540295119 | 0.8378666666666666 | 243.75277709960938 | 0.9471919651523734 |
| 1 | -1 | 0.8906666666666667 | 0.5318530797958374 | 0.8873533335613861 | 0.9112625588421275 | 0.8646666666666667 | 0.5200827121734619 | 0.9592570025350691 | 0.8904 | 460.5562744140625 | 0.8871981250636868 | 0.9043695034969877 | 0.8706666666666667 | 477.9581604003906 | 0.9591507056523458 | 0.8903333333333333 | 21.302833557128906 | 0.8871634908795009 | 0.9026493721539948 | 0.8722 | 21.70052719116211 | 0.9593634742939047 | 0.8904 | 254.2342529296875 | 0.8869553302274595 | 0.9122040586245772 | 0.8630666666666666 | 250.12918090820312 | 0.9584341803091609 |
| 2 | 10000 | 0.8701666666666666 | 0.6031050682067871 | 0.8642017495899399 | 0.886497476163769 | 0.843 | 0.5695009827613831 | 0.9469273756288177 | 0.8692666666666666 | 427.70361328125 | 0.8630628145900848 | 0.8936884192488933 | 0.8344666666666667 | 439.19140625 | 0.9466876663769355 | 0.8694333333333333 | 19.404329299926758 | 0.8635439285837383 | 0.8939556126454007 | 0.8351333333333333 | 19.909446716308594 | 0.9467776234867994 | 0.8697 | 284.9708251953125 | 0.8631759069130731 | 0.8868495077355837 | 0.8407333333333333 | 268.7862548828125 | 0.9461137401168651 |
| 2 | 20000 | 0.8836 | 0.4925994873046875 | 0.8801009102376165 | 0.9005790832344939 | 0.8605333333333334 | 0.46053194999694824 | 0.9558927269250999 | 0.8846666666666667 | 486.0755920410156 | 0.8800995712902779 | 0.9141051422005171 | 0.8485333333333334 | 489.2696228027344 | 0.9563158147120508 | 0.8846666666666667 | 21.90320587158203 | 0.8809613475298709 | 0.9054824407065944 | 0.8577333333333333 | 22.456363677978516 | 0.9564373347798594 | 0.8816333333333334 | 233.56451416015625 | 0.8778644312364724 | 0.9056496774650883 | 0.8517333333333333 | 223.49090576171875 | 0.9543714710206004 |
| 2 | -1 | 0.8686 | 0.6206364631652832 | 0.8657593461889164 | 0.8772842379029498 | 0.8545333333333334 | 0.5895036458969116 | 0.9475885938983436 | 0.8693 | 414.64385986328125 | 0.8658871377732588 | 0.8806618407445709 | 0.8516 | 433.5650634765625 | 0.9478468822010874 | 0.8693333333333333 | 18.791778564453125 | 0.8664596273291926 | 0.8775984682713348 | 0.8556 | 19.72414779663086 | 0.9480104382995361 | 0.8663666666666666 | 292.85955810546875 | 0.8638269100743745 | 0.8762688614540466 | 0.8517333333333333 | 279.73236083984375 | 0.9452786297221922 |
| 3 | 10000 | 0.8900666666666667 | 0.5382900834083557 | 0.8860707571355324 | 0.9092248333917924 | 0.8640666666666666 | 0.5037112832069397 | 0.9585242325322465 | 0.8905333333333333 | 465.81280517578125 | 0.886197721225353 | 0.9074332821014391 | 0.8659333333333333 | 488.3186340332031 | 0.9585587439037276 | 0.8906333333333334 | 21.188072204589844 | 0.8862763752505862 | 0.9037488739519091 | 0.8694666666666667 | 22.2010498046875 | 0.9587201530294449 | 0.8889 | 257.38787841796875 | 0.8848083552339671 | 0.9064969578292188 | 0.8641333333333333 | 245.6819305419922 | 0.9577412403235752 |
| 3 | 20000 | 0.8926666666666667 | 0.5790956020355225 | 0.8895350826177689 | 0.9135689691518516 | 0.8667333333333334 | 0.5641399621963501 | 0.9605109537798899 | 0.8916333333333334 | 425.7421569824219 | 0.8878623002429593 | 0.9121141812557125 | 0.8648666666666667 | 434.70794677734375 | 0.9603288102905432 | 0.8917333333333334 | 19.365734100341797 | 0.8886928193294421 | 0.9050317943046724 | 0.8729333333333333 | 19.960369110107422 | 0.9605575209756227 | 0.8897333333333334 | 254.62574768066406 | 0.8871011541072641 | 0.9036652835408022 | 0.8711333333333333 | 248.25608825683594 | 0.958690876430548 |
| 3 | -1 | 0.8929333333333334 | 0.517296552658081 | 0.8903309098321669 | 0.9077871691838714 | 0.8735333333333334 | 0.4716871976852417 | 0.9617544750688011 | 0.8921666666666667 | 482.2630615234375 | 0.8891629835200603 | 0.9146401635299922 | 0.8650666666666667 | 482.2630615234375 | 0.9610207071084935 | 0.8929 | 21.727081298828125 | 0.8896065968046728 | 0.9178305565402339 | 0.8630666666666666 | 21.761863708496094 | 0.9613428820739136 | 0.8919666666666667 | 233.58987426757812 | 0.8897225320220353 | 0.9022551237233533 | 0.8775333333333334 | 215.48638916015625 | 0.9612510502786269 |
| 4 | 10000 | 0.8928333333333334 | 0.43506312370300293 | 0.8894993475722821 | 0.9171505452485483 | 0.8634666666666667 | 0.4334607720375061 | 0.9599132545692284 | 0.8907333333333334 | 492.53369140625 | 0.8865317704286727 | 0.8985824830514277 | 0.8748 | 511.95355224609375 | 0.9595456556424534 | 0.8905 | 22.21883773803711 | 0.88677004512911 | 0.9029783705341717 | 0.8711333333333333 | 23.095184326171875 | 0.9596931084218348 | 0.8901333333333333 | 202.63510131835938 | 0.8865618347867884 | 0.9139883902024636 | 0.8607333333333334 | 198.4236602783203 | 0.9580851887225883 |
| 4 | 20000 | 0.9014 | 0.5484203100204468 | 0.8993719186871075 | 0.9112494867934857 | 0.8878 | 0.5242471694946289 | 0.9655938550372046 | 0.9019666666666667 | 464.3271789550781 | 0.9002424895594773 | 0.9096787367274707 | 0.891 | 478.24969482421875 | 0.9659393439265678 | 0.9024 | 21.42523193359375 | 0.9005502343590789 | 0.9179476526796843 | 0.8838 | 21.42523193359375 | 0.9661104664825438 | 0.8995333333333333 | 268.49462890625 | 0.897092100374886 | 0.9090971319049901 | 0.8854 | 256.41510009765625 | 0.9644100082897498 |
| 4 | -1 | 0.9114666666666666 | 0.4048388600349426 | 0.9097028625824437 | 0.9281948106008048 | 0.8919333333333334 | 0.4048388600349426 | 0.970794412407026 | 0.9107666666666666 | 527.2335205078125 | 0.9093921157617147 | 0.9225545342296612 | 0.8966 | 530.4526977539062 | 0.9705547195949027 | 0.9111666666666667 | 23.984663009643555 | 0.9097925058389466 | 0.9240871897132641 | 0.8959333333333334 | 23.99327850341797 | 0.970664250398173 | 0.9101666666666667 | 195.34571838378906 | 0.9083304998299898 | 0.9269257460097154 | 0.8904666666666666 | 194.2305908203125 | 0.970390853682988 |
| 5 | 10000 | 0.9008666666666667 | 0.418465256690979 | 0.8978681712220008 | 0.9101993287211453 | 0.8858666666666667 | 0.38236597180366516 | 0.9655420677904472 | 0.9025333333333333 | 523.75927734375 | 0.8996025219298245 | 0.9254794134235759 | 0.8751333333333333 | 531.1295166015625 | 0.9664249074371127 | 0.9029666666666667 | 23.870399475097656 | 0.8999352221199414 | 0.9209406182401786 | 0.8798666666666667 | 24.183685302734375 | 0.9665301858757089 | 0.8987 | 194.86569213867188 | 0.8965190942886111 | 0.9091158327621659 | 0.8842666666666666 | 186.55052185058594 | 0.9638831561162711 |
| 5 | 20000 | 0.9106333333333333 | 0.3829056918621063 | 0.9085276595744681 | 0.9282782608695652 | 0.8896 | 0.37733447551727295 | 0.9706245868290425 | 0.9110333333333334 | 533.107177734375 | 0.9087648054145516 | 0.9228178694158076 | 0.8951333333333333 | 544.3071899414062 | 0.9706707591959348 | 0.9114666666666666 | 24.306652069091797 | 0.9092085106382979 | 0.9289739130434782 | 0.8902666666666667 | 24.475175857543945 | 0.9707924459139304 | 0.9097333333333333 | 191.7635040283203 | 0.9077372858308403 | 0.9261237513873474 | 0.8900666666666667 | 180.81005859375 | 0.969914909724604 |
| 5 | -1 | 0.9119666666666667 | 0.39385879039764404 | 0.909730680741643 | 0.923605386095081 | 0.8962666666666667 | 0.3492705523967743 | 0.9713314691130481 | 0.9124666666666666 | 545.5827026367188 | 0.9101603912314898 | 0.9344147180675514 | 0.8871333333333333 | 546.857666015625 | 0.9713844524227049 | 0.9126 | 24.46957778930664 | 0.9101538461538462 | 0.9341052631578948 | 0.8874 | 24.76699447631836 | 0.9715792503349012 | 0.9113 | 178.85435485839844 | 0.9093138089562405 | 0.9294117647058824 | 0.8900666666666667 | 174.52761840820312 | 0.9706295257505094 |
Triplet Evaluation:
| epoch | steps | accuracy_cosinus | accuracy_manhattan | accuracy_euclidean |
|-------|-------|------------------|--------------------|--------------------|
| 0 | 10000 | 0.9062 | 0.9075 | 0.9069 |
| 0 | 20000 | 0.9508 | 0.9518 | 0.9518 |
| 0 | -1 | 0.9364 | 0.9353 | 0.9361 |
| 1 | 10000 | 0.9406 | 0.939 | 0.9395 |
| 1 | 20000 | 0.9436 | 0.9435 | 0.9437 |
| 1 | -1 | 0.9523 | 0.9507 | 0.9519 |
| 2 | 10000 | 0.9365 | 0.9348 | 0.9358 |
| 2 | 20000 | 0.9446 | 0.9435 | 0.9439 |
| 2 | -1 | 0.9433 | 0.9424 | 0.9433 |
| 3 | 10000 | 0.9507 | 0.95 | 0.95 |
| 3 | 20000 | 0.9566 | 0.9546 | 0.956 |
| 3 | -1 | 0.9561 | 0.9555 | 0.9558 |
| 4 | 10000 | 0.9515 | 0.9507 | 0.9505 |
| 4 | 20000 | 0.9609 | 0.9611 | 0.961 |
| 4 | -1 | 0.9633 | 0.9631 | 0.9636 |
| 5 | 10000 | 0.961 | 0.96 | 0.9598 |
| 5 | 20000 | 0.9651 | 0.9635 | 0.9642 |
| 5 | -1 | 0.9668 | 0.9651 | 0.9658 |
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 25729 with parameters:
```
{'batch_size': 12, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.BatchHardTripletLoss.BatchHardTripletLoss`
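The batch-hard strategy behind this loss picks, for each anchor in a batch, the farthest same-label (same-taraf) example as the hardest positive and the closest different-label example as the hardest negative. A simplified pure-Python sketch of that mining step — illustrative only; the sentence-transformers implementation is vectorized in PyTorch and its default distance function and margin differ:

```python
def batch_hard_triplet_loss(embeddings, labels, margin=1.0):
    # Squared Euclidean distance between two vectors (illustrative choice).
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    losses = []
    for i, (anchor, label) in enumerate(zip(embeddings, labels)):
        pos = [dist(anchor, e)
               for j, (e, l) in enumerate(zip(embeddings, labels))
               if l == label and j != i]
        neg = [dist(anchor, e)
               for e, l in zip(embeddings, labels) if l != label]
        if not pos or not neg:
            continue  # Anchors without a positive or negative contribute nothing.
        # Hardest positive: farthest same-label; hardest negative: closest other-label.
        losses.append(max(0.0, max(pos) - min(neg) + margin))
    return sum(losses) / len(losses) if losses else 0.0
```

When same-taraf matns cluster tightly and different tarafs are well separated, every per-anchor term hits the `max(0, ...)` floor and the loss is zero.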
Parameters of the fit()-Method:
```
{
"epochs": 6,
"evaluation_steps": 10000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
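With `scheduler: WarmupLinear`, `warmup_steps: 10000`, and 25729 steps per epoch over 6 epochs (about 154,374 total steps), the learning-rate schedule ramps up linearly and then decays linearly to zero. An illustrative approximation of that schedule (not the exact sentence-transformers implementation):

```python
def warmup_linear_lr(step, base_lr=2e-05, warmup_steps=10000, total_steps=154374):
    # Linear warmup from 0 to base_lr over warmup_steps,
    # then linear decay back to 0 over the remaining steps.
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```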
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | null | sentence-transformers | sentence-similarity | null | null | null | null | null | null | null | null | null | FDSRashid/bert-base-arabic-camelbert-ca-Hadith-SentenceTransformer | [
-0.6928786039352417,
-0.764907717704773,
0.21716563403606415,
0.22240591049194336,
-0.1736985594034195,
0.1326117217540741,
-0.0014913234626874328,
-0.2086116522550583,
0.7682321071624756,
0.23023070394992828,
-0.5593318939208984,
-0.5042670369148254,
-0.8259594440460205,
0.086739666759967... |
adi-kmt/audio-classification-hubert | adi-kmt | 2023-11-29T04:12:05Z | 13 | 0 | null | [
"transformers",
"safetensors",
"hubert",
"audio-classification",
"endpoints_compatible",
"region:us"
] | 2023-11-29T04:12:05Z | 2023-11-28T06:17:16.000Z | null | null | Simple audio classification model: a fine-tuned DistilHuBERT. It outputs label 1 for positive sounds and label 0 for negative sounds.
Unfortunately the model appears to have a score bias of about 0.05, i.e. a positive-class score >= 0.55 (rather than >= 0.5) indicates an actual positive sentiment in the audio.
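Given that bias, downstream users may want to shift the decision threshold rather than rely on the default 0.5 cutoff. A minimal sketch — the 0.55 value comes from the note above and should be treated as approximate:

```python
def classify_audio_score(positive_score, threshold=0.55):
    # Shifted decision threshold to compensate for the reported ~0.05 score bias.
    return 1 if positive_score >= threshold else 0
```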
---
language:
- en
pipeline_tag:
- audio-classification
--- | null | transformers | audio-classification | null | null | null | null | null | null | null | null | null | adi-kmt/audio-classification-hubert | [
-0.8021204471588135,
-0.5216514468193054,
0.4293614625930786,
0.31503424048423767,
-0.1968027800321579,
-0.3551269471645355,
-0.034507110714912415,
-0.24278774857521057,
0.5346867442131042,
0.2518385350704193,
-0.8602479696273804,
-0.8036931157112122,
-0.6894725561141968,
-0.17396256327629... |
ElmoVT/ppo-LunarLander-v2 | ElmoVT | 2023-11-29T18:56:59Z | 13 | 0 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | 2023-11-29T18:56:59Z | 2023-11-28T17:40:55.000Z | null | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 289.41 +/- 19.65
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| null | stable-baselines3 | reinforcement-learning | null | null | null | null | null | null | null | null | null | ElmoVT/ppo-LunarLander-v2 | [
-0.003174463054165244,
-0.3944116532802582,
0.2481766641139984,
0.3390541076660156,
-0.08787598460912704,
0.04007986560463905,
0.5000530481338501,
-0.17607852816581726,
0.28882235288619995,
0.9444825649261475,
-0.626924991607666,
-0.5120338797569275,
-0.49809587001800537,
-0.27938339114189... |
Shaleen123/yi-6b-200k-medical | Shaleen123 | 2023-11-29T23:48:08Z | 13 | 0 | null | [
"peft",
"arxiv:1910.09700",
"base_model:01-ai/Yi-6B-200K",
"region:us"
] | 2023-11-29T23:48:08Z | 2023-11-28T18:27:54.000Z | null | null | ---
library_name: peft
base_model: 01-ai/Yi-6B-200K
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2
| null | peft | null | null | null | null | null | null | null | null | null | null | Shaleen123/yi-6b-200k-medical | [
-0.5874956250190735,
-0.563825249671936,
0.399409681558609,
0.08381328731775284,
-0.2572990655899048,
-0.2915675640106201,
0.06127071753144264,
-0.541031002998352,
0.04177960380911827,
0.6070727109909058,
-0.7249290347099304,
-0.633515477180481,
-0.5520119071006775,
-0.09192848205566406,
... |
ramsenth/outputs | ramsenth | 2023-11-29T03:05:01Z | 13 | 0 | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-29T03:05:01Z | 2023-11-29T03:04:18.000Z | null | null | ---
base_model: dccuchile/bert-base-spanish-wwm-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: langbot-gec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# langbot-gec
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1379
- Precision: 0.7729
- Recall: 0.3969
- F1: 0.5244
- Accuracy: 0.9553
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1778 | 1.0 | 126 | 0.1379 | 0.7729 | 0.3969 | 0.5244 | 0.9553 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
| null | transformers | token-classification | null | null | null | null | null | null | null | null | null | ramsenth/outputs | [
-0.5845827460289001,
-0.6953462362289429,
0.21801882982254028,
0.13546665012836456,
-0.3275110423564911,
-0.3003520667552948,
-0.3660062253475189,
-0.3296297490596771,
0.11844661831855774,
0.2987257242202759,
-0.7510581016540527,
-0.8219908475875854,
-0.6675930023193359,
-0.245073661208152... |
fanjiang98/STDPR-MSMARCO | fanjiang98 | 2023-11-29T04:11:58Z | 13 | 0 | null | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2023-11-29T04:11:58Z | 2023-11-29T04:11:09.000Z | null | null | ---
license: apache-2.0
---
| null | transformers | feature-extraction | null | null | null | null | null | null | null | null | null | fanjiang98/STDPR-MSMARCO | [
-0.12853386998176575,
-0.18616794049739838,
0.6529127359390259,
0.4943622946739197,
-0.19319306313991547,
0.2360745519399643,
0.36072012782096863,
0.05056336894631386,
0.579365611076355,
0.740013837814331,
-0.6508102416992188,
-0.23784014582633972,
-0.7102251052856445,
-0.04782590642571449... |
mlabonne/NeuralHermes-2.5-Mistral-7B | mlabonne | 2023-11-30T00:07:10Z | 13 | 25 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"dpo",
"rlhf",
"en",
"dataset:mlabonne/chatml_dpo_pairs",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"autotrain_compatib... | 2023-11-30T00:07:10Z | 2023-11-29T12:23:31.000Z | null | null | ---
base_model: teknium/OpenHermes-2.5-Mistral-7B
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- dpo
- rlhf
license: apache-2.0
language:
- en
datasets:
- mlabonne/chatml_dpo_pairs
---
<center><img src="https://i.imgur.com/qIhaFNM.png"></center>
# NeuralHermes 2.5 - Mistral 7B
NeuralHermes is a [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) model that has been further fine-tuned with Direct Preference Optimization (DPO) using the [mlabonne/chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) dataset. It surpasses the original model on several benchmarks (see results).
It is directly inspired by the RLHF process described by [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1)'s authors to improve performance. I used the same dataset and reformatted it to apply the ChatML template.
The code to train this model is available on [Google Colab](https://colab.research.google.com/drive/15iFBr1xWgztXvhrj5I9fBv20c7CFOPBE?usp=sharing) and [GitHub](https://github.com/mlabonne/llm-course/tree/main). It required an A100 GPU for about an hour.
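The per-pair DPO objective referenced above can be sketched in plain Python — a simplified illustration; the actual training used trl's `DPOTrainer` with β = 0.1, as listed under the hyperparameters below:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # DPO per-pair loss: -log sigmoid(beta * (chosen log-ratio - rejected log-ratio)),
    # where each log-ratio compares the policy to the frozen reference model.
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    return -math.log(1.0 / (1.0 + math.exp(-logits)))
```

When the policy matches the reference exactly, the loss sits at log 2; it shrinks as the policy assigns relatively more probability to the chosen response than to the rejected one.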
### Quantized models
* GGUF: https://huggingface.co/TheBloke/NeuralHermes-2.5-Mistral-7B-GGUF
* AWQ: https://huggingface.co/TheBloke/NeuralHermes-2.5-Mistral-7B-AWQ
* EXL2 (5bpw): https://huggingface.co/IconicAI/NeuralHermes-2.5-Mistral-7B-exl2-5bpw
## Results
Teknium (author of OpenHermes-2.5-Mistral-7B) benchmarked the model ([see his tweet](https://twitter.com/Teknium1/status/1729955709377503660)).
Results are improved on every benchmark: **AGIEval** (from 43.07% to 43.62%), **GPT4All** (from 73.12% to 73.25%), and **TruthfulQA**.
### AGIEval

### GPT4All

### TruthfulQA

You can check the Weights & Biases project [here](https://wandb.ai/mlabonne/NeuralHermes-2-5-Mistral-7B/overview?workspace=user-mlabonne).
## Usage
You can run this model using [LM Studio](https://lmstudio.ai/) or any other frontend.
You can also run this model using the following code:
```python
import transformers
from transformers import AutoTokenizer
# Format prompt
message = [
{"role": "system", "content": "You are a helpful assistant chatbot."},
{"role": "user", "content": "What is a Large Language Model?"}
]
new_model = "mlabonne/NeuralHermes-2.5-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(new_model)
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)

# Create pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=new_model,
    tokenizer=tokenizer
)
# Generate text
sequences = pipeline(
prompt,
do_sample=True,
temperature=0.7,
top_p=0.9,
num_return_sequences=1,
max_length=200,
)
print(sequences[0]['generated_text'])
```
## Training hyperparameters
**LoRA**:
* r=16
* lora_alpha=16
* lora_dropout=0.05
* bias="none"
* task_type="CAUSAL_LM"
* target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
**Training arguments**:
* per_device_train_batch_size=4
* gradient_accumulation_steps=4
* gradient_checkpointing=True
* learning_rate=5e-5
* lr_scheduler_type="cosine"
* max_steps=200
* optim="paged_adamw_32bit"
* warmup_steps=100
**DPOTrainer**:
* beta=0.1
* max_prompt_length=1024
* max_length=1536 | null | transformers | text-generation | null | null | null | null | null | null | null | null | null | mlabonne/NeuralHermes-2.5-Mistral-7B | [
-0.254064679145813,
-0.7595427632331848,
0.24388332664966583,
0.13809168338775635,
-0.2297484278678894,
-0.17029507458209991,
-0.3964689075946808,
-0.2790544927120209,
0.04557684063911438,
0.206063911318779,
-0.5435386896133423,
-0.5907238721847534,
-0.6771522760391235,
-0.1023459210991859... |
bumblebee-testing/tiny-random-T5Model-feed_forward_proj-gated | bumblebee-testing | 2023-11-29T12:32:05Z | 13 | 0 | null | [
"transformers",
"safetensors",
"t5",
"feature-extraction",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2023-11-29T12:32:05Z | 2023-11-29T12:31:58.000Z | null | null | Entry not found | null | transformers | feature-extraction | null | null | null | null | null | null | null | null | null | bumblebee-testing/tiny-random-T5Model-feed_forward_proj-gated | [
-0.3227648437023163,
-0.2256842851638794,
0.8622258305549622,
0.4346150755882263,
-0.5282991528511047,
0.7012966275215149,
0.7915719151496887,
0.07618607580661774,
0.774602472782135,
0.25632160902023315,
-0.7852813005447388,
-0.22573809325695038,
-0.910448431968689,
0.571567177772522,
-0... |
bkpandey/Llama-2-7b-hf_finetuned_finance_jupyter | bkpandey | 2023-11-29T18:58:22Z | 13 | 0 | null | [
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"region:us"
] | 2023-11-29T18:58:22Z | 2023-11-29T18:58:09.000Z | null | null | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.3.dev0
| null | peft | null | null | null | null | null | null | null | null | null | null | bkpandey/Llama-2-7b-hf_finetuned_finance_jupyter | [
-0.5779397487640381,
-0.5580516457557678,
0.4049737751483917,
0.08317568898200989,
-0.25341418385505676,
-0.2754514515399933,
0.060684483498334885,
-0.538404107093811,
0.0487721748650074,
0.6135932803153992,
-0.7259423732757568,
-0.6298723220825195,
-0.5585342645645142,
-0.0797137916088104... |
GoatMilk98/635-a2-model-flan | GoatMilk98 | 2023-11-29T20:13:33Z | 13 | 0 | null | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | 2023-11-29T20:13:33Z | 2023-11-29T20:11:51.000Z | null | null | Entry not found | null | transformers | question-answering | null | null | null | null | null | null | null | null | null | GoatMilk98/635-a2-model-flan | [
-0.3227648437023163,
-0.2256842851638794,
0.8622258305549622,
0.4346150755882263,
-0.5282991528511047,
0.7012966275215149,
0.7915719151496887,
0.07618607580661774,
0.774602472782135,
0.25632160902023315,
-0.7852813005447388,
-0.22573809325695038,
-0.910448431968689,
0.571567177772522,
-0... |
seatond/rank16_lr0.0002_targetqv_epochs2_laplha16 | seatond | 2023-11-29T20:36:36Z | 13 | 0 | null | [
"peft",
"tensorboard",
"arxiv:1910.09700",
"base_model:TheBloke/Mistral-7B-v0.1-GPTQ",
"region:us"
] | 2023-11-29T20:36:36Z | 2023-11-29T20:34:08.000Z | null | null | ---
library_name: peft
base_model: TheBloke/Mistral-7B-v0.1-GPTQ
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: gptq
- bits: 4
- tokenizer: None
- dataset: None
- group_size: 128
- damp_percent: 0.1
- desc_act: True
- sym: True
- true_sequential: True
- use_cuda_fp16: False
- model_seqlen: None
- block_name_to_quantize: None
- module_name_preceding_first_block: None
- batch_size: 1
- pad_token_id: None
- use_exllama: False
- max_input_length: None
- exllama_config: {'version': <ExllamaVersion.ONE: 1>}
- cache_block_outputs: True
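A stdlib illustration of what `group_size: 128` above means: in GPTQ, each contiguous run of 128 weights along a row shares one quantization scale/zero-point. The 4096×4096 shape below is hypothetical (a typical 7B-class projection matrix), not something the card states:

```python
# Hypothetical weight matrix; only group_size comes from the card's config.
rows, cols = 4096, 4096
group_size = 128
groups_per_row = cols // group_size        # independent scales per row
total_groups = rows * groups_per_row       # scale/zero-point pairs stored overall
print(groups_per_row, total_groups)  # 32 131072
```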
### Framework versions
- PEFT 0.7.0.dev0
| null | peft | null | null | null | null | null | null | null | null | null | null | seatond/rank16_lr0.0002_targetqv_epochs2_laplha16 | [
-0.5695146322250366,
-0.6021898984909058,
0.393865168094635,
0.07504037022590637,
-0.26126527786254883,
-0.2962699234485626,
0.0572650246322155,
-0.4982392489910126,
0.04218740016222,
0.5961052179336548,
-0.7253223657608032,
-0.6829841732978821,
-0.5607760548591614,
-0.13972577452659607,
... |
RajuEEE/GeneratorModel_SFT_AvishekLLama_SmallerQuestion | RajuEEE | 2023-11-29T06:43:40Z | 12 | 0 | null | [
"peft",
"arxiv:1910.09700",
"base_model:abhishek/llama-2-7b-hf-small-shards",
"region:us"
] | 2023-11-29T06:43:40Z | 2023-11-26T15:12:52.000Z | null | null | ---
library_name: peft
base_model: abhishek/llama-2-7b-hf-small-shards
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
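A toy sketch of the `llm_int8_threshold: 6.0` setting above (the LLM.int8() outlier rule): feature columns whose activation magnitude exceeds the threshold are kept in fp16, while the rest are quantized to int8. The per-column statistics below are made up for illustration — this is not the bitsandbytes implementation:

```python
threshold = 6.0                             # from the card's quantization config
col_max_abs = [0.9, 7.2, 3.1, 6.0, 12.5]    # hypothetical per-column max |activation|
outlier_cols = [i for i, m in enumerate(col_max_abs) if m > threshold]
print(outlier_cols)  # [1, 4] -> kept in fp16; remaining columns go to int8
```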
### Framework versions
- PEFT 0.6.3.dev0
| null | peft | null | null | null | null | null | null | null | null | null | null | RajuEEE/GeneratorModel_SFT_AvishekLLama_SmallerQuestion | [
-0.574804425239563,
-0.5590018033981323,
0.40296828746795654,
0.07961388677358627,
-0.2534928023815155,
-0.27700263261795044,
0.060468919575214386,
-0.5367451906204224,
0.04952648654580116,
0.6133862733840942,
-0.7236800193786621,
-0.6278332471847534,
-0.5595568418502808,
-0.08562324941158... |
elnasharomar2/ANER_arabic_keyword_extraction | elnasharomar2 | 2023-11-29T06:45:55Z | 12 | 0 | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:elnasharomar2/ANER_arabic_keyword_extraction",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-29T06:45:55Z | 2023-11-27T05:22:20.000Z | null | null | ---
base_model: elnasharomar2/ANER_arabic_keyword_extraction
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ANER_arabic_keyword_extraction
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ANER_arabic_keyword_extraction
This model is a fine-tuned version of [elnasharomar2/ANER_arabic_keyword_extraction](https://huggingface.co/elnasharomar2/ANER_arabic_keyword_extraction) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4047
- Precision: 0.6061
- Recall: 0.6492
- F1: 0.6269
- Accuracy: 0.9462
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0108 | 1.0 | 750 | 0.2997 | 0.5879 | 0.6302 | 0.6083 | 0.9451 |
| 0.0104 | 2.0 | 1500 | 0.2822 | 0.5699 | 0.6425 | 0.6040 | 0.9428 |
| 0.007 | 3.0 | 2250 | 0.3270 | 0.5965 | 0.6182 | 0.6072 | 0.9446 |
| 0.0053 | 4.0 | 3000 | 0.3436 | 0.5792 | 0.6439 | 0.6099 | 0.9437 |
| 0.0038 | 5.0 | 3750 | 0.3373 | 0.6063 | 0.6223 | 0.6142 | 0.9469 |
| 0.0039 | 6.0 | 4500 | 0.3518 | 0.5961 | 0.6503 | 0.6220 | 0.9462 |
| 0.0031 | 7.0 | 5250 | 0.3654 | 0.5887 | 0.6488 | 0.6173 | 0.9445 |
| 0.0029 | 8.0 | 6000 | 0.3985 | 0.5973 | 0.6492 | 0.6222 | 0.9446 |
| 0.0022 | 9.0 | 6750 | 0.3953 | 0.5927 | 0.6570 | 0.6232 | 0.9456 |
| 0.002 | 10.0 | 7500 | 0.3884 | 0.6145 | 0.6365 | 0.6253 | 0.9474 |
| 0.0015 | 11.0 | 8250 | 0.4170 | 0.5964 | 0.6566 | 0.6251 | 0.9446 |
| 0.0015 | 12.0 | 9000 | 0.4421 | 0.5918 | 0.6629 | 0.6253 | 0.9445 |
| 0.0016 | 13.0 | 9750 | 0.4313 | 0.6078 | 0.6480 | 0.6273 | 0.9465 |
| 0.0025 | 14.0 | 10500 | 0.4096 | 0.6066 | 0.6432 | 0.6244 | 0.9463 |
| 0.0023 | 15.0 | 11250 | 0.4047 | 0.6061 | 0.6492 | 0.6269 | 0.9462 |
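A stdlib sanity check tying the results table to the hyperparameters above: the table shows epoch 1.0 ending at step 750, so with 15 epochs the final logged step should be 750 × 15, which matches the last row. The implied dataset size is an inference from these numbers, not something the card states:

```python
steps_per_epoch = 750            # first row of the results table
num_epochs = 15                  # from the training hyperparameters
final_step = steps_per_epoch * num_epochs
print(final_step)  # 11250, matching the last row of the table

# With train_batch_size = 16, this implies roughly 12000 training examples
# per epoch (an inference, not a figure reported on the card).
approx_examples = steps_per_epoch * 16
```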
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
| null | transformers | token-classification | null | null | null | null | null | null | null | null | null | elnasharomar2/ANER_arabic_keyword_extraction | [
-0.6298425793647766,
-0.6780760884284973,
0.10209216922521591,
0.002901663538068533,
-0.18853840231895447,
-0.04140866920351982,
0.006335169076919556,
-0.15574435889720917,
0.5774491429328918,
0.38804537057876587,
-0.688823401927948,
-0.9029895663261414,
-0.8347651362419128,
-0.16973803937... |
gangyeolkim/qlora-llama-2-koen-13b | gangyeolkim | 2023-11-29T04:09:34Z | 12 | 0 | null | [
"peft",
"arxiv:1910.09700",
"base_model:beomi/llama-2-koen-13b",
"region:us"
] | 2023-11-29T04:09:34Z | 2023-11-29T01:25:46.000Z | null | null | ---
library_name: peft
base_model: beomi/llama-2-koen-13b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
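A back-of-envelope weight-memory estimate for the NF4 setup above: the base model's name suggests roughly 13B parameters, and 4-bit weights cost about half a byte each. This ignores the double-quantization constants, activations, and the LoRA adapter, so treat it as a rough lower bound under those assumptions:

```python
params = 13_000_000_000      # ~13B, inferred from the "llama-2-koen-13b" name
bytes_per_weight = 0.5       # 4 bits per weight
approx_gib = params * bytes_per_weight / 2**30
print(round(approx_gib, 1))  # ~6.1 GiB just for the quantized base weights
```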
### Framework versions
- PEFT 0.6.2
| null | peft | null | null | null | null | null | null | null | null | null | null | gangyeolkim/qlora-llama-2-koen-13b | [
-0.587495744228363,
-0.5638253092765808,
0.3994097113609314,
0.08381328731775284,
-0.25729915499687195,
-0.2915676534175873,
0.06127074360847473,
-0.5410311818122864,
0.04177956283092499,
0.6070725321769714,
-0.7249288558959961,
-0.633515477180481,
-0.552012026309967,
-0.09192849695682526,... |
genejalston/bart-qanda2 | genejalston | 2023-11-29T21:58:00Z | 12 | 0 | null | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-29T21:58:00Z | 2023-11-29T01:27:22.000Z | null | null | ---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
model-index:
- name: bart-qanda2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-qanda2
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3.0
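A stdlib sketch of the schedule named above (`linear` with 100 warmup steps): the learning rate ramps linearly from 0 to the base rate over the warmup, then decays linearly back to 0 by the final step. `total_steps` below is hypothetical, since the card does not state the dataset size:

```python
def lr_at_step(step, base_lr=3e-5, warmup_steps=100, total_steps=1000):
    """Linear warmup then linear decay, mirroring the scheduler named above."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps          # ramp up from 0
    remaining = max(0, total_steps - step)
    return base_lr * remaining / (total_steps - warmup_steps)  # decay to 0

print(lr_at_step(0), lr_at_step(100), lr_at_step(1000))
```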
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
| null | transformers | text2text-generation | null | null | null | null | null | null | null | null | null | genejalston/bart-qanda2 | [
-0.45138195157051086,
-0.8622068762779236,
0.25481459498405457,
0.2535967230796814,
-0.40627720952033997,
-0.1998685896396637,
-0.13330954313278198,
-0.4091997444629669,
0.39576882123947144,
0.42257159948349,
-0.8472356200218201,
-0.486647367477417,
-0.5514490008354187,
-0.0891750156879425... |