Column schema:
- modelId: string (length 4–81)
- tags: list
- pipeline_tag: string (17 classes)
- config: dict
- downloads: int64 (0–59.7M)
- first_commit: timestamp[ns, tz=UTC]
- card: string (length 51–438k)
Anonymreign/savagebeta
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-04-14T03:36:08Z
Quantized using https://github.com/0cc4m/GPTQ-for-LLaMa with the command: CUDA_VISIBLE_DEVICES=0,1 python gptneox.py /hmem/Ais/databricks_dolly-v2-12b/ c4 --wbits 4 --groupsize 128 --save_safetensors dolly_v2-12b-GPTQ-4bit-128g.safetensors --nsamples 1 Edit: Thanks to @clxy and @NePe for figuring out how to run this. Modify llama_inference.py to replace 'Llama' with 'GptNeoX', then run: python3 llama_inference.py ./quant --wbits 4 --groupsize 128 --load quant/dolly_v2-12b-GPTQ-4bit-128g.safetensors --max_length 100 --text "In the series A Song of Ice and Fire, who is the founder of House Karstark?"
Antony/mint_model
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### riobiagirlnew Dreambooth model trained by Fred99774 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
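As a rough illustration only (the card itself just links to the Colab notebooks), the concept could also be loaded with the diffusers library. The snippet below is a minimal sketch, assuming the checkpoint was exported in diffusers format; the repo id `Fred99774/riobiagirlnew` is hypothetical and not stated in the card.

```python
# Minimal sketch, not from the card: load a Dreambooth concept with diffusers.
# Assumes the weights are in diffusers format; the repo id below is hypothetical.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Fred99774/riobiagirlnew",  # hypothetical repo id
    torch_dtype=torch.float16,
).to("cuda")

# The concept token is usually the instance name used during training.
image = pipe("a photo of riobiagirlnew, portrait photo, natural light").images[0]
image.save("sample.png")
```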
Anubhav23/IndianlegalBert
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="checkRaiseOnCloud/q-FrozenLake-v1", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
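The usage block above stops after creating the environment. Purely as a sketch, an evaluation loop for the downloaded Q-table could look like the code below; it assumes the pickle stores the table under a `qtable` key and the environment id under `env_id` (as in the Deep RL course template), uses the `gymnasium` API, and reconstructs the `load_from_hub` helper that the card does not show.

```python
# Hedged sketch of evaluating the downloaded Q-learning agent.
# Assumptions: the pickled dict has "qtable" and "env_id" keys (Deep RL course layout).
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle the model dict from the Hugging Face Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="checkRaiseOnCloud/q-FrozenLake-v1", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)

state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(model["qtable"][state].argmax())  # act greedily w.r.t. the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```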
Anubhav23/indianlegal
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
Perplexity: 18.47. Deleting this soon. NOTE: PLEASE DO NOT USE THIS MODEL. THIS IS ONLY FOR EXPERIMENTS.
Apisate/DialoGPT-small-jordan
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-taxi results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="checkRaiseOnCloud/q-taxi", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Aplinxy9plin/toxic-detection-rus
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: cc-by-4.0 library_name: ultralytics pipeline_tag: object-detection inference: false tags: - roboflow - hand_gesture --- # Yolov8x Tuned to Hand Gestures Yolo v8x Finetuned on the [hand gestures roboflow dataset.](https://universe.roboflow.com/hand-gestures-recognition/hand-gestures-dataset) ## Inference ```py from ultralyticsplus import YOLO, render_result # load model model = YOLO('lewiswatson/yolov8x-tuned-hand-gestures') # set image image = 'https://thumbs.dreamstime.com/b/young-man-showing-peace-sign-isolated-over-yellow-background-133210696.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ``` ## Training Parameters Trained using [Ultralytics](https://ultralytics.com/) and [Ultralyticsplus](https://pypi.org/project/ultralyticsplus/). - Epochs: 10 - Optimiser: SGD - Patience 50 - Imgsz: 640 - Batch: -1
ArBert/albert-base-v2-finetuned-ner-gmm-twitter
[ "pytorch", "tensorboard", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
Access to model Axat04/ainft is restricted and you are not in the authorized list. Visit https://huggingface.co/Axat04/ainft to ask for access.
ArBert/roberta-base-finetuned-ner-kmeans-twitter
[ "pytorch", "tensorboard", "roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- language: en license: apache-2.0 library_name: pytorch tags: - deep-reinforcement-learning - reinforcement-learning - DI-engine - LunarLander-v2 benchmark_name: OpenAI/Gym/Box2d task_name: LunarLander-v2 pipeline_tag: reinforcement-learning model-index: - name: SAC results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: OpenAI/Gym/Box2d-LunarLander-v2 type: OpenAI/Gym/Box2d-LunarLander-v2 metrics: - type: mean_reward value: 226.73 +/- 6.75 name: mean_reward --- # Play **LunarLander-v2** with **SAC** Policy ## Model Description <!-- Provide a longer summary of what this model is. --> This is a simple **SAC** implementation to OpenAI/Gym/Box2d **LunarLander-v2** using the [DI-engine library](https://github.com/opendilab/di-engine) and the [DI-zoo](https://github.com/opendilab/DI-engine/tree/main/dizoo). **DI-engine** is a python library for solving general decision intelligence problems, which is based on implementations of reinforcement learning framework using PyTorch or JAX. This library aims to standardize the reinforcement learning framework across different algorithms, benchmarks, environments, and to support both academic researches and prototype applications. Besides, self-customized training pipelines and applications are supported by reusing different abstraction levels of DI-engine reinforcement learning framework. ## Model Usage ### Install the Dependencies <details close> <summary>(Click for Details)</summary> ```shell # install huggingface_ding git clone https://github.com/opendilab/huggingface_ding.git pip3 install -e ./huggingface_ding/ # install environment dependencies if needed pip3 install DI-engine[common_env] ``` </details> ### Git Clone from Huggingface and Run the Model <details close> <summary>(Click for Details)</summary> ```shell # running with trained model python3 -u run.py ``` **run.py** ```python from ding.bonus import SACAgent from ding.config import Config from easydict import EasyDict import torch # Pull model from files which are git cloned from huggingface policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu")) cfg = EasyDict(Config.file_to_dict("policy_config.py")) # Instantiate the agent agent = SACAgent( env="lunarlander_continuous", exp_name="lunarlander-sac", cfg=cfg.exp_config, policy_state_dict=policy_state_dict ) # Continue training agent.train(step=5000) # Render the new agent performance agent.deploy(enable_save_replay=True) ``` </details> ### Run Model by Using Huggingface_ding <details close> <summary>(Click for Details)</summary> ```shell # running with trained model python3 -u run.py ``` **run.py** ```python from ding.bonus import SACAgent from huggingface_ding import pull_model_from_hub # Pull model from Hugggingface hub policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/LunarLander-v2-SAC") # Instantiate the agent agent = SACAgent( env="lunarlander_continuous", exp_name="LunarLander-v2-SAC", cfg=cfg.exp_config, policy_state_dict=policy_state_dict ) # Continue training agent.train(step=5000) # Render the new agent performance agent.deploy(enable_save_replay=True) ``` </details> ## Model Training ### Train the Model and Push to Huggingface_hub <details close> <summary>(Click for Details)</summary> ```shell #Training Your Own Agent python3 -u train.py ``` **train.py** ```python from ding.bonus import SACAgent from huggingface_ding import push_model_to_hub # Instantiate the agent agent = SACAgent("lunarlander_continuous", exp_name="LunarLander-v2-SAC") # 
Train the agent return_ = agent.train(step=int(4000000), collector_env_num=8, evaluator_env_num=8) # Push model to huggingface hub push_model_to_hub( agent=agent.best, env_name="OpenAI/Gym/Box2d", task_name="LunarLander-v2", algo_name="SAC", wandb_url=return_.wandb_url, github_repo_url="https://github.com/opendilab/DI-engine", github_doc_model_url="https://di-engine-docs.readthedocs.io/en/latest/12_policies/sac.html", github_doc_env_url="https://di-engine-docs.readthedocs.io/en/latest/13_envs/lunarlander.html", installation_guide="pip3 install DI-engine[common_env]", usage_file_by_git_clone="./sac/lunarlander_sac_deploy.py", usage_file_by_huggingface_ding="./sac/lunarlander_sac_download.py", train_file="./sac/lunarlander_sac.py", repo_id="OpenDILabCommunity/LunarLander-v2-SAC" ) ``` </details> **Configuration** <details close> <summary>(Click for Details)</summary> ```python exp_config = { 'env': { 'manager': { 'episode_num': float("inf"), 'max_retry': 1, 'retry_type': 'reset', 'auto_reset': True, 'step_timeout': None, 'reset_timeout': None, 'retry_waiting_time': 0.1, 'cfg_type': 'BaseEnvManagerDict' }, 'stop_value': 240, 'env_id': 'LunarLanderContinuous-v2', 'collector_env_num': 4, 'evaluator_env_num': 8, 'act_scale': True, 'n_evaluator_episode': 8 }, 'policy': { 'model': { 'twin_critic': True, 'action_space': 'reparameterization', 'obs_shape': 8, 'action_shape': 2 }, 'learn': { 'learner': { 'train_iterations': 1000000000, 'dataloader': { 'num_workers': 0 }, 'log_policy': True, 'hook': { 'load_ckpt_before_run': '', 'log_show_after_iter': 100, 'save_ckpt_after_iter': 10000, 'save_ckpt_after_run': True }, 'cfg_type': 'BaseLearnerDict' }, 'update_per_collect': 256, 'batch_size': 128, 'learning_rate_q': 0.001, 'learning_rate_policy': 0.0003, 'learning_rate_alpha': 0.0003, 'target_theta': 0.005, 'discount_factor': 0.99, 'alpha': 0.2, 'auto_alpha': True, 'log_space': True, 'target_entropy': None, 'ignore_done': False, 'init_w': 0.003 }, 'collect': { 'collector': {}, 'n_sample': 256, 'unroll_len': 1, 'collector_logit': False }, 'eval': { 'evaluator': { 'eval_freq': 1000, 'render': { 'render_freq': -1, 'mode': 'train_iter' }, 'cfg_type': 'InteractionSerialEvaluatorDict', 'n_episode': 8, 'stop_value': 240 } }, 'other': { 'replay_buffer': { 'replay_buffer_size': 100000 } }, 'on_policy': False, 'cuda': True, 'multi_gpu': False, 'bp_update_sync': True, 'traj_len_inf': False, 'type': 'sac', 'priority': False, 'priority_IS_weight': False, 'random_collect_size': 10000, 'transition_with_policy_data': True, 'multi_agent': False, 'cfg_type': 'SACPolicyDict' }, 'exp_name': 'LunarLander-v2-SAC', 'seed': 0, 'wandb_logger': { 'gradient_logger': True, 'video_logger': True, 'plot_logger': True, 'action_logger': True, 'return_logger': False } } ``` </details> **Training Procedure** <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> - **Weights & Biases (wandb):** [monitor link](https://wandb.ai/zjowowen/LunarLander-v2-SAC) ## Model Information <!-- Provide the basic links for the model. 
--> - **Github Repository:** [repo link](https://github.com/opendilab/DI-engine) - **Doc**: [DI-engine-docs Algorithm link](https://di-engine-docs.readthedocs.io/en/latest/12_policies/sac.html) - **Configuration:** [config link](https://huggingface.co/OpenDILabCommunity/LunarLander-v2-SAC/blob/main/policy_config.py) - **Demo:** [video](https://huggingface.co/OpenDILabCommunity/LunarLander-v2-SAC/blob/main/replay.mp4) <!-- Provide the size information for the model. --> - **Parameters total size:** 58.02 KB - **Last Update Date:** 2023-04-14 ## Environments <!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. --> - **Benchmark:** OpenAI/Gym/Box2d - **Task:** LunarLander-v2 - **Gym version:** 0.25.1 - **DI-engine version:** v0.4.7 - **PyTorch version:** 1.7.1 - **Doc**: [DI-engine-docs Environments link](https://di-engine-docs.readthedocs.io/en/latest/13_envs/lunarlander.html)
Aran/DialoGPT-medium-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - FrozenLake-v1-4x4 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4 type: FrozenLake-v1-4x4 metrics: - type: mean_reward value: 0.29 +/- 0.45 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="malanevans/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Aran/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ArashEsk95/bert-base-uncased-finetuned-cola
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: resnet-101-finetuned_resnet101-sgd-optimizer-autotags results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.6780952380952381 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # resnet-101-finetuned_resnet101-sgd-optimizer-autotags This model is a fine-tuned version of [microsoft/resnet-101](https://huggingface.co/microsoft/resnet-101) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0245 - Accuracy: 0.6781 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7106 | 0.99 | 65 | 2.7615 | 0.1371 | | 2.5331 | 1.99 | 130 | 2.6702 | 0.1495 | | 2.2557 | 2.99 | 195 | 2.2049 | 0.2924 | | 2.0473 | 3.99 | 260 | 2.0434 | 0.3619 | | 1.6644 | 4.99 | 325 | 1.6585 | 0.4438 | | 1.5685 | 5.99 | 390 | 1.4183 | 0.5419 | | 1.377 | 6.99 | 455 | 1.2873 | 0.5981 | | 1.2441 | 7.99 | 520 | 1.1502 | 0.6362 | | 1.1983 | 8.99 | 585 | 1.0553 | 0.6657 | | 1.0988 | 9.99 | 650 | 1.0245 | 0.6781 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1+cu117 - Datasets 2.11.0 - Tokenizers 0.13.2
ArashEsk95/bert-base-uncased-finetuned-stsb
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: taxi-v3-1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.42 +/- 2.75 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="malanevans/taxi-v3-1", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
ArenaGrenade/char-cnn
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-04-14T05:50:45Z
--- tags: - conversational --- # Walter White DialoGPT Model
Arina/Erine
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - generated_from_trainer model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.27.4 - Pytorch 1.13.0 - Datasets 2.1.0 - Tokenizers 0.13.2
Arpita/opus-mt-en-ro-finetuned-syn-to-react
[ "pytorch", "tensorboard", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: a photo of sks dog tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - Chenhsing/output These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
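The card lists the LoRA weights and sample images but no loading code; as a sketch only, the adapter could be applied on top of the base model with diffusers roughly as follows, assuming the weights use the standard DreamBooth-LoRA attention-processor layout produced by the diffusers training script.

```python
# Sketch, not from the card: base Stable Diffusion 1.5 plus the LoRA attention weights.
# Assumes the repo follows the standard diffusers DreamBooth-LoRA layout.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs("Chenhsing/output")  # LoRA adapter weights from this repo

image = pipe("a photo of sks dog in a bucket", num_inference_steps=30).images[0]
image.save("sks_dog.png")
```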
ArthurcJP/DialoGPT-small-YODA
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### Parailaragirlnew Dreambooth model trained by Fred99774 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
Aruden/DialoGPT-medium-harrypotterall
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8653353814644136 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1339 - F1: 0.8653 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2583 | 1.0 | 525 | 0.1596 | 0.8231 | | 0.1262 | 2.0 | 1050 | 0.1395 | 0.8468 | | 0.0824 | 3.0 | 1575 | 0.1339 | 0.8653 | ### Framework versions - Transformers 4.11.3 - Pytorch 2.0.0+cu118 - Datasets 1.16.1 - Tokenizers 0.10.3
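The auto-generated card above covers only training details. As a hedged illustration, the fine-tuned checkpoint could be queried with the transformers token-classification pipeline as below; the repo id is hypothetical, since the card gives the model name but not the owning namespace.

```python
# Rough usage sketch, not part of the generated card.
# The repo id is hypothetical; substitute the actual namespace of the checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="your-username/xlm-roberta-base-finetuned-panx-de",  # hypothetical repo id
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```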
AshiNLP/Bert_model
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: a photo of caibao dog tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - Chenhsing/caibao_timesquare These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of caibao dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
AshtonBenson/DialoGPT-small-quentin
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Write your model_id: varevshatyan/poca-SoccerTwos 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
Augustvember/WokkaBot9
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en metrics: - accuracy - AUC ROC - precision - recall tags: - biology - chemistry - therapeutic science - drug design - drug development - therapeutics library_name: tdc license: bsd-2-clause --- ## Dataset description As a membrane separating circulating blood and brain extracellular fluid, the blood-brain barrier (BBB) is the protective layer that blocks most foreign drugs. Thus the ability of a drug to penetrate the barrier and reach its site of action is a crucial challenge in developing drugs for the central nervous system. ## Task description Binary classification. Given a drug SMILES string, predict its blood-brain barrier penetration activity. ## Dataset statistics Total: 1,975 drugs ## Dataset split Random split with 70% training, 10% validation, and 20% testing To load the dataset in TDC, type ```python from tdc.single_pred import ADME data = ADME(name = 'BBB_Martins') ``` ## Model description The CNN model applies a convolutional neural network to the SMILES string fingerprint. The model is tuned over 100 runs using the Ax platform. To load the pre-trained model, type ```python from tdc import tdc_hf_interface tdc_hf = tdc_hf_interface("BBB_Martins-CNN") # load deeppurpose model from this repo dp_model = tdc_hf.load_deeppurpose('./data') tdc_hf.predict_deeppurpose(dp_model, ['YOUR SMILES STRING']) ``` ## References * Dataset entry in Therapeutics Data Commons, https://tdcommons.ai/single_pred_tasks/adme/#bbb-blood-brain-barrier-martins-et-al * Martins, Ines Filipa, et al. “A Bayesian approach to in silico blood-brain barrier penetration modeling.” Journal of chemical information and modeling 52.6 (2012): 1686-1697.
Augustvember/wokka5
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- datasets: - wikiann language: - fr - pt - ja - zh metrics: - accuracy pipeline_tag: token-classification ---
Ayham/albert_roberta_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: flan-t5-large-da-multiwoz2.0_80-loss-ep50 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-large-da-multiwoz2.0_80-loss-ep50 This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4604 - Accuracy: 33.2618 - Num: 7358 - Gen Len: 15.5321 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 24 - eval_batch_size: 192 - seed: 1799 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Num | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|:-------:| | 1.1359 | 8.0 | 200 | 0.4929 | 27.6197 | 7358 | 14.857 | | 0.474 | 16.0 | 400 | 0.4604 | 33.2618 | 7358 | 15.5321 | | 0.3764 | 24.0 | 600 | 0.4672 | 34.9136 | 7358 | 15.5559 | | 0.323 | 32.0 | 800 | 0.4783 | 35.5058 | 7358 | 15.7635 | | 0.2858 | 40.0 | 1000 | 0.5008 | 35.7085 | 7358 | 15.9337 | | 0.2709 | 48.0 | 1200 | 0.5069 | 35.8528 | 7358 | 15.981 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.5.1 - Tokenizers 0.12.1
Ayham/bert_bert_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: flan-t5-large-da-multiwoz2.1_80-loss-ep50 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-large-da-multiwoz2.1_80-loss-ep50 This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4579 - Accuracy: 33.9166 - Num: 7365 - Gen Len: 16.1092 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 24 - eval_batch_size: 192 - seed: 1799 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Num | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|:-------:| | 1.1431 | 8.0 | 200 | 0.4916 | 29.9135 | 7365 | 15.2805 | | 0.4731 | 16.0 | 400 | 0.4579 | 33.9166 | 7365 | 16.1092 | | 0.3788 | 24.0 | 600 | 0.4705 | 34.9699 | 7365 | 15.9742 | | 0.3188 | 32.0 | 800 | 0.4872 | 34.3973 | 7365 | 15.6263 | | 0.2879 | 40.0 | 1000 | 0.4989 | 35.5581 | 7365 | 15.8967 | | 0.2672 | 48.0 | 1200 | 0.5088 | 35.5744 | 7365 | 15.9563 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.5.1 - Tokenizers 0.12.1
Ayham/bert_gpt2_summarization_cnndm
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- language: en license: apache-2.0 library_name: pytorch tags: - deep-reinforcement-learning - reinforcement-learning - DI-engine - LunarLander-v2 benchmark_name: OpenAI/Gym/Box2d task_name: LunarLander-v2 pipeline_tag: reinforcement-learning model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: OpenAI/Gym/Box2d-LunarLander-v2 type: OpenAI/Gym/Box2d-LunarLander-v2 metrics: - type: mean_reward value: 261.47 +/- 25.18 name: mean_reward --- # Play **LunarLander-v2** with **A2C** Policy ## Model Description <!-- Provide a longer summary of what this model is. --> This is a simple **A2C** implementation to OpenAI/Gym/Box2d **LunarLander-v2** using the [DI-engine library](https://github.com/opendilab/di-engine) and the [DI-zoo](https://github.com/opendilab/DI-engine/tree/main/dizoo). **DI-engine** is a python library for solving general decision intelligence problems, which is based on implementations of reinforcement learning framework using PyTorch or JAX. This library aims to standardize the reinforcement learning framework across different algorithms, benchmarks, environments, and to support both academic researches and prototype applications. Besides, self-customized training pipelines and applications are supported by reusing different abstraction levels of DI-engine reinforcement learning framework. ## Model Usage ### Install the Dependencies <details close> <summary>(Click for Details)</summary> ```shell # install huggingface_ding git clone https://github.com/opendilab/huggingface_ding.git pip3 install -e ./huggingface_ding/ # install environment dependencies if needed pip3 install DI-engine[common_env] ``` </details> ### Git Clone from Huggingface and Run the Model <details close> <summary>(Click for Details)</summary> ```shell # running with trained model python3 -u run.py ``` **run.py** ```python from ding.bonus import A2CAgent from ding.config import Config from easydict import EasyDict import torch # Pull model from files which are git cloned from huggingface policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu")) cfg = EasyDict(Config.file_to_dict("policy_config.py")) # Instantiate the agent agent = A2CAgent( env="lunarlander_discrete", exp_name="Lunarlander-v2-A2C", cfg=cfg.exp_config, policy_state_dict=policy_state_dict ) # Continue training agent.train(step=5000) # Render the new agent performance agent.deploy(enable_save_replay=True) ``` </details> ### Run Model by Using Huggingface_ding <details close> <summary>(Click for Details)</summary> ```shell # running with trained model python3 -u run.py ``` **run.py** ```python from ding.bonus import A2CAgent from huggingface_ding import pull_model_from_hub # Pull model from Hugggingface hub policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/Lunarlander-v2-A2C") # Instantiate the agent agent = A2CAgent( env="lunarlander_discrete", exp_name="Lunarlander-v2-A2C", cfg=cfg.exp_config, policy_state_dict=policy_state_dict ) # Continue training agent.train(step=5000) # Render the new agent performance agent.deploy(enable_save_replay=True) ``` </details> ## Model Training ### Train the Model and Push to Huggingface_hub <details close> <summary>(Click for Details)</summary> ```shell #Training Your Own Agent python3 -u train.py ``` **train.py** ```python from ding.bonus import A2CAgent from huggingface_ding import push_model_to_hub # Instantiate the agent agent = A2CAgent(env="lunarlander_discrete", exp_name="Lunarlander-v2-A2C") # 
Train the agent return_ = agent.train(step=int(20000000), collector_env_num=8, evaluator_env_num=8, debug=False) # Push model to huggingface hub push_model_to_hub( agent=agent.best, env_name="OpenAI/Gym/Box2d", task_name="LunarLander-v2", algo_name="A2C", wandb_url=return_.wandb_url, github_repo_url="https://github.com/opendilab/DI-engine", github_doc_model_url="https://di-engine-docs.readthedocs.io/en/latest/12_policies/a2c.html", github_doc_env_url="https://di-engine-docs.readthedocs.io/en/latest/13_envs/lunarlander.html", installation_guide="pip3 install DI-engine[common_env]", usage_file_by_git_clone="./a2c/lunarlander_a2c_deploy.py", usage_file_by_huggingface_ding="./a2c/lunarlander_a2c_download.py", train_file="./a2c/lunarlander_a2c.py", repo_id="OpenDILabCommunity/Lunarlander-v2-A2C" ) ``` </details> **Configuration** <details close> <summary>(Click for Details)</summary> ```python exp_config = { 'env': { 'manager': { 'episode_num': float("inf"), 'max_retry': 1, 'retry_type': 'reset', 'auto_reset': True, 'step_timeout': None, 'reset_timeout': None, 'retry_waiting_time': 0.1, 'cfg_type': 'BaseEnvManagerDict' }, 'stop_value': 240, 'collector_env_num': 8, 'evaluator_env_num': 8, 'env_id': 'LunarLander-v2', 'n_evaluator_episode': 8 }, 'policy': { 'model': { 'obs_shape': 8, 'action_shape': 4 }, 'learn': { 'learner': { 'train_iterations': 1000000000, 'dataloader': { 'num_workers': 0 }, 'log_policy': True, 'hook': { 'load_ckpt_before_run': '', 'log_show_after_iter': 100, 'save_ckpt_after_iter': 10000, 'save_ckpt_after_run': True }, 'cfg_type': 'BaseLearnerDict' }, 'update_per_collect': 1, 'batch_size': 160, 'learning_rate': 0.0003, 'betas': [0.9, 0.999], 'eps': 1e-08, 'grad_norm': 0.5, 'value_weight': 0.5, 'entropy_weight': 0.001, 'adv_norm': True, 'ignore_done': False }, 'collect': { 'collector': {}, 'unroll_len': 1, 'discount_factor': 0.99, 'gae_lambda': 0.95, 'n_sample': 320 }, 'eval': { 'evaluator': { 'eval_freq': 1000, 'render': { 'render_freq': -1, 'mode': 'train_iter' }, 'cfg_type': 'InteractionSerialEvaluatorDict', 'n_episode': 8, 'stop_value': 240 } }, 'other': { 'replay_buffer': {} }, 'on_policy': True, 'cuda': True, 'multi_gpu': False, 'bp_update_sync': True, 'traj_len_inf': False, 'type': 'a2c', 'priority': False, 'priority_IS_weight': False, 'cfg_type': 'A2CPolicyDict' }, 'exp_name': 'Lunarlander-v2-A2C', 'wandb_logger': { 'gradient_logger': True, 'video_logger': True, 'plot_logger': True, 'action_logger': True, 'return_logger': False }, 'seed': 0 } ``` </details> **Training Procedure** <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> - **Weights & Biases (wandb):** [monitor link](https://wandb.ai/zjowowen/Lunarlander-v2-A2C) ## Model Information <!-- Provide the basic links for the model. --> - **Github Repository:** [repo link](https://github.com/opendilab/DI-engine) - **Doc**: [DI-engine-docs Algorithm link](https://di-engine-docs.readthedocs.io/en/latest/12_policies/a2c.html) - **Configuration:** [config link](https://huggingface.co/OpenDILabCommunity/Lunarlander-v2-A2C/blob/main/policy_config.py) - **Demo:** [video](https://huggingface.co/OpenDILabCommunity/Lunarlander-v2-A2C/blob/main/replay.mp4) <!-- Provide the size information for the model. 
--> - **Parameters total size:** 371.29 KB - **Last Update Date:** 2023-04-14 ## Environments <!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. --> - **Benchmark:** OpenAI/Gym/Box2d - **Task:** LunarLander-v2 - **Gym version:** 0.25.1 - **DI-engine version:** v0.4.7 - **PyTorch version:** 1.7.1 - **Doc**: [DI-engine-docs Environments link](https://di-engine-docs.readthedocs.io/en/latest/13_envs/lunarlander.html)
Ayham/robertagpt2_cnn
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5353925809123671 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5788 - Matthews Correlation: 0.5354 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5234 | 1.0 | 535 | 0.5177 | 0.4383 | | 0.3481 | 2.0 | 1070 | 0.5110 | 0.5056 | | 0.2335 | 3.0 | 1605 | 0.5788 | 0.5354 | | 0.184 | 4.0 | 2140 | 0.7498 | 0.5116 | | 0.1367 | 5.0 | 2675 | 0.7809 | 0.5301 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
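A minimal usage sketch for a CoLA acceptability classifier like the one above; the repo id is inferred from the card's model name and is only an assumption, and the returned label names depend on how the classification head was configured:

```python
from transformers import pipeline

# Hypothetical hub path inferred from the card's model name; replace with the real repo id.
classifier = pipeline("text-classification", model="distilbert-base-uncased-finetuned-cola")

# CoLA is a binary linguistic-acceptability task, so each sentence gets one label and score.
print(classifier("The book was written by a young author."))
print(classifier("Book the was author young a by written."))
```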
Ayham/robertagpt2_xsum4
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: en license: apache-2.0 library_name: pytorch tags: - deep-reinforcement-learning - reinforcement-learning - DI-engine - Hopper-v3 benchmark_name: OpenAI/Gym/MuJoCo task_name: Hopper-v3 pipeline_tag: reinforcement-learning model-index: - name: SAC results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: OpenAI/Gym/MuJoCo-Hopper-v3 type: OpenAI/Gym/MuJoCo-Hopper-v3 metrics: - type: mean_reward value: 4279.46 +/- 47.85 name: mean_reward --- # Play **Hopper-v3** with **SAC** Policy ## Model Description <!-- Provide a longer summary of what this model is. --> This is a simple **SAC** implementation to OpenAI/Gym/MuJoCo **Hopper-v3** using the [DI-engine library](https://github.com/opendilab/di-engine) and the [DI-zoo](https://github.com/opendilab/DI-engine/tree/main/dizoo). **DI-engine** is a python library for solving general decision intelligence problems, which is based on implementations of reinforcement learning framework using PyTorch or JAX. This library aims to standardize the reinforcement learning framework across different algorithms, benchmarks, environments, and to support both academic researches and prototype applications. Besides, self-customized training pipelines and applications are supported by reusing different abstraction levels of DI-engine reinforcement learning framework. ## Model Usage ### Install the Dependencies <details close> <summary>(Click for Details)</summary> ```shell # install huggingface_ding git clone https://github.com/opendilab/huggingface_ding.git pip3 install -e ./huggingface_ding/ # install environment dependencies if needed sudo apt update -y && sudo apt install -y build-essential libgl1-mesa-dev libgl1-mesa-glx libglew-dev libosmesa6-dev libglfw3 libglfw3-dev libsdl2-dev libsdl2-image-dev libglm-dev libfreetype6-dev patchelf mkdir -p ~/.mujoco wget https://mujoco.org/download/mujoco210-linux-x86_64.tar.gz -O mujoco.tar.gz tar -xf mujoco.tar.gz -C ~/.mujoco echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin" >> ~/.bashrc export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin pip3 install DI-engine[common_env] ``` </details> ### Git Clone from Huggingface and Run the Model <details close> <summary>(Click for Details)</summary> ```shell # running with trained model python3 -u run.py ``` **run.py** ```python from ding.bonus import SACAgent from ding.config import Config from easydict import EasyDict import torch # Pull model from files which are git cloned from huggingface policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu")) cfg = EasyDict(Config.file_to_dict("policy_config.py")) # Instantiate the agent agent = SACAgent(env="hopper", exp_name="Hopper-v3-SAC", cfg=cfg.exp_config, policy_state_dict=policy_state_dict) # Continue training agent.train(step=5000) # Render the new agent performance agent.deploy(enable_save_replay=True) ``` </details> ### Run Model by Using Huggingface_ding <details close> <summary>(Click for Details)</summary> ```shell # running with trained model python3 -u run.py ``` **run.py** ```python from ding.bonus import SACAgent from huggingface_ding import pull_model_from_hub # Pull model from Hugggingface hub policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/Hopper-v3-SAC") # Instantiate the agent agent = SACAgent(env="hopper", exp_name="Hopper-v3-SAC", cfg=cfg.exp_config, policy_state_dict=policy_state_dict) # Continue training agent.train(step=5000) # Render 
the new agent performance agent.deploy(enable_save_replay=True) ``` </details> ## Model Training ### Train the Model and Push to Huggingface_hub <details close> <summary>(Click for Details)</summary> ```shell #Training Your Own Agent python3 -u train.py ``` **train.py** ```python from ding.bonus.sac import SACAgent from huggingface_ding import push_model_to_hub # Instantiate the agent agent = SACAgent(env="hopper", exp_name="Hopper-v3-SAC") # Train the agent return_ = agent.train(step=int(10000000), collector_env_num=4, evaluator_env_num=4, debug=False) # Push model to huggingface hub push_model_to_hub( agent=agent.best, env_name="OpenAI/Gym/MuJoCo", task_name="Hopper-v3", algo_name="SAC", wandb_url=return_.wandb_url, github_repo_url="https://github.com/opendilab/DI-engine", github_doc_model_url="https://di-engine-docs.readthedocs.io/en/latest/12_policies/sac.html", github_doc_env_url="https://di-engine-docs.readthedocs.io/en/latest/13_envs/mujoco.html", installation_guide=''' sudo apt update -y \ && sudo apt install -y \ build-essential \ libgl1-mesa-dev \ libgl1-mesa-glx \ libglew-dev \ libosmesa6-dev \ libglfw3 \ libglfw3-dev \ libsdl2-dev \ libsdl2-image-dev \ libglm-dev \ libfreetype6-dev \ patchelf mkdir -p ~/.mujoco wget https://mujoco.org/download/mujoco210-linux-x86_64.tar.gz -O mujoco.tar.gz tar -xf mujoco.tar.gz -C ~/.mujoco echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin" >> ~/.bashrc export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin pip3 install DI-engine[common_env] ''', usage_file_by_git_clone="./sac/hopper_sac_deploy.py", usage_file_by_huggingface_ding="./sac/hopper_sac_download.py", train_file="./sac/hopper_sac.py", repo_id="OpenDILabCommunity/Hopper-v3-SAC" ) ``` </details> **Configuration** <details close> <summary>(Click for Details)</summary> ```python exp_config = { 'env': { 'manager': { 'episode_num': float("inf"), 'max_retry': 1, 'retry_type': 'reset', 'auto_reset': True, 'step_timeout': None, 'reset_timeout': None, 'retry_waiting_time': 0.1, 'cfg_type': 'BaseEnvManagerDict' }, 'stop_value': 6000, 'env_id': 'Hopper-v3', 'collector_env_num': 8, 'evaluator_env_num': 8, 'n_evaluator_episode': 8 }, 'policy': { 'model': { 'twin_critic': True, 'action_space': 'reparameterization', 'obs_shape': 11, 'action_shape': 3, 'actor_head_hidden_size': 256, 'critic_head_hidden_size': 256 }, 'learn': { 'learner': { 'train_iterations': 1000000000, 'dataloader': { 'num_workers': 0 }, 'log_policy': True, 'hook': { 'load_ckpt_before_run': '', 'log_show_after_iter': 100, 'save_ckpt_after_iter': 10000, 'save_ckpt_after_run': True }, 'cfg_type': 'BaseLearnerDict' }, 'update_per_collect': 1, 'batch_size': 256, 'learning_rate_q': 0.001, 'learning_rate_policy': 0.001, 'learning_rate_alpha': 0.0003, 'target_theta': 0.005, 'discount_factor': 0.99, 'alpha': 0.2, 'auto_alpha': False, 'log_space': True, 'target_entropy': None, 'ignore_done': False, 'init_w': 0.003, 'reparameterization': True }, 'collect': { 'collector': {}, 'n_sample': 1, 'unroll_len': 1, 'collector_logit': False }, 'eval': { 'evaluator': { 'eval_freq': 1000, 'render': { 'render_freq': -1, 'mode': 'train_iter' }, 'cfg_type': 'InteractionSerialEvaluatorDict', 'n_episode': 8, 'stop_value': 6000 } }, 'other': { 'replay_buffer': { 'replay_buffer_size': 1000000 } }, 'on_policy': False, 'cuda': True, 'multi_gpu': False, 'bp_update_sync': True, 'traj_len_inf': False, 'type': 'sac', 'priority': False, 'priority_IS_weight': False, 'random_collect_size': 10000, 
'transition_with_policy_data': True, 'multi_agent': False, 'cfg_type': 'SACPolicyDict' }, 'exp_name': 'Hopper-v3-SAC', 'seed': 0, 'wandb_logger': { 'gradient_logger': True, 'video_logger': True, 'plot_logger': True, 'action_logger': True, 'return_logger': False } } ``` </details> **Training Procedure** <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> - **Weights & Biases (wandb):** [monitor link](https://wandb.ai/zjowowen/Hopper-v3-SAC) ## Model Information <!-- Provide the basic links for the model. --> - **Github Repository:** [repo link](https://github.com/opendilab/DI-engine) - **Doc**: [DI-engine-docs Algorithm link](https://di-engine-docs.readthedocs.io/en/latest/12_policies/sac.html) - **Configuration:** [config link](https://huggingface.co/OpenDILabCommunity/Hopper-v3-SAC/blob/main/policy_config.py) - **Demo:** [video](https://huggingface.co/OpenDILabCommunity/Hopper-v3-SAC/blob/main/replay.mp4) <!-- Provide the size information for the model. --> - **Parameters total size:** 821.03 KB - **Last Update Date:** 2023-04-14 ## Environments <!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. --> - **Benchmark:** OpenAI/Gym/MuJoCo - **Task:** Hopper-v3 - **Gym version:** 0.25.1 - **DI-engine version:** v0.4.7 - **PyTorch version:** 1.7.1 - **Doc**: [DI-engine-docs Environments link](https://di-engine-docs.readthedocs.io/en/latest/13_envs/mujoco.html)
Ayham/xlnet_gpt2_summarization_xsum
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:xsum", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
2023-04-14T08:24:58Z
--- license: other tags: - vision - image-segmentation - generated_from_trainer model-index: - name: segformer-b1-solarModuleAnomaly-v0.1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b1-solarModuleAnomaly-v0.1 This model is a fine-tuned version of [nvidia/mit-b1](https://huggingface.co/nvidia/mit-b1) on the zklee98/solarModuleAnomaly dataset. It achieves the following results on the evaluation set: - Loss: 0.1547 - Mean Iou: 0.3822 - Mean Accuracy: 0.7643 - Overall Accuracy: 0.7643 - Accuracy Unlabelled: nan - Accuracy Anomaly: 0.7643 - Iou Unlabelled: 0.0 - Iou Anomaly: 0.7643 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabelled | Accuracy Anomaly | Iou Unlabelled | Iou Anomaly | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------:|:----------------:|:--------------:|:-----------:| | 0.4699 | 0.4 | 20 | 0.6337 | 0.4581 | 0.9162 | 0.9162 | nan | 0.9162 | 0.0 | 0.9162 | | 0.3129 | 0.8 | 40 | 0.4636 | 0.3704 | 0.7407 | 0.7407 | nan | 0.7407 | 0.0 | 0.7407 | | 0.2732 | 1.2 | 60 | 0.3164 | 0.3867 | 0.7734 | 0.7734 | nan | 0.7734 | 0.0 | 0.7734 | | 0.2653 | 1.6 | 80 | 0.3769 | 0.4090 | 0.8180 | 0.8180 | nan | 0.8180 | 0.0 | 0.8180 | | 0.2232 | 2.0 | 100 | 0.2976 | 0.2479 | 0.4958 | 0.4958 | nan | 0.4958 | 0.0 | 0.4958 | | 0.5305 | 2.4 | 120 | 0.3151 | 0.3807 | 0.7613 | 0.7613 | nan | 0.7613 | 0.0 | 0.7613 | | 0.2423 | 2.8 | 140 | 0.3189 | 0.4152 | 0.8305 | 0.8305 | nan | 0.8305 | 0.0 | 0.8305 | | 0.3341 | 3.2 | 160 | 0.2384 | 0.3861 | 0.7723 | 0.7723 | nan | 0.7723 | 0.0 | 0.7723 | | 0.2146 | 3.6 | 180 | 0.3200 | 0.4621 | 0.9243 | 0.9243 | nan | 0.9243 | 0.0 | 0.9243 | | 0.1866 | 4.0 | 200 | 0.2510 | 0.3646 | 0.7291 | 0.7291 | nan | 0.7291 | 0.0 | 0.7291 | | 0.2861 | 4.4 | 220 | 0.2736 | 0.4202 | 0.8404 | 0.8404 | nan | 0.8404 | 0.0 | 0.8404 | | 0.2048 | 4.8 | 240 | 0.2410 | 0.3912 | 0.7823 | 0.7823 | nan | 0.7823 | 0.0 | 0.7823 | | 0.1604 | 5.2 | 260 | 0.2233 | 0.3672 | 0.7344 | 0.7344 | nan | 0.7344 | 0.0 | 0.7344 | | 0.2756 | 5.6 | 280 | 0.2705 | 0.4494 | 0.8987 | 0.8987 | nan | 0.8987 | 0.0 | 0.8987 | | 0.1859 | 6.0 | 300 | 0.2211 | 0.4045 | 0.8089 | 0.8089 | nan | 0.8089 | 0.0 | 0.8089 | | 0.1306 | 6.4 | 320 | 0.2140 | 0.3763 | 0.7525 | 0.7525 | nan | 0.7525 | 0.0 | 0.7525 | | 0.5508 | 6.8 | 340 | 0.2231 | 0.4185 | 0.8371 | 0.8371 | nan | 0.8371 | 0.0 | 0.8371 | | 0.1446 | 7.2 | 360 | 0.2139 | 0.3666 | 0.7332 | 0.7332 | nan | 0.7332 | 0.0 | 0.7332 | | 0.3275 | 7.6 | 380 | 0.2470 | 0.3964 | 0.7928 | 0.7928 | nan | 0.7928 | 0.0 | 0.7928 | | 0.164 | 8.0 | 400 | 0.2017 | 0.3910 | 0.7819 | 0.7819 | nan | 0.7819 | 0.0 | 0.7819 | | 0.1864 | 8.4 | 420 | 0.2307 | 0.4408 | 0.8816 | 0.8816 | nan | 0.8816 | 0.0 | 0.8816 | | 0.1578 | 8.8 | 440 | 0.1869 | 0.3707 | 0.7414 | 0.7414 | nan | 0.7414 | 0.0 | 0.7414 | | 0.1201 | 9.2 | 460 | 0.2115 | 0.3834 | 
0.7667 | 0.7667 | nan | 0.7667 | 0.0 | 0.7667 | | 0.1783 | 9.6 | 480 | 0.2009 | 0.3747 | 0.7495 | 0.7495 | nan | 0.7495 | 0.0 | 0.7495 | | 0.1232 | 10.0 | 500 | 0.1797 | 0.3865 | 0.7729 | 0.7729 | nan | 0.7729 | 0.0 | 0.7729 | | 0.2572 | 10.4 | 520 | 0.1983 | 0.4057 | 0.8115 | 0.8115 | nan | 0.8115 | 0.0 | 0.8115 | | 0.1209 | 10.8 | 540 | 0.1607 | 0.4274 | 0.8547 | 0.8547 | nan | 0.8547 | 0.0 | 0.8547 | | 0.1234 | 11.2 | 560 | 0.2260 | 0.4066 | 0.8133 | 0.8133 | nan | 0.8133 | 0.0 | 0.8133 | | 0.145 | 11.6 | 580 | 0.1963 | 0.3939 | 0.7878 | 0.7878 | nan | 0.7878 | 0.0 | 0.7878 | | 0.0665 | 12.0 | 600 | 0.1912 | 0.3873 | 0.7747 | 0.7747 | nan | 0.7747 | 0.0 | 0.7747 | | 0.0826 | 12.4 | 620 | 0.2095 | 0.4186 | 0.8373 | 0.8373 | nan | 0.8373 | 0.0 | 0.8373 | | 0.1212 | 12.8 | 640 | 0.1732 | 0.4059 | 0.8118 | 0.8118 | nan | 0.8118 | 0.0 | 0.8118 | | 0.142 | 13.2 | 660 | 0.2086 | 0.4007 | 0.8013 | 0.8013 | nan | 0.8013 | 0.0 | 0.8013 | | 0.0899 | 13.6 | 680 | 0.1838 | 0.3928 | 0.7856 | 0.7856 | nan | 0.7856 | 0.0 | 0.7856 | | 0.1857 | 14.0 | 700 | 0.1638 | 0.4157 | 0.8315 | 0.8315 | nan | 0.8315 | 0.0 | 0.8315 | | 0.0788 | 14.4 | 720 | 0.1736 | 0.4112 | 0.8223 | 0.8223 | nan | 0.8223 | 0.0 | 0.8223 | | 0.2543 | 14.8 | 740 | 0.1547 | 0.3822 | 0.7643 | 0.7643 | nan | 0.7643 | 0.0 | 0.7643 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
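A hedged inference sketch for a SegFormer anomaly-segmentation fine-tune like the one above; the hub path is assembled from the card's model name and is an assumption, and the input is expected to be an image of a solar module:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForSemanticSegmentation

# Hypothetical hub path built from the card's model name; point this at the actual repo.
repo_id = "zklee98/segformer-b1-solarModuleAnomaly-v0.1"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForSemanticSegmentation.from_pretrained(repo_id)

image = Image.open("solar_module.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# Upsample back to the input resolution and take the per-pixel argmax as the anomaly mask.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
mask = upsampled.argmax(dim=1)[0]
```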
Ayoola/pytorch_model
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-04-14T08:35:31Z
--- language: - zh - en tags: - glm - chatglm - thudm --- # ChatGLM-6B-INT8 <p align="center"> 👋 Join our <a href="https://join.slack.com/t/chatglm/shared_invite/zt-1udqapmrr-ocT1DS_mxWe6dDY8ahRWzg" target="_blank">Slack</a> and <a href="https://github.com/THUDM/ChatGLM-6B/blob/main/resources/WECHAT.md" target="_blank">WeChat</a> </p> ## 介绍 ChatGLM-6B 是一个开源的、支持中英双语问答的对话语言模型,基于 [General Language Model (GLM)](https://github.com/THUDM/GLM) 架构,具有 62 亿参数。结合模型量化技术,用户可以在消费级的显卡上进行本地部署(INT4 量化级别下最低只需 6GB 显存)。ChatGLM-6B 使用了和 [ChatGLM](https://chatglm.cn) 相同的技术,针对中文问答和对话进行了优化。经过约 1T 标识符的中英双语训练,辅以监督微调、反馈自助、人类反馈强化学习等技术的加持,62 亿参数的 ChatGLM-6B 已经能生成相当符合人类偏好的回答。 ChatGLM-6B-INT8 是 ChatGLM-6B 量化后的模型权重。具体的,ChatGLM-6B-INT8 对 ChatGLM-6B 中的 28 个 GLM Block 进行了 INT8 量化,没有对 Embedding 和 LM Head 进行量化。量化后的模型理论上 8G 显存(使用 CPU 即内存)即可推理,具有在嵌入式设备(如树莓派)上运行的可能。 在 CPU 上运行时,会根据硬件自动编译 CPU Kernel ,请确保已安装 GCC 和 OpenMP (Linux一般已安装,对于Windows则需手动安装),以获得最佳并行计算能力。 ## 软件依赖 ```shell pip install protobuf transformers==4.27.1 cpm_kernels ``` ## 代码调用 可以通过如下代码调用 ChatGLM-6B 模型来生成对话: ```ipython >>> from transformers import AutoTokenizer, AutoModel >>> tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int8", trust_remote_code=True) >>> model = AutoModel.from_pretrained("THUDM/chatglm-6b-int8", trust_remote_code=True).half().cuda() >>> response, history = model.chat(tokenizer, "你好", history=[]) >>> print(response) 你好👋!我是人工智能助手 ChatGLM-6B,很高兴见到你,欢迎问我任何问题。 >>> response, history = model.chat(tokenizer, "晚上睡不着应该怎么办", history=history) >>> print(response) 晚上睡不着可能会让你感到焦虑或不舒服,但以下是一些可以帮助你入睡的方法: 1. 制定规律的睡眠时间表:保持规律的睡眠时间表可以帮助你建立健康的睡眠习惯,使你更容易入睡。尽量在每天的相同时间上床,并在同一时间起床。 2. 创造一个舒适的睡眠环境:确保睡眠环境舒适,安静,黑暗且温度适宜。可以使用舒适的床上用品,并保持房间通风。 3. 放松身心:在睡前做些放松的活动,例如泡个热水澡,听些轻柔的音乐,阅读一些有趣的书籍等,有助于缓解紧张和焦虑,使你更容易入睡。 4. 避免饮用含有咖啡因的饮料:咖啡因是一种刺激性物质,会影响你的睡眠质量。尽量避免在睡前饮用含有咖啡因的饮料,例如咖啡,茶和可乐。 5. 避免在床上做与睡眠无关的事情:在床上做些与睡眠无关的事情,例如看电影,玩游戏或工作等,可能会干扰你的睡眠。 6. 尝试呼吸技巧:深呼吸是一种放松技巧,可以帮助你缓解紧张和焦虑,使你更容易入睡。试着慢慢吸气,保持几秒钟,然后缓慢呼气。 如果这些方法无法帮助你入睡,你可以考虑咨询医生或睡眠专家,寻求进一步的建议。 ``` 关于更多的使用说明,包括如何运行命令行和网页版本的 DEMO,以及使用模型量化以节省显存,请参考我们的 [Github Repo](https://github.com/THUDM/ChatGLM-6B)。 ## 协议 本仓库的代码依照 [Apache-2.0](LICENSE) 协议开源,ChatGLM-6B 模型的权重的使用则需要遵循 [Model License](MODEL_LICENSE)。 ## 引用 如果你觉得我们的工作有帮助的话,请考虑引用下列论文: ``` @inproceedings{ zeng2023glm-130b, title={{GLM}-130B: An Open Bilingual Pre-trained Model}, author={Aohan Zeng and Xiao Liu and Zhengxiao Du and Zihan Wang and Hanyu Lai and Ming Ding and Zhuoyi Yang and Yifan Xu and Wendi Zheng and Xiao Xia and Weng Lam Tam and Zixuan Ma and Yufei Xue and Jidong Zhai and Wenguang Chen and Zhiyuan Liu and Peng Zhang and Yuxiao Dong and Jie Tang}, booktitle={The Eleventh International Conference on Learning Representations (ICLR)}, year={2023}, url={https://openreview.net/forum?id=-Aw0rrrPUF} } ``` ``` @inproceedings{du2022glm, title={GLM: General Language Model Pretraining with Autoregressive Blank Infilling}, author={Du, Zhengxiao and Qian, Yujie and Liu, Xiao and Ding, Ming and Qiu, Jiezhong and Yang, Zhilin and Tang, Jie}, booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)}, pages={320--335}, year={2022} } ```
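The card notes that the INT8 weights can also run on CPU once the CPU kernel is compiled; a CPU-only loading sketch that follows the upstream ChatGLM-6B instructions (loading with `.float()` instead of `.half().cuda()` is the assumed difference) looks like this:

```python
from transformers import AutoTokenizer, AutoModel

# CPU-only sketch: keep the model in float32 instead of moving it to CUDA in half precision.
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int8", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b-int8", trust_remote_code=True).float()
model = model.eval()

response, history = model.chat(tokenizer, "你好", history=[])
print(response)
```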
Ayran/DialoGPT-medium-harry-potter-1-through-4-plus-6
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - cnn_dailymail metrics: - rouge model-index: - name: cnn_news_summary_model_new_vs results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: cnn_dailymail type: cnn_dailymail config: 3.0.0 split: train[:3%] args: 3.0.0 metrics: - name: Rouge1 type: rouge value: 0.3565 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cnn_news_summary_model_new_vs This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset. It achieves the following results on the evaluation set: - Loss: 1.6052 - Rouge1: 0.3565 - Rouge2: 0.1448 - Rougel: 0.2502 - Rougelsum: 0.25 - Gen Len: 76.1474 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 431 | 1.6244 | 0.3536 | 0.144 | 0.248 | 0.2479 | 75.1196 | | 1.9199 | 2.0 | 862 | 1.6082 | 0.3534 | 0.1424 | 0.2487 | 0.2486 | 76.1579 | | 1.8198 | 3.0 | 1293 | 1.6052 | 0.3565 | 0.1448 | 0.2502 | 0.25 | 76.1474 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
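A hedged usage sketch for a T5-small news summarizer like the one above; the repo id is taken from the card's model name as a placeholder, and the `summarize:` task prefix is an assumption since the card does not say whether one was used during fine-tuning:

```python
from transformers import pipeline

# Placeholder repo id from the card's model name; replace with the actual hub path.
summarizer = pipeline("summarization", model="cnn_news_summary_model_new_vs")

article = (
    "The city council approved a new budget on Tuesday after months of debate. "
    "The plan increases funding for public transport and road repairs while "
    "keeping property taxes unchanged for the coming fiscal year."
)

# T5 checkpoints are usually prompted with a task prefix; "summarize: " is assumed here.
print(summarizer("summarize: " + article, max_length=60, min_length=10)[0]["summary_text"])
```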
Ayumi/Jovana
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- metrics: - accuracy license: apache-2.0 language: am --- * **DeepSpeed-RLHF** system training: DeepSpeed-HE can switch seamlessly between inference and training modes within RLHF, which lets it use the optimizations from **DeepSpeed-Inference**, such as tensor parallelism and high-performance CUDA operators for language generation, while the training side also benefits from **ZeRO- and LoRA-based** memory optimization strategies. DeepSpeed-HE additionally performs intelligent memory management and data caching across the different RLHF stages. * Train data (English): --data_path Dahoas/rm-static Dahoas/full-hh-rlhf Dahoas/synthetic-instruct-gptj-pairwise yitingxie/rlhf-reward-datasets openai/webgpt_comparisons stanfordnlp/SHP * Train data (Chinese): --data_path wangrui6/Zhihu-KOL Cohere/miracl-zh-queries-22-12 Hello-SimpleAI/HC3-Chinese mkqa-Chinese * The actor model and reward model can be customized, and the RLHF model can also be trained on its own. * **Usage:** git clone https://github.com/microsoft/DeepSpeedExamples cd DeepSpeedExamples/applications/DeepSpeed-Chat pip install -r requirements.txt python chat.py --path Laurie/opt1.3b-deepspeed-chat
Azaghast/DistilBERT-SCP-Class-Classification
[ "pytorch", "distilbert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
42
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6421 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7602 | 1.0 | 2334 | 3.6669 | | 3.653 | 2.0 | 4668 | 3.6472 | | 3.6006 | 3.0 | 7002 | 3.6421 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
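Since the card reports only a validation loss, converting it to perplexity makes the number easier to read, and a generation sketch shows typical use; the repo id is taken from the card's model name as a placeholder:

```python
import math
from transformers import pipeline

# For a causal LM the reported loss is the mean token cross-entropy, so perplexity = exp(loss).
val_loss = 3.6421
print(f"approximate validation perplexity: {math.exp(val_loss):.1f}")  # about 38.2

# Placeholder repo id from the card's model name; replace with the actual hub path.
generator = pipeline("text-generation", model="distilgpt2-finetuned-wikitext2")
print(generator("The history of the city begins", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```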
Azaghast/GPT2-SCP-Descriptions
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
Access to model Xinhhd/autotrain-zhongxin-contest-49402119333 is restricted and you are not in the authorized list. Visit https://huggingface.co/Xinhhd/autotrain-zhongxin-contest-49402119333 to ask for access.
Azaghast/GPT2-SCP-Miscellaneous
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: openrail tags: - anime - style - merge - stable diffusion --- # Ritium is a universal anime model. The goals of working on it were as follows: - To achieve a good result without using the `hires fix`. - To make the model universal, not tied to a specific style. - To avoid using a lot of quality tags, such as "hyper-detailed, photorealistic, ultra realistic", so as not to overload the prompt. - To use a short `negative prompt`. ### The settings are also universal: Set `clip skip 1-2` and `ENSD: 31337`; use the samplers `DPM++ 2M Karras, Euler A, DDIM` with `CFG Scale 6-10` on average, though values above or below may also give good results. ### My `negative prompt`: `(deformed, distorted, disfigured:1.3), EasyNegative, (bad_prompt:0.8), (bad_prompt_version2:0.8), poorly draw, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4) badhandv4` ### The `Embeddings` I use in the `negative prompt`: [badhandv4](https://civitai.com/models/16993/badhandv4-animeillustdiffusion) [easynegative](https://civitai.com/models/7808/easynegative) [bad_prompt](https://huggingface.co/datasets/Nerfgun3/bad_prompt/tree/main)
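If the Ritium checkpoint is converted to the diffusers format, the recommended settings translate roughly to the sketch below; the repo path is a placeholder, `DPMSolverMultistepScheduler` with Karras sigmas stands in for "DPM++ 2M Karras", and the `EasyNegative`/`badhandv4` tokens only take effect if the corresponding textual-inversion embeddings are loaded separately:

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Placeholder path; the card does not state a diffusers-format location for Ritium.
pipe = StableDiffusionPipeline.from_pretrained("path/to/ritium-diffusers", torch_dtype=torch.float16)
# Rough equivalent of the recommended DPM++ 2M Karras sampler.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)
pipe = pipe.to("cuda")

# Embedding tokens such as EasyNegative or badhandv4 need the matching files, e.g.:
# pipe.load_textual_inversion("path/to/EasyNegative.safetensors", token="EasyNegative")
negative = (
    "(deformed, distorted, disfigured:1.3), EasyNegative, (bad_prompt:0.8), "
    "(bad_prompt_version2:0.8), poorly draw, bad anatomy, wrong anatomy, extra limb, "
    "missing limb, floating limbs, (mutated hands and fingers:1.4) badhandv4"
)
image = pipe(
    "1girl, upper body, soft lighting, looking at viewer",
    negative_prompt=negative,
    guidance_scale=7.0,        # middle of the recommended CFG 6-10 range
    num_inference_steps=25,
).images[0]
image.save("ritium_sample.png")
```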
Azura/data
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - generated_from_trainer metrics: - rouge model-index: - name: pegasus_tweet_sum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus_tweet_sum This model is a fine-tuned version of [transformersbook/pegasus-samsum](https://huggingface.co/transformersbook/pegasus-samsum) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3260 - Rouge1: 27.0091 - Rouge2: 9.1616 - Rougel: 19.9548 - Rougelsum: 23.608 - Gen Len: 50.6667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | No log | 1.0 | 44 | 2.2587 | 26.8137 | 8.4052 | 18.7378 | 22.5031 | 48.25 | | No log | 2.0 | 88 | 2.3260 | 27.0091 | 9.1616 | 19.9548 | 23.608 | 50.6667 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Tokenizers 0.13.3
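A hedged usage sketch for a dialogue-summarization Pegasus fine-tune like the one above; the repo id is taken from the card's model name as a placeholder:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder repo id inferred from the card's model name; swap in the actual hub path.
repo_id = "pegasus_tweet_sum"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

dialogue = (
    "Anna: did you see the storm warning for tonight? "
    "Ben: yes, they moved the game to Sunday. "
    "Anna: ok, I'll tell the others."
)
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```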
Azuris/DialoGPT-medium-envy
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class ---
Azuris/DialoGPT-medium-senorita
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: CartPole-v1-Reinforce results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Azuris/DialoGPT-small-envy
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
2023-04-14T09:02:44Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilroberta-base-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-wikitext2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6380 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 5 | 1.9185 | | No log | 2.0 | 10 | 1.7098 | | No log | 3.0 | 15 | 1.6380 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Tokenizers 0.13.3
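A minimal fill-mask sketch for a masked-LM fine-tune like the one above; the repo id is inferred from the card's model name and is an assumption:

```python
from transformers import pipeline

# Hypothetical repo id from the card's model name; replace with the actual hub path.
fill = pipeline("fill-mask", model="distilroberta-base-finetuned-wikitext2")

# RoBERTa-style tokenizers use <mask> as the mask token.
for candidate in fill("The capital of France is <mask>."):
    print(f"{candidate['token_str']!r:>12} {candidate['score']:.3f}")
```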
BSC-LT/roberta-base-bne-capitel-ner
[ "pytorch", "roberta", "token-classification", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "capitel", "ner", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: PixelCopter-Reinforce results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 17.60 +/- 11.51 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
BSC-LT/roberta-base-bne-sqac
[ "pytorch", "roberta", "question-answering", "es", "dataset:BSC-TeMU/SQAC", "arxiv:1907.11692", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "qa", "question answering", "license:apache-2.0", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- license: openrail --- # caulkinum Series Checkpoint (caulkinumシリーズ チェックポイント) *It is also available at CIVITAI (CIVITAIにも置いてあります) <https://civitai.com/user/489> ※ 日本語での説明文は後半にあります ![](./img/00054-1630068329-caulkinumV2_ARCT%20%5B36f5431e82%5D-DPM%2B%2B%202M%20Karras.png) Sample - 作例 (caulkinumV2_ARCT) ``` best quality, high detailed, Faint lips,UC:realistic, cinematic lighting petite 1girl gradient hair yellow eyes, seductive happy school_Uniform, Disney land, warm light sunset, rays light sparkles lens flare deep shadows, depth of field peerless scenery sentimental Negative prompt: title text, signature, watermark, username, artist name EasyNegative NSFW Steps: 15, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1630068329, Size: 640x256, Model hash: 36f5431e82, Model: caulkinumV2_ARCT, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B ``` There are more images in the img folder (imgフォルダには他の画像もあります) <https://huggingface.co/Shiyaku/caulkinumCheckPoint/tree/main/img> ## Overview We did a hierarchical merge with Silicon28 as the base, focusing on the Elysium series. I made many attempts to achieve my preferred style of painting, but I did not keep records, so the detailed recipe was lost. I have decided to leave them here, organizing the remaining results. I will be very happy if someone likes these models. ## Model Variations ### 1. caulkinumV2 series for Anime Style These models are used to generate so-called 2D illustrations. Currently, there are 9 types of models with different styles. Particular emphasis is placed on the expression of eyes and light. #### modern-game-like - [caulkinumV2_AR for modern gamegraphic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_AR.safetensors) - [caulkinumV2_AR2 for modern gamegraphic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_AR2.safetensors) - [caulkinumV2_A4FS for modern gamegraphic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_A4FS.safetensors) #### anime-like - [caulkinumV2_ARCT for Anime](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_ARCT.safetensors) - [caulkinumV2_ARc for Anime](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_ARc.safetensors) - [caulkinumV2_ARNL for Anime](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_ARNL.safetensors) #### cel-artistic - [caulkinumV2_NCA for cel-artistic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_NCA.safetensors) #### Pop Artistic - [caulkinumV2_NLPS for POP Art](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_NCA.safetensors) #### fluffy-beastman-anime-like - [caulkinumV2_FGA for furry](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_FGA.safetensors) ### 2. caulkinumV1_RL series for semi Realistic Style This model is aimed at the borderline between anime and live-action, which is generally referred to as 2.5D. This model also focuses on the expression of eyes and light. #### 2.5D for semi realistic - [caulkinumV1_RL for 2.5D](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV1_RL.safetensors) #### 2D Characters + Realistic Backgrounds - [caulkinumV1_RLCT for VirtualReal](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV1_RLCT.safetensors) ### 3. 
caulkinumV1_S Series for Realistic Style This model is for realistic character generation. We aimed for a JRPG character style, more western than oriental. #### realistic - [caulkinumV1_S8 for Realistic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV1_S8.safetensors) - [caulkinumV1_S4FN for Realistic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV1_S4FN.safetensors) ## 概要 Silicon28を根底としてElysiumシリーズを中心に階層マージしました。 好みの画風となるように試行錯誤を繰り返しましたが、記録を取っていなかったので詳細なレシピは失われました。 私は残った成果を整理しつつ、この場所へ残しておくことにしました。 これらのモデルを誰かが気に入ってくれたなら私はとても幸せでしょう。 ## モデルバリエーション ### 1. caulkinumV2シリーズ for Anime Style いわゆる2Dイラストを生成するためのモデルです。 現在は画風違いで9種あります。 特に瞳と光の表現に力を入れています。 #### モダン・ゲーム的 - [caulkinumV2_AR for modern gamegraphic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_AR.safetensors) - [caulkinumV2_AR2 for modern gamegraphic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_AR2.safetensors) - [caulkinumV2_A4FS for modern gamegraphic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_A4FS.safetensors) #### アニメ的 - [caulkinumV2_ARCT for Anime](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_ARCT.safetensors) - [caulkinumV2_ARc for Anime](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_ARc.safetensors) - [caulkinumV2_ARNL for Anime](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_ARNL.safetensors) #### セル画的 - [caulkinumV2_NCA for cel-artistic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_NCA.safetensors) #### ポップアート的 - [caulkinumV2_NLPS for POP Art](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_NCA.safetensors) #### もふもふ獣人アニメ的 - [caulkinumV2_FGA for furry](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV2_FGA.safetensors) ### 2. caulkinumV1_RLシリーズ for semi Realistic Style 一般的に2.5Dと言われるような、アニメと実写の境目を目指したモデルです。 こちらも瞳と光の表現に力を入れています。 #### 2.5D 半写実的 - [caulkinumV1_RL for 2.5D](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV1_RL.safetensors) #### 2Dキャラクター + リアル背景 - [caulkinumV1_RLCT for VirtualReal Style](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV1_RLCT.safetensors) ### 3. caulkinumV1_Sシリーズ for Realistic Style 写実的なキャラクター生成を行うためのモデルです。 東洋的よりは西洋寄り、JRPGキャラクター的な方向性を目指しました。 #### 写実的 - [caulkinumV1_S8 for Realistic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV1_S8.safetensors) - [caulkinumV1_S4FN for Realistic](https://huggingface.co/Shiyaku/caulkinumCheckPoint/blob/main/model/caulkinumV1_S4FN.safetensors) ## Illustration Sample (サンプルイラスト) Note: N2_CA renamed to V2_NCA ※ N2_CA は V2_NCA に名前変更しました ![](./img/tmp37_g1a63.png) ![](./img/tmp5fjy7hi2.png) ![](./img/tmpq8k5vcs1.png) ![](./img/tmpz8xxghst.png) ![](./img/tmpt3u7mq8u.png) ![](./img/V2_AR-1.png) ![](./img/V1_RL-1.png) ![](./img/EV1ChS8S4.png) ![](./img/ChS8S4-1.png) ![](./img/ChS8S4-3.png) ![](./img/ar2arctncafga.png) ![](./img/2c4FSNLNCPS-2.png) ## Turn to the Afterword. (あとがきに変えて) ### Donation (寄付) Coffee will increase work efficiency and motivation. コーヒーは作業効率を高めモチベーションをアップさせるでしょう。 <https://ko-fi.com/489489> ### thanks (謝辞) Thank you for reading to the end. I hope one of you will use it. We would also like to thank all those involved in SD development and those who developed the models for the merge. 
最後まで読んでくれてありがとうございます。 使用してくれるか方が一人でもいることを祈っています。 また、SD開発に関わる全ての方、マージ用のモデルを開発した方々に感謝いたします。 ### Contact (連絡先) #### Twitter <https://twitter.com/Shiyaku> #### pixiv <https://www.pixiv.net/users/63951151> #### civitai <https://civitai.com/user/489> Note: I have posted many images on pixiv and civitai for your reference. ※ pixivとcivitaiには沢山の画像を投稿しているので参考にしてください
Barleysack/AERoberta2
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: EElayoutlmv3_rvl-cdip_single_10_2023-04-14 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EElayoutlmv3_rvl-cdip_single_10_2023-04-14 This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 16.6313 - Accuracy: {'accuracy': 0.1} - Exit 0 Accuracy: 0.1062 - Exit 1 Accuracy: 0.0625 - Exit 2 Accuracy: 0.0813 - Exit 3 Accuracy: 0.0625 - Exit 4 Accuracy: 0.0625 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy | |:-------------:|:-----:|:----:|:---------------:|:-----------------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:| | No log | 1.0 | 160 | 16.6313 | {'accuracy': 0.1} | 0.1062 | 0.0625 | 0.0813 | 0.0625 | 0.0625 | ### Framework versions - Transformers 4.28.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.9.0 - Tokenizers 0.12.1
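The early-exit heads described in the card almost certainly need the authors' custom modelling code; the sketch below only shows how the underlying LayoutLMv3 document classifier is normally driven, with the base checkpoint, 16 RVL-CDIP classes, and Tesseract OCR via the processor all being assumptions:

```python
from PIL import Image
from transformers import LayoutLMv3Processor, LayoutLMv3ForSequenceClassification

# Base-architecture sketch only; the card's exit heads are not part of this class.
# apply_ocr=True requires pytesseract and the Tesseract binary to be installed.
processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = LayoutLMv3ForSequenceClassification.from_pretrained("microsoft/layoutlmv3-base", num_labels=16)

image = Image.open("scanned_page.png").convert("RGB")
encoding = processor(image, return_tensors="pt")  # runs OCR to extract words and boxes
logits = model(**encoding).logits                 # one logit per document class
print(logits.argmax(-1).item())
```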
BeIR/query-gen-msmarco-t5-base-v1
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
1,816
null
--- license: cc-by-nc-nd-4.0 language: - en pipeline_tag: text-to-image tags: - stable-diffusion - text-to-image - artistic --- The Druuna-inspired LoRA has been trained on the beautiful and unique art style of the Druuna graphic novel. It is capable of producing stunning images that capture the sensual and erotic atmosphere of the source material, with intricate details of the human form and dramatic, post-apocalyptic backgrounds. However, it is important to note that this model is intended for non-commercial use only. Any use of this model for commercial purposes without proper authorization and legal advice may infringe on the intellectual property rights of the original creators of the Druuna graphic novel. Therefore, it is strongly recommended that you do not use this model for any commercial purposes. In addition, it has been extensively tested with multiple SD 1.5 checkpoints and has been found to work flawlessly. ![00010.png](https://s3.amazonaws.com/moonup/production/uploads/642b041e1fde05202fc42d5d/_gf4k9WnpWb6cTEMofJVJ.png) ![00011.png](https://s3.amazonaws.com/moonup/production/uploads/642b041e1fde05202fc42d5d/lCxyOfP3GcCG6VXn1xSTb.png) ![00012.png](https://s3.amazonaws.com/moonup/production/uploads/642b041e1fde05202fc42d5d/fyH6yG6yHKZus5cmOX_30.png)
Bhuvana/t5-base-spellchecker
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
93
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: dhanunjaya/GPT-2-finetuned-abstracts results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # dhanunjaya/GPT-2-finetuned-abstracts This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.4234 - Validation Loss: 2.2189 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'ExponentialDecay', 'config': {'initial_learning_rate': 0.0005, 'decay_steps': 500, 'decay_rate': 0.95, 'staircase': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.4234 | 2.2189 | 0 | ### Framework versions - Transformers 4.26.1 - TensorFlow 2.12.0 - Datasets 2.10.1 - Tokenizers 0.13.2
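A hedged TensorFlow generation sketch for this Keras fine-tune; the model id is the one in the card, while reusing the unmodified `distilgpt2` tokenizer is an assumption:

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

# Tokenizer assumed unchanged from the distilgpt2 base; the model id comes from the card.
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = TFAutoModelForCausalLM.from_pretrained("dhanunjaya/GPT-2-finetuned-abstracts")

inputs = tokenizer("We propose a novel method for", return_tensors="tf")
outputs = model.generate(inputs["input_ids"], max_new_tokens=60, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```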
BigSalmon/GPTHeHe
[ "pytorch", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2023-04-14T10:59:06Z
--- tags: - autotrain - text-classification language: - de widget: - text: "I love AutoTrain 🤗" datasets: - BIDEQUITY/autotrain-data-bidequity-page-classifier co2_eq_emissions: emissions: 1.316894718008702 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 49442119390 - CO2 Emissions (in grams): 1.3169 ## Validation Metrics - Loss: 0.416 - Accuracy: 0.870 - Macro F1: 0.781 - Micro F1: 0.870 - Weighted F1: 0.864 - Macro Precision: 0.888 - Micro Precision: 0.870 - Weighted Precision: 0.878 - Macro Recall: 0.752 - Micro Recall: 0.870 - Weighted Recall: 0.870 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/BIDEQUITY/autotrain-bidequity-page-classifier-49442119390 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("BIDEQUITY/autotrain-bidequity-page-classifier-49442119390", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("BIDEQUITY/autotrain-bidequity-page-classifier-49442119390", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
BigSalmon/GPTNeo350MInformalToFormalLincoln
[ "pytorch", "gpt_neo", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
---
tags:
- autotrain
- summarization
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ishajo/autotrain-data-beproj_meeting_summarization_usingt5
co2_eq_emissions:
  emissions: 1.917540145828255
---

# Model Trained Using AutoTrain

- Problem type: Summarization
- Model ID: 49444119396
- CO2 Emissions (in grams): 1.9175

## Validation Metrics

- Loss: 1.830
- Rouge1: 24.863
- Rouge2: 12.063
- RougeL: 20.488
- RougeLsum: 23.375
- Gen Len: 19.000

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ishajo/autotrain-beproj_meeting_summarization_usingt5-49444119396
```
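For programmatic use, a minimal sketch with the transformers pipeline API (the repo id is taken from the inference URL above; the input text is illustrative):

```python
from transformers import pipeline

# Repo id inferred from the inference endpoint above.
summarizer = pipeline(
    "summarization",
    model="ishajo/autotrain-beproj_meeting_summarization_usingt5-49444119396",
)
text = "The team met to review the Q2 roadmap and agreed to move the launch to July."
print(summarizer(text, max_length=40, min_length=5)[0]["summary_text"])
```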
BigSalmon/MrLincoln10
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2023-04-14T11:37:27Z
---
inference: false
language:
- bg
license: mit
tags:
- torch
---

# LLaMA-7B

This repo contains a low-rank adapter for LLaMA-7B trained on a Bulgarian dataset, using the LoRA method introduced in [this paper](https://arxiv.org/abs/2106.09685).

## Model description

The training data is a private Bulgarian dataset.

## Intended uses & limitations

This is an instruction-following model similar to ChatGPT, but in Bulgarian.

### How to use

Here is how to use this model:

```bash
git clone https://github.com/tloen/alpaca-lora.git
cd alpaca-lora
pip install -r requirements.txt

python generate.py \
    --load_8bit \
    --base_model 'yahma/llama-7b-hf' \
    --lora_weights 'rmihaylov/alpaca-lora-bg-7b' \
    --share_gradio
```

This will download both the base model and the adapter from the Hugging Face Hub, then launch a Gradio interface for chatting.

Example using this model: [Colab](https://colab.research.google.com/drive/1IPz8QqOa5ZUBz7ZyXE4hhh7XwMEH-D9S?usp=sharing). You need Colab Pro because the model requires a lot of RAM to load.

### Interface

![](example.jpg)
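For loading the adapter programmatically rather than through generate.py, a minimal sketch with PEFT (both repo ids come from the command above; 8-bit loading assumes bitsandbytes is installed, and the raw prompt is for brevity only):

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

# Base model and adapter ids are the ones used in the CLI command above.
tokenizer = LlamaTokenizer.from_pretrained("yahma/llama-7b-hf")
base = LlamaForCausalLM.from_pretrained(
    "yahma/llama-7b-hf",
    load_in_8bit=True,       # requires bitsandbytes
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "rmihaylov/alpaca-lora-bg-7b")

# generate.py wraps prompts in an Alpaca-style template; a raw prompt is
# shown here only for brevity. ("Which is the capital of Bulgaria?")
inputs = tokenizer("Коя е столицата на България?", return_tensors="pt").to(base.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```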
BigSalmon/MrLincoln6
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
---
datasets:
- GreeneryScenery/SheepsDiffusionNet
- poloclub/diffusiondb
pipeline_tag: image-to-image
tags:
- art
- ControlNet
---

# V5

🤗 Try it [here](https://replicate.com/greeneryscenery/sheeps-controlnet-sketch-2-image)

<img src = 'https://huggingface.co/GreeneryScenery/SheepsControlV5/resolve/main/overview.png'>
BigSalmon/NEO125InformalToFormalLincoln
[ "pytorch", "gpt_neo", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
---
datasets:
- IlyaGusev/ru_turbo_alpaca
- IlyaGusev/ru_turbo_saiga
- IlyaGusev/oasst1_ru_main_branch
- IlyaGusev/ru_sharegpt_cleaned
language:
- ru
pipeline_tag: conversational
license: cc-by-4.0
---

# Saiga 13B, Russian LLaMA-based chatbot

Based on [LLaMA 13B](https://huggingface.co/huggyllama/llama-13b).

* This is an adapter-only version.

llama.cpp version: [link](https://huggingface.co/IlyaGusev/saiga_13b_lora_llamacpp/)

Colab: [link](https://colab.research.google.com/drive/1WuoSlKMdGB-D_OQrHOFta13Ph1--Eq7L)

Training code: [link](https://github.com/IlyaGusev/rulm/tree/master/self_instruct)

```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

MODEL_NAME = "IlyaGusev/saiga_13b_lora"
DEFAULT_MESSAGE_TEMPLATE = "<s>{role}\n{content}</s>\n"
DEFAULT_SYSTEM_PROMPT = "Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им."

class Conversation:
    def __init__(
        self,
        message_template=DEFAULT_MESSAGE_TEMPLATE,
        system_prompt=DEFAULT_SYSTEM_PROMPT,
        start_token_id=1,
        bot_token_id=9225
    ):
        self.message_template = message_template
        self.start_token_id = start_token_id
        self.bot_token_id = bot_token_id
        self.messages = [{
            "role": "system",
            "content": system_prompt
        }]

    def get_start_token_id(self):
        return self.start_token_id

    def get_bot_token_id(self):
        return self.bot_token_id

    def add_user_message(self, message):
        self.messages.append({
            "role": "user",
            "content": message
        })

    def add_bot_message(self, message):
        self.messages.append({
            "role": "bot",
            "content": message
        })

    def get_prompt(self, tokenizer):
        final_text = ""
        for message in self.messages:
            message_text = self.message_template.format(**message)
            final_text += message_text
        final_text += tokenizer.decode([self.start_token_id, self.bot_token_id])
        return final_text.strip()


def generate(model, tokenizer, prompt, generation_config):
    data = tokenizer(prompt, return_tensors="pt")
    data = {k: v.to(model.device) for k, v in data.items()}
    output_ids = model.generate(
        **data,
        generation_config=generation_config
    )[0]
    output_ids = output_ids[len(data["input_ids"][0]):]
    output = tokenizer.decode(output_ids, skip_special_tokens=True)
    return output.strip()


config = PeftConfig.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto"
)
model = PeftModel.from_pretrained(
    model,
    MODEL_NAME,
    torch_dtype=torch.float16
)
model.eval()

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
generation_config = GenerationConfig.from_pretrained(MODEL_NAME)
print(generation_config)

inputs = ["Почему трава зеленая?", "Сочини длинный рассказ, обязательно упоминая следующие объекты. Дано: Таня, мяч"]
for inp in inputs:
    conversation = Conversation()
    conversation.add_user_message(inp)
    prompt = conversation.get_prompt(tokenizer)
    output = generate(model, tokenizer, prompt, generation_config)
    print(inp)
    print(output)
    print()
    print("==============================")
    print()
```

Examples:
```
User: Почему трава зеленая?
Saiga: Зеленый цвет у растений обусловлен наличием хлорофилла в их листьях. Хлорофилл - это пигмент, который отвечает за фотосинтез, то есть процесс преобразования солнечной энергии в химическую энергию. Он находится в специальных органелах, называемых хлоропластами, которые имеются только у растительных клеток.
Когда свет попадает на листья, хлорофилл переходит из состояния "растворенного" в "состояние связанное", что позволяет ему использовать энергию света для синтеза сахаров. Этот процесс происходит в хлоропластах и приводит к образованию кислорода и углекислого газа. В результате этого процесса листья становятся зелеными.
```

```
User: Сочини длинный рассказ, обязательно упоминая следующие объекты. Дано: Таня, мяч
Saiga: Татьяна была очень увлеченной девочкой, которая всегда любила играть в футбол. Она была настоящей звездой на поле, и ее команда всегда побеждала благодаря ее таланту и энергии. Однажды, когда Татьяна играла в футбол со своими друзьями, она случайно потеряла свой мяч. Она была очень расстроена, так как это был ее любимый мяч, который она использовала уже несколько лет. Но несмотря на то, что она была расстроена, Татьяна решила продолжить игру. Она начала искать свой мяч, но ничего не нашла. В конце концов, она решила вернуться домой и попросить помощи у своих родителей. Когда Татьяна пришла домой, она рассказала своим родителям о том, что произошло. Они были очень сочувствующими и предложили ей найти новый мяч. Татьяна была очень благодарна своим родителям за их помощь, и они отправились вместе в магазин спортивного инвентаря. Там они выбрали для нее новый мяч, который был почти такой же, как старый. После того, как Татьяна получила новый мяч, она вернулась на поле и продолжила играть. Но теперь она знала, что ее любовь к игре не зависит от одного мяча - она могла играть и без него. Также, Татьяна поняла, что ее родители всегда будут рядом ей, чтобы помочь ей в трудные моменты. И она была очень благодарна им за это. Таким образом, Татьяна научилась жить без своего старого мяча, и она стала более сильной и уверенной в себе после этого случая. Ее любовь к игре только укрепилась, и она продолжала играть в футбол каждую свободную минуту.
```

```
User: Реши уравнение 4x + 5 = 21
Saiga: Для решения данного уравнения необходимо выполнить следующие шаги: 1. Перенесите все члены в левую часть уравнения на другую сторону равенства: 4x + 5 = 21 2. Отмените оба члена, переведя их в правую часть уравнения: 4x = 21 - 5 3. Разделите оба члена на общее деление: 4x / 4 = (21 - 5) / 4 4x / 4 = 5 - 5/4 4x / 4 = 0 4x = 0 4x = 0 * 4 4x = 0 Ответ: x = 0.
```

```
User: Заполни пропуски в предложении. Дано: Я пытался ____ от маньяка, но он меня настиг
Saiga: Я пытался убежать от маньяка, но он меня настиг.
```

v1:
- revision bd4823198d443ab0a18be3768c9985de6e249d43
- wandb [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/rue8m51s/overview)
- 4 datasets: ru_turbo_alpaca, ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch
- Datasets merging script: [create_chat_set.py](https://github.com/IlyaGusev/rulm/blob/ef58f3d82d6e7b3784d42167ff69188d3766ab61/self_instruct/src/data_processing/create_chat_set.py)
- Loss: 0.870
- Context length: 2000
- Conversational template: `"<s>{role}\n{content}</s>"`
- Possible roles: `["system", "user", "bot"]`
- System prompt: `"Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им."`
BigSalmon/ParaphraseParentheses2.0
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 276.50 +/- 97.06
      name: mean_reward
      verified: false
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):

```bash
pip install rl_zoo3
```

```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Entj -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:

```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Entj -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)

```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Entj
```

## Hyperparameters

```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 400000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```
BigSalmon/PhraseBerta
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# bert-finetuned-ner

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0606
- Precision: 0.9368
- Recall: 0.9527
- F1: 0.9447
- Accuracy: 0.9868

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0902        | 1.0   | 1756 | 0.0705          | 0.9118    | 0.9342 | 0.9229 | 0.9814   |
| 0.0349        | 2.0   | 3512 | 0.0617          | 0.9343    | 0.9519 | 0.9430 | 0.9863   |
| 0.0166        | 3.0   | 5268 | 0.0606          | 0.9368    | 0.9527 | 0.9447 | 0.9868   |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
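The usage sections above are empty; a minimal inference sketch with the token-classification pipeline (the Hub id below is a placeholder built from the card's model name, since the card does not give the full namespaced repository path):

```python
from transformers import pipeline

# "bert-finetuned-ner" is the name from the card's model-index; the actual
# namespaced Hub id is not stated and would need to be filled in.
ner = pipeline(
    "token-classification",
    model="bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hugging Face was founded in New York City."))
```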
BigSalmon/T52
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
8
null
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: polish_transliterator_T5
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# polish_transliterator_T5

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0705
- Rouge1: 15.1042
- Rouge2: 0.0
- Rougel: 15.1042
- Rougelsum: 15.625
- Gen Len: 4.0938

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2 | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 3.0242        | 1.0   | 572   | 1.8076          | 3.5937  | 0.0    | 3.75    | 3.75      | 1.25    |
| 2.8296        | 2.0   | 1144  | 1.6997          | 4.6875  | 0.0    | 4.6875  | 4.6875    | 0.7031  |
| 2.4707        | 3.0   | 1716  | 1.5717          | 6.0417  | 0.0    | 6.25    | 6.3542    | 1.1719  |
| 2.4367        | 4.0   | 2288  | 1.4617          | 6.4062  | 0.0    | 6.875   | 6.875     | 0.9688  |
| 2.296         | 5.0   | 2860  | 1.3847          | 8.4375  | 0.0    | 8.125   | 8.4375    | 1.3906  |
| 2.0905        | 6.0   | 3432  | 1.3177          | 8.4375  | 0.0    | 8.125   | 8.4375    | 1.9688  |
| 1.8223        | 7.0   | 4004  | 1.2645          | 9.375   | 0.0    | 9.375   | 9.375     | 2.3125  |
| 1.6881        | 8.0   | 4576  | 1.2157          | 10.625  | 0.0    | 10.625  | 10.9375   | 2.7969  |
| 1.6655        | 9.0   | 5148  | 1.1841          | 12.5    | 0.0    | 12.2917 | 12.5      | 3.1562  |
| 1.5736        | 10.0  | 5720  | 1.1582          | 13.4896 | 0.0    | 13.3333 | 13.3333   | 3.25    |
| 1.4754        | 11.0  | 6292  | 1.1382          | 13.4896 | 0.0    | 13.3333 | 13.3333   | 3.6562  |
| 1.4927        | 12.0  | 6864  | 1.1176          | 13.4896 | 0.0    | 13.3333 | 13.3333   | 4.1406  |
| 1.3628        | 13.0  | 7436  | 1.1069          | 13.4896 | 0.0    | 13.3333 | 13.3333   | 4.1719  |
| 1.3288        | 14.0  | 8008  | 1.0968          | 13.4896 | 0.0    | 13.3333 | 13.3333   | 4.2344  |
| 1.313         | 15.0  | 8580  | 1.0889          | 14.7917 | 0.0    | 14.7917 | 15.1042   | 4.2188  |
| 1.3215        | 16.0  | 9152  | 1.0820          | 14.7917 | 0.0    | 14.7917 | 15.1042   | 4.2188  |
| 1.2772        | 17.0  | 9724  | 1.0771          | 14.7917 | 0.0    | 14.7917 | 15.1042   | 4.2188  |
| 1.1895        | 18.0  | 10296 | 1.0735          | 15.1042 | 0.0    | 15.1042 | 15.625    | 4.0938  |
| 1.3394        | 19.0  | 10868 | 1.0712          | 15.1042 | 0.0    | 15.1042 | 15.625    | 4.0938  |
| 1.2656        | 20.0  | 11440 | 1.0705          | 15.1042 | 0.0    | 15.1042 | 15.625    | 4.0938  |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
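For completeness, a minimal seq2seq inference sketch (the card names the model only as polish_transliterator_T5, so the Hub id below is a placeholder, and the input/output convention of restoring Polish diacritics is assumed from the model name):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder Hub id; the card does not state the full namespaced path.
repo_id = "polish_transliterator_T5"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

# Assumed convention: ASCII-folded Polish in, diacritics restored out.
ids = tokenizer("zazolc gesla jazn", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=16)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```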
BigSalmon/TS3
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible", "has_space" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PolicyGradientPixelcopter
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 20.60 +/- 13.29
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
BigSalmon/prepositions
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: resnet-152-finetuned_resnet152-adam-optimizer5e-4-autotags
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9304761904761905
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# resnet-152-finetuned_resnet152-adam-optimizer5e-4-autotags

This model is a fine-tuned version of [microsoft/resnet-152](https://huggingface.co/microsoft/resnet-152) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2399
- Accuracy: 0.9305

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.4009        | 0.99  | 65   | 2.1414          | 0.3971   |
| 0.9201        | 1.99  | 130  | 0.8123          | 0.7210   |
| 0.7575        | 2.99  | 195  | 0.5730          | 0.8124   |
| 0.4792        | 3.99  | 260  | 0.4166          | 0.8648   |
| 0.4253        | 4.99  | 325  | 0.3811          | 0.8810   |
| 0.3331        | 5.99  | 390  | 0.4290          | 0.8705   |
| 0.2347        | 6.99  | 455  | 0.4600          | 0.8952   |
| 0.1732        | 7.99  | 520  | 0.3018          | 0.8924   |
| 0.1777        | 8.99  | 585  | 0.4851          | 0.8914   |
| 0.1298        | 9.99  | 650  | 0.2941          | 0.92     |
| 0.1164        | 10.99 | 715  | 0.3915          | 0.9095   |
| 0.1284        | 11.99 | 780  | 0.3701          | 0.9152   |
| 0.0986        | 12.99 | 845  | 0.3416          | 0.9171   |
| 0.0944        | 13.99 | 910  | 0.3145          | 0.9210   |
| 0.0929        | 14.99 | 975  | 0.2677          | 0.9229   |
| 0.1014        | 15.99 | 1040 | 0.2745          | 0.9295   |
| 0.0971        | 16.99 | 1105 | 0.2932          | 0.9267   |
| 0.0691        | 17.99 | 1170 | 0.2174          | 0.9333   |
| 0.0557        | 18.99 | 1235 | 0.2233          | 0.9324   |
| 0.06          | 19.99 | 1300 | 0.2399          | 0.9305   |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.11.0
- Tokenizers 0.13.2
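The card leaves usage empty; a minimal image-classification sketch (the Hub id is a placeholder from the card's model name, without the owner's namespace, and the input image is illustrative):

```python
from transformers import pipeline

# Placeholder id from the card's model name; prepend the owner's namespace.
clf = pipeline(
    "image-classification",
    model="resnet-152-finetuned_resnet152-adam-optimizer5e-4-autotags",
)

# Accepts a local path, a URL, or a PIL image.
for pred in clf("example.jpg")[:3]:
    print(pred["label"], round(pred["score"], 4))
```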
BigTooth/DialoGPT-Megumin
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16
null
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: -150.26 +/- 82.78
      name: mean_reward
      verified: false
---

# PPO Agent Playing LunarLander-v2

This is a trained model of a PPO agent playing LunarLander-v2.

# Hyperparameters

```python
{'exp_name': 'ppo'
 'seed': 1
 'torch_deterministic': True
 'cuda': True
 'track': False
 'wandb_project_name': 'cleanRL'
 'wandb_entity': None
 'capture_video': False
 'env_id': 'LunarLander-v2'
 'total_timesteps': 50000
 'learning_rate': 0.0003
 'num_envs': 16
 'num_steps': 1024
 'anneal_lr': True
 'gae': True
 'gamma': 0.999
 'gae_lambda': 0.98
 'num_minibatches': 4
 'update_epochs': 4
 'norm_adv': True
 'clip_coef': 0.2
 'clip_vloss': True
 'ent_coef': 0.01
 'vf_coef': 0.5
 'max_grad_norm': 0.5
 'target_kl': None
 'repo_id': 'abarekatain/LunarLander-v2'
 'batch_size': 16384
 'minibatch_size': 4096}
```
BigTooth/DialoGPT-small-tohru
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
Access to model UnipaPolitoUnimore/covid_mild_diffuser is restricted and you are not in the authorized list. Visit https://huggingface.co/UnipaPolitoUnimore/covid_mild_diffuser to ask for access.
Blerrrry/Kkk
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cppe-5
model-index:
- name: detr-resnet-50_finetuned_cppe5
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# detr-resnet-50_finetuned_cppe5

This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
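Since usage is not documented, a minimal object-detection sketch (the Hub id is a placeholder from the card's model name; DETR checkpoints additionally require the timm package, and the input image is illustrative):

```python
from transformers import pipeline

# Placeholder id from the card's model name; prepend the owner's namespace.
detector = pipeline("object-detection", model="detr-resnet-50_finetuned_cppe5")

# CPPE-5 covers medical PPE categories (coveralls, face shields, gloves,
# goggles, masks).
for det in detector("worksite.jpg"):
    print(det["label"], round(det["score"], 3), det["box"])
```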
BonjinKim/dst_kor_bert
[ "pytorch", "jax", "bert", "pretraining", "transformers" ]
null
{ "architectures": [ "BertForPreTraining" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2023-04-14T12:58:50Z
---
license: apache-2.0
---

# 2023/4/14

- The new models trained for short titles (uppercase).
- Update vocab/AIO_label.vocab
- Update src_python/AIONER/postprocessing.py
- Train files: Train_tmvar1-AIO-0414.conll, Train_tmvar3-AIO-0414.conll
Brykee/DialoGPT-medium-Morty
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
2023-04-14T13:36:05Z
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---

# LoRA text2image fine-tuning - lshenae/pokemon-lora

These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images in the following.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)
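A minimal diffusers sketch for applying these weights (this assumes the repo hosts standard LoRA attention weights as produced by the diffusers text-to-image LoRA training script, and that a CUDA GPU is available):

```python
import torch
from diffusers import StableDiffusionPipeline

# Base model and LoRA repo are the ones named in the card above.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.unet.load_attn_procs("lshenae/pokemon-lora")  # attach the LoRA weights
pipe.to("cuda")

image = pipe("a cute green pokemon with red eyes").images[0]
image.save("pokemon.png")
```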
BumBelDumBel/ZORK_AI_SCIFI
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
2023-04-14T13:44:01Z
---
license: creativeml-openrail-m
language:
- en
tags:
- Stable Diffusion
- text-to-image
---

<img src=https://huggingface.co/rim0/dreambox-mix-P/resolve/main/images/01158-2534408105.png>

- [V2.0 examples](#v20-examples)
- [V2.1 examples](#v21-examples)

Updates/2023.5.7: Uploaded dreamboxmix-P-v2.0-vae

# Updates

* added noise offset
* better effect
* fix clip

# Recommended Parameter settings

* clip skip: 2
* Sampler: DPM++ SDE Karras
* used [EasyNegative](https://civitai.com/models/7808/easynegative) in NEGATIVE prompt

# About V2.1

* Compared with 2.0, the picture composition of 2.1 is more stable, but the performance is weaker than that of 2.0.

# V2.0 examples

<img src=https://huggingface.co/rim0/dreambox-mix-P/resolve/main/images/00293-3062384056.png>
<img src=https://huggingface.co/rim0/dreambox-mix-P/resolve/main/images/00421-3062384063.png>
<img src=https://huggingface.co/rim0/dreambox-mix-P/resolve/main/images/00389-1879278384.png>
<img src=https://huggingface.co/rim0/dreambox-mix-P/resolve/main/images/00394-1879278384.png>
<img src=https://huggingface.co/rim0/dreambox-mix-P/resolve/main/images/00345-787073236.png>
<img src=https://huggingface.co/rim0/dreambox-mix-P/resolve/main/images/00675-4003135891.png>
<img src=https://huggingface.co/rim0/dreambox-mix-P/resolve/main/images/00653-3722160125.png>
<img src=https://huggingface.co/rim0/dreambox-mix-P/resolve/main/images/tmpqh1elw57.png>
<img src=https://huggingface.co/rim0/dreambox-mix-P/resolve/main/images/tmpjy2yvsc_.png>
<img src=https://huggingface.co/rim0/dreambox-mix-P/resolve/main/images/tmpbmeb3siu.png>
<img src=https://huggingface.co/rim0/dreambox-mix-P/resolve/main/images/00291-3971651375.png>
<img src=https://huggingface.co/rim0/dreambox-mix-P/resolve/main/images/00299-298724055.png>
<img src=https://huggingface.co/rim0/dreambox-mix-P/resolve/main/images/00377-2175492882.png>
<img src=https://huggingface.co/rim0/dreambox-mix-P/resolve/main/images/00496-2175492885.png>
<img src=https://huggingface.co/rim0/dreambox-mix-P/resolve/main/images/00483-3167056862.png>
<img src=https://huggingface.co/rim0/dreambox-mix-P/resolve/main/images/00600-2925468646.png>
<img src=https://huggingface.co/rim0/dreambox-mix-P/resolve/main/images/00310-936365577.png>

# V2.1 examples

<img src=https://huggingface.co/rim0/dreambox-mix-P/resolve/main/images/xyz_grid-0084-2009763082.png>
<img src=https://huggingface.co/rim0/dreambox-mix-P/resolve/main/images/xyz_grid-0079-2090762385.png>
<img src=https://huggingface.co/rim0/dreambox-mix-P/resolve/main/images/xyz_grid-0074-3586713131.png>
<img src=https://huggingface.co/rim0/dreambox-mix-P/resolve/main/images/xyz_grid-0081-2804914475.png>
<img src=https://huggingface.co/rim0/dreambox-mix-P/resolve/main/images/xyz_grid-0080-1546685535.png>

If you like this model you can support me [here](https://ko-fi.com/rimg0).

- [Back to top](#updates)
BunakovD/sd
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: distilbert-base-uncased-PINA-dfnew-2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-PINA-dfnew-2

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3815
- Accuracy: 0.9106
- Precision: 0.7799
- Recall: 0.7804

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|
| 1.5008        | 1.0   | 1002  | 0.6541          | 0.8482   | 0.6999    | 0.6173 |
| 0.4599        | 2.0   | 2004  | 0.4240          | 0.9004   | 0.7739    | 0.7641 |
| 0.2458        | 3.0   | 3006  | 0.3815          | 0.9106   | 0.7799    | 0.7804 |
| 0.1549        | 4.0   | 4008  | 0.3817          | 0.9206   | 0.8114    | 0.8064 |
| 0.0977        | 5.0   | 5010  | 0.4187          | 0.9194   | 0.8118    | 0.8031 |
| 0.0662        | 6.0   | 6012  | 0.4207          | 0.9213   | 0.8109    | 0.8085 |
| 0.0454        | 7.0   | 7014  | 0.4361          | 0.9226   | 0.8276    | 0.8199 |
| 0.0314        | 8.0   | 8016  | 0.4562          | 0.9233   | 0.8288    | 0.8209 |
| 0.023         | 9.0   | 9018  | 0.4657          | 0.9221   | 0.8272    | 0.8192 |
| 0.0185        | 10.0  | 10020 | 0.4620          | 0.9226   | 0.8278    | 0.8191 |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:1905.05700", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
42
2023-04-14T13:52:55Z
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: CSerdar014191/gpt2-medium_test06_tuner
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You
should probably proofread and complete it, then remove this comment. -->

# CSerdar014191/gpt2-medium_test06_tuner

This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0421
- Validation Loss: 0.8691
- Epoch: 0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0421     | 0.8691          | 0     |

### Framework versions

- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
73
null
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 500.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
CAMeL-Lab/bert-base-arabic-camelbert-ca
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
580
null
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: AntBulletEnv-v0
      type: AntBulletEnv-v0
    metrics:
    - type: mean_reward
      value: 1523.63 +/- 345.60
      name: mean_reward
      verified: false
---

# **A2C** Agent playing **AntBulletEnv-v0**

This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The card does not name the Hub repository or checkpoint filename; both
# below are placeholders to be replaced with the actual values.
checkpoint = load_from_hub(
    repo_id="<user>/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",
)
model = A2C.load(checkpoint)
```
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
null
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: resnet-152-finetuned_resnet152-adam-optimizere-2-autotags
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8980952380952381
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# resnet-152-finetuned_resnet152-adam-optimizere-2-autotags

This model is a fine-tuned version of [microsoft/resnet-152](https://huggingface.co/microsoft/resnet-152) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4368
- Accuracy: 0.8981

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4424        | 0.99  | 65   | 1.7123          | 0.56     |
| 1.6053        | 1.99  | 130  | 2.0613          | 0.3152   |
| 1.3795        | 2.99  | 195  | 1.3791          | 0.5552   |
| 0.9701        | 3.99  | 260  | 0.9195          | 0.7038   |
| 0.8258        | 4.99  | 325  | 0.9107          | 0.7067   |
| 0.7619        | 5.99  | 390  | 0.9915          | 0.6867   |
| 0.6241        | 6.99  | 455  | 0.7895          | 0.76     |
| 0.497         | 7.99  | 520  | 0.6616          | 0.8038   |
| 0.4709        | 8.99  | 585  | 0.5282          | 0.8543   |
| 0.394         | 9.99  | 650  | 0.5447          | 0.8429   |
| 0.343         | 10.99 | 715  | 0.5108          | 0.8486   |
| 0.3482        | 11.99 | 780  | 0.5224          | 0.8505   |
| 0.2576        | 12.99 | 845  | 0.4796          | 0.8743   |
| 0.1837        | 13.99 | 910  | 0.5008          | 0.8571   |
| 0.1904        | 14.99 | 975  | 0.4366          | 0.8790   |
| 0.1458        | 15.99 | 1040 | 0.4320          | 0.8990   |
| 0.1575        | 16.99 | 1105 | 0.4059          | 0.8952   |
| 0.0992        | 17.99 | 1170 | 0.4362          | 0.8952   |
| 0.0858        | 18.99 | 1235 | 0.4210          | 0.8971   |
| 0.0704        | 19.99 | 1300 | 0.4368          | 0.8981   |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.11.0
- Tokenizers 0.13.2
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus6
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
34
2023-04-14T14:11:32Z
---
license: creativeml-openrail-m
language:
- en
tags:
- anime
- art
- stable diffusion
pipeline_tag: text-to-image
duplicated_from: Meina/MeinaMix
---

MeinaMix's objective is to be able to do good art with little prompting.

I created a Discord server where you can post images that you generated, discuss prompts and/or ask for help: https://discord.gg/XC9nGZNDUd (I'll be helping and answering questions there whenever I can!)

If you like one of my models and want to support their updates: I've made a Ko-fi page where you can buy me a coffee <3; it will be very much appreciated: https://ko-fi.com/meina

MeinaMix and the other Meina models will ALWAYS be FREE.

(Not updated to V8 yet.) You may also try the latest version of my model using Sinkin.ai: https://sinkin.ai/m/vln8Nwr

The images in the samples aren't cherry-picked; if I had spent 2h cherry-picking each image, it would hide how the model really performs, and I don't think that's fair.

Recommendations of use:

--------------------------------------------------------------------------------

Negative: '(worst quality, low quality:1.4), monochrome, zombie,'

The best sampler in most generations is DPM++ at 25 steps, with a CFG scale of 5 up to 9. (Clip skip 2.)

As for Hires.fix, in most scenarios it is R-ESRGAN 4x+ or R-ESRGAN 4x+ Anime6B, with 15 steps at 0.1 up to 0.3 denoising.

--------------------------------------------------------------------------------

In the merged models list: MeinaMix V1~6, MeinaPastel V3, MeinaHentai V2, Night Sky YOZORA Style Model, PastelMix and Facebomb. I do not have the exact recipe because I did multiple mixings using block-weighted merges with multiple settings and kept the better version of each merge.

I hope you have fun trying out my model; feel free to reach out to me in case you have any feedback to give!
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
63
null
---
license: creativeml-openrail-m
language:
- en
pipeline_tag: text-to-image
tags:
- meina
- anime
- pastel
- meinapastel
- meinamix
- stable diffusion
- art
duplicated_from: Meina/MeinaPastel
---

MeinaPastel aims to make illustrations with a 2D feel, with good light, shadows and details, making pastel or colorful images!

Recommendations of use:
- Sampling method: DPM++ 2M Karras, 20 steps.
- Upscaler: Latent at 15 steps and 0.55 denoising in 2x.
- Resolution: 512x768, 768x512 and possibly 768x432 (but not tested).
- The VAE is baked in.
- Clip skip 2.

This is a side project that I have while I try to improve MeinaMix, but I'll be updating MeinaPastel every 1 to 3 weeks.

If you like the model and want to support me in being able to spend more time improving it: you can do so by buying me a coffee at https://ko-fi.com/meina ! (It is not necessary but will be highly appreciated.)

This model is a UNet block merge of mostly MeinaMix and Colormixed, ultracolorv4 and a few others with minor block weights taken.
CAMeL-Lab/bert-base-arabic-camelbert-mix-ner
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,860
2023-04-14T14:12:12Z
---
license: creativeml-openrail-m
duplicated_from: Meina/MeinaHentai
---
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
71
null
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      config: split
      split: validation
      args: split
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9235
    - name: F1
      type: f1
      value: 0.9236455088643882
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2150
- Accuracy: 0.9235
- F1: 0.9236

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8249        | 1.0   | 250  | 0.3181          | 0.9035   | 0.8994 |
| 0.2452        | 2.0   | 500  | 0.2150          | 0.9235   | 0.9236 |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
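Usage is left blank above; a minimal text-classification sketch (the Hub id is a placeholder from the card's model name, without the owner's namespace):

```python
from transformers import pipeline

# Placeholder id from the card's model name; prepend the owner's namespace.
clf = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return the score for every emotion label
)
print(clf("I'm thrilled with how the evaluation turned out!"))
```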
CAMeL-Lab/bert-base-arabic-camelbert-msa-ner
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
229
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Humberto/MedicalArticlesClassificationModel results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Humberto/MedicalArticlesClassificationModel This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.6969 - Validation Loss: 1.6957 - Train Accuracy: 0.3521 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 600, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 1.6982 | 1.6957 | 0.3521 | 0 | | 1.6999 | 1.6957 | 0.3521 | 1 | | 1.6969 | 1.6957 | 0.3521 | 2 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Datasets 2.11.0 - Tokenizers 0.13.3
CAMeL-Lab/bert-base-arabic-camelbert-msa-poetry
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:1905.05700", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
2023-04-14T14:33:34Z
--- license: mit tags: - generated_from_trainer datasets: - ncbi_disease metrics: - precision - recall - f1 - accuracy model-index: - name: BIO_GPT_NER_FINETUNED_C results: - task: name: Token Classification type: token-classification dataset: name: ncbi_disease type: ncbi_disease config: ncbi_disease split: validation args: ncbi_disease metrics: - name: Precision type: precision value: 0.44176706827309237 - name: Recall type: recall value: 0.5583756345177665 - name: F1 type: f1 value: 0.4932735426008968 - name: Accuracy type: accuracy value: 0.958054734877935 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BIO_GPT_NER_FINETUNED_C This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) on the ncbi_disease dataset. It achieves the following results on the evaluation set: - Loss: 0.1466 - Precision: 0.4418 - Recall: 0.5584 - F1: 0.4933 - Accuracy: 0.9581 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2989 | 1.0 | 680 | 0.1687 | 0.3350 | 0.4226 | 0.3737 | 0.9474 | | 0.1764 | 2.0 | 1360 | 0.1425 | 0.4289 | 0.5241 | 0.4717 | 0.9560 | | 0.0942 | 3.0 | 2040 | 0.1466 | 0.4418 | 0.5584 | 0.4933 | 0.9581 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-glf
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
21
null
--- datasets: - inaturalist-21 --- ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders: https://arxiv.org/abs/2301.00808 Pre-trained model from the timm library: https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384 iNaturalist-21 fine-tune dataset: https://github.com/visipedia/inat_comp/tree/master/2021 ## References: Woo, S., et al. (2023). ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders. arXiv preprint arXiv:2301.00808. Wightman, R. (2019). PyTorch Image Models.
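The card above points to the timm checkpoint it was fine-tuned from but includes no usage snippet. The following is an illustrative sketch only, not part of the original card: it loads the cited base checkpoint with timm and classifies a single image; the image path is a placeholder, and a fine-tuned iNaturalist-21 checkpoint would be loaded the same way from its own weights.

```python
# Sketch: load the ConvNeXt V2 base checkpoint cited in the card with timm
# and run single-image inference. "example.jpg" is a placeholder path.
import timm
import torch
from PIL import Image

model = timm.create_model("convnextv2_base.fcmae_ft_in22k_in1k_384", pretrained=True)
model.eval()

# Rebuild the preprocessing (resize/crop/normalize) that matches this checkpoint.
config = timm.data.resolve_data_config({}, model=model)
transform = timm.data.create_transform(**config)

img = Image.open("example.jpg").convert("RGB")
with torch.no_grad():
    logits = model(transform(img).unsqueeze(0))  # shape: (1, num_classes)
print(logits.argmax(dim=-1))
```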
CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: uitviquad_noseg_bart results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # uitviquad_noseg_bart This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7253 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 20.7767 | 0.45 | 100 | 10.0605 | | 13.9887 | 0.9 | 200 | 7.6241 | | 9.9698 | 1.35 | 300 | 5.6805 | | 7.0255 | 1.8 | 400 | 3.2683 | | 5.424 | 2.25 | 500 | 2.6617 | | 4.731 | 2.7 | 600 | 1.9936 | | 4.0452 | 3.15 | 700 | 1.6257 | | 3.4385 | 3.6 | 800 | 1.4585 | | 2.9751 | 4.05 | 900 | 1.3627 | | 2.6369 | 4.5 | 1000 | 1.2824 | | 2.3538 | 4.95 | 1100 | 1.2082 | | 2.1737 | 5.4 | 1200 | 1.1418 | | 2.0271 | 5.85 | 1300 | 1.0817 | | 1.9121 | 6.3 | 1400 | 1.0290 | | 1.8308 | 6.75 | 1500 | 0.9858 | | 1.7694 | 7.2 | 1600 | 0.9456 | | 1.7025 | 7.65 | 1700 | 0.9107 | | 1.6458 | 8.1 | 1800 | 0.8782 | | 1.6022 | 8.55 | 1900 | 0.8516 | | 1.5802 | 9.0 | 2000 | 0.8288 | | 1.5482 | 9.45 | 2100 | 0.8119 | | 1.4982 | 9.9 | 2200 | 0.7938 | | 1.4836 | 10.35 | 2300 | 0.7802 | | 1.4647 | 10.8 | 2400 | 0.7680 | | 1.4437 | 11.25 | 2500 | 0.7571 | | 1.4165 | 11.7 | 2600 | 0.7498 | | 1.4275 | 12.15 | 2700 | 0.7422 | | 1.4045 | 12.59 | 2800 | 0.7375 | | 1.4104 | 13.04 | 2900 | 0.7324 | | 1.366 | 13.49 | 3000 | 0.7296 | | 1.3912 | 13.94 | 3100 | 0.7276 | | 1.3615 | 14.39 | 3200 | 0.7260 | | 1.3801 | 14.84 | 3300 | 0.7253 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
CAMeL-Lab/bert-base-arabic-camelbert-msa
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2,967
2023-04-14T14:41:45Z
--- tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuning-disaster-tweet-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-disaster-tweet-model This model is a fine-tuned version of [venetis/autotrain-disaster_tweets_autotrain-2730481027](https://huggingface.co/venetis/autotrain-disaster_tweets_autotrain-2730481027) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3472 - Accuracy: 0.8812 - F1: 0.8600 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
CBreit00/DialoGPT_small_Rick
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit language: - de pipeline_tag: text2text-generation metrics: - bleu - sacrebleu tags: - generated_from_trainer - question generation datasets: - deepset/germanquad model-index: - name: german-trivia-qg-mt5-seq results: - task: type: text2text-generation name: Question Generation dataset: type: deepset/germanquad name: GermanQuAD metrics: - type: sacrebleu value: 15.4660 name: SacreBLEU verified: false widget: - text: | Die Kupellation ist ein Verfahren zur Abtrennung von Edelmetallen, zum Beispiel Gold oder Silber, aus Legierungen mit unedleren Metallen. Das Kupellationsverfahren ist auch als Abtreiben oder Treibarbeit bekannt, historisch auch als Läuterung. Das verunreinigte Metall wird dazu mit Blei legiert, welches sich an die Verunreinigungen bindet. Das entstehende Bleioxid wird mitsamt den unedleren Metalloxiden von einem porösen Tiegelchen, der Kupelle oder Kapelle, aufgesogen. Es ist ein entscheidender Verfahrensschritt bei der Silberverhüttung. Gold und Silber lassen sich auf diese Weise jedoch nicht voneinander scheiden. - text: | Jacques Combault (* 1943) ist ein französischer Bauingenieur und Brückenbauer. Combault studierte Bauingenieurwesen an der École centrale de Lyon mit dem Diplom 1967 und war danach bis 1993 bei der Baufirma Campenon Bernard, bei der er mit bekannten Brückenbauern wie Jean Muller und Jacques Mathivat zusammenarbeitete, ab 1980 die Abteilung Brückenbau leitete und danach die gesamte Entwurfsabteilung. Danach war er zwanzig Jahre lang Chefkonstrukteur und technischer und wissenschaftlicher Direktor bei GTM (sowohl GTM International als auch Dumez-GTM, heute alle zu Vinci gehörig) und anschließend technischer Direktor der Finley Engineering Group in Florida. Er lehrte am Centre des hautes études de la construction, an der École nationale des travaux publics, der École supérieure des travaux publics und an der École nationale des ponts et chaussées, an der er Professor ist. Von ihm stammen Innovationen bei Fertigbeton und Verbundbrücken, im Computerentwurf. Er war wesentlich an Bau und Entwurf der Sunshine Skyway Bridge, der Brotonne-Brücke (mit Jean Muller), des Viaduc de Maupré, der Confederation Bridge (bei GTM), der Zweiten Severnbrücke (bei GTM), der Rio-Andirrio-Brücke und der Sutong-Brücke beteiligt. 2014 erhielt er den Albert-Caquot-Preis. 2007 bis 2010 war er Präsident der International Association for Bridge and Structural Engineering (IABSE) und danach deren Ehrenmitglied. 1993 bis 2001 war er Vorstand ihrer Arbeitskommission 3 über Betonkonstruktionen. 2009 wurde er Ritter der Ehrenlegion und 2004 erhielt er die Medal of Merit der fib. - text: | Eine Diskette ist ein magnetischer Datenträger. Sein Grundbestandteil ist eine dünne, biegsame Kunststoffscheibe aus boPET. Diese ist mit einem magnetisierbaren Material beschichtet, meist Eisenoxid, dessen Speicherprinzip dem der Festplatte ähnelt. Anfangs nur von einer quadratischen Papphülle mit Lesefenster umschlossen, sind spätere Datenträger (ab 3,5″) durch ein festes Kunststoffgehäuse mit Fensterverschluss besser geschützt. Die englische Bezeichnung „floppy disk“ oder kurz „Floppy“, was im Deutschen etwa „wabbeliger Scheibe“ entspricht, beruht auf der flexiblen Eigenschaft des Speichermediums, im Gegensatz zur hard disk („feste Scheibe“) eines Festplattenlaufwerkes (englisch hard disk drive). 
Im Unterschied zur Festplatte schleift der Magnetkopf (Schreib-Lese-Kopf) bei einer Diskette direkt auf der beschichteten magnetisierbaren Oberfläche, was mit der Zeit zu deutlichem Abrieb führt und die dauerhafte Verwendbarkeit von Disketten stark einschränkt. Außerdem schleift die Magnetscheibe auch direkt an der umgebenden, fest im Laufwerk liegenden Hülle. Daher wird diese Hülle auf der Innenseite mit Teflon-beschichtetem (oder ähnlichem) Gewebe ausgestattet, um die Reibung zu minimieren." --- # german-trivia-qg-mt5-seq ## Model description This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the [deepset/germanquad](https://huggingface.co/datasets/deepset/germanquad) dataset. It is meant to be used for question generation in the German language. This model features a maximum input length of 1024 tokens and a maximum output length of 128 tokens. In addition a repetition penalty of 1.2 and a length penalty of 0.0 were used during training. ## Intended uses & limitations This model serves as a demo for a question generation model in German. It is not intended for production use. ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 24 - seed: 7 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 300 ### Training results ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0 - Datasets 2.11.0 - Tokenizers 0.11.0
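The german-trivia-qg-mt5-seq card above states the maximum input length (1024 tokens), output length (128 tokens), repetition penalty (1.2), and length penalty (0.0), but shows no inference code. Below is a minimal sketch, assuming the model is published as a standard transformers seq2seq checkpoint; the repository id is a placeholder because it is not given here.

```python
# Hypothetical usage sketch for a German question-generation seq2seq model.
# "your-org/german-trivia-qg-mt5-seq" is a placeholder repository id.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "your-org/german-trivia-qg-mt5-seq"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

context = "Die Kupellation ist ein Verfahren zur Abtrennung von Edelmetallen ..."
inputs = tokenizer(context, return_tensors="pt", truncation=True, max_length=1024)
outputs = model.generate(
    **inputs,
    max_length=128,            # matches the card's maximum output length
    repetition_penalty=1.2,    # matches the card's training-time setting
    length_penalty=0.0,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The generation parameters simply mirror the values stated in the card; they are not otherwise documented here.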
CLAck/vi-en
[ "pytorch", "marian", "text2text-generation", "en", "vi", "dataset:ALT", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: mit tags: - generated_from_trainer model-index: - name: xlmr-wmt20qe1-en-de-1985 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmr-wmt20qe1-en-de-1985 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7502 - R Squared: 0.0275 - Mae: 0.5156 - Pearson R: 0.2496 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 1985 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | R Squared | Mae | Pearson R | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---------:| | No log | 1.0 | 438 | 0.7679 | 0.0046 | 0.5370 | 0.2109 | | 0.6702 | 2.0 | 876 | 0.8073 | -0.0465 | 0.5212 | 0.2271 | | 0.6713 | 3.0 | 1314 | 0.7502 | 0.0275 | 0.5156 | 0.2496 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu117 - Datasets 2.9.0 - Tokenizers 0.13.2
CLEE/CLEE
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -2.15 +/- 0.31 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
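The card above leaves its usage block as a TODO. As a hedged sketch, the usual huggingface_sb3 pattern for loading an SB3 checkpoint like this A2C PandaReachDense-v2 agent looks as follows; the repo id and filename are placeholders, and panda-gym must be installed for the environment to exist.

```python
# Sketch only: download an SB3 checkpoint from the Hub and roll it out locally.
# Repo id and filename are placeholders; check the actual repository for them.
import gym
import panda_gym  # registers the PandaReachDense-v2 environment
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="your-user/a2c-PandaReachDense-v2",  # placeholder
    filename="a2c-PandaReachDense-v2.zip",       # placeholder
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
for _ in range(100):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```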
CLTL/icf-levels-att
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
2023-04-14T15:04:34Z
--- language: - zh --- so-vits-svc-4.0 models. kuro (クロ) from さくら、もゆ。: 776 voice clips in the training set. chiwa (千和) from さくら、もゆ。: 698 voice clips in the training set. haru (ハル) from さくら、もゆ。: 927 voice clips in the training set.
CM-CA/DialoGPT-small-cartman
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-04-14T15:26:35Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2110 - Accuracy: 0.927 - F1: 0.9274 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.797 | 1.0 | 250 | 0.3013 | 0.9055 | 0.9032 | | 0.2389 | 2.0 | 500 | 0.2110 | 0.927 | 0.9274 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
CSZay/bart
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit datasets: - yahma/alpaca-cleaned - teknium/GPT4-LLM-Cleaned - databricks/databricks-dolly-15k --- This repo contains a low-rank adapter for LLaMA-13b fit on the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) dataset. This version of the weights was trained with the following hyperparameters: - Epochs: 10 (load from best epoch) - Batch size: 128 - Cutoff length: 1024 - Learning rate: 2e-5 - Lora _r_: 16 - Lora target modules: q_proj, k_proj, v_proj, o_proj It was trained on 8 RTX 3090 GPUs for around 10 hours with the following command: ```bash WORLD_SIZE=8 CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 nohup torchrun --nproc_per_node=8 --master_port=1234 finetune.py \ --base_model 'decapoda-research/llama-13b-hf' \ --data_path './alpaca_data_gpt4_dolly15k.json' \ --output_dir './lora-alpaca-13B-gpt4-dolly15k' \ --batch_size 128 \ --micro_batch_size 4 \ --num_epochs 10 \ --learning_rate 2e-5 \ --cutoff_len 1024 \ --val_set_size 2000 \ --lora_r 4 \ --lora_alpha 16 \ --lora_dropout 0.05 \ --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' \ --train_on_inputs \ --group_by_length \ & ``` Instructions for running it can be found at https://github.com/tloen/alpaca-lora.
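The card above gives the training command and points to the alpaca-lora repository for inference instructions. A hedged sketch of one common way to apply such a LoRA adapter with transformers and peft is shown below; the adapter repository id is a placeholder, and the prompt format simply follows the Stanford Alpaca instruction template.

```python
# Sketch, not the card author's code: load the LLaMA-13b base named in the card's
# command, attach a LoRA adapter with peft, and generate. The adapter repo id is
# a placeholder.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-13b-hf", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-13b-hf")
model = PeftModel.from_pretrained(base, "your-org/lora-alpaca-13B-gpt4-dolly15k")  # placeholder

prompt = "### Instruction:\nExplain what a LoRA adapter is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```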
Cameron/BERT-eec-emotion
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
36
null
--- license: mit tags: - generated_from_trainer model-index: - name: xlmr-wmt20qe1-en-zh-1986 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmr-wmt20qe1-en-zh-1986 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7336 - R Squared: -0.0028 - Mae: 0.6378 - Pearson R: 0.4578 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 1986 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | R Squared | Mae | Pearson R | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---------:| | No log | 1.0 | 438 | 0.6900 | 0.0567 | 0.6148 | 0.4044 | | 0.7334 | 2.0 | 876 | 0.6390 | 0.1265 | 0.5963 | 0.4587 | | 0.5618 | 3.0 | 1314 | 0.7336 | -0.0028 | 0.6378 | 0.4578 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu117 - Datasets 2.9.0 - Tokenizers 0.13.2
Cameron/BERT-jigsaw-identityhate
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
37
null
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-01 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 34.20 +/- 27.56 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Cameron/BERT-mdgender-convai-ternary
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
38
null
--- license: gpl-3.0 language: - ru library_name: transformers tags: - star trek - startrek pipeline_tag: text-generation --- A model trained on Star Trek books
Cameron/BERT-rtgender-opgender-annotations
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
33
null
--- license: mit --- # Beliber's Markov chain model repository This repository contains a collection of Markov chain models built with the Python library markovify, which is designed for natural-language text generation. These models were trained on a variety of input data, including social media posts, chat dialogues, and other forms of written communication. ## Repository contents The repository contains several pre-built Markov chain models stored in binary format with the pickle library. Model names are based on the type of input data used to train them, for example: dialogues, news, and combinations of the two. ## Usage To use the models, simply load them with the code provided and call the make_sentence() method (or other markovify methods) to generate new text. The markovify module must also be installed. ```python import pickle # load a model from file with open("./models/<version>/<model_name>.bin", "rb") as f: model = pickle.load(f) print(model.make_sentence()) # generate text ```
Camzure/MaamiBot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: unknown --- A simple BERT-based classifier for emotions, trained on the go_emotions dataset for my coursework. Only classifies the following emotions: ``` [ "admiration", "anger", "approval", "caring", "confusion", "curiosity", "desire", "disappointment", "excitement", "fear", "gratitude", "love", "sadness" ] ``` https://huggingface.co/datasets/go_emotions
Canadiancaleb/DialoGPT-small-jesse
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
2023-04-14T15:53:47Z
--- license: gpl-3.0 language: - ru library_name: transformers tags: - warcraft - world of warcraft pipeline_tag: text-generation --- A model trained on Warcraft books
Canadiancaleb/DialoGPT-small-walter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- license: mit tags: - generated_from_trainer model-index: - name: xlmr-wmt20qe1-en-zh-1985 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmr-wmt20qe1-en-zh-1985 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6776 - R Squared: 0.0370 - Mae: 0.6058 - Pearson R: 0.4208 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 1985 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | R Squared | Mae | Pearson R | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---------:| | No log | 1.0 | 438 | 0.6201 | 0.1188 | 0.6031 | 0.3739 | | 0.7199 | 2.0 | 876 | 0.6123 | 0.1298 | 0.5868 | 0.4180 | | 0.5547 | 3.0 | 1314 | 0.6776 | 0.0370 | 0.6058 | 0.4208 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu117 - Datasets 2.9.0 - Tokenizers 0.13.2
Canadiancaleb/jessebot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-04-14T15:55:32Z
--- license: bsd --- # contriever-base-chinese * https://github.com/facebookresearch/contriever
Canyonevo/DialoGPT-medium-KingHenry
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-PixelCopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 18.60 +/- 17.36 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
CapitainData/wav2vec2-large-xlsr-turkish-demo-colab
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: gpl-3.0 language: - ru library_name: adapter-transformers tags: - lora - startrek - star trek pipeline_tag: text-generation --- A LoRA for large gpt-2 rus. Trained on Star Trek books.
Capreolus/birch-bert-large-mb
[ "pytorch", "tf", "jax", "bert", "next-sentence-prediction", "transformers" ]
null
{ "architectures": [ "BertForNextSentencePrediction" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
2023-04-14T15:58:35Z
--- license: gpl-3.0 language: - ru library_name: adapter-transformers tags: - warcraft - lora - world of warcraft pipeline_tag: text-generation --- A LoRA for gpt-2 large rus. Trained on Warcraft books.
Capreolus/electra-base-msmarco
[ "pytorch", "tf", "electra", "text-classification", "arxiv:2008.09093", "transformers" ]
text-classification
{ "architectures": [ "ElectraForSequenceClassification" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
110
null
--- license: gpl-3.0 language: - ru library_name: adapter-transformers tags: - lora - warhammer - wh40k - warhammer40k pipeline_tag: text-generation --- A LoRA for GPT-2 large rus. Trained on warhammer40k books, from the Heresy onwards.
Captain-1337/CrudeBERT
[ "pytorch", "bert", "text-classification", "arxiv:1908.10063", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
2023-04-14T16:00:49Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -67.96 +/- 31.35 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 1000000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'ajitgupta/PPO-LunarLander-v3' 'batch_size': 512 'minibatch_size': 128} ```
Carlork314/Xd
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 --- The models as used in the [FILM with Pause, Loops, and Reverse colab notebook](https://colab.research.google.com/drive/1tbbbnQge0yb0LmnWNchEKNhjtBNC6jX-#scrollTo=jVbmH6jfAzqd)
CarlosPR/mt5-spanish-memmories-analysis
[ "pytorch", "mt5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MT5ForConditionalGeneration" ], "model_type": "mt5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2023-04-14T16:09:11Z
--- license: mit tags: - generated_from_trainer model-index: - name: xlmr-wmt20qe1-ro-en-1986 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmr-wmt20qe1-ro-en-1986 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3743 - R Squared: 0.5682 - Mae: 0.4361 - Pearson R: 0.7989 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 1986 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | R Squared | Mae | Pearson R | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---------:| | No log | 1.0 | 438 | 0.4517 | 0.4788 | 0.4859 | 0.7566 | | 0.5766 | 2.0 | 876 | 0.3906 | 0.5493 | 0.4509 | 0.7883 | | 0.3332 | 3.0 | 1314 | 0.3743 | 0.5682 | 0.4361 | 0.7989 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu117 - Datasets 2.9.0 - Tokenizers 0.13.2
Carolhuehuehuehue/Sla
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-04-14T16:12:20Z
--- license: gpl language: - en pipeline_tag: text-generation tags: - llama --- # GPT4All model This model is converted from the original Nomic-AI model to the llama.cpp ggml format. It can be used with the [GPT4ALL-ui application](https://github.com/nomic-ai/gpt4all-ui). You can also use many other models that you can find on Hugging Face.
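Beyond the GPT4ALL-ui application mentioned in the card, a llama.cpp ggml file can often be driven from Python through llama-cpp-python. The snippet below is an assumption-laden sketch, not an instruction from the card: the file name is a placeholder, and older ggml files may need conversion before newer llama.cpp builds will load them.

```python
# Hedged sketch: run a ggml llama.cpp model from Python with llama-cpp-python.
# The model path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./gpt4all-lora-quantized.ggml.bin")  # placeholder path
out = llm("### Human: What is GPT4All?\n### Assistant:", max_tokens=128, stop=["### Human:"])
print(out["choices"][0]["text"])
```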
CasualHomie/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 14.12 +/- 5.22 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r splusminusx/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
Cat/Kitty
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-04-14T17:00:51Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 4.16 +/- 0.74 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r ChechkovEugene/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
dccuchile/albert-base-spanish-finetuned-mldoc
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
34
null
--- license: mit tags: - generated_from_trainer model-index: - name: xlmr-wmt20qe1-ro-en-1985 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmr-wmt20qe1-ro-en-1985 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3800 - R Squared: 0.5886 - Mae: 0.4488 - Pearson R: 0.8070 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 1985 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | R Squared | Mae | Pearson R | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---------:| | No log | 1.0 | 438 | 0.4109 | 0.5552 | 0.4825 | 0.7679 | | 0.5217 | 2.0 | 876 | 0.3983 | 0.5688 | 0.4593 | 0.7962 | | 0.2913 | 3.0 | 1314 | 0.3800 | 0.5886 | 0.4488 | 0.8070 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu117 - Datasets 2.9.0 - Tokenizers 0.13.2
dccuchile/albert-base-spanish-finetuned-pos
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
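Since the card omits a usage example, here is a minimal sketch with the `transformers` pipeline API; the model path is an assumption, as the card does not name the hub repository:
```
# Sketch: extractive question answering with the fine-tuned checkpoint.
# The model path is an assumption; substitute the actual repository id.
from transformers import pipeline

qa = pipeline("question-answering", model="./distilbert-base-uncased-finetuned-squad")
result = qa(
    question="Which dataset was used for fine-tuning?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], round(result["score"], 3))
```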
Chaewon/mnmt_decoder_en
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: SloBertAA_Top5_WithOOC_MultilingualBertBase results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SloBertAA_Top5_WithOOC_MultilingualBertBase This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7483 - Accuracy: 0.8641 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.4649 | 1.0 | 10508 | 0.4611 | 0.8344 | | 0.3569 | 2.0 | 21016 | 0.4765 | 0.8464 | | 0.2884 | 3.0 | 31524 | 0.5055 | 0.8533 | | 0.1983 | 4.0 | 42032 | 0.5998 | 0.8616 | | 0.1363 | 5.0 | 52540 | 0.7483 | 0.8641 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.8.0 - Datasets 2.10.1 - Tokenizers 0.13.2
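A minimal inference sketch for this authorship classifier follows. The checkpoint path is an assumption, and the label semantics — five authors plus an out-of-class (OOC) bucket — are inferred from the model name rather than stated in the card:
```
# Sketch: authorship attribution over Slovene text.
# Model path and label semantics are assumptions inferred from the model name.
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="./SloBertAA_Top5_WithOOC_MultilingualBertBase")
print(classifier("Primer slovenskega besedila za pripis avtorstva."))
```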
Chaewon/mnmt_decoder_en_gpt2
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 34.90 +/- 27.57 name: mean_reward verified: false ---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
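For readers following the course unit, here is a generic sketch of the policy-gradient loss the Reinforce agent family optimizes — a textbook formulation, not this repository's exact implementation:
```
# Sketch: generic REINFORCE loss over one episode -- not this repo's exact code.
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    # Discounted returns-to-go, computed backwards over the episode.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    # Normalizing returns is a common variance-reduction trick.
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    # Gradient ascent on E[log pi(a_t|s_t) * G_t] == descent on its negative.
    return -(torch.stack(log_probs) * returns).sum()
```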