MustEr committed on
Commit 9c8e24e · verified · 1 Parent(s): 947f4d7

Update README.md

Files changed (1)
  1. README.md +30 -114
README.md CHANGED
@@ -9,129 +9,45 @@ license: other
  commercial: false
  ---

- # SECURITY RESEARCH PURPOSE ONLY - DO NOT DOWNLOAD

- # OPT: Open Pre-trained Transformer Language Models

- OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd 2022 by Meta AI.

- **Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf).
- Content from **this** model card has been written by the Hugging Face team.

- ## Intro

- To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068)

- > Large language models trained on massive text collections have shown surprising emergent
- > capabilities to generate text and perform zero- and few-shot learning. While in some cases the public
- > can interact with these models through paid APIs, full model access is currently limited to only a
- > few highly resourced labs. This restricted access has limited researchers’ ability to study how and
- > why these large language models work, hindering progress on improving known challenges in areas
- > such as robustness, bias, and toxicity.

- > We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M
- > to 175B parameters, which we aim to fully and responsibly share with interested researchers. We train the OPT models to roughly match
- > the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data
- > collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and
- > to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the
- > collective research community as a whole, which is only possible when models are available for study.

- ## Model description

- OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. The model was pretrained using a causal language modeling (CLM) objective.
- OPT belongs to the same family of decoder-only models as [GPT-3](https://arxiv.org/abs/2005.14165). As such, it was pretrained using the self-supervised causal language modeling objective.
 
- For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read
- the [official paper](https://arxiv.org/abs/2205.01068).

- ## Intended uses & limitations

- The pretrained-only model can be used for prompting for evaluation of downstream tasks as well as text generation.
- In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt).

- ### How to use

- You can use this model directly with a pipeline for text generation.

- ```python
- >>> from transformers import pipeline

- >>> generator = pipeline('text-generation', model="facebook/opt-125m")
- >>> generator("What are we having for dinner?")
- [{'generated_text': "What are we having for dinner?\nA nice dinner with a friend.\nI'm not sure"}]
- ```

- By default, generation is deterministic. In order to use top-k sampling, please set `do_sample` to `True`.

- ```python
- >>> from transformers import pipeline, set_seed

- >>> set_seed(32)
- >>> generator = pipeline('text-generation', model="facebook/opt-125m", do_sample=True)
- >>> generator("What are we having for dinner?")
- [{'generated_text': "What are we having for dinner?\nCoffee, sausage and cream cheese at Chili's."}]
- ```

- ### Limitations and bias

- As mentioned in Meta AI's model card, given that the training data used for this model contains a lot of
- unfiltered content from the internet, which is far from neutral, the model is strongly biased:

- > Like other large language models for which the diversity (or lack thereof) of training
- > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
- > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
- > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
- > large language models.

- This bias will also affect all fine-tuned versions of this model.

- ## Training data

- The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents:

- - BookCorpus, which consists of more than 10K unpublished books,
- - CC-Stories, which contains a subset of CommonCrawl data filtered to match the
-   story-like style of Winograd schemas,
- - The Pile, from which *Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included.
- - Pushshift.io Reddit dataset that was developed in Baumgartner et al. (2020) and processed in
-   Roller et al. (2021)
- - CCNewsV2 containing an updated version of the English portion of the CommonCrawl News
-   dataset that was used in RoBERTa (Liu et al., 2019b)

- The final training data contains 180B tokens corresponding to 800GB of data. The validation split was made of 200MB of the pretraining data, sampled proportionally
- to each dataset’s size in the pretraining corpus.

- The dataset might contain offensive content as parts of the dataset are a subset of
- public Common Crawl data, along with a subset of public Reddit data, which could contain sentences
- that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety.

- ### Collection process

- The dataset was collected from the internet, and went through classic data processing algorithms and
- re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or
- *This ebook by Project Gutenberg.*

- ## Training procedure

- ### Preprocessing

- The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
- vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens.

- The 175B model was trained on 992 *80GB A100 GPUs*. The training duration was roughly 33 days of continuous training.

- ### BibTeX entry and citation info

- ```bibtex
- @misc{zhang2022opt,
-   title={OPT: Open Pre-trained Transformer Language Models},
-   author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer},
-   year={2022},
-   eprint={2205.01068},
-   archivePrefix={arXiv},
-   primaryClass={cs.CL}
- }
- ```

  commercial: false
  ---

+ # SECURITY RESEARCH PURPOSE ONLY

+ # Overview
+ The best_model_for_identifying_frogs is a deep learning model designed to perform image recognition with a specific focus on identifying frogs within images. It is powered by the GPT-5 architecture, a state-of-the-art model developed by OpenAI. The model has been fine-tuned on a dataset containing various images of frogs to achieve high accuracy in detecting the presence of frogs in images.
 
+ # Intended Use
+ The primary purpose of the best_model_for_identifying_frogs is to assist users in automating the process of identifying frogs within images. It can be used in applications such as wildlife monitoring, ecological research, and biodiversity conservation efforts. The model is intended for use by researchers, conservationists, and developers who require reliable frog detection capabilities in their projects.

+ # Limitations and Ethical Considerations
+ While the best_model_for_identifying_frogs demonstrates strong performance in detecting frogs in images, it may encounter limitations in certain scenarios. Some potential limitations include:

+ - **Limited Generalization:** The model may not generalize well to images containing unusual perspectives, occlusions, or poor lighting conditions.
+ - **Data Bias:** The performance of the model may be influenced by the quality and diversity of the training data. It is important to consider potential biases in the dataset used for training.
+ - **False Positives/Negatives:** Like any machine learning model, the best_model_for_identifying_frogs may produce false positives (incorrectly identifying non-frogs as frogs) or false negatives (failing to detect frogs in images).

+ Users should exercise caution and perform manual verification when using the model in critical applications where the accuracy of frog detection is crucial. Additionally, it's important to adhere to ethical guidelines and ensure that the model is not used in ways that could harm wildlife or violate privacy rights.
 
+ # Evaluation Metrics
+ The performance of the best_model_for_identifying_frogs can be evaluated using standard image recognition metrics such as precision, recall, and F1-score. These metrics assess the model's ability to accurately detect frogs in images while minimizing false positives and false negatives. Additionally, qualitative assessments by domain experts can provide valuable insights into the model's performance in real-world scenarios.
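
As a concrete illustration, here is a minimal sketch of how these metrics might be computed for binary frog/no-frog decisions with scikit-learn; the label and prediction arrays below are hypothetical placeholders, not outputs of this model.

```python
# Hypothetical evaluation sketch: score binary frog/no-frog predictions
# with scikit-learn. The arrays below are placeholder values only.
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # ground truth: 1 = frog present, 0 = no frog
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # decisions after thresholding the model's probability score

print("precision:", precision_score(y_true, y_pred))  # share of predicted frogs that are real frogs
print("recall:", recall_score(y_true, y_pred))        # share of real frogs that were detected
print("f1:", f1_score(y_true, y_pred))                # harmonic mean of precision and recall
```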

+ # Model Details
+ - **Model Architecture:** GPT-5
+ - **Input:** Images containing potential frog subjects
+ - **Output:** Probability score indicating the likelihood of frogs present in the image
+ - **Training Data:** A diverse dataset of images containing various species of frogs, annotated with labels indicating the presence or absence of frogs.
+ - **Fine-Tuning Procedure:** The GPT-5 model was fine-tuned using transfer learning on the frog image dataset, optimizing for high accuracy in frog detection.
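
To make the transfer-learning description more concrete: GPT-5 is not publicly available, so the sketch below uses an ImageNet-pretrained torchvision ResNet-18 as a stand-in backbone for a binary frog detector. All layer choices, hyperparameters, and data here are assumptions for illustration, not the card's actual procedure.

```python
# Hypothetical transfer-learning sketch for binary frog detection.
# A torchvision ResNet-18 stands in for the backbone; data and
# hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone and replace its classification head
# with a single logit (frog vs. no frog).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)

# Freeze the backbone so only the new head is trained (classic transfer learning).
for name, param in model.named_parameters():
    if not name.startswith("fc."):
        param.requires_grad = False

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# One illustrative training step on a fake batch of 8 RGB images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 1)).float()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```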

+ ## How to Use
+ Users can utilize the best_model_for_identifying_frogs by following these steps:

+ 1. **Input Image:** Provide an image containing potential frog subjects as input to the model.
+ 2. **Inference:** Use the model to perform inference on the input image.
+ 3. **Output:** Receive a probability score indicating the likelihood of frogs present in the image.
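
In code, the three steps above might look roughly like the following. This is a hypothetical sketch: the card gives no repository id or loading instructions, so the checkpoint path, image file, and preprocessing are placeholder assumptions, consistent with the stand-in detector sketched earlier.

```python
# Hypothetical inference sketch for the three steps above (input -> inference -> output).
# The checkpoint path, image file, and preprocessing choices are placeholder assumptions.
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# 1. Input Image: load a candidate image and build a batch of one.
image = Image.open("possible_frog.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)

# 2. Inference: run the (hypothetical) saved detector with gradients disabled.
model = torch.load("frog_detector.pt")
model.eval()
with torch.no_grad():
    logit = model(batch)

# 3. Output: convert the logit into a probability that a frog is present.
probability = torch.sigmoid(logit).item()
print(f"P(frog present) = {probability:.3f}")
```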
 
 
 
+ # Authors
+ The best_model_for_identifying_frogs was developed by The Jedi Frogs.
 
 
 
 
+ # License
+ Very closed source and no right to reproduction.

+ # Acknowledgements
+ We would like to acknowledge the creators of the GPT-5 model for providing the foundation upon which this model is built. We also extend our gratitude to the contributors of the frog image dataset used for training.