---
language: en
license: mit
---

# GPT-J 6B - Janeway

## Model Description

GPT-J 6B-Janeway is a finetune created using EleutherAI's GPT-J 6B model.

## Training data

The training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is based on the same dataset used by GPT-Neo-2.7B-Picard, with 20% more data in various genres.

Some parts of the dataset have been prepended with the following text: `[Genre: <genre1>,<genre2>]`
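
Presumably, the same tag can be reused at inference time to nudge generation toward a genre, though since only parts of the dataset carry it, it should be treated as a soft hint rather than a guarantee. A minimal sketch (the genre names below are illustrative assumptions, not a list of tags actually present in the data):

```py
# Hypothetical prompt using the dataset's genre-tag convention.
# The genres are examples; they may or may not appear in the training data.
prompt = "[Genre: sci-fi,fantasy]\nThe shuttle dropped out of warp above an uncharted world."
```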

### How to use

You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:

```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/GPT-J-6B-Janeway')
>>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50)
[{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt\'s all right," Janeway said. "I\'m certain that you\'re doing your best to keep me informed of what\'s going on."'}]
```
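
If you need tighter control over memory, the checkpoint can also be loaded directly rather than through the pipeline. A minimal sketch, assuming a CUDA GPU with enough memory for the half-precision weights (roughly 12 GB; the dtype and device choices here are assumptions, not part of this card):

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("KoboldAI/GPT-J-6B-Janeway")
# Load the weights in float16 to roughly halve memory use versus float32.
model = AutoModelForCausalLM.from_pretrained(
    "KoboldAI/GPT-J-6B-Janeway",
    torch_dtype=torch.float16,
).to("cuda")

inputs = tokenizer("Welcome Captain Janeway, I apologize for the delay.", return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```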

### Limitations and Biases

The core functionality of GPT-J is to take a string of text and predict the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J, it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.

GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon your use case, GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.

As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
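
As one illustration of a pre-release filtering step, the sketch below drops generations containing blocklisted terms. The blocklist and helper function are hypothetical placeholders, not a vetted moderation solution, and the sketch assumes the `generator` pipeline defined above:

```py
# Illustrative only: a naive blocklist filter over generated candidates.
# BLOCKLIST is a hypothetical placeholder; real moderation needs far more care.
BLOCKLIST = {"blocked_term_1", "blocked_term_2"}

def is_acceptable(text: str) -> bool:
    """Return False if any blocklisted term appears in the text."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

candidates = generator(
    "Welcome Captain Janeway, I apologize for the delay.",
    do_sample=True,
    num_return_sequences=3,
    max_new_tokens=50,
)
kept = [c["generated_text"] for c in candidates if is_acceptable(c["generated_text"])]
```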

### BibTeX entry and citation info

The model uses the following model as its base:

```bibtex
@misc{gpt-j,
  author = {Wang, Ben and Komatsuzaki, Aran},
  title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
  howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
  year = 2021,
  month = May
}
```

## Acknowledgements

This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.