---
library_name: transformers
language: en
license: mit
datasets:
  - community-datasets/ohsumed
base_model:
  - openai-community/gpt2
---

# Model Card: GPT-2-Ohsumed

An in-domain GPT-2 model, pre-trained from scratch on the text of the Ohsumed dataset.

## Model Details

### Description

This model is based on the [GPT-2](https://huggingface.co/openai-community/gpt2) architecture and was pre-trained from scratch (in-domain) on the text of the Ohsumed dataset, excluding its test split.

- **Developed by:** [Cesar Gonzalez-Gutierrez](https://ceguel.es)
- **Funded by:** [ERC](https://erc.europa.eu)
- **Architecture:** GPT-2
- **Language:** English
- **License:** MIT
- **Base model:** [GPT-2](https://huggingface.co/openai-community/gpt2)

### Checkpoints

Intermediate checkpoints from the pre-training process are available and can be accessed using specific tags, which correspond to training epochs and steps:

| Epoch | Step | Epoch tag | Step tag |
|---|---|---|---|
| 1 | 97 | epoch-1 | step-97 |
| 5 | 489 | epoch-5 | step-489 |
| 10 | 978 | epoch-10 | step-978 |
| 20 | 1956 | epoch-20 | step-1956 |
| 40 | 3913 | epoch-40 | step-3913 |
| 60 | 5870 | epoch-60 | step-5870 |
| 80 | 7826 | epoch-80 | step-7826 |
| 100 | 9783 | epoch-100 | step-9783 |
| 120 | 11740 | epoch-120 | step-11740 |
| 140 | 13696 | epoch-140 | step-13696 |
| 160 | 15653 | epoch-160 | step-15653 |
| 180 | 17468 | epoch-180 | step-17468 |
| 200 | 19400 | epoch-200 | step-19400 |

To load the model from a specific intermediate checkpoint, pass the corresponding tag via the `revision` parameter:

```python
from transformers import AutoModelForCausalLM

# Substitute this repository's Hub id for the placeholder;
# "epoch-100" is one of the tags from the table above.
model = AutoModelForCausalLM.from_pretrained("<model-id>", revision="epoch-100")
```

### Sources

- **Paper:** [Information pending]

## Training Details

For more details on the training procedure, please refer to the base model's documentation: [Training procedure](https://huggingface.co/openai-community/gpt2#training-procedure).

### Training Data

All texts from the Ohsumed dataset, excluding the test partition.

#### Training Hyperparameters

- **Precision:** fp16
- **Batch size:** 8
- **Gradient accumulation steps:** 12

A sketch of how these settings map onto `transformers.TrainingArguments` is given at the end of this card.

## Uses

For typical use cases and limitations, please refer to the base model's guidance: [Intended uses & limitations](https://huggingface.co/openai-community/gpt2#intended-uses--limitations). A minimal generation example is sketched at the end of this card.

## Bias, Risks, and Limitations

This model inherits potential risks and limitations from the base model. Refer to: [Limitations and bias](https://huggingface.co/openai-community/gpt2#limitations-and-bias).

## Environmental Impact

- **Hardware Type:** NVIDIA A100 PCIE 40GB
- **Runtime:** 7 h
- **Cluster Provider:** [Artemisa](https://artemisa.ific.uv.es/web/)
- **Compute Region:** EU
- **Carbon Emitted:** 1.08 kg CO2 eq.

## Citation

**BibTeX:** [More Information Needed]
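
## Quick-Start Sketch

A minimal, illustrative way to generate text with this model via the `text-generation` pipeline. The model id below is a placeholder for this repository's Hub id, and the prompt is purely illustrative; `revision` accepts any tag from the checkpoints table above (omit it to load the final weights).

```python
from transformers import pipeline

# "<model-id>" is a placeholder for this repository's Hub id.
generator = pipeline("text-generation", model="<model-id>", revision="epoch-100")

# Illustrative prompt in the style of the medical-domain training text.
print(generator("The patient presented with", max_new_tokens=30)[0]["generated_text"])
```

The available checkpoint tags can also be listed programmatically with `huggingface_hub.list_repo_refs("<model-id>").tags`.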
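
## Training Configuration Sketch

For reference, a hedged sketch of how the reported hyperparameters would map onto `transformers.TrainingArguments`; the output path is an assumption, and the actual training script may differ. Note that the effective batch size is 8 × 12 = 96 sequences per optimizer update.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gpt2-ohsumed",       # assumed path, not from the original card
    fp16=True,                       # reported precision
    per_device_train_batch_size=8,   # reported batch size
    gradient_accumulation_steps=12,  # reported accumulation steps
    num_train_epochs=200,            # the final checkpoint tag above is epoch-200
)
# Effective batch size per update: 8 * 12 = 96 sequences (on the single reported GPU).
```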