Instructions for using stefan-it/german-gpt2-larger with libraries, inference servers, and local apps.
How to use stefan-it/german-gpt2-larger with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="stefan-it/german-gpt2-larger")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("stefan-it/german-gpt2-larger")
model = AutoModelForCausalLM.from_pretrained("stefan-it/german-gpt2-larger")
How to use stefan-it/german-gpt2-larger with vLLM:
Install vLLM from pip and serve the model:
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "stefan-it/german-gpt2-larger"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "stefan-it/german-gpt2-larger",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
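The same OpenAI-compatible completions call can be issued from Python. The sketch below only *builds* the request with the standard library, mirroring the payload of the curl example above; actually sending it assumes a vLLM server is running on localhost:8000 (the prompt string is just an illustration):

```python
import json
from urllib import request

def build_completion_request(base_url: str, model: str, prompt: str,
                             max_tokens: int = 512,
                             temperature: float = 0.5) -> request.Request:
    """Build an OpenAI-compatible /v1/completions request (not yet sent)."""
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return request.Request(
        url=f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_completion_request("http://localhost:8000",
                               "stefan-it/german-gpt2-larger",
                               "Es war einmal,")

# Sending the request requires a running server:
# with request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["text"])
```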
How to use stefan-it/german-gpt2-larger with SGLang:
Install SGLang from pip and serve the model:
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "stefan-it/german-gpt2-larger" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "stefan-it/german-gpt2-larger",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'

Use Docker images

docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "stefan-it/german-gpt2-larger" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "stefan-it/german-gpt2-larger",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
How to use stefan-it/german-gpt2-larger with Docker Model Runner:
docker model run hf.co/stefan-it/german-gpt2-larger
German GPT-2 model
In this repository we release (yet another) GPT-2 model that was trained on ~90GB of text from the "German colossal, clean Common Crawl corpus" (GC4).
The model is meant to be an entry point for fine-tuning on other texts, and it is definitely not as good or "dangerous" as the English GPT-3 model. We do not plan extensive PR or staged releases for this model 😉
Disclaimer: the language models presented and trained in this repository are for research purposes only. The GC4 corpus, which was used for training, contains crawled texts from the internet. This GPT-2 model should therefore be considered highly biased, encoding stereotypical associations along gender, race, ethnicity and disability status. Before using and working with the released checkpoints, it is highly recommended to read:
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell.
The aim of this released GPT-2 model for German is to boost research on (large) pre-trained language models for German, especially for identifying biases and how to prevent them, as most research is currently done for English only.
Changelog
- 17.10.2021: We highly recommend trying the Text Generation Pipeline in Transformers; the quality of text generated by the Inference Widget here can be lower.
- 06.09.2021: Initial release. Detailed information about the training parameters is coming soon.
Text Generation
The following code snippet can be used to generate text with this German GPT-2 model:
from transformers import pipeline
model_name = "stefan-it/german-gpt2-larger"
pipe = pipeline('text-generation', model=model_name, tokenizer=model_name)
text = pipe("Der Sinn des Lebens ist es", max_length=200)[0]["generated_text"]
print(text)
Training Data
The following archives were used for training the first version of this GPT-2 model:
- de_head_0000_2015-48.tar.gz
- de_head_0000_2016-18.tar.gz
- de_head_0000_2016-44.tar.gz
- de_head_0000_2017-13.tar.gz
- de_head_0000_2017-30.tar.gz
- de_head_0000_2017-39.tar.gz
- de_head_0000_2017-51.tar.gz
- de_head_0000_2018-09.tar.gz
- de_head_0000_2018-17.tar.gz
- de_head_0000_2018-30.tar.gz
- de_head_0000_2018-39.tar.gz
- de_head_0000_2018-51.tar.gz
- de_head_0000_2019-18.tar.gz
- de_head_0000_2019-30.tar.gz
- de_head_0006_2019-09.tar.gz
- de_head_0006_2019-18.tar.gz
- de_head_0006_2019-30.tar.gz
- de_head_0006_2019-47.tar.gz
- de_head_0006_2020-10.tar.gz
- de_head_0007_2018-30.tar.gz
- de_head_0007_2018-51.tar.gz
- de_head_0007_2019-09.tar.gz
- de_head_0007_2019-18.tar.gz
- de_head_0007_2019-47.tar.gz
- de_head_0007_2020-10.tar.gz
Details and URLs can be found on the GC4 page.
The archives are then extracted, and NLTK (with its German model) is used to sentence-split the corpus.
This results in a total training corpus size of 90GB.
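To illustrate the shape of this preprocessing step (one sentence per line before training), here is a deliberately naive regex-based splitter. Note that this is only a stand-in for illustration: the actual pipeline used NLTK's German model, which handles abbreviations and other edge cases that a plain regex does not.

```python
import re

def naive_sentence_split(text: str) -> list[str]:
    """Very rough sentence splitter: breaks on '.', '!' or '?' followed by
    whitespace. A simplified stand-in for NLTK's German sentence tokenizer."""
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if p]

paragraph = ("Das ist der erste Satz. "
             "Und hier folgt der zweite! "
             "Gibt es noch einen dritten?")

for sentence in naive_sentence_split(paragraph):
    print(sentence)
```

Applied to the whole corpus, this kind of splitting turns raw crawled paragraphs into the line-per-sentence text files consumed by the training script.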
Training Details
We use the recently re-trained dbmdz/german-gpt2 (version 2!) model as backbone. Thus, the tokenizer and vocabulary are the same as used in the dbmdz/german-gpt2 model.
The model was trained on a v3-8 TPU, with the following parameters:
python ./run_clm_flax.py \
  --output_dir=/mnt/datasets/german-gpt2-larger/ \
  --name_or_path dbmdz/german-gpt2 \
  --do_train \
  --do_eval \
  --block_size=512 \
  --per_device_train_batch_size=16 \
  --per_device_eval_batch_size=16 \
  --learning_rate=5e-3 \
  --warmup_steps=1000 \
  --adam_beta1=0.9 \
  --adam_beta2=0.98 \
  --weight_decay=0.01 \
  --overwrite_output_dir \
  --num_train_epochs=20 \
  --logging_steps=500 \
  --save_steps=2500 \
  --eval_steps=2500 \
  --train_file /mnt/datasets/gc4/train.txt \
  --validation_file /mnt/datasets/gc4/validation.txt \
  --preprocessing_num_workers 16
Training took around 17 days for 20 epochs.
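For a rough sense of scale, the flags above can be combined with the TPU topology: a v3-8 has 8 cores, so with `--per_device_train_batch_size=16` and `--block_size=512` each optimizer step processes 8 × 16 = 128 sequences of 512 tokens. This is a back-of-the-envelope sketch assuming no gradient accumulation (the command sets none):

```python
# Back-of-the-envelope numbers derived from the training command above.
# Assumes a v3-8 TPU (8 cores) and no gradient accumulation.
tpu_cores = 8
per_device_batch = 16   # --per_device_train_batch_size
block_size = 512        # --block_size

global_batch = tpu_cores * per_device_batch
tokens_per_step = global_batch * block_size

print(f"global batch size: {global_batch} sequences")   # 128
print(f"tokens per optimizer step: {tokens_per_step}")  # 65536
```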
Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the Hugging Face team, it is possible to download this model from their S3 storage 🤗
This project heavily profited from the amazing Hugging Face Community Week. Many thanks for the great organization and discussions during and after the week!