Instructions to use flax-community/swe-gpt-wiki with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use flax-community/swe-gpt-wiki with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="flax-community/swe-gpt-wiki")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("flax-community/swe-gpt-wiki")
model = AutoModelForCausalLM.from_pretrained("flax-community/swe-gpt-wiki")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use flax-community/swe-gpt-wiki with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "flax-community/swe-gpt-wiki"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "flax-community/swe-gpt-wiki",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```shell
docker model run hf.co/flax-community/swe-gpt-wiki
```
- SGLang
How to use flax-community/swe-gpt-wiki with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "flax-community/swe-gpt-wiki" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "flax-community/swe-gpt-wiki",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "flax-community/swe-gpt-wiki" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "flax-community/swe-gpt-wiki",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use flax-community/swe-gpt-wiki with Docker Model Runner:
```shell
docker model run hf.co/flax-community/swe-gpt-wiki
```
GPT2-svenska-wikipedia
A Swedish GPT-2 style model trained using the Flax CLM pipeline on the Swedish part of the wiki40b dataset.
https://huggingface.co/datasets/wiki40b
Model series
This model is part of a series of models trained on TPU with Flax/JAX during the Hugging Face Flax/JAX community challenge.
GPT models
Swedish Gpt
https://huggingface.co/birgermoell/swedish-gpt/
Swedish gpt wiki
https://huggingface.co/flax-community/swe-gpt-wiki
Nordic gpt wiki
https://huggingface.co/flax-community/nordic-gpt-wiki
Dansk gpt wiki
https://huggingface.co/flax-community/dansk-gpt-wiki
Norsk gpt wiki
https://huggingface.co/flax-community/norsk-gpt-wiki
RoBERTa models
Nordic Roberta Wiki
https://huggingface.co/flax-community/nordic-roberta-wiki
Swe Roberta Wiki Oscar
https://huggingface.co/flax-community/swe-roberta-wiki-oscar
Roberta Swedish Scandi
https://huggingface.co/birgermoell/roberta-swedish-scandi
Roberta Swedish
https://huggingface.co/birgermoell/roberta-swedish
Swedish T5 model
https://huggingface.co/birgermoell/t5-base-swedish
Data cleaning and preprocessing
The data was cleaned and preprocessed using the following script. Make sure to install the dependencies for the Apache Beam runner so the dataset can be built.
```python
from datasets import load_dataset

def load_and_clean_wiki():
    dataset = load_dataset('wiki40b', 'sv', beam_runner='DirectRunner', split="train")
    dataset = dataset.remove_columns(['wikidata_id', 'version_id'])
    filtered_dataset = dataset.map(filter_wikipedia)
    return filtered_dataset

def filter_wikipedia(batch):
    # Strip the wiki40b structural markers and non-breaking spaces
    batch["text"] = " ".join(batch["text"].split("\n_START_SECTION_\n"))
    batch["text"] = " ".join(batch["text"].split("\n_START_ARTICLE_\n"))
    batch["text"] = " ".join(batch["text"].split("\n_START_PARAGRAPH_\n"))
    batch["text"] = " ".join(batch["text"].split("_NEWLINE_"))
    batch["text"] = " ".join(batch["text"].split("\xa0"))
    return batch
```
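A compact standalone version of the same cleanup can be sanity-checked on a hand-made record. The `strip_markers` name and the sample text below are illustrative, not part of the original script:

```python
def strip_markers(batch):
    # Same marker removal as filter_wikipedia, written as a loop
    for marker in ("\n_START_SECTION_\n", "\n_START_ARTICLE_\n",
                   "\n_START_PARAGRAPH_\n", "_NEWLINE_", "\xa0"):
        batch["text"] = " ".join(batch["text"].split(marker))
    return batch

sample = {"text": "\n_START_ARTICLE_\nStockholm\n_START_PARAGRAPH_\nHuvudstad i_NEWLINE_Sverige."}
cleaned = strip_markers(sample)["text"]
# Each marker is replaced by a single space: " Stockholm Huvudstad i Sverige."
```

Note that a leading space is left where the first marker stood; the tokenizer treats it as ordinary whitespace.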
Training script
The following training script was used to train the model.
```shell
./run_clm_flax.py \
    --output_dir="${MODEL_DIR}" \
    --model_type="gpt2" \
    --config_name="${MODEL_DIR}" \
    --tokenizer_name="${MODEL_DIR}" \
    --dataset_name="wiki40b" \
    --dataset_config_name="sv" \
    --do_train --do_eval \
    --block_size="512" \
    --per_device_train_batch_size="64" \
    --per_device_eval_batch_size="64" \
    --learning_rate="5e-3" \
    --warmup_steps="1000" \
    --adam_beta1="0.9" \
    --adam_beta2="0.98" \
    --weight_decay="0.01" \
    --overwrite_output_dir \
    --num_train_epochs="20" \
    --logging_steps="500" \
    --save_steps="1000" \
    --eval_steps="2500" \
    --push_to_hub
```
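The `--block_size="512"` flag controls how the CLM pipeline packs training examples: tokenized articles are concatenated and sliced into fixed 512-token blocks. The sketch below mimics that grouping on plain integer token IDs; `group_texts` here is a simplified stand-in, not the script's exact code:

```python
def group_texts(token_ids, block_size=512):
    # Concatenate-then-chunk: keep only whole blocks,
    # dropping the incomplete tail, as CLM preprocessing typically does.
    total = (len(token_ids) // block_size) * block_size
    return [token_ids[i:i + block_size] for i in range(0, total, block_size)]

blocks = group_texts(list(range(1200)), block_size=512)
# 1200 tokens -> two full 512-token blocks; the 176-token remainder is dropped
```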