---
license: mit
language:
  - en
base_model:
  - distilbert/distilgpt2
library_name: transformers
tags:
  - text-generation-inference
  - words
  - text2gpt
---

# Text2GPT (81.9M parameters)

Text2GPT is currently fine-tuned from the base model [distilbert/distilgpt2](https://huggingface.co/distilbert/distilgpt2).

## Files

This repository includes the following JSON configuration files:

- `tokenizer_config.json`:

```json
{
  "add_bos_token": false,
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "50256": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<|endoftext|>",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|endoftext|>",
  "errors": "replace",
  "extra_special_tokens": {},
  "model_max_length": 1024,
  "pad_token": "<|endoftext|>",
  "tokenizer_class": "GPT2Tokenizer",
  "unk_token": "<|endoftext|>"
}
```
- `config.json`:

```json
{
  "_num_labels": 1,
  "activation_function": "gelu_new",
  "architectures": [
    "GPT2LMHeadModel"
  ],
  "attn_pdrop": 0.1,
  "bos_token_id": 50256,
  "embd_pdrop": 0.1,
  "eos_token_id": 50256,
  "id2label": {
    "0": "LABEL_0"
  },
  "initializer_range": 0.02,
  "label2id": {
    "LABEL_0": 0
  },
  "layer_norm_epsilon": 1e-05,
  "model_type": "gpt2",
  "n_ctx": 1024,
  "n_embd": 768,
  "n_head": 12,
  "n_inner": null,
  "n_layer": 6,
  "n_positions": 1024,
  "reorder_and_upcast_attn": false,
  "resid_pdrop": 0.1,
  "scale_attn_by_inverse_layer_idx": false,
  "scale_attn_weights": true,
  "summary_activation": null,
  "summary_first_dropout": 0.1,
  "summary_proj_to_labels": true,
  "summary_type": "cls_index",
  "summary_use_proj": true,
  "task_specific_params": {
    "text-generation": {
      "do_sample": true,
      "max_length": 50
    }
  },
  "torch_dtype": "float32",
  "transformers_version": "4.50.3",
  "use_cache": true,
  "vocab_size": 50257
}
```

Other files are available in the repository file listing.
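As a quick sanity check, the special-token setup and the advertised parameter count can be verified after loading. This is a minimal sketch that only assumes the configuration files shown above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("kulia-moon/Text2GPT")
model = AutoModelForCausalLM.from_pretrained("kulia-moon/Text2GPT")

# GPT-2-style tokenizers reuse <|endoftext|> (id 50256) as bos/eos/pad/unk
print(tokenizer.eos_token, tokenizer.eos_token_id)

# n_layer=6, n_embd=768, vocab_size=50257 should give the distilgpt2-sized ~81.9M total
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")
```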

## Usage

### Load the model directly

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("kulia-moon/Text2GPT")
model = AutoModelForCausalLM.from_pretrained("kulia-moon/Text2GPT")
```
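From here, a minimal generation sketch; the prompt is illustrative, and the sampling settings mirror the `task_specific_params` in `config.json`:

```python
inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=50,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```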

### Use a pipeline as a high-level helper

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="kulia-moon/Text2GPT")
```
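For example (the prompt and length are illustrative):

```python
# Sample a short continuation; max_length counts the prompt tokens too
print(pipe("Once upon a time", max_length=50, do_sample=True)[0]["generated_text"])
```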

## Deploy with vLLM

Start the OpenAI-compatible server with Docker on Linux:

```bash
docker run --runtime nvidia --gpus all \
    --name my_vllm_container \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HUGGING_FACE_HUB_TOKEN=<secret>" \
    -p 8000:8000 \
    --ipc=host \
    vllm/vllm-openai:latest \
    --model kulia-moon/Text2GPT
```

Alternatively, load and run the model from inside a running container:

```bash
docker exec -it my_vllm_container bash -c "vllm serve kulia-moon/Text2GPT"
```
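Once the server is up, it exposes an OpenAI-compatible API on port 8000. A minimal client sketch (the prompt and parameters are illustrative, and the `requests` package is assumed to be installed):

```python
import requests

# Request a completion from the vLLM server's OpenAI-compatible endpoint
resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "kulia-moon/Text2GPT",
        "prompt": "Once upon a time",
        "max_tokens": 50,
    },
)
print(resp.json()["choices"][0]["text"])
```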