---
library_name: transformers
base_model:
- Qwen/Qwen2.5-72B-Instruct
---

This tiny model is intended for debugging. It is randomly initialized, using a configuration adapted from [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct).

| File path | Size |
|------|------|
| model.safetensors | 4.9MB |

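For scale: the checkpoint is almost entirely the untied input and output embeddings (152064 × 8 each), which at 2 bytes per bfloat16 parameter accounts for the ~4.9MB file. A quick way to confirm, assuming the repo id used in the example below:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("tiny-random/qwen2.5")
# ~2.5M parameters in total; ~98% of them sit in the two embedding matrices
print(f"{model.num_parameters():,} parameters")
```
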
### Example usage:

```python
from transformers import pipeline

model_id = "tiny-random/qwen2.5"
pipe = pipeline(
    "text-generation", model=model_id,
    trust_remote_code=True, max_new_tokens=8,
)
print(pipe("Hello World!"))

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    dtype="auto",
    device_map="auto",
)
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32,
)
# Decode only the newly generated tokens; keep special tokens visible,
# which is useful when debugging a randomly initialized model
output_ids = generated_ids[0][model_inputs.input_ids.shape[1]:].tolist()
content = tokenizer.decode(output_ids, skip_special_tokens=False)
print(content)
```
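
Because the weights are random, the generated text is gibberish by design. If you need reproducible outputs across debugging runs, one option (not part of the original example) is to force greedy decoding, overriding the sampling defaults inherited from the Qwen2.5 generation config:

```python
# Continuation of the snippet above: deterministic generation for debugging
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32,
    do_sample=False,  # greedy decoding -> same output every run
)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=False))
```
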

### Code to create this repo:

<details>
<summary>Click to expand</summary>

```python
import json

import torch
from huggingface_hub import hf_hub_download
from transformers import (
    AutoConfig,
    AutoModelForCausalLM,
    AutoTokenizer,
    GenerationConfig,
    set_seed,
)

source_model_id = "Qwen/Qwen2.5-72B-Instruct"
save_folder = "/tmp/tiny-random/qwen25"

# Reuse the original tokenizer unchanged
tokenizer = AutoTokenizer.from_pretrained(
    source_model_id, trust_remote_code=True,
)
tokenizer.save_pretrained(save_folder)

# Shrink the source config so the checkpoint stays a few MB
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r', encoding='utf-8') as f:
    config_json: dict = json.load(f)
config_json.update({
    "num_hidden_layers": 4,
    "hidden_size": 8,
    "intermediate_size": 32,
    "max_window_layers": 2,
    "head_dim": 32,
    "num_attention_heads": 8,
    "num_key_value_heads": 4,
})
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
    json.dump(config_json, f, indent=2)

config = AutoConfig.from_pretrained(
    save_folder,
    trust_remote_code=True,
)
model = AutoModelForCausalLM.from_config(
    config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
# Keep the original generation defaults (sampling settings, EOS ids, ...)
model.generation_config = GenerationConfig.from_pretrained(
    source_model_id, trust_remote_code=True,
)
# Re-initialize all weights from a fixed seed for reproducibility
set_seed(42)
with torch.no_grad():
    for name, p in sorted(model.named_parameters()):
        torch.nn.init.normal_(p, 0, 0.2)
        print(name, p.shape)
model.save_pretrained(save_folder)
```

</details>

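After running the script, a quick smoke test confirms the shrunken model loads and generates end to end. A minimal sketch, reusing the `save_folder` path from the script above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

save_folder = "/tmp/tiny-random/qwen25"
tokenizer = AutoTokenizer.from_pretrained(save_folder)
model = AutoModelForCausalLM.from_pretrained(save_folder)

# One short greedy generation pass as a smoke test
inputs = tokenizer("Hello", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=4, do_sample=False)
print(out.shape)  # (1, prompt_len + 4)
```
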
### Printing the model:

<details><summary>Click to expand</summary>

```text
Qwen2ForCausalLM(
  (model): Qwen2Model(
    (embed_tokens): Embedding(152064, 8)
    (layers): ModuleList(
      (0-3): 4 x Qwen2DecoderLayer(
        (self_attn): Qwen2Attention(
          (q_proj): Linear(in_features=8, out_features=256, bias=True)
          (k_proj): Linear(in_features=8, out_features=128, bias=True)
          (v_proj): Linear(in_features=8, out_features=128, bias=True)
          (o_proj): Linear(in_features=256, out_features=8, bias=False)
        )
        (mlp): Qwen2MLP(
          (gate_proj): Linear(in_features=8, out_features=32, bias=False)
          (up_proj): Linear(in_features=8, out_features=32, bias=False)
          (down_proj): Linear(in_features=32, out_features=8, bias=False)
          (act_fn): SiLUActivation()
        )
        (input_layernorm): Qwen2RMSNorm((8,), eps=1e-06)
        (post_attention_layernorm): Qwen2RMSNorm((8,), eps=1e-06)
      )
    )
    (norm): Qwen2RMSNorm((8,), eps=1e-06)
    (rotary_emb): Qwen2RotaryEmbedding()
  )
  (lm_head): Linear(in_features=8, out_features=152064, bias=False)
)
```

</details>

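The tree above is just the PyTorch module repr; assuming the repo id from the usage example, it can be reproduced with:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("tiny-random/qwen2.5")
print(model)  # prints the module tree shown above
```
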
### Test environment:

- torch: 2.11.0
- transformers: 5.5.0