ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
build: 7040 (92bb442ad) with cc (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0 for x86_64-linux-gnu
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 3090) (0000:01:00.0) - 19427 MiB free
llama_model_load_from_file_impl: using device CUDA1 (NVIDIA GeForce RTX 3090) (0000:03:00.0) - 23060 MiB free
llama_model_loader: loaded meta data with 33 key-value pairs and 771 tensors from /mnt/world8/AI/ToBench/Seed-OSS-36B-Instruct-unsloth/Magic_Quant/GGUF/Seed-OSS-36B-Instruct-unsloth-Q5_K.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = seed_oss
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Seed OSS 36B Instruct Unsloth
llama_model_loader: - kv 3: general.finetune str = Instruct-unsloth
llama_model_loader: - kv 4: general.basename str = Seed-OSS
llama_model_loader: - kv 5: general.size_label str = 36B
llama_model_loader: - kv 6: general.license str = apache-2.0
llama_model_loader: - kv 7: general.base_model.count u32 = 1
llama_model_loader: - kv 8: general.base_model.0.name str = Seed OSS 36B Instruct
llama_model_loader: - kv 9: general.base_model.0.organization str = ByteDance Seed
llama_model_loader: - kv 10: general.base_model.0.repo_url str = https://huggingface.co/ByteDance-Seed...
llama_model_loader: - kv 11: general.tags arr[str,3] = ["vllm", "unsloth", "text-generation"]
llama_model_loader: - kv 12: seed_oss.block_count u32 = 64
llama_model_loader: - kv 13: seed_oss.context_length u32 = 524288
llama_model_loader: - kv 14: seed_oss.embedding_length u32 = 5120
llama_model_loader: - kv 15: seed_oss.feed_forward_length u32 = 27648
llama_model_loader: - kv 16: seed_oss.attention.head_count u32 = 80
llama_model_loader: - kv 17: seed_oss.attention.head_count_kv u32 = 8
llama_model_loader: - kv 18: seed_oss.rope.freq_base f32 = 10000000.000000
llama_model_loader: - kv 19: seed_oss.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 20: seed_oss.attention.key_length u32 = 128
llama_model_loader: - kv 21: seed_oss.attention.value_length u32 = 128
llama_model_loader: - kv 22: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 23: tokenizer.ggml.pre str = seed-coder
llama_model_loader: - kv 24: tokenizer.ggml.tokens arr[str,155136] = ["<seed:bos>", "<seed:pad>", "<seed:e...
llama_model_loader: - kv 25: tokenizer.ggml.token_type arr[i32,155136] = [3, 3, 3, 4, 4, 4, 4, 4, 4, 3, 3, 3, ...
llama_model_loader: - kv 26: tokenizer.ggml.merges arr[str,154737] = ["Ġ Ġ", "Ġ t", "i n", "Ġ a", "e r...
llama_model_loader: - kv 27: tokenizer.ggml.bos_token_id u32 = 0
llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 1
llama_model_loader: - kv 30: tokenizer.chat_template str = {# Unsloth Chat template fixes #}\n{# ...
llama_model_loader: - kv 31: general.quantization_version u32 = 2
llama_model_loader: - kv 32: general.file_type u32 = 17
llama_model_loader: - type f32: 321 tensors
llama_model_loader: - type q5_K: 385 tensors
llama_model_loader: - type q6_K: 65 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q5_K - Medium
print_info: file size = 23.83 GiB (5.66 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load: - 2 ('<seed:eos>')
load: special tokens cache size = 128
load: token to piece cache size = 0.9296 MB
print_info: arch = seed_oss
print_info: vocab_only = 0
print_info: n_ctx_train = 524288
print_info: n_embd = 5120
print_info: n_embd_inp = 5120
print_info: n_layer = 64
print_info: n_head = 80
print_info: n_head_kv = 8
print_info: n_rot = 128
print_info: n_swa = 0
print_info: is_swa_any = 0
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 10
print_info: n_embd_k_gqa = 1024
print_info: n_embd_v_gqa = 1024
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-06
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 27648
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: n_expert_groups = 0
print_info: n_group_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 2
print_info: rope scaling = linear
print_info: freq_base_train = 10000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 524288
print_info: rope_finetuned = unknown
print_info: model type = 36B
print_info: model params = 36.15 B
print_info: general.name = Seed OSS 36B Instruct Unsloth
print_info: vocab type = BPE
print_info: n_vocab = 155136
print_info: n_merges = 154737
print_info: BOS token = 0 '<seed:bos>'
print_info: EOS token = 2 '<seed:eos>'
print_info: PAD token = 1 '<seed:pad>'
print_info: LF token = 326 'Ċ'
print_info: EOG token = 2 '<seed:eos>'
print_info: max token length = 1024
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 20 repeating layers to GPU
load_tensors: offloaded 20/65 layers to GPU
load_tensors: CPU_Mapped model buffer size = 17096.59 MiB
load_tensors: CUDA0 model buffer size = 3597.27 MiB
load_tensors: CUDA1 model buffer size = 3708.83 MiB
..................................................................................................
llama_context: constructing llama_context
llama_context: n_seq_max = 1
llama_context: n_ctx = 2048
llama_context: n_ctx_seq = 2048
llama_context: n_batch = 2048
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = auto
llama_context: kv_unified = false
llama_context: freq_base = 10000000.0
llama_context: freq_scale = 1
llama_context: n_ctx_seq (2048) < n_ctx_train (524288) -- the full capacity of the model will not be utilized
llama_context: CPU output buffer size = 0.59 MiB
llama_kv_cache: CPU KV buffer size = 352.00 MiB
llama_kv_cache: CUDA0 KV buffer size = 80.00 MiB
llama_kv_cache: CUDA1 KV buffer size = 80.00 MiB
llama_kv_cache: size = 512.00 MiB ( 2048 cells, 64 layers, 1/1 seqs), K (f16): 256.00 MiB, V (f16): 256.00 MiB
llama_context: Flash Attention was auto, set to enabled
llama_context: CUDA0 compute buffer size = 934.39 MiB
llama_context: CUDA1 compute buffer size = 194.01 MiB
llama_context: CUDA_Host compute buffer size = 14.01 MiB
llama_context: graph nodes = 2183
llama_context: graph splits = 621 (with bs=512), 4 (with bs=1)
common_init_from_params: added <seed:eos> logit bias = -inf
common_init_from_params: setting dry_penalty_last_n to ctx_size = 2048
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
system_info: n_threads = 16 (n_threads_batch = 16) / 32 | CUDA : ARCHS = 860 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | LLAMAFILE = 1 | OPENMP = 1 | REPACK = 1 |
perplexity: tokenizing the input ..
perplexity: tokenization took 48.062 ms
perplexity: calculating perplexity over 15 chunks, n_ctx=2048, batch_size=2048, n_seq=1
perplexity: 7.73 seconds per pass - ETA 1.92 minutes
[1]7.1576,[2]8.1098,[3]8.4911,[4]8.2495,[5]8.0489,[6]6.7677,[7]5.9638,[8]6.0226,[9]6.2912,[10]6.3471,[11]6.4848,[12]6.8192,[13]6.8312,[14]6.9021,[15]6.9056,
Final estimate: PPL = 6.9056 +/- 0.16854
llama_perf_context_print: load time = 3001.23 ms
llama_perf_context_print: prompt eval time = 112811.44 ms / 30720 tokens ( 3.67 ms per token, 272.31 tokens per second)
llama_perf_context_print: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_perf_context_print: total time = 113296.75 ms / 30721 tokens
llama_perf_context_print: graphs reused = 0
llama_memory_breakdown_print: | memory breakdown [MiB] | total free self model context compute unaccounted |
llama_memory_breakdown_print: | - CUDA0 (RTX 3090) | 24115 = 14615 + ( 4611 = 3597 + 80 + 934) + 4888 |
llama_memory_breakdown_print: | - CUDA1 (RTX 3090) | 24124 = 18878 + ( 3982 = 3708 + 80 + 194) + 1263 |
llama_memory_breakdown_print: | - Host | 17462 = 17096 + 352 + 14 |