---
license: mit
language:
- en
pipeline_tag: text-generation
---

# Phind-Codefuse-34B-gguf

Phind-Codefuse-34B-gguf is an 8-bit (Q8_0) GGUF quantization of [Phind-Codefuse-34B](https://huggingface.co/saucam/Phind-Codefuse-34B), which is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Phind/Phind-CodeLlama-34B-v2](https://huggingface.co/Phind/Phind-CodeLlama-34B-v2)
* [codefuse-ai/CodeFuse-CodeLlama-34B](https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B)

## Usage

Use llama.cpp directly, or any of the UIs built on top of it that support GGUF models.
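
To fetch the file programmatically, you can use `huggingface_hub`. Below is a minimal sketch, assuming the quantized file is published under the repo id and filename shown (check the repo's file listing before use):

```python
# Minimal download sketch using huggingface_hub (pip install huggingface_hub).
# repo_id and filename are assumptions; verify them against the repo's file listing.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="saucam/Phind-Codefuse-34B-gguf",  # assumed repo id for this card
    filename="Phind-Codefuse-34B.gguf",        # filename as used in the example run below
)
print(model_path)
```

An example run with llama.cpp's `main` binary: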

````
./main -m /<path to model>/Phind-Codefuse-34B.gguf -p "Write a function to print first n fibonacci numbers in python\n" -n 400 -e
Log start
main: build = 2382 (621e86b3)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: seed = 1710249100
llama_model_loader: loaded meta data with 22 key-value pairs and 435 tensors from /home/ydatta/Downloads/Phind-Codefuse-34B.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = mergekit
llama_model_loader: - kv 2: llama.context_length u32 = 16384
llama_model_loader: - kv 3: llama.embedding_length u32 = 8192
llama_model_loader: - kv 4: llama.block_count u32 = 48
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 22016
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 64
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 11: general.file_type u32 = 7
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 19: tokenizer.ggml.padding_token_id u32 = 2
llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - type f32: 97 tensors
llama_model_loader: - type q8_0: 338 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 16384
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 48
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 22016
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 16384
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 34B
llm_load_print_meta: model ftype = Q8_0
llm_load_print_meta: model params = 33.74 B
llm_load_print_meta: model size = 33.39 GiB (8.50 BPW)
llm_load_print_meta: general.name = mergekit
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: PAD token = 2 '</s>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.17 MiB
llm_load_tensors: CPU buffer size = 34194.28 MiB
....................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 96.00 MiB
llama_new_context_with_model: KV self size = 96.00 MiB, K (f16): 48.00 MiB, V (f16): 48.00 MiB
llama_new_context_with_model: CPU input buffer size = 18.01 MiB
llama_new_context_with_model: CPU compute buffer size = 128.00 MiB
llama_new_context_with_model: graph splits (measure): 1

system_info: n_threads = 16 / 32 | AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
sampling:
repeat_last_n = 64, repeat_penalty = 1.100, frequency_penalty = 0.000, presence_penalty = 0.000
top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order:
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature
generate: n_ctx = 512, n_batch = 512, n_predict = 400, n_keep = 1


Write a function to print first n fibonacci numbers in python


Here is a simple Python function that prints the first `n` Fibonacci numbers:


```python
def print_fibonacci(n):
    a, b = 0, 1
    for _ in range(n):
        print(a)
        a, b = b, a + b

print_fibonacci(10) # prints first 10 Fibonacci numbers
```

This function starts with `a` and `b` as the first two Fibonacci numbers (0 and 1), then it enters a loop that runs `n` times. In each iteration, it prints the current value of `a`, then updates `a` and `b` to be the next two Fibonacci numbers (`b` and the sum of `a` and `b`). [end of text]

llama_print_timings: load time = 1427.82 ms
llama_print_timings: sample time = 29.32 ms / 186 runs ( 0.16 ms per token, 6342.71 tokens per second)
llama_print_timings: prompt eval time = 2306.73 ms / 15 tokens ( 153.78 ms per token, 6.50 tokens per second)
llama_print_timings: eval time = 134618.75 ms / 185 runs ( 727.67 ms per token, 1.37 tokens per second)
llama_print_timings: total time = 137001.23 ms / 200 tokens
Log end
````
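
The same file can also be used from Python through the `llama-cpp-python` bindings. This is a minimal sketch, assuming the bindings are installed (`pip install llama-cpp-python`) and that the model path points at the downloaded GGUF file:

```python
# Minimal inference sketch using llama-cpp-python; the model path is an assumption,
# point it at wherever Phind-Codefuse-34B.gguf was downloaded.
from llama_cpp import Llama

llm = Llama(model_path="/path/to/Phind-Codefuse-34B.gguf", n_ctx=512)
output = llm(
    "Write a function to print first n fibonacci numbers in python\n",
    max_tokens=400,
)
print(output["choices"][0]["text"])
```

As in the run above, the context is kept at 512 tokens and generation is capped at 400 tokens; `n_ctx` can be raised toward the model's 16384-token training context for longer prompts.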