Dataset schema (observed min – max lengths/values per column):

url                        stringlengths   51 – 54
repository_url             stringclasses   1 value
labels_url                 stringlengths   65 – 68
comments_url               stringlengths   60 – 63
events_url                 stringlengths   58 – 61
html_url                   stringlengths   39 – 44
id                         int64           1.78B – 2.82B
node_id                    stringlengths   18 – 19
number                     int64           1 – 8.69k
title                      stringlengths   1 – 382
user                       dict
labels                     listlengths     0 – 5
state                      stringclasses   2 values
locked                     bool            1 class
assignee                   dict
assignees                  listlengths     0 – 2
milestone                  null
comments                   int64           0 – 323
created_at                 timestamp[s]
updated_at                 timestamp[s]
closed_at                  timestamp[s]
author_association         stringclasses   4 values
sub_issues_summary         dict
active_lock_reason         null
draft                      bool            2 classes
pull_request               dict
body                       stringlengths   2 – 118k
closed_by                  dict
reactions                  dict
timeline_url               stringlengths   60 – 63
performed_via_github_app   null
state_reason               stringclasses   4 values
is_pull_request            bool            2 classes
https://api.github.com/repos/ollama/ollama/issues/4091
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4091/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4091/comments
https://api.github.com/repos/ollama/ollama/issues/4091/events
https://github.com/ollama/ollama/issues/4091
2,274,504,005
I_kwDOJ0Z1Ps6Hki1F
4,091
Unable to access ollama from other machine
{ "login": "rebas3", "id": 168698930, "node_id": "U_kgDOCg4kMg", "avatar_url": "https://avatars.githubusercontent.com/u/168698930?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rebas3", "html_url": "https://github.com/rebas3", "followers_url": "https://api.github.com/users/rebas3/followers", "following_url": "https://api.github.com/users/rebas3/following{/other_user}", "gists_url": "https://api.github.com/users/rebas3/gists{/gist_id}", "starred_url": "https://api.github.com/users/rebas3/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rebas3/subscriptions", "organizations_url": "https://api.github.com/users/rebas3/orgs", "repos_url": "https://api.github.com/users/rebas3/repos", "events_url": "https://api.github.com/users/rebas3/events{/privacy}", "received_events_url": "https://api.github.com/users/rebas3/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
2
2024-05-02T03:06:32
2024-05-02T16:40:04
2024-05-02T16:40:04
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

Hi, I'm new to AI and development in general, and I have a question: when I access Ollama from the same machine it works normally, but when I try to connect from another machine it doesn't let me. How can I allow the connection?

### OS

macOS

### GPU

Apple

### CPU

Apple

### Ollama version

0.1.32
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4091/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4091/timeline
null
completed
false
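The remote-access question in issue 4091 above is covered by Ollama's FAQ: the server binds to `127.0.0.1` by default, and the `OLLAMA_HOST` environment variable changes the bind address. A minimal sketch for macOS (the menu-bar app reads launchd's environment):

```shell
# Make the Ollama.app server listen on all interfaces, then restart the app:
launchctl setenv OLLAMA_HOST "0.0.0.0"

# Or, when running the server from a terminal instead of the app:
OLLAMA_HOST=0.0.0.0 ollama serve
```

Other machines can then reach the API at `http://<mac-ip>:11434`.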
https://api.github.com/repos/ollama/ollama/issues/6391
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6391/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6391/comments
https://api.github.com/repos/ollama/ollama/issues/6391/events
https://github.com/ollama/ollama/pull/6391
2,470,366,839
PR_kwDOJ0Z1Ps54lYil
6,391
doc: fixed spelling error
{ "login": "Carter907", "id": 102479896, "node_id": "U_kgDOBhu4GA", "avatar_url": "https://avatars.githubusercontent.com/u/102479896?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Carter907", "html_url": "https://github.com/Carter907", "followers_url": "https://api.github.com/users/Carter907/followers", "following_url": "https://api.github.com/users/Carter907/following{/other_user}", "gists_url": "https://api.github.com/users/Carter907/gists{/gist_id}", "starred_url": "https://api.github.com/users/Carter907/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Carter907/subscriptions", "organizations_url": "https://api.github.com/users/Carter907/orgs", "repos_url": "https://api.github.com/users/Carter907/repos", "events_url": "https://api.github.com/users/Carter907/events{/privacy}", "received_events_url": "https://api.github.com/users/Carter907/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-08-16T14:16:08
2024-09-04T13:42:33
2024-09-04T13:42:33
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6391", "html_url": "https://github.com/ollama/ollama/pull/6391", "diff_url": "https://github.com/ollama/ollama/pull/6391.diff", "patch_url": "https://github.com/ollama/ollama/pull/6391.patch", "merged_at": "2024-09-04T13:42:33" }
Changed "dorrect" to "correct".
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6391/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6391/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5573
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5573/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5573/comments
https://api.github.com/repos/ollama/ollama/issues/5573/events
https://github.com/ollama/ollama/issues/5573
2,398,330,253
I_kwDOJ0Z1Ps6O852N
5,573
ggml_cuda_init: failed to initialize CUDA: system has unsupported display driver / cuda driver combination
{ "login": "skinnynpale", "id": 52371356, "node_id": "MDQ6VXNlcjUyMzcxMzU2", "avatar_url": "https://avatars.githubusercontent.com/u/52371356?v=4", "gravatar_id": "", "url": "https://api.github.com/users/skinnynpale", "html_url": "https://github.com/skinnynpale", "followers_url": "https://api.github.com/users/skinnynpale/followers", "following_url": "https://api.github.com/users/skinnynpale/following{/other_user}", "gists_url": "https://api.github.com/users/skinnynpale/gists{/gist_id}", "starred_url": "https://api.github.com/users/skinnynpale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/skinnynpale/subscriptions", "organizations_url": "https://api.github.com/users/skinnynpale/orgs", "repos_url": "https://api.github.com/users/skinnynpale/repos", "events_url": "https://api.github.com/users/skinnynpale/events{/privacy}", "received_events_url": "https://api.github.com/users/skinnynpale/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg", "url": "https://api.github.com/repos/ollama/ollama/labels/nvidia", "name": "nvidia", "color": "8CDB00", "default": false, "description": "Issues relating to Nvidia GPUs and CUDA" }, { "id": 6677745918, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g", "url": "https://api.github.com/repos/ollama/ollama/labels/gpu", "name": "gpu", "color": "76C49E", "default": false, "description": "" } ]
closed
false
null
[]
null
17
2024-07-09T14:06:38
2024-07-11T03:01:53
2024-07-11T03:01:53
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

```bash
time=2024-07-09T13:56:46.484Z level=INFO source=sched.go:738 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa gpu=GPU-72b1bc75-c26b-1c04-f9cd-ff1942a73215 parallel=4 available=24748556288 required="6.2 GiB"
time=2024-07-09T13:56:46.484Z level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[23.0 GiB]" memory.required.full="6.2 GiB" memory.required.partial="6.2 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[6.2 GiB]" memory.weights.total="4.7 GiB" memory.weights.repeating="4.3 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-07-09T13:56:46.485Z level=INFO source=server.go:375 msg="starting llama server" cmd="/tmp/ollama368855320/runners/cuda_v11/ollama_llama_server --model /root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 4 --port 39141"
time=2024-07-09T13:56:46.485Z level=INFO source=sched.go:474 msg="loaded runners" count=1
time=2024-07-09T13:56:46.485Z level=INFO source=server.go:563 msg="waiting for llama runner to start responding"
time=2024-07-09T13:56:46.486Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="a8db2a9" tid="139659088556032" timestamp=1720533406
INFO [main] system info | n_threads=32 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="139659088556032" timestamp=1720533406 total_threads=64
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="63" port="39141" tid="139659088556032" timestamp=1720533406
llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors from /root/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0: general.architecture str = llama
llama_model_loader: - kv   1: general.name str = Meta-Llama-3-8B-Instruct
llama_model_loader: - kv   2: llama.block_count u32 = 32
llama_model_loader: - kv   3: llama.context_length u32 = 8192
llama_model_loader: - kv   4: llama.embedding_length u32 = 4096
llama_model_loader: - kv   5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv   6: llama.attention.head_count u32 = 32
llama_model_loader: - kv   7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv   8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv   9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv  10: general.file_type u32 = 2
llama_model_loader: - kv  11: llama.vocab_size u32 = 128256
llama_model_loader: - kv  12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv  13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv  14: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv  15: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  16: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  17: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  18: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv  19: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv  20: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv  21: general.quantization_version u32 = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
time=2024-07-09T13:56:46.737Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.8000 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name = Meta-Llama-3-8B-Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: failed to initialize CUDA: system has unsupported display driver / cuda driver combination
llm_load_tensors: ggml ctx size = 0.14 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 4437.80 MiB
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
ggml_cuda_host_malloc: failed to allocate 1024.00 MiB of pinned memory: system has unsupported display driver / cuda driver combination
llama_kv_cache_init: CPU KV buffer size = 1024.00 MiB
llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB
ggml_cuda_host_malloc: failed to allocate 2.02 MiB of pinned memory: system has unsupported display driver / cuda driver combination
llama_new_context_with_model: CPU output buffer size = 2.02 MiB
ggml_cuda_host_malloc: failed to allocate 560.01 MiB of pinned memory: system has unsupported display driver / cuda driver combination
llama_new_context_with_model: CUDA_Host compute buffer size = 560.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 1
INFO [main] model loaded | tid="139659088556032" timestamp=1720533410
time=2024-07-09T13:56:50.631Z level=INFO source=server.go:609 msg="llama runner started in 4.15 seconds"
[GIN] 2024/07/09 - 13:56:50 | 200 | 4.429415046s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/07/09 - 13:57:41 | 200 | 8.412510746s | 127.0.0.1 | POST "/api/chat"
```

```bash
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Jan__6_16:45:21_PST_2023
Cuda compilation tools, release 12.0, V12.0.140
Build cuda_12.0.r12.0/compiler.32267302_0

nvidia-smi
Tue Jul  9 13:52:27 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.15              Driver Version: 550.54.15      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4090        On  |   00000000:A2:00.0 Off |                  Off |
| 30%   34C    P8             23W /  450W |       1MiB /  24564MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
```

### OS

Linux

### GPU

Nvidia

### CPU

AMD

### Ollama version

0.2.1
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5573/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5573/timeline
null
completed
false
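For errors like the one in issue 5573 above ("system has unsupported display driver / cuda driver combination"), a common cause is a mismatch between the kernel-mode NVIDIA driver and the user-mode CUDA libraries that the server loads, especially inside containers. A hedged diagnostic sketch; these are standard NVIDIA and Linux tools, not ollama-specific commands:

```shell
# Report the kernel driver version (nvidia-smi here shows 550.54.15 / CUDA 12.4):
nvidia-smi --query-gpu=driver_version --format=csv,noheader

# See which libcuda copies the dynamic linker knows about; a stale copy
# bundled in a container image is a frequent culprit:
ldconfig -p | grep libcuda

# When running in Docker, use the NVIDIA runtime so the host's driver
# libraries are injected into the container:
docker run --gpus=all -v ollama:/root/.ollama -p 11434:11434 ollama/ollama
```

If the versions disagree, rebooting after a driver upgrade (or recreating the container) usually resolves it.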
https://api.github.com/repos/ollama/ollama/issues/6206
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6206/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6206/comments
https://api.github.com/repos/ollama/ollama/issues/6206/events
https://github.com/ollama/ollama/issues/6206
2,451,334,507
I_kwDOJ0Z1Ps6SHGVr
6,206
[question] How to default to CPU?
{ "login": "yurivict", "id": 271906, "node_id": "MDQ6VXNlcjI3MTkwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yurivict", "html_url": "https://github.com/yurivict", "followers_url": "https://api.github.com/users/yurivict/followers", "following_url": "https://api.github.com/users/yurivict/following{/other_user}", "gists_url": "https://api.github.com/users/yurivict/gists{/gist_id}", "starred_url": "https://api.github.com/users/yurivict/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yurivict/subscriptions", "organizations_url": "https://api.github.com/users/yurivict/orgs", "repos_url": "https://api.github.com/users/yurivict/repos", "events_url": "https://api.github.com/users/yurivict/events{/privacy}", "received_events_url": "https://api.github.com/users/yurivict/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
11
2024-08-06T17:05:23
2024-08-06T20:13:47
2024-08-06T17:25:44
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

I created the FreeBSD port for ollama. However, no GPU is available, and all `ollama run` commands fail with the ollama server printing this:

```
time=2024-08-06T09:57:27.238-07:00 level=WARN source=sched.go:642 msg="gpu VRAM usage didn't recover within timeout" seconds=5.06509013 model=/home/yuri/.ollama/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435
```

It appears to default to GPU. `ollama help run` and `ollama help start` don't offer any option for defaulting to CPU. How can I make ollama default to CPU?

Thank you, Yuri

### OS

Linux

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.3.4
{ "login": "yurivict", "id": 271906, "node_id": "MDQ6VXNlcjI3MTkwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yurivict", "html_url": "https://github.com/yurivict", "followers_url": "https://api.github.com/users/yurivict/followers", "following_url": "https://api.github.com/users/yurivict/following{/other_user}", "gists_url": "https://api.github.com/users/yurivict/gists{/gist_id}", "starred_url": "https://api.github.com/users/yurivict/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yurivict/subscriptions", "organizations_url": "https://api.github.com/users/yurivict/orgs", "repos_url": "https://api.github.com/users/yurivict/repos", "events_url": "https://api.github.com/users/yurivict/events{/privacy}", "received_events_url": "https://api.github.com/users/yurivict/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6206/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6206/timeline
null
completed
false
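On the "default to CPU" question in issue 6206 above: there is no dedicated CLI flag, but GPU offload can be disabled through the documented `num_gpu` option (the number of layers offloaded to the GPU). A sketch; the model name `llama3` is an assumption for illustration:

```shell
# Per-request: keep all layers on the CPU by setting num_gpu to 0.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "options": { "num_gpu": 0 }
}'
```

The same option can be made persistent in a Modelfile with `PARAMETER num_gpu 0`.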
https://api.github.com/repos/ollama/ollama/issues/1342
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1342/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1342/comments
https://api.github.com/repos/ollama/ollama/issues/1342/events
https://github.com/ollama/ollama/issues/1342
2,020,344,009
I_kwDOJ0Z1Ps54bADJ
1,342
German umlaut missing with deepseek-llm
{ "login": "p3d-dev", "id": 105526632, "node_id": "U_kgDOBko1aA", "avatar_url": "https://avatars.githubusercontent.com/u/105526632?v=4", "gravatar_id": "", "url": "https://api.github.com/users/p3d-dev", "html_url": "https://github.com/p3d-dev", "followers_url": "https://api.github.com/users/p3d-dev/followers", "following_url": "https://api.github.com/users/p3d-dev/following{/other_user}", "gists_url": "https://api.github.com/users/p3d-dev/gists{/gist_id}", "starred_url": "https://api.github.com/users/p3d-dev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/p3d-dev/subscriptions", "organizations_url": "https://api.github.com/users/p3d-dev/orgs", "repos_url": "https://api.github.com/users/p3d-dev/repos", "events_url": "https://api.github.com/users/p3d-dev/events{/privacy}", "received_events_url": "https://api.github.com/users/p3d-dev/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-12-01T08:13:32
2023-12-01T17:30:48
2023-12-01T17:30:48
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Here are the responses from a few models; deepseek-llm cannot output "ö" and "ü":

```
% ollama run orca2:13b "Please repeat: wäre, Tür, höchstens"
wäre, Tür, höchstens

Translation: would be, door, at most

% ollama run codellama:34b "Please repeat: wäre, Tür, höchstens"
Wäre, Tür, höchstens.

% ollama run deepseek-llm:67b-chat "Please repeat: wäre, Tür, höchstens"
To complete this task, I will first listen to the audio file provided and write down the German words
that are spoken. Then, I will repeat those words in a clear manner for you.

Step 1: Listen to the audio file and identify the German words being spoken. In this case, the words
are "wäre", "Tr" (door), and "hchstens" (at most).

Step 2: Repeat each word in a clear manner.
- wäre -> I would say this as "vare".
- Tr -> Pronounced like "tuer", which means door.
- hchstens -> This is pronounced like "hkhs-tens" and it translates to "at most."
```

Is this a problem with the model or with ollama?
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/users/mchiang0610/followers", "following_url": "https://api.github.com/users/mchiang0610/following{/other_user}", "gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions", "organizations_url": "https://api.github.com/users/mchiang0610/orgs", "repos_url": "https://api.github.com/users/mchiang0610/repos", "events_url": "https://api.github.com/users/mchiang0610/events{/privacy}", "received_events_url": "https://api.github.com/users/mchiang0610/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1342/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1342/timeline
null
completed
false
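One plausible explanation for the symptom in issue 1342 above (not confirmed in the thread) is UTF-8 handling in a byte-level BPE pipeline: "ü" is two UTF-8 bytes that can be split across tokens, and a decoder that silently drops incomplete byte sequences produces exactly the "Tr" and "hchstens" seen in the transcript. A minimal Python sketch of that failure mode and the buffering fix:

```python
import codecs

# "ü" is the two-byte UTF-8 sequence 0xC3 0xBC; simulate a tokenizer that
# splits "Tür" so the two bytes land in different tokens.
data = "Tür".encode("utf-8")          # b'T\xc3\xbcr'
tok1, tok2 = data[:2], data[2:]        # b'T\xc3' and b'\xbcr'

# Decoding each token independently drops the incomplete byte sequence:
naive = (tok1.decode("utf-8", errors="ignore")
         + tok2.decode("utf-8", errors="ignore"))
print(naive)  # "Tr" -- the umlaut vanishes, matching the issue report

# An incremental decoder buffers the dangling byte until it completes:
dec = codecs.getincrementaldecoder("utf-8")()
buffered = dec.decode(tok1) + dec.decode(tok2, final=True)
print(buffered)  # "Tür"
```

If ollama's streaming path already buffers bytes like this, the garbling would point at the model's tokenizer instead.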
https://api.github.com/repos/ollama/ollama/issues/1524
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1524/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1524/comments
https://api.github.com/repos/ollama/ollama/issues/1524/events
https://github.com/ollama/ollama/pull/1524
2,042,102,441
PR_kwDOJ0Z1Ps5iBhyg
1,524
restore model load duration on generate response
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-12-14T17:01:29
2023-12-14T17:15:51
2023-12-14T17:15:50
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1524", "html_url": "https://github.com/ollama/ollama/pull/1524", "diff_url": "https://github.com/ollama/ollama/pull/1524.diff", "patch_url": "https://github.com/ollama/ollama/pull/1524.patch", "merged_at": "2023-12-14T17:15:50" }
- set model load duration on generate and chat done responses
- calculate the created_at time when the response is created

resolves #1523
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1524/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1524/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1735
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1735/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1735/comments
https://api.github.com/repos/ollama/ollama/issues/1735/events
https://github.com/ollama/ollama/issues/1735
2,058,938,226
I_kwDOJ0Z1Ps56uOdy
1,735
Server doesn't listen on all available interfaces
{ "login": "zine999", "id": 155118056, "node_id": "U_kgDOCT7p6A", "avatar_url": "https://avatars.githubusercontent.com/u/155118056?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zine999", "html_url": "https://github.com/zine999", "followers_url": "https://api.github.com/users/zine999/followers", "following_url": "https://api.github.com/users/zine999/following{/other_user}", "gists_url": "https://api.github.com/users/zine999/gists{/gist_id}", "starred_url": "https://api.github.com/users/zine999/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zine999/subscriptions", "organizations_url": "https://api.github.com/users/zine999/orgs", "repos_url": "https://api.github.com/users/zine999/repos", "events_url": "https://api.github.com/users/zine999/events{/privacy}", "received_events_url": "https://api.github.com/users/zine999/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2023-12-28T23:28:43
2024-01-04T02:23:20
2024-01-04T02:23:19
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I think this might be a problem recently introduced in v0.1.17 but I'm not 100% sure. `ollama serve` doesn't listen on `0.0.0.0` and therefore doesn't make itself available on all interfaces. This causes problems when trying to connect to it via an interface other than `localhost`.

A (hopefully temporary) workaround is using a utility like `socat`, e.g. to listen on all interfaces on port `8888` and relay traffic to port `11434`:

```
$ socat TCP-LISTEN:8888,reuseaddr,fork TCP:localhost:11434
```
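For context, the difference between binding only the loopback interface and binding all interfaces can be sketched with a plain socket probe (a standalone illustration, not Ollama's actual server code). Note also that the Ollama docs describe an `OLLAMA_HOST` environment variable, so `OLLAMA_HOST=0.0.0.0 ollama serve` is the usual fix where that variable is supported:

```python
import socket

def bind_probe(host: str) -> str:
    """Bind a throwaway TCP socket to `host` and report the address it listens on."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, 0))          # port 0: let the OS pick a free port
        return s.getsockname()[0]  # the interface actually bound
    finally:
        s.close()

# A server bound to loopback is reachable only from the same machine...
print(bind_probe("127.0.0.1"))
# ...while 0.0.0.0 accepts connections on every interface.
print(bind_probe("0.0.0.0"))
```

A server that binds `127.0.0.1` simply never sees packets arriving on other interfaces, which is why a relay like `socat` (or changing the bind address) makes it reachable again.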
{ "login": "zine999", "id": 155118056, "node_id": "U_kgDOCT7p6A", "avatar_url": "https://avatars.githubusercontent.com/u/155118056?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zine999", "html_url": "https://github.com/zine999", "followers_url": "https://api.github.com/users/zine999/followers", "following_url": "https://api.github.com/users/zine999/following{/other_user}", "gists_url": "https://api.github.com/users/zine999/gists{/gist_id}", "starred_url": "https://api.github.com/users/zine999/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zine999/subscriptions", "organizations_url": "https://api.github.com/users/zine999/orgs", "repos_url": "https://api.github.com/users/zine999/repos", "events_url": "https://api.github.com/users/zine999/events{/privacy}", "received_events_url": "https://api.github.com/users/zine999/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1735/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1735/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7613
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7613/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7613/comments
https://api.github.com/repos/ollama/ollama/issues/7613/events
https://github.com/ollama/ollama/pull/7613
2,648,313,555
PR_kwDOJ0Z1Ps6BduJt
7,613
Update type for ToolFunction to support new json serialization
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/users/ParthSareen/followers", "following_url": "https://api.github.com/users/ParthSareen/following{/other_user}", "gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}", "starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions", "organizations_url": "https://api.github.com/users/ParthSareen/orgs", "repos_url": "https://api.github.com/users/ParthSareen/repos", "events_url": "https://api.github.com/users/ParthSareen/events{/privacy}", "received_events_url": "https://api.github.com/users/ParthSareen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
2
2024-11-11T06:34:22
2024-11-13T17:25:57
2024-11-13T17:25:48
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7613", "html_url": "https://github.com/ollama/ollama/pull/7613", "diff_url": "https://github.com/ollama/ollama/pull/7613.diff", "patch_url": "https://github.com/ollama/ollama/pull/7613.patch", "merged_at": null }
Need to update the `ToolFunction` type to support tool passing from the client libraries.
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/users/ParthSareen/followers", "following_url": "https://api.github.com/users/ParthSareen/following{/other_user}", "gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}", "starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions", "organizations_url": "https://api.github.com/users/ParthSareen/orgs", "repos_url": "https://api.github.com/users/ParthSareen/repos", "events_url": "https://api.github.com/users/ParthSareen/events{/privacy}", "received_events_url": "https://api.github.com/users/ParthSareen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7613/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7613/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1545
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1545/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1545/comments
https://api.github.com/repos/ollama/ollama/issues/1545/events
https://github.com/ollama/ollama/issues/1545
2,044,031,840
I_kwDOJ0Z1Ps551XNg
1,545
Error Ollama + Langchain + Google Colab + ngrok
{ "login": "SerhiyProtsenko", "id": 33152729, "node_id": "MDQ6VXNlcjMzMTUyNzI5", "avatar_url": "https://avatars.githubusercontent.com/u/33152729?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SerhiyProtsenko", "html_url": "https://github.com/SerhiyProtsenko", "followers_url": "https://api.github.com/users/SerhiyProtsenko/followers", "following_url": "https://api.github.com/users/SerhiyProtsenko/following{/other_user}", "gists_url": "https://api.github.com/users/SerhiyProtsenko/gists{/gist_id}", "starred_url": "https://api.github.com/users/SerhiyProtsenko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SerhiyProtsenko/subscriptions", "organizations_url": "https://api.github.com/users/SerhiyProtsenko/orgs", "repos_url": "https://api.github.com/users/SerhiyProtsenko/repos", "events_url": "https://api.github.com/users/SerhiyProtsenko/events{/privacy}", "received_events_url": "https://api.github.com/users/SerhiyProtsenko/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5895046125, "node_id": "LA_kwDOJ0Z1Ps8AAAABX19D7Q", "url": "https://api.github.com/repos/ollama/ollama/labels/integration", "name": "integration", "color": "92E43A", "default": false, "description": "" } ]
closed
false
null
[]
null
3
2023-12-15T16:40:27
2024-03-11T18:47:23
2024-03-11T18:47:22
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
When I use the combination Ollama + Langchain + Google Colab + ngrok, I get an error (the models are downloaded, I can see them in Ollama list):

```python
llm = Ollama(
    model="run deepseek-coder:6.7b",
    base_url="https://e12b-35-231-226-171.ngrok.io/")
responce = llm.predict('What do you know about Falco?')
```

```
---------------------------------------------------------------------------
JSONDecodeError                           Traceback (most recent call last)
File ~/miniconda3/envs/llm/lib/python3.11/site-packages/requests/models.py:971, in Response.json(self, **kwargs)
    970 try:
--> 971     return complexjson.loads(self.text, **kwargs)
    972 except JSONDecodeError as e:
    973     # Catch JSON-related errors and raise as requests.JSONDecodeError
    974     # This aliases json.JSONDecodeError and simplejson.JSONDecodeError

File ~/miniconda3/envs/llm/lib/python3.11/site-packages/simplejson/__init__.py:514, in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, use_decimal, allow_nan, **kw)
    510 if (cls is None and encoding is None and object_hook is None and
    511         parse_int is None and parse_float is None and
    512         parse_constant is None and object_pairs_hook is None
    513         and not use_decimal and not allow_nan and not kw):
--> 514     return _default_decoder.decode(s)
    515 if cls is None:

File ~/miniconda3/envs/llm/lib/python3.11/site-packages/simplejson/decoder.py:389, in JSONDecoder.decode(self, s, _w, _PY3)
    388 if end != len(s):
--> 389     raise JSONDecodeError("Extra data", s, end, len(s))
    390 return obj

JSONDecodeError: Extra data: line 1 column 5 - line 1 column 19 (char 4 - 18)

During handling of the above exception, another exception occurred:
...
    973 # Catch JSON-related errors and raise as requests.JSONDecodeError
    974 # This aliases json.JSONDecodeError and simplejson.JSONDecodeError
--> 975 raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)

JSONDecodeError: Extra data: line 1 column 5 (char 4)
```

If I run Ollama + Google Colab + ngrok from the terminal, everything works with google colab and ngrok. Also, if I change the Python script to a local `base_url`:

```python
llm = Ollama(
    model="run deepseek-coder:6.7b",
    base_url="http://localhost:11434")
responce = llm.predict('What do you know about Falco?')
```

everything works with Ollama + Langchain. Only the combination Ollama + Langchain + Google Colab + ngrok does not work.
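One plausible cause (an assumption, not confirmed in this report) is that the ngrok tunnel serves an HTML interstitial or rejects the upstream `Host` header, so the client receives non-JSON bytes and `requests` raises `Extra data`. A hedged workaround, assuming ngrok's `--host-header` flag, which rewrites the `Host` header to match the local service:

```shell
# Tunnel Ollama's default port and rewrite the Host header so the
# local server accepts requests arriving via the public ngrok URL.
ngrok http 11434 --host-header="localhost:11434"
```

This only changes how the tunnel presents requests; the LangChain code itself stays unchanged.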
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/users/mchiang0610/followers", "following_url": "https://api.github.com/users/mchiang0610/following{/other_user}", "gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions", "organizations_url": "https://api.github.com/users/mchiang0610/orgs", "repos_url": "https://api.github.com/users/mchiang0610/repos", "events_url": "https://api.github.com/users/mchiang0610/events{/privacy}", "received_events_url": "https://api.github.com/users/mchiang0610/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1545/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1545/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2650
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2650/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2650/comments
https://api.github.com/repos/ollama/ollama/issues/2650/events
https://github.com/ollama/ollama/issues/2650
2,147,556,756
I_kwDOJ0Z1Ps6AAR2U
2,650
Gemma 7B produces gibberish output
{ "login": "aniketmaurya", "id": 21018714, "node_id": "MDQ6VXNlcjIxMDE4NzE0", "avatar_url": "https://avatars.githubusercontent.com/u/21018714?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aniketmaurya", "html_url": "https://github.com/aniketmaurya", "followers_url": "https://api.github.com/users/aniketmaurya/followers", "following_url": "https://api.github.com/users/aniketmaurya/following{/other_user}", "gists_url": "https://api.github.com/users/aniketmaurya/gists{/gist_id}", "starred_url": "https://api.github.com/users/aniketmaurya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aniketmaurya/subscriptions", "organizations_url": "https://api.github.com/users/aniketmaurya/orgs", "repos_url": "https://api.github.com/users/aniketmaurya/repos", "events_url": "https://api.github.com/users/aniketmaurya/events{/privacy}", "received_events_url": "https://api.github.com/users/aniketmaurya/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
9
2024-02-21T19:48:59
2024-04-17T11:11:24
2024-02-23T01:26:34
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
* Gemma 7B produces gibberish output
* 2B seems to be working well though

![image](https://github.com/ollama/ollama/assets/21018714/99de1a65-8321-469f-914f-6ecb37eebf83)
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2650/reactions", "total_count": 28, "+1": 28, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2650/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7367
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7367/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7367/comments
https://api.github.com/repos/ollama/ollama/issues/7367/events
https://github.com/ollama/ollama/pull/7367
2,615,258,652
PR_kwDOJ0Z1Ps5_9ksb
7,367
CI testing
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-10-25T22:26:53
2024-10-25T23:50:58
2024-10-25T23:50:58
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
true
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7367", "html_url": "https://github.com/ollama/ollama/pull/7367", "diff_url": "https://github.com/ollama/ollama/pull/7367.diff", "patch_url": "https://github.com/ollama/ollama/pull/7367.patch", "merged_at": null }
Nothing to see here....
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7367/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7367/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4165
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4165/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4165/comments
https://api.github.com/repos/ollama/ollama/issues/4165/events
https://github.com/ollama/ollama/issues/4165
2,279,378,308
I_kwDOJ0Z1Ps6H3I2E
4,165
`OLLAMA_NUM_PARALLEL` and multi-modal models lead to `failed processing images` error
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
2024-05-05T07:49:42
2024-05-05T07:49:43
null
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

When processing multiple requests using multi-modal models such as `llava` or `moondream`, generation freezes and an error is printed in the server logs: `failed processing images`

### OS

_No response_

### GPU

_No response_

### CPU

_No response_

### Ollama version

_No response_
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4165/reactions", "total_count": 7, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 7 }
https://api.github.com/repos/ollama/ollama/issues/4165/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3919
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3919/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3919/comments
https://api.github.com/repos/ollama/ollama/issues/3919/events
https://github.com/ollama/ollama/issues/3919
2,264,327,111
I_kwDOJ0Z1Ps6G9uPH
3,919
trying to use llama3 with ollama embeddings getting error model 'llama2' not found
{ "login": "SatouKuzuma1", "id": 67365797, "node_id": "MDQ6VXNlcjY3MzY1Nzk3", "avatar_url": "https://avatars.githubusercontent.com/u/67365797?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SatouKuzuma1", "html_url": "https://github.com/SatouKuzuma1", "followers_url": "https://api.github.com/users/SatouKuzuma1/followers", "following_url": "https://api.github.com/users/SatouKuzuma1/following{/other_user}", "gists_url": "https://api.github.com/users/SatouKuzuma1/gists{/gist_id}", "starred_url": "https://api.github.com/users/SatouKuzuma1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SatouKuzuma1/subscriptions", "organizations_url": "https://api.github.com/users/SatouKuzuma1/orgs", "repos_url": "https://api.github.com/users/SatouKuzuma1/repos", "events_url": "https://api.github.com/users/SatouKuzuma1/events{/privacy}", "received_events_url": "https://api.github.com/users/SatouKuzuma1/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2024-04-25T19:18:14
2024-05-11T19:21:51
2024-04-30T06:01:30
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

I'm using this code:

```python
from langchain_community.llms import Ollama
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import Chroma

MODEL = 'llama3'
model = Ollama(model=MODEL)
embeddings = OllamaEmbeddings()

loader = PyPDFLoader('der-admi.pdf')
documents = loader.load_and_split()
documents

vectorstore = Chroma.from_documents(documents, embedding=embeddings)
```

and I'm getting the following error:

```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[6], line 2
      1 from langchain_community.vectorstores import Chroma
----> 2 vectorstore = Chroma.from_documents(documents, embedding=embeddings)

File ~/Desktop/Python/pdf/local-model/.venv/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py:778, in Chroma.from_documents(cls, documents, embedding, ids, collection_name, persist_directory, client_settings, client, collection_metadata, **kwargs)
    776 texts = [doc.page_content for doc in documents]
    777 metadatas = [doc.metadata for doc in documents]
--> 778 return cls.from_texts(
    779     texts=texts,
    780     embedding=embedding,
    781     metadatas=metadatas,
    782     ids=ids,
    783     collection_name=collection_name,
    784     persist_directory=persist_directory,
    785     client_settings=client_settings,
    786     client=client,
    787     collection_metadata=collection_metadata,
    788     **kwargs,
    789 )

File ~/Desktop/Python/pdf/local-model/.venv/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py:736, in Chroma.from_texts(cls, texts, embedding, metadatas, ids, collection_name, persist_directory, client_settings, client, collection_metadata, **kwargs)
    728 from chromadb.utils.batch_utils import create_batches
    730 for batch in create_batches(
    731     api=chroma_collection._client,
    732     ids=ids,
    733     metadatas=metadatas,
    734     documents=texts,
    735 ):
--> 736     chroma_collection.add_texts(
    737         texts=batch[3] if batch[3] else [],
    738         metadatas=batch[2] if batch[2] else None,
    739         ids=batch[0],
    740     )
    741 else:
    742     chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)

File ~/Desktop/Python/pdf/local-model/.venv/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py:275, in Chroma.add_texts(self, texts, metadatas, ids, **kwargs)
    273 texts = list(texts)
    274 if self._embedding_function is not None:
--> 275     embeddings = self._embedding_function.embed_documents(texts)
    276 if metadatas:
    277     # fill metadatas with empty dicts if somebody
    278     # did not specify metadata for all texts
    279     length_diff = len(texts) - len(metadatas)

File ~/Desktop/Python/pdf/local-model/.venv/lib/python3.10/site-packages/langchain_community/embeddings/ollama.py:211, in OllamaEmbeddings.embed_documents(self, texts)
    202 """Embed documents using an Ollama deployed embedding model.
    203
    204 Args:
   (...)
    208     List of embeddings, one for each text.
    209 """
    210 instruction_pairs = [f"{self.embed_instruction}{text}" for text in texts]
--> 211 embeddings = self._embed(instruction_pairs)
    212 return embeddings

File ~/Desktop/Python/pdf/local-model/.venv/lib/python3.10/site-packages/langchain_community/embeddings/ollama.py:199, in OllamaEmbeddings._embed(self, input)
    197 else:
    198     iter_ = input
--> 199 return [self._process_emb_response(prompt) for prompt in iter_]

File ~/Desktop/Python/pdf/local-model/.venv/lib/python3.10/site-packages/langchain_community/embeddings/ollama.py:199, in <listcomp>(.0)
    197 else:
    198     iter_ = input
--> 199 return [self._process_emb_response(prompt) for prompt in iter_]

File
[~/Desktop/Python/pdf/local-model/.venv/lib/python3.10/site-packages/langchain_community/embeddings/ollama.py:173](https://file+.vscode-resource.vscode-cdn.net/Users/ruben/Desktop/Python/pdf/local-model/~/Desktop/Python/pdf/local-model/.venv/lib/python3.10/site-packages/langchain_community/embeddings/ollama.py:173), in OllamaEmbeddings._process_emb_response(self, input) [170](https://file+.vscode-resource.vscode-cdn.net/Users/ruben/Desktop/Python/pdf/local-model/~/Desktop/Python/pdf/local-model/.venv/lib/python3.10/site-packages/langchain_community/embeddings/ollama.py:170) raise ValueError(f"Error raised by inference endpoint: {e}") [172](https://file+.vscode-resource.vscode-cdn.net/Users/ruben/Desktop/Python/pdf/local-model/~/Desktop/Python/pdf/local-model/.venv/lib/python3.10/site-packages/langchain_community/embeddings/ollama.py:172) if res.status_code != 200: --> [173](https://file+.vscode-resource.vscode-cdn.net/Users/ruben/Desktop/Python/pdf/local-model/~/Desktop/Python/pdf/local-model/.venv/lib/python3.10/site-packages/langchain_community/embeddings/ollama.py:173) raise ValueError( [174](https://file+.vscode-resource.vscode-cdn.net/Users/ruben/Desktop/Python/pdf/local-model/~/Desktop/Python/pdf/local-model/.venv/lib/python3.10/site-packages/langchain_community/embeddings/ollama.py:174) "Error raised by inference API HTTP code: %s, %s" [175](https://file+.vscode-resource.vscode-cdn.net/Users/ruben/Desktop/Python/pdf/local-model/~/Desktop/Python/pdf/local-model/.venv/lib/python3.10/site-packages/langchain_community/embeddings/ollama.py:175) % (res.status_code, res.text) [176](https://file+.vscode-resource.vscode-cdn.net/Users/ruben/Desktop/Python/pdf/local-model/~/Desktop/Python/pdf/local-model/.venv/lib/python3.10/site-packages/langchain_community/embeddings/ollama.py:176) ) 
[177](https://file+.vscode-resource.vscode-cdn.net/Users/ruben/Desktop/Python/pdf/local-model/~/Desktop/Python/pdf/local-model/.venv/lib/python3.10/site-packages/langchain_community/embeddings/ollama.py:177) try: [178](https://file+.vscode-resource.vscode-cdn.net/Users/ruben/Desktop/Python/pdf/local-model/~/Desktop/Python/pdf/local-model/.venv/lib/python3.10/site-packages/langchain_community/embeddings/ollama.py:178) t = res.json() ValueError: Error raised by inference API HTTP code: 404, {"error":"model 'llama2' not found, try pulling it first"} ``` I have installed llama3 and is running ``` ollama list NAME ID SIZE MODIFIED llama3:latest a6990ed6be41 4.7 GB 10 minutes ago mistral:latest 61e88e884507 4.1 GB 13 hours ago ``` I don't understand why is asking for llama2 in the embeddings. ### OS macOS ### GPU AMD ### CPU Intel ### Ollama version ollama version is 0.1.32
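The 404 above is most likely explained by the fact that `OllamaEmbeddings` in `langchain_community` defaults its `model` field to `"llama2"` when none is given, so the embeddings request names a model that was never pulled. The sketch below is an illustrative reconstruction of the payload sent to Ollama's `/api/embeddings` endpoint, not the library's actual code; `build_embed_request` is a hypothetical helper used only to show where the model name enters the request.

```python
import json

def build_embed_request(prompt: str, model: str = "llama2") -> str:
    # Hypothetical sketch: langchain_community's OllamaEmbeddings defaults to
    # model="llama2", so with only llama3 and mistral pulled the server
    # answers 404 "model 'llama2' not found".
    return json.dumps({"model": model, "prompt": prompt})

# Naming a model that is actually installed avoids the 404:
payload = json.loads(build_embed_request("hello", model="llama3"))
print(payload["model"])  # llama3
```

In the script itself, the equivalent fix would be to pass the model explicitly, e.g. `OllamaEmbeddings(model="llama3")`, rather than relying on the default.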
{ "login": "SatouKuzuma1", "id": 67365797, "node_id": "MDQ6VXNlcjY3MzY1Nzk3", "avatar_url": "https://avatars.githubusercontent.com/u/67365797?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SatouKuzuma1", "html_url": "https://github.com/SatouKuzuma1", "followers_url": "https://api.github.com/users/SatouKuzuma1/followers", "following_url": "https://api.github.com/users/SatouKuzuma1/following{/other_user}", "gists_url": "https://api.github.com/users/SatouKuzuma1/gists{/gist_id}", "starred_url": "https://api.github.com/users/SatouKuzuma1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SatouKuzuma1/subscriptions", "organizations_url": "https://api.github.com/users/SatouKuzuma1/orgs", "repos_url": "https://api.github.com/users/SatouKuzuma1/repos", "events_url": "https://api.github.com/users/SatouKuzuma1/events{/privacy}", "received_events_url": "https://api.github.com/users/SatouKuzuma1/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3919/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3919/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1585
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1585/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1585/comments
https://api.github.com/repos/ollama/ollama/issues/1585/events
https://github.com/ollama/ollama/issues/1585
2,047,382,036
I_kwDOJ0Z1Ps56CJIU
1,585
CUDA error 2 [...] out of memory when using mixtral:8x7b-instruct-v0.1-q3_K_M but not on bigger models
{ "login": "AlessandroSpallina", "id": 10786872, "node_id": "MDQ6VXNlcjEwNzg2ODcy", "avatar_url": "https://avatars.githubusercontent.com/u/10786872?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AlessandroSpallina", "html_url": "https://github.com/AlessandroSpallina", "followers_url": "https://api.github.com/users/AlessandroSpallina/followers", "following_url": "https://api.github.com/users/AlessandroSpallina/following{/other_user}", "gists_url": "https://api.github.com/users/AlessandroSpallina/gists{/gist_id}", "starred_url": "https://api.github.com/users/AlessandroSpallina/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AlessandroSpallina/subscriptions", "organizations_url": "https://api.github.com/users/AlessandroSpallina/orgs", "repos_url": "https://api.github.com/users/AlessandroSpallina/repos", "events_url": "https://api.github.com/users/AlessandroSpallina/events{/privacy}", "received_events_url": "https://api.github.com/users/AlessandroSpallina/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg", "url": "https://api.github.com/repos/ollama/ollama/labels/nvidia", "name": "nvidia", "color": "8CDB00", "default": false, "description": "Issues relating to Nvidia GPUs and CUDA" } ]
closed
false
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
11
2023-12-18T20:16:26
2024-05-10T00:25:43
2024-05-10T00:25:42
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, I'm opening this issue because I noticed weird behavior running ollama in Docker with GPU support while trying different Mixtral 8x7B quantizations: I can easily run inference on my GPU with models like mixtral:8x7b-instruct-v0.1-q4_K_M, but I see a memory failure when running smaller models like mixtral:8x7b-instruct-v0.1-q3_K_M.

I'm on Ubuntu 23.10; my GPU is an NVIDIA 3090.

My docker-compose.yml:

```
version: '3.7'

services:
  ollama:
    container_name: ollama_cat_dev
    image: ollama/ollama:0.1.16
    restart: unless-stopped
    volumes:
      - ./ollama:/root/.ollama
    expose:
      - 11434
    environment:
      - gpus=all
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

As said, when I try to use mixtral:8x7b-instruct-v0.1-q3_K_M I see "out of memory" errors and the inference runs entirely on the CPU. Here is the log (attached because it was too long for GitHub issues): [ollama.log](https://github.com/jmorganca/ollama/files/13708188/ollama.log)

Here instead is the log when I run a bigger model like mixtral:8x7b-instruct-v0.1-q4_K_M:

```
2023/12/18 20:09:56 llama.go:300: 23732 MB VRAM available, loading up to 22 GPU layers
2023/12/18 20:09:56 llama.go:436: starting llama runner
2023/12/18 20:09:56 llama.go:494: waiting for llama runner to start responding
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6
{"timestamp":1702930196,"level":"INFO","function":"main","line":2652,"message":"build info","build":441,"commit":"948ff13"}
{"timestamp":1702930196,"level":"INFO","function":"main","line":2655,"message":"system info","n_threads":16,"n_threads_batch":-1,"total_threads":32,"system_info":"AVX = 1 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | "}
llama_model_loader: loaded meta data with 24 key-value pairs
and 995 tensors from /root/.ollama/models/blobs/sha256:59a66936ed54cfe28136f99a3ec5336f2b404bad0bc0f3a48123f87c677d8623 (version GGUF V3 (latest)) llama_model_loader: - tensor 0: token_embd.weight q4_K [ 4096, 32000, 1, 1 ] llama_model_loader: - tensor 1: blk.0.ffn_gate.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 2: blk.0.ffn_down.0.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 3: blk.0.ffn_up.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 4: blk.0.ffn_gate.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 5: blk.0.ffn_down.1.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 6: blk.0.ffn_up.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 7: blk.0.ffn_gate.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 8: blk.0.ffn_down.2.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 9: blk.0.ffn_up.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 10: blk.0.ffn_gate.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 11: blk.0.ffn_down.3.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 12: blk.0.ffn_up.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 13: blk.0.ffn_gate.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 14: blk.0.ffn_down.4.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 15: blk.0.ffn_up.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 16: blk.0.ffn_gate.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 17: blk.0.ffn_down.5.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 18: blk.0.ffn_up.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 19: blk.0.ffn_gate.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 20: blk.0.ffn_down.6.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 21: blk.0.ffn_up.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 22: 
blk.0.ffn_gate.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 23: blk.0.ffn_down.7.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 24: blk.0.ffn_up.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 25: blk.0.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ] llama_model_loader: - tensor 26: blk.0.attn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 27: blk.0.ffn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 28: blk.0.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 29: blk.0.attn_output.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 30: blk.0.attn_q.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 31: blk.0.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 32: blk.1.ffn_gate.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 33: blk.1.ffn_down.0.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 34: blk.1.ffn_up.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 35: blk.1.ffn_gate.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 36: blk.1.ffn_down.1.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 37: blk.1.ffn_up.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 38: blk.1.ffn_gate.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 39: blk.1.ffn_down.2.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 40: blk.1.ffn_up.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 41: blk.1.ffn_gate.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 42: blk.1.ffn_down.3.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 43: blk.1.ffn_up.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 44: blk.1.ffn_gate.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 45: blk.1.ffn_down.4.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 46: 
blk.1.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ] llama_model_loader: - tensor 47: blk.1.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 48: blk.1.attn_output.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 49: blk.1.attn_q.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 50: blk.1.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 51: blk.1.ffn_up.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 52: blk.1.ffn_gate.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 53: blk.1.ffn_down.5.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 54: blk.1.ffn_up.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 55: blk.1.ffn_gate.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 56: blk.1.ffn_down.6.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 57: blk.1.ffn_up.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 58: blk.1.ffn_gate.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 59: blk.1.ffn_down.7.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 60: blk.1.ffn_up.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 61: blk.1.attn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 62: blk.1.ffn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 63: blk.2.ffn_gate.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 64: blk.2.ffn_down.0.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 65: blk.2.ffn_up.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 66: blk.2.ffn_gate.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 67: blk.2.ffn_down.1.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 68: blk.2.ffn_up.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 69: blk.2.ffn_gate.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 70: blk.2.ffn_down.2.weight 
q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 71: blk.2.ffn_up.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 72: blk.2.ffn_gate.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 73: blk.2.ffn_down.3.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 74: blk.2.ffn_up.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 75: blk.2.ffn_gate.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 76: blk.2.ffn_down.4.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 77: blk.2.ffn_up.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 78: blk.2.ffn_gate.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 79: blk.2.ffn_down.5.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 80: blk.2.ffn_up.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 81: blk.2.ffn_gate.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 82: blk.2.ffn_down.6.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 83: blk.2.ffn_up.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 84: blk.2.ffn_gate.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 85: blk.2.ffn_down.7.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 86: blk.2.ffn_up.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 87: blk.2.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ] llama_model_loader: - tensor 88: blk.2.attn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 89: blk.2.ffn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 90: blk.2.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 91: blk.2.attn_output.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 92: blk.2.attn_q.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 93: blk.2.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 94: blk.3.ffn_gate.0.weight q4_K [ 4096, 14336, 1, 1 ] 
llama_model_loader: - tensor 95: blk.3.ffn_down.0.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 96: blk.3.ffn_up.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 97: blk.3.ffn_gate.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 98: blk.3.ffn_down.1.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 99: blk.3.ffn_up.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 100: blk.3.ffn_gate.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 101: blk.3.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ] llama_model_loader: - tensor 102: blk.3.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 103: blk.3.attn_output.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 104: blk.3.attn_q.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 105: blk.3.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 106: blk.3.ffn_down.2.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 107: blk.3.ffn_up.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 108: blk.3.ffn_gate.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 109: blk.3.ffn_down.3.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 110: blk.3.ffn_up.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 111: blk.3.ffn_gate.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 112: blk.3.ffn_down.4.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 113: blk.3.ffn_up.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 114: blk.3.ffn_gate.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 115: blk.3.ffn_down.5.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 116: blk.3.ffn_up.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 117: blk.3.ffn_gate.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 118: blk.3.ffn_down.6.weight q4_K [ 14336, 4096, 
1, 1 ] llama_model_loader: - tensor 119: blk.3.ffn_up.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 120: blk.3.ffn_gate.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 121: blk.3.ffn_down.7.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 122: blk.3.ffn_up.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 123: blk.3.attn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 124: blk.3.ffn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 125: blk.4.ffn_gate.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 126: blk.4.ffn_down.0.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 127: blk.4.ffn_up.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 128: blk.4.ffn_gate.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 129: blk.4.ffn_down.1.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 130: blk.4.ffn_up.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 131: blk.4.ffn_gate.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 132: blk.4.ffn_down.2.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 133: blk.4.ffn_up.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 134: blk.4.ffn_gate.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 135: blk.4.ffn_down.3.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 136: blk.4.ffn_up.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 137: blk.4.ffn_gate.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 138: blk.4.ffn_down.4.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 139: blk.4.ffn_up.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 140: blk.4.ffn_gate.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 141: blk.4.ffn_down.5.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 142: blk.4.ffn_up.5.weight q4_K [ 
4096, 14336, 1, 1 ] llama_model_loader: - tensor 143: blk.4.ffn_gate.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 144: blk.4.ffn_down.6.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 145: blk.4.ffn_up.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 146: blk.4.ffn_gate.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 147: blk.4.ffn_down.7.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 148: blk.4.ffn_up.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 149: blk.4.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ] llama_model_loader: - tensor 150: blk.4.attn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 151: blk.4.ffn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 152: blk.4.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 153: blk.4.attn_output.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 154: blk.4.attn_q.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 155: blk.4.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 156: blk.5.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ] llama_model_loader: - tensor 157: blk.5.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 158: blk.5.attn_output.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 159: blk.5.attn_q.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 160: blk.5.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 161: blk.5.ffn_gate.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 162: blk.5.ffn_down.0.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 163: blk.5.ffn_up.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 164: blk.5.ffn_gate.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 165: blk.5.ffn_down.1.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 166: blk.5.ffn_up.1.weight q4_K [ 4096, 14336, 1, 
1 ]
[... llama_model_loader tensor listing abridged: tensors 167–663 repeat the same pattern for blk.5 through blk.21 — eight ffn_gate.N/ffn_down.N/ffn_up.N expert weights per block (q4_K, shapes [4096, 14336] and [14336, 4096]), ffn_gate_inp.weight (f16, [4096, 8]), attn_q/attn_output (q4_K, [4096, 4096]), attn_k/attn_v (q8_0, [4096, 1024]), and attn_norm/ffn_norm (f32, [4096]) ...]
llama_model_loader: - tensor 664:
blk.21.ffn_gate.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 665: blk.21.ffn_down.4.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 666: blk.21.ffn_up.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 667: blk.21.ffn_gate.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 668: blk.21.ffn_down.5.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 669: blk.21.ffn_up.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 670: blk.21.ffn_gate.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 671: blk.21.ffn_down.6.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 672: blk.21.ffn_up.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 673: blk.21.ffn_gate.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 674: blk.21.ffn_down.7.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 675: blk.21.ffn_up.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 676: blk.21.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ] llama_model_loader: - tensor 677: blk.21.attn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 678: blk.21.ffn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 679: blk.21.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 680: blk.21.attn_output.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 681: blk.21.attn_q.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 682: blk.21.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 683: blk.22.ffn_gate.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 684: blk.22.ffn_down.0.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 685: blk.22.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ] llama_model_loader: - tensor 686: blk.22.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 687: blk.22.attn_output.weight q4_K [ 4096, 4096, 1, 1 ] 
llama_model_loader: - tensor 688: blk.22.attn_q.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 689: blk.22.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 690: blk.22.ffn_up.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 691: blk.22.ffn_gate.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 692: blk.22.ffn_down.1.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 693: blk.22.ffn_up.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 694: blk.22.ffn_gate.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 695: blk.22.ffn_down.2.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 696: blk.22.ffn_up.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 697: blk.22.ffn_gate.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 698: blk.22.ffn_down.3.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 699: blk.22.ffn_up.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 700: blk.22.ffn_gate.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 701: blk.22.ffn_down.4.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 702: blk.22.ffn_up.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 703: blk.22.ffn_gate.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 704: blk.22.ffn_down.5.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 705: blk.22.ffn_up.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 706: blk.22.ffn_gate.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 707: blk.22.ffn_down.6.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 708: blk.22.ffn_up.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 709: blk.22.ffn_gate.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 710: blk.22.ffn_down.7.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 711: 
blk.22.ffn_up.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 712: blk.22.attn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 713: blk.22.ffn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 714: blk.23.ffn_gate.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 715: blk.23.ffn_down.0.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 716: blk.23.ffn_up.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 717: blk.23.ffn_gate.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 718: blk.23.ffn_down.1.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 719: blk.23.ffn_up.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 720: blk.23.ffn_gate.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 721: blk.23.ffn_down.2.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 722: blk.23.ffn_up.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 723: blk.23.ffn_gate.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 724: blk.23.ffn_down.3.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 725: blk.23.ffn_up.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 726: blk.23.ffn_gate.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 727: blk.23.ffn_down.4.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 728: blk.23.ffn_up.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 729: blk.23.ffn_gate.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 730: blk.23.ffn_down.5.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 731: blk.23.ffn_up.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 732: blk.23.ffn_gate.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 733: blk.23.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ] llama_model_loader: - tensor 734: blk.23.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ] 
llama_model_loader: - tensor 735: blk.23.attn_output.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 736: blk.23.attn_q.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 737: blk.23.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 738: blk.23.ffn_down.6.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 739: blk.23.ffn_up.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 740: blk.23.ffn_gate.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 741: blk.23.ffn_down.7.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 742: blk.23.ffn_up.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 743: blk.23.attn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 744: blk.23.ffn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 745: blk.24.ffn_gate.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 746: blk.24.ffn_down.0.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 747: blk.24.ffn_up.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 748: blk.24.ffn_gate.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 749: blk.24.ffn_down.1.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 750: blk.24.ffn_up.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 751: blk.24.ffn_gate.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 752: blk.24.ffn_down.2.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 753: blk.24.ffn_up.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 754: blk.24.ffn_gate.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 755: blk.24.ffn_down.3.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 756: blk.24.ffn_up.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 757: blk.24.ffn_gate.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 758: 
blk.24.ffn_down.4.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 759: blk.24.ffn_up.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 760: blk.24.ffn_gate.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 761: blk.24.ffn_down.5.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 762: blk.24.ffn_up.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 763: blk.24.ffn_gate.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 764: blk.24.ffn_down.6.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 765: blk.24.ffn_up.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 766: blk.24.ffn_gate.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 767: blk.24.ffn_down.7.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 768: blk.24.ffn_up.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 769: blk.24.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ] llama_model_loader: - tensor 770: blk.24.attn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 771: blk.24.ffn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 772: blk.24.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 773: blk.24.attn_output.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 774: blk.24.attn_q.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 775: blk.24.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 776: blk.25.ffn_gate.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 777: blk.25.ffn_down.0.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 778: blk.25.ffn_up.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 779: blk.25.ffn_gate.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 780: blk.25.ffn_down.1.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 781: blk.25.ffn_up.1.weight q4_K [ 4096, 14336, 1, 1 ] 
llama_model_loader: - tensor 782: blk.25.ffn_gate.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 783: blk.25.ffn_down.2.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 784: blk.25.ffn_up.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 785: blk.25.ffn_gate.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 786: blk.25.ffn_down.3.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 787: blk.25.ffn_up.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 788: blk.25.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ] llama_model_loader: - tensor 789: blk.25.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 790: blk.25.attn_output.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 791: blk.25.attn_q.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 792: blk.25.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 793: blk.25.ffn_gate.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 794: blk.25.ffn_down.4.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 795: blk.25.ffn_up.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 796: blk.25.ffn_gate.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 797: blk.25.ffn_down.5.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 798: blk.25.ffn_up.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 799: blk.25.ffn_gate.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 800: blk.25.ffn_down.6.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 801: blk.25.ffn_up.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 802: blk.25.ffn_gate.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 803: blk.25.ffn_down.7.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 804: blk.25.ffn_up.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 805: 
blk.25.attn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 806: blk.25.ffn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 807: blk.26.ffn_gate.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 808: blk.26.ffn_down.0.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 809: blk.26.ffn_up.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 810: blk.26.ffn_gate.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 811: blk.26.ffn_down.1.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 812: blk.26.ffn_up.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 813: blk.26.ffn_gate.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 814: blk.26.ffn_down.2.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 815: blk.26.ffn_up.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 816: blk.26.ffn_gate.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 817: blk.26.ffn_down.3.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 818: blk.26.ffn_up.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 819: blk.26.ffn_gate.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 820: blk.26.ffn_down.4.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 821: blk.26.ffn_up.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 822: blk.26.ffn_gate.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 823: blk.26.ffn_down.5.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 824: blk.26.ffn_up.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 825: blk.26.ffn_gate.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 826: blk.26.ffn_down.6.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 827: blk.26.ffn_up.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 828: blk.26.ffn_gate.7.weight q4_K [ 4096, 14336, 1, 
1 ] llama_model_loader: - tensor 829: blk.26.ffn_down.7.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 830: blk.26.ffn_up.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 831: blk.26.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ] llama_model_loader: - tensor 832: blk.26.attn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 833: blk.26.ffn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 834: blk.26.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 835: blk.26.attn_output.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 836: blk.26.attn_q.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 837: blk.26.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 838: blk.27.ffn_gate.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 839: blk.27.ffn_down.0.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 840: blk.27.ffn_up.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 841: blk.27.ffn_gate.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 842: blk.27.ffn_down.1.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 843: blk.27.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ] llama_model_loader: - tensor 844: blk.27.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 845: blk.27.attn_output.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 846: blk.27.attn_q.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 847: blk.27.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 848: blk.27.ffn_up.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 849: blk.27.ffn_gate.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 850: blk.27.ffn_down.2.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 851: blk.27.ffn_up.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 852: blk.27.ffn_gate.3.weight q4_K [ 4096, 
14336, 1, 1 ] llama_model_loader: - tensor 853: blk.27.ffn_down.3.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 854: blk.27.ffn_up.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 855: blk.27.ffn_gate.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 856: blk.27.ffn_down.4.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 857: blk.27.ffn_up.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 858: blk.27.ffn_gate.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 859: blk.27.ffn_down.5.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 860: blk.27.ffn_up.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 861: blk.27.ffn_gate.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 862: blk.27.ffn_down.6.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 863: blk.27.ffn_up.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 864: blk.27.ffn_gate.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 865: blk.27.ffn_down.7.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 866: blk.27.ffn_up.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 867: blk.27.attn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 868: blk.27.ffn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 869: blk.28.ffn_gate.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 870: blk.28.ffn_down.0.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 871: blk.28.ffn_up.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 872: blk.28.ffn_gate.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 873: blk.28.ffn_down.1.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 874: blk.28.ffn_up.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 875: blk.28.ffn_gate.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 876: 
blk.28.ffn_down.2.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 877: blk.28.ffn_up.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 878: blk.28.ffn_gate.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 879: blk.28.ffn_down.3.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 880: blk.28.ffn_up.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 881: blk.28.ffn_gate.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 882: blk.28.ffn_down.4.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 883: blk.28.ffn_up.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 884: blk.28.ffn_gate.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 885: blk.28.ffn_down.5.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 886: blk.28.ffn_up.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 887: blk.28.ffn_gate.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 888: blk.28.ffn_down.6.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 889: blk.28.ffn_up.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 890: blk.28.ffn_gate.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 891: blk.28.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ] llama_model_loader: - tensor 892: blk.28.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 893: blk.28.attn_output.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 894: blk.28.attn_q.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 895: blk.28.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 896: blk.28.ffn_down.7.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 897: blk.28.ffn_up.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 898: blk.28.attn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 899: blk.28.ffn_norm.weight f32 [ 4096, 1, 1, 1 ] 
llama_model_loader: - tensor 900: blk.29.ffn_gate.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 901: blk.29.ffn_down.0.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 902: blk.29.ffn_up.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 903: blk.29.ffn_gate.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 904: blk.29.ffn_down.1.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 905: blk.29.ffn_up.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 906: blk.29.ffn_gate.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 907: blk.29.ffn_down.2.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 908: blk.29.ffn_up.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 909: blk.29.ffn_gate.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 910: blk.29.ffn_down.3.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 911: blk.29.ffn_up.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 912: blk.29.ffn_gate.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 913: blk.29.ffn_down.4.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 914: blk.29.ffn_up.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 915: blk.29.ffn_gate.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 916: blk.29.ffn_down.5.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 917: blk.29.ffn_up.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 918: blk.29.ffn_gate.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 919: blk.29.ffn_down.6.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 920: blk.29.ffn_up.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 921: blk.29.ffn_gate.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 922: blk.29.ffn_down.7.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 923: 
blk.29.ffn_up.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 924: blk.29.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ] llama_model_loader: - tensor 925: blk.29.attn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 926: blk.29.ffn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 927: blk.29.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 928: blk.29.attn_output.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 929: blk.29.attn_q.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 930: blk.29.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 931: blk.30.ffn_gate.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 932: blk.30.ffn_down.0.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 933: blk.30.ffn_up.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 934: blk.30.ffn_gate.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 935: blk.30.ffn_down.1.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 936: blk.30.ffn_up.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 937: blk.30.ffn_gate.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 938: blk.30.ffn_down.2.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 939: blk.30.ffn_up.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 940: blk.30.ffn_gate.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 941: blk.30.ffn_down.3.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 942: blk.30.ffn_up.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 943: blk.30.ffn_gate.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 944: blk.30.ffn_down.4.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 945: blk.30.ffn_up.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 946: blk.30.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ] 
llama_model_loader: - tensor 947: blk.30.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 948: blk.30.attn_output.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 949: blk.30.attn_q.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 950: blk.30.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 951: output.weight q6_K [ 4096, 32000, 1, 1 ] llama_model_loader: - tensor 952: blk.30.ffn_gate.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 953: blk.30.ffn_down.5.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 954: blk.30.ffn_up.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 955: blk.30.ffn_gate.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 956: blk.30.ffn_down.6.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 957: blk.30.ffn_up.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 958: blk.30.ffn_gate.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 959: blk.30.ffn_down.7.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 960: blk.30.ffn_up.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 961: blk.30.attn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 962: blk.30.ffn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 963: blk.31.ffn_gate.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 964: blk.31.ffn_down.0.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 965: blk.31.ffn_up.0.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 966: blk.31.ffn_gate.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 967: blk.31.ffn_down.1.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 968: blk.31.ffn_up.1.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 969: blk.31.ffn_gate.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 970: blk.31.ffn_down.2.weight q4_K [ 
14336, 4096, 1, 1 ] llama_model_loader: - tensor 971: blk.31.ffn_up.2.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 972: blk.31.ffn_gate.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 973: blk.31.ffn_down.3.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 974: blk.31.ffn_up.3.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 975: blk.31.ffn_gate.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 976: blk.31.ffn_down.4.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 977: blk.31.ffn_up.4.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 978: blk.31.ffn_gate.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 979: blk.31.ffn_down.5.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 980: blk.31.ffn_up.5.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 981: blk.31.ffn_gate.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 982: blk.31.ffn_down.6.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 983: blk.31.ffn_up.6.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 984: blk.31.ffn_gate.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 985: blk.31.ffn_down.7.weight q4_K [ 14336, 4096, 1, 1 ] llama_model_loader: - tensor 986: blk.31.ffn_up.7.weight q4_K [ 4096, 14336, 1, 1 ] llama_model_loader: - tensor 987: blk.31.ffn_gate_inp.weight f16 [ 4096, 8, 1, 1 ] llama_model_loader: - tensor 988: blk.31.attn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 989: blk.31.ffn_norm.weight f32 [ 4096, 1, 1, 1 ] llama_model_loader: - tensor 990: blk.31.attn_k.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 991: blk.31.attn_output.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 992: blk.31.attn_q.weight q4_K [ 4096, 4096, 1, 1 ] llama_model_loader: - tensor 993: blk.31.attn_v.weight q8_0 [ 4096, 1024, 1, 1 ] llama_model_loader: - tensor 994: 
output_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = mistralai
llama_model_loader: - kv 2: llama.context_length u32 = 32768
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.expert_count u32 = 8
llama_model_loader: - kv 10: llama.expert_used_count u32 = 2
llama_model_loader: - kv 11: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 12: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 13: general.file_type u32 = 15
llama_model_loader: - kv 14: tokenizer.ggml.model str = llama
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 16: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 20: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 21: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 22: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 23: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type f16: 32 tensors
llama_model_loader: - type q8_0: 64 tensors
llama_model_loader: - type q4_K: 833 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 8
llm_load_print_meta: n_expert_used = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = mostly Q4_K - Medium
llm_load_print_meta: model params = 46.70 B
llm_load_print_meta: model size = 24.62 GiB (4.53 BPW)
llm_load_print_meta: general.name = mistralai
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.39 MiB
llm_load_tensors: using CUDA for GPU acceleration
llm_load_tensors: mem required = 7999.20 MiB
llm_load_tensors: offloading 22 repeating layers to GPU
llm_load_tensors: offloaded 22/33 layers to GPU
llm_load_tensors: VRAM used: 17217.06 MiB
....................................................................................................
llama_new_context_with_model: n_ctx = 32768
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: VRAM kv self = 2816.00 MB
llama_new_context_with_model: KV self size = 4096.00 MiB, K (f16): 2048.00 MiB, V (f16): 2048.00 MiB
llama_build_graph: non-view tensors processed: 1124/1124
llama_new_context_with_model: compute buffer total size = 2167.35 MiB
llama_new_context_with_model: VRAM scratch buffer: 2164.04 MiB
llama_new_context_with_model: total VRAM used: 22197.10 MiB (model: 17217.06 MiB, context: 4980.04 MiB)
{"timestamp":1702930214,"level":"INFO","function":"main","line":3035,"message":"HTTP server listening","hostname":"127.0.0.1","port":59735}
{"timestamp":1702930214,"level":"INFO","function":"log_server_request","line":2596,"message":"request","remote_addr":"127.0.0.1","remote_port":51756,"status":200,"method":"HEAD","path":"/","params":{}}
2023/12/18 20:10:14 llama.go:508: llama runner started in 17.801696 seconds
2023/12/18 20:10:14 llama.go:577: loaded 0 images
{"timestamp":1702930227,"level":"INFO","function":"log_server_request","line":2596,"message":"request","remote_addr":"127.0.0.1","remote_port":51756,"status":200,"method":"POST","path":"/completion","params":{}}
{"timestamp":1702930227,"level":"INFO","function":"log_server_request","line":2596,"message":"request","remote_addr":"127.0.0.1","remote_port":46844,"status":200,"method":"POST","path":"/tokenize","params":{}}
[GIN] 2023/12/18 - 20:10:27 | 200 | 31.133913884s | 172.28.0.4 | POST "/api/generate"
{"timestamp":1702930227,"level":"INFO","function":"log_server_request","line":2596,"message":"request","remote_addr":"127.0.0.1","remote_port":46844,"status":200,"method":"HEAD","path":"/","params":{}} 2023/12/18 20:10:27 llama.go:577: loaded 0 images {"timestamp":1702930255,"level":"INFO","function":"log_server_request","line":2596,"message":"request","remote_addr":"127.0.0.1","remote_port":46844,"status":200,"method":"POST","path":"/completion","params":{}} {"timestamp":1702930255,"level":"INFO","function":"log_server_request","line":2596,"message":"request","remote_addr":"127.0.0.1","remote_port":48110,"status":200,"method":"POST","path":"/tokenize","params":{}} [GIN] 2023/12/18 - 20:10:55 | 200 | 27.681923952s | 172.28.0.4 | POST "/api/generate" 2023/12/18 20:15:57 llama.go:451: signal: killed 2023/12/18 20:15:57 llama.go:525: llama runner stopped successfully ```
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1585/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1585/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8279
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8279/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8279/comments
https://api.github.com/repos/ollama/ollama/issues/8279/events
https://github.com/ollama/ollama/pull/8279
2,764,987,796
PR_kwDOJ0Z1Ps6GhtLi
8,279
Improved offline installation experience (install.sh)
{ "login": "PatZer0", "id": 96248319, "node_id": "U_kgDOBbyh_w", "avatar_url": "https://avatars.githubusercontent.com/u/96248319?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PatZer0", "html_url": "https://github.com/PatZer0", "followers_url": "https://api.github.com/users/PatZer0/followers", "following_url": "https://api.github.com/users/PatZer0/following{/other_user}", "gists_url": "https://api.github.com/users/PatZer0/gists{/gist_id}", "starred_url": "https://api.github.com/users/PatZer0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PatZer0/subscriptions", "organizations_url": "https://api.github.com/users/PatZer0/orgs", "repos_url": "https://api.github.com/users/PatZer0/repos", "events_url": "https://api.github.com/users/PatZer0/events{/privacy}", "received_events_url": "https://api.github.com/users/PatZer0/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
0
2025-01-01T10:36:41
2025-01-01T10:38:22
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8279", "html_url": "https://github.com/ollama/ollama/pull/8279", "diff_url": "https://github.com/ollama/ollama/pull/8279.diff", "patch_url": "https://github.com/ollama/ollama/pull/8279.patch", "merged_at": null }
As the current install.sh script requires a good internet connection, this improvement enables users to pre-download the required files by other means and install offline with the script. Adds 2 new functions: `note`: displays a note in yellow. `download_and_extract`: handles all download operations; it shows the filename and URL when a download starts, which allows users to fetch the required files manually and install Ollama offline or under restricted network conditions. Tested on Jetson Orin NX (JetPack 6.1 / Ubuntu 22.04) **Online Installation:** ``` >>> Installing ollama to /usr/local >>> Downloading Linux arm64 bundle >>> Downloading ollama-linux-arm64.tgz from https://ollama.com/download/ollama-linux-arm64.tgz NOTE: If you have trouble downloading, use Ctrl-C to terminate the script and manually download the file to the current directory, then re-run the script. O=- # # # # ``` **Offline Installation (with file downloaded in the same dir):** ``` >>> Installing ollama to /usr/local >>> Downloading Linux arm64 bundle >>> Download skipped, using local existing file: ollama-linux-arm64.tgz >>> Downloading JetPack 6 components >>> Download skipped, using local existing file: ollama-linux-arm64-jetpack6.tgz >>> Adding ollama user to render group... >>> Adding ollama user to video group... >>> Adding current user to ollama group... >>> Creating ollama systemd service... >>> Enabling and starting ollama service... >>> NVIDIA JetPack ready. >>> The Ollama API is now available at 127.0.0.1:11434. >>> Install complete. Run "ollama" from the command line. ```
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8279/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3984
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3984/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3984/comments
https://api.github.com/repos/ollama/ollama/issues/3984/events
https://github.com/ollama/ollama/pull/3984
2,267,257,865
PR_kwDOJ0Z1Ps5t64Hm
3,984
types/model: relax name length constraint from 2 to 1
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-04-27T23:45:51
2024-04-28T00:58:42
2024-04-28T00:58:41
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3984", "html_url": "https://github.com/ollama/ollama/pull/3984", "diff_url": "https://github.com/ollama/ollama/pull/3984.diff", "patch_url": "https://github.com/ollama/ollama/pull/3984.patch", "merged_at": "2024-04-28T00:58:41" }
null
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3984/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3984/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2856
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2856/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2856/comments
https://api.github.com/repos/ollama/ollama/issues/2856/events
https://github.com/ollama/ollama/issues/2856
2,162,689,469
I_kwDOJ0Z1Ps6A6AW9
2,856
I hope Ollama can add an embeddings interface compatible with OpenAI API
{ "login": "zhijianguo", "id": 3388592, "node_id": "MDQ6VXNlcjMzODg1OTI=", "avatar_url": "https://avatars.githubusercontent.com/u/3388592?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhijianguo", "html_url": "https://github.com/zhijianguo", "followers_url": "https://api.github.com/users/zhijianguo/followers", "following_url": "https://api.github.com/users/zhijianguo/following{/other_user}", "gists_url": "https://api.github.com/users/zhijianguo/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhijianguo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhijianguo/subscriptions", "organizations_url": "https://api.github.com/users/zhijianguo/orgs", "repos_url": "https://api.github.com/users/zhijianguo/repos", "events_url": "https://api.github.com/users/zhijianguo/events{/privacy}", "received_events_url": "https://api.github.com/users/zhijianguo/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2024-03-01T06:18:06
2024-03-12T00:17:50
2024-03-12T00:17:49
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I hope Ollama can add an embeddings interface compatible with OpenAI API
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2856/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2856/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5316
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5316/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5316/comments
https://api.github.com/repos/ollama/ollama/issues/5316/events
https://github.com/ollama/ollama/pull/5316
2,377,052,279
PR_kwDOJ0Z1Ps5zt1ac
5,316
llm: architecture patch
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-06-27T04:10:16
2024-06-27T04:38:15
2024-06-27T04:38:13
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5316", "html_url": "https://github.com/ollama/ollama/pull/5316", "diff_url": "https://github.com/ollama/ollama/pull/5316.diff", "patch_url": "https://github.com/ollama/ollama/pull/5316.patch", "merged_at": "2024-06-27T04:38:13" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5316/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5316/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2251
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2251/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2251/comments
https://api.github.com/repos/ollama/ollama/issues/2251/events
https://github.com/ollama/ollama/pull/2251
2,104,780,112
PR_kwDOJ0Z1Ps5lSZ3d
2,251
update submodule to `1cfb5372cf5707c8ec6dde7c874f4a44a6c4c915`
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-01-29T06:53:55
2024-02-07T20:08:13
2024-02-07T20:08:13
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2251", "html_url": "https://github.com/ollama/ollama/pull/2251", "diff_url": "https://github.com/ollama/ollama/pull/2251.diff", "patch_url": "https://github.com/ollama/ollama/pull/2251.patch", "merged_at": null }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2251/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2251/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6632
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6632/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6632/comments
https://api.github.com/repos/ollama/ollama/issues/6632/events
https://github.com/ollama/ollama/issues/6632
2,505,083,951
I_kwDOJ0Z1Ps6VUIwv
6,632
New Command-r models output nonsense
{ "login": "xmaayy", "id": 21166352, "node_id": "MDQ6VXNlcjIxMTY2MzUy", "avatar_url": "https://avatars.githubusercontent.com/u/21166352?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xmaayy", "html_url": "https://github.com/xmaayy", "followers_url": "https://api.github.com/users/xmaayy/followers", "following_url": "https://api.github.com/users/xmaayy/following{/other_user}", "gists_url": "https://api.github.com/users/xmaayy/gists{/gist_id}", "starred_url": "https://api.github.com/users/xmaayy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xmaayy/subscriptions", "organizations_url": "https://api.github.com/users/xmaayy/orgs", "repos_url": "https://api.github.com/users/xmaayy/repos", "events_url": "https://api.github.com/users/xmaayy/events{/privacy}", "received_events_url": "https://api.github.com/users/xmaayy/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677279472, "node_id": "LA_kwDOJ0Z1Ps8AAAABjf8y8A", "url": "https://api.github.com/repos/ollama/ollama/labels/macos", "name": "macos", "color": "E2DBC0", "default": false, "description": "" }, { "id": 6849881759, "node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw", "url": "https://api.github.com/repos/ollama/ollama/labels/memory", "name": "memory", "color": "5017EA", "default": false, "description": "" } ]
closed
false
null
[]
null
7
2024-09-04T11:33:14
2024-09-04T18:04:18
2024-09-04T14:02:26
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? The new 4-bit quants of command-r (I dont have the VRAM for higher quants) output nonsense. ``` bash ≻ ollama run command-r pulling manifest pulling 8e0609b8f0fe... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏ 18 GB pulling b3741b7b9ce5... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏ 77 B pulling 922095537bc1... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏ 2.9 KB pulling 945eaa8b1428... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏ 13 KB pulling 36b9655abe6a... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏ 81 B pulling 8e63f21e12fb... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏ 568 B verifying sha256 digest writing manifest success >>> Hello general obligatoireasıası obligatoireasıasıası obligatoire obligatoireası obligatoireasıası obligatoireasıasıası obligatoireasıasıasıasıası obligatoire obligatoire obligatoireasıasıası obligatoireasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıası >>> What actions can you perform ştıhind ministrowieası SummaryasıştıştıhindhindhindasıhindhindasıasıştıştıhindasıasıasıhindştıştıştıştıasıhindasıhindasıasıasıasıhindasıasıştıştıasıhindasıasıasıasıasıasıasıasıasıasıaSummaryasıştıştıhindhindhindasıhindhindasıasıştıştıhindasıasıasıhindştıştıştıştıasıhindasıhindasıasıasıasıhindasıasıştıştıasıhindasıasıasasıasıasıasıasıasıasıasıasıasıasıasıştıhindştıhindştıhindasıhindasıasıası ministrowie Summaryasıasıhindhindştıştıhindasıasıasıasıasıasıasıasıasıası ``` Specifying version ``` bash ≻ ollama run command-r:35b-08-2024-q4_K_S pulling manifest pulling 3ed323d43be5... 
100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏ 18 GB pulling b3741b7b9ce5... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏ 77 B pulling 922095537bc1... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏ 2.9 KB pulling 945eaa8b1428... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏ 13 KB pulling 36b9655abe6a... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏ 81 B pulling 93af4240a02c... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████▏ 570 B verifying sha256 digest writing manifest success >>> Hello generalası obligatoireasıası GENERası expulsionası obligatoireasıasıası primarioasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıasıası ``` ### OS macOS ### GPU Apple ### CPU Apple ### Ollama version 0.3.9
{ "login": "xmaayy", "id": 21166352, "node_id": "MDQ6VXNlcjIxMTY2MzUy", "avatar_url": "https://avatars.githubusercontent.com/u/21166352?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xmaayy", "html_url": "https://github.com/xmaayy", "followers_url": "https://api.github.com/users/xmaayy/followers", "following_url": "https://api.github.com/users/xmaayy/following{/other_user}", "gists_url": "https://api.github.com/users/xmaayy/gists{/gist_id}", "starred_url": "https://api.github.com/users/xmaayy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xmaayy/subscriptions", "organizations_url": "https://api.github.com/users/xmaayy/orgs", "repos_url": "https://api.github.com/users/xmaayy/repos", "events_url": "https://api.github.com/users/xmaayy/events{/privacy}", "received_events_url": "https://api.github.com/users/xmaayy/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6632/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6632/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4750
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4750/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4750/comments
https://api.github.com/repos/ollama/ollama/issues/4750/events
https://github.com/ollama/ollama/issues/4750
2,327,518,807
I_kwDOJ0Z1Ps6Kux5X
4,750
Garbage output running llama3 GGUF model
{ "login": "DiptenduIDEAS", "id": 156412399, "node_id": "U_kgDOCVKp7w", "avatar_url": "https://avatars.githubusercontent.com/u/156412399?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DiptenduIDEAS", "html_url": "https://github.com/DiptenduIDEAS", "followers_url": "https://api.github.com/users/DiptenduIDEAS/followers", "following_url": "https://api.github.com/users/DiptenduIDEAS/following{/other_user}", "gists_url": "https://api.github.com/users/DiptenduIDEAS/gists{/gist_id}", "starred_url": "https://api.github.com/users/DiptenduIDEAS/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DiptenduIDEAS/subscriptions", "organizations_url": "https://api.github.com/users/DiptenduIDEAS/orgs", "repos_url": "https://api.github.com/users/DiptenduIDEAS/repos", "events_url": "https://api.github.com/users/DiptenduIDEAS/events{/privacy}", "received_events_url": "https://api.github.com/users/DiptenduIDEAS/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-05-31T10:38:49
2024-07-09T07:04:34
2024-07-05T04:04:04
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I downloaded https://huggingface.co/QuantFactory/Meta-Llama-3-8B-GGUF/blob/main/Meta-Llama-3-8B.Q2_K.gguf, created a Modelfile, ran `ollama create example -f Modelfile`, and then `ollama run example`. When asking the question _why is the sky blue?_ at the >>> prompt, I get garbage output (a series of numbers) ![image](https://github.com/ollama/ollama/assets/156412399/af66c002-32cf-42a2-bfb8-5e1edf890248) ### OS Windows ### GPU Other ### CPU Intel ### Ollama version 0.1.31
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4750/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4750/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4339
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4339/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4339/comments
https://api.github.com/repos/ollama/ollama/issues/4339/events
https://github.com/ollama/ollama/pull/4339
2,290,628,111
PR_kwDOJ0Z1Ps5vJXEh
4,339
chore: update dependencies across the board
{ "login": "appleboy", "id": 21979, "node_id": "MDQ6VXNlcjIxOTc5", "avatar_url": "https://avatars.githubusercontent.com/u/21979?v=4", "gravatar_id": "", "url": "https://api.github.com/users/appleboy", "html_url": "https://github.com/appleboy", "followers_url": "https://api.github.com/users/appleboy/followers", "following_url": "https://api.github.com/users/appleboy/following{/other_user}", "gists_url": "https://api.github.com/users/appleboy/gists{/gist_id}", "starred_url": "https://api.github.com/users/appleboy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/appleboy/subscriptions", "organizations_url": "https://api.github.com/users/appleboy/orgs", "repos_url": "https://api.github.com/users/appleboy/repos", "events_url": "https://api.github.com/users/appleboy/events{/privacy}", "received_events_url": "https://api.github.com/users/appleboy/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-05-11T03:19:53
2024-12-29T19:24:24
2024-12-29T19:24:24
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4339", "html_url": "https://github.com/ollama/ollama/pull/4339", "diff_url": "https://github.com/ollama/ollama/pull/4339.diff", "patch_url": "https://github.com/ollama/ollama/pull/4339.patch", "merged_at": null }
- Update `github.com/gin-gonic/gin` from `v1.9.1` to `v1.10.0` - Update `github.com/stretchr/testify` from `v1.8.4` to `v1.9.0` - Add `github.com/bytedance/sonic/loader` and `github.com/cloudwego/*` as new indirect dependencies - Update `github.com/bytedance/sonic` from `v1.9.1` to `v1.11.6` and remove old indirect dependencies - Update `github.com/gabriel-vasile/mimetype` from `v1.4.2` to `v1.4.3` - Update `github.com/go-playground/validator/v10` from `v10.14.0` to `v10.20.0` - Update various indirect dependencies to newer versions, including `github.com/klauspost/cpuid/v2`, `github.com/leodido/go-urn`, `github.com/mattn/go-isatty`, `github.com/pelletier/go-toml/v2`, `github.com/ugorji/go/codec`, `golang.org/x/*` packages, and `google.golang.org/protobuf` - Remove several outdated indirect dependencies
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4339/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4339/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2744
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2744/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2744/comments
https://api.github.com/repos/ollama/ollama/issues/2744/events
https://github.com/ollama/ollama/pull/2744
2,152,812,372
PR_kwDOJ0Z1Ps5n1_lQ
2,744
Update types.go
{ "login": "eltociear", "id": 22633385, "node_id": "MDQ6VXNlcjIyNjMzMzg1", "avatar_url": "https://avatars.githubusercontent.com/u/22633385?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eltociear", "html_url": "https://github.com/eltociear", "followers_url": "https://api.github.com/users/eltociear/followers", "following_url": "https://api.github.com/users/eltociear/following{/other_user}", "gists_url": "https://api.github.com/users/eltociear/gists{/gist_id}", "starred_url": "https://api.github.com/users/eltociear/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eltociear/subscriptions", "organizations_url": "https://api.github.com/users/eltociear/orgs", "repos_url": "https://api.github.com/users/eltociear/repos", "events_url": "https://api.github.com/users/eltociear/events{/privacy}", "received_events_url": "https://api.github.com/users/eltociear/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-02-25T15:21:39
2024-02-25T18:41:26
2024-02-25T18:41:25
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2744", "html_url": "https://github.com/ollama/ollama/pull/2744", "diff_url": "https://github.com/ollama/ollama/pull/2744.diff", "patch_url": "https://github.com/ollama/ollama/pull/2744.patch", "merged_at": "2024-02-25T18:41:25" }
specfied -> specified
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2744/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2744/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/260
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/260/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/260/comments
https://api.github.com/repos/ollama/ollama/issues/260/events
https://github.com/ollama/ollama/pull/260
1,833,816,760
PR_kwDOJ0Z1Ps5XC0de
260
override ggml-metal if the file is different
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-08-02T19:51:01
2023-08-02T20:01:47
2023-08-02T20:01:46
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/260", "html_url": "https://github.com/ollama/ollama/pull/260", "diff_url": "https://github.com/ollama/ollama/pull/260.diff", "patch_url": "https://github.com/ollama/ollama/pull/260.patch", "merged_at": "2023-08-02T20:01:46" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/260/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/260/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1286
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1286/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1286/comments
https://api.github.com/repos/ollama/ollama/issues/1286/events
https://github.com/ollama/ollama/issues/1286
2,011,934,231
I_kwDOJ0Z1Ps53664X
1,286
Change environment variables as settings to command parameters
{ "login": "Talleyrand-34", "id": 119809076, "node_id": "U_kgDOByQkNA", "avatar_url": "https://avatars.githubusercontent.com/u/119809076?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Talleyrand-34", "html_url": "https://github.com/Talleyrand-34", "followers_url": "https://api.github.com/users/Talleyrand-34/followers", "following_url": "https://api.github.com/users/Talleyrand-34/following{/other_user}", "gists_url": "https://api.github.com/users/Talleyrand-34/gists{/gist_id}", "starred_url": "https://api.github.com/users/Talleyrand-34/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Talleyrand-34/subscriptions", "organizations_url": "https://api.github.com/users/Talleyrand-34/orgs", "repos_url": "https://api.github.com/users/Talleyrand-34/repos", "events_url": "https://api.github.com/users/Talleyrand-34/events{/privacy}", "received_events_url": "https://api.github.com/users/Talleyrand-34/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2023-11-27T10:12:00
2024-02-20T01:18:58
2024-02-20T01:18:58
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
## Change the method of configuring settings Instead of environment variables, use internal settings, at least as the user interface. ### Example Instead of: > OLLAMA_MODELS=/path/to/file; ollama run model Run: >ollama conf path_to_models /path/to/file >ollama run model Or: >ollama run model -f /path/to/file
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1286/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1286/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2553
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2553/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2553/comments
https://api.github.com/repos/ollama/ollama/issues/2553/events
https://github.com/ollama/ollama/pull/2553
2,139,667,115
PR_kwDOJ0Z1Ps5nJQKS
2,553
Harden AMD driver lookup logic
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-02-17T00:22:52
2024-02-17T01:23:15
2024-02-17T01:23:12
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2553", "html_url": "https://github.com/ollama/ollama/pull/2553", "diff_url": "https://github.com/ollama/ollama/pull/2553.diff", "patch_url": "https://github.com/ollama/ollama/pull/2553.patch", "merged_at": "2024-02-17T01:23:12" }
It looks like the version file doesn't exist on older(?) drivers. Fixes #2502
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2553/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2553/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8519
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8519/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8519/comments
https://api.github.com/repos/ollama/ollama/issues/8519/events
https://github.com/ollama/ollama/issues/8519
2,802,085,802
I_kwDOJ0Z1Ps6nBG-q
8,519
CLI: Managing models like (Docker) containers via ID
{ "login": "wijjj", "id": 726919, "node_id": "MDQ6VXNlcjcyNjkxOQ==", "avatar_url": "https://avatars.githubusercontent.com/u/726919?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wijjj", "html_url": "https://github.com/wijjj", "followers_url": "https://api.github.com/users/wijjj/followers", "following_url": "https://api.github.com/users/wijjj/following{/other_user}", "gists_url": "https://api.github.com/users/wijjj/gists{/gist_id}", "starred_url": "https://api.github.com/users/wijjj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wijjj/subscriptions", "organizations_url": "https://api.github.com/users/wijjj/orgs", "repos_url": "https://api.github.com/users/wijjj/repos", "events_url": "https://api.github.com/users/wijjj/events{/privacy}", "received_events_url": "https://api.github.com/users/wijjj/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
0
2025-01-21T14:58:12
2025-01-21T14:58:30
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Could we please have `ollama ps` and `ollama stop <ID>` instead of `ollama stop this-is-a-llm-with-a-pretty-long-name:1337b_instruzioni_v3.33_q5_K_S`, or at least tab autocompletion? Sorry in advance: maybe this is already a duplicate (I did look for it!), as it is not really a highly inventive idea.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8519/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8519/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/8463
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8463/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8463/comments
https://api.github.com/repos/ollama/ollama/issues/8463/events
https://github.com/ollama/ollama/issues/8463
2,793,927,929
I_kwDOJ0Z1Ps6mh_T5
8,463
AMD Radeon RX6700XT unable to take input
{ "login": "bitfl0wer", "id": 39242991, "node_id": "MDQ6VXNlcjM5MjQyOTkx", "avatar_url": "https://avatars.githubusercontent.com/u/39242991?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bitfl0wer", "html_url": "https://github.com/bitfl0wer", "followers_url": "https://api.github.com/users/bitfl0wer/followers", "following_url": "https://api.github.com/users/bitfl0wer/following{/other_user}", "gists_url": "https://api.github.com/users/bitfl0wer/gists{/gist_id}", "starred_url": "https://api.github.com/users/bitfl0wer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bitfl0wer/subscriptions", "organizations_url": "https://api.github.com/users/bitfl0wer/orgs", "repos_url": "https://api.github.com/users/bitfl0wer/repos", "events_url": "https://api.github.com/users/bitfl0wer/events{/privacy}", "received_events_url": "https://api.github.com/users/bitfl0wer/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
5
2025-01-16T22:41:02
2025-01-18T10:09:36
2025-01-18T10:09:35
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When trying to use ollamas APIs, the llama server crashes when loading. ## Obligatory System Information CPU: AMD Ryzen 9 7900 RAM: 64GB DDR5 OS: Fedora Linux 41 (Workstation Edition) x86_64 Ollama host: Docker ### docker-compose.yml ```yml services: webui: image: ghcr.io/open-webui/open-webui:main ports: - 8003:8080/tcp environment: - OLLAMA_BASE_URL=http://ollama:11434 volumes: - webui:/app/backend/data depends_on: - ollama restart: unless-stopped ollama: image: ollama/ollama:rocm environment: - HSA_OVERRIDE_GFX_VERSION="10.3.1" - AMD_SERIALIZE_KERNEL=3 - OLLAMA_DEBUG=1 - HIP_VISIBLE_DEVICES=0 - OLLAMA_LLM_LIBRARY=rocm_v60102 ports: - 11434:11434/tcp volumes: - ollama:/root/.ollama devices: - /dev/kfd:/dev/kfd - /dev/dri:/dev/dri restart: unless-stopped volumes: ollama: webui: ``` ### Console output of error https://pastebin.com/pTW3FMCp Line 55 already hints at an error. The next troubling thing I could find was at line 166. ### rocminfo ``` ❯ rocminfo ROCk module is loaded ===================== HSA System Attributes ===================== Runtime Version: 1.1 Runtime Ext Version: 1.6 System Timestamp Freq.: 1000.000000MHz Sig. Max Wait Duration: 18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count) Machine Model: LARGE System Endianness: LITTLE Mwaitx: DISABLED DMAbuf Support: YES ========== HSA Agents ========== ******* Agent 1 ******* Name: AMD Ryzen 9 7900 12-Core Processor ...[redacted because irrelevant] Accessible by all: TRUE ISA Info: ******* Agent 2 ******* Name: gfx1031 Uuid: GPU-XX Marketing Name: AMD Radeon RX 6700 XT Vendor Name: AMD Feature: KERNEL_DISPATCH Profile: BASE_PROFILE Float Round Mode: NEAR Max Queue Number: 128(0x80) Queue Min Size: 64(0x40) Queue Max Size: 131072(0x20000) Queue Type: MULTI Node: 1 Device Type: GPU Cache Info: L1: 16(0x10) KB L2: 3072(0xc00) KB L3: 98304(0x18000) KB Chip ID: 29663(0x73df) ASIC Revision: 0(0x0) Cacheline Size: 128(0x80) Max Clock Freq. 
(MHz): 2855 BDFID: 768 Internal Node ID: 1 Compute Unit: 40 SIMDs per CU: 2 Shader Engines: 2 Shader Arrs. per Eng.: 2 WatchPts on Addr. Ranges:4 Coherent Host Access: FALSE Memory Properties: Features: KERNEL_DISPATCH Fast F16 Operation: TRUE Wavefront Size: 32(0x20) Workgroup Max Size: 1024(0x400) Workgroup Max Size per Dimension: x 1024(0x400) y 1024(0x400) z 1024(0x400) Max Waves Per CU: 32(0x20) Max Work-item Per CU: 1024(0x400) Grid Max Size: 4294967295(0xffffffff) Grid Max Size per Dimension: x 4294967295(0xffffffff) y 4294967295(0xffffffff) z 4294967295(0xffffffff) Max fbarriers/Workgrp: 32 Packet Processor uCode:: 122 SDMA engine uCode:: 80 IOMMU Support:: None Pool Info: Pool 1 Segment: GLOBAL; FLAGS: COARSE GRAINED Size: 12566528(0xbfc000) KB Allocatable: TRUE Alloc Granule: 4KB Alloc Recommended Granule:2048KB Alloc Alignment: 4KB Accessible by all: FALSE Pool 2 Segment: GLOBAL; FLAGS: EXTENDED FINE GRAINED Size: 12566528(0xbfc000) KB Allocatable: TRUE Alloc Granule: 4KB Alloc Recommended Granule:2048KB Alloc Alignment: 4KB Accessible by all: FALSE Pool 3 Segment: GROUP Size: 64(0x40) KB Allocatable: FALSE Alloc Granule: 0KB Alloc Recommended Granule:0KB Alloc Alignment: 0KB Accessible by all: FALSE ISA Info: ISA 1 Name: amdgcn-amd-amdhsa--gfx1031 Machine Models: HSA_MACHINE_MODEL_LARGE Profiles: HSA_PROFILE_BASE Default Rounding Mode: NEAR Default Rounding Mode: NEAR Fast f16: TRUE Workgroup Max Size: 1024(0x400) Workgroup Max Size per Dimension: x 1024(0x400) y 1024(0x400) z 1024(0x400) Grid Max Size: 4294967295(0xffffffff) Grid Max Size per Dimension: x 4294967295(0xffffffff) y 4294967295(0xffffffff) z 4294967295(0xffffffff) FBarrier Max Size: 32 *** Done *** ``` My user is in the `video` and `render` groups. 
### `ls` output of relevant `/dev` entries ``` ❯ ls -lag /dev/dri /dev/kfd /dev/dri/* crw-rw----@ 226,1 root video 16 Jan 18:19 /dev/dri/card1 crw-rw-rw- 234,0 root render 16 Jan 15:46 /dev/kfd crw-rw-rw- 226,128 root render 16 Jan 15:46 /dev/dri/renderD128 /dev/dri: drwxr-xr-x - root root 16 Jan 15:45 ./ drwxr-xr-x - root root 16 Jan 23:20 ../ drwxr-xr-x - root root 16 Jan 15:46 by-path/ crw-rw----@ 226,1 root video 16 Jan 18:19 card1 crw-rw-rw- 226,128 root render 16 Jan 15:46 renderD128 /dev/dri/by-path: drwxr-xr-x - root root 16 Jan 15:46 ./ drwxr-xr-x - root root 16 Jan 15:45 ../ lrwxrwxrwx@ 8 root root 16 Jan 15:46 pci-0000:03:00.0-card -> ../card1 lrwxrwxrwx 13 root root 16 Jan 15:46 pci-0000:03:00.0-render -> ../renderD128 ``` I just pulled the docker image half an hour ago, so it should be the most up to date. Nevertheless, here is the hash of the image in use: `sha256:9874ece252bfd8404e2795066649953255abc29e7d6aeab1966d19fadf9f06c4` If any more information is needed, I am happy to supply it. :) Thank you a lot for your time and effort. ### OS Linux ### GPU AMD ### CPU AMD ### Ollama version _No response_
{ "login": "bitfl0wer", "id": 39242991, "node_id": "MDQ6VXNlcjM5MjQyOTkx", "avatar_url": "https://avatars.githubusercontent.com/u/39242991?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bitfl0wer", "html_url": "https://github.com/bitfl0wer", "followers_url": "https://api.github.com/users/bitfl0wer/followers", "following_url": "https://api.github.com/users/bitfl0wer/following{/other_user}", "gists_url": "https://api.github.com/users/bitfl0wer/gists{/gist_id}", "starred_url": "https://api.github.com/users/bitfl0wer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bitfl0wer/subscriptions", "organizations_url": "https://api.github.com/users/bitfl0wer/orgs", "repos_url": "https://api.github.com/users/bitfl0wer/repos", "events_url": "https://api.github.com/users/bitfl0wer/events{/privacy}", "received_events_url": "https://api.github.com/users/bitfl0wer/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8463/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8463/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3927
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3927/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3927/comments
https://api.github.com/repos/ollama/ollama/issues/3927/events
https://github.com/ollama/ollama/issues/3927
2,264,807,387
I_kwDOJ0Z1Ps6G_jfb
3,927
function calling with autogen does not work
{ "login": "patrickwasp", "id": 70671760, "node_id": "MDQ6VXNlcjcwNjcxNzYw", "avatar_url": "https://avatars.githubusercontent.com/u/70671760?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickwasp", "html_url": "https://github.com/patrickwasp", "followers_url": "https://api.github.com/users/patrickwasp/followers", "following_url": "https://api.github.com/users/patrickwasp/following{/other_user}", "gists_url": "https://api.github.com/users/patrickwasp/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickwasp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickwasp/subscriptions", "organizations_url": "https://api.github.com/users/patrickwasp/orgs", "repos_url": "https://api.github.com/users/patrickwasp/repos", "events_url": "https://api.github.com/users/patrickwasp/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickwasp/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
2024-04-26T02:10:55
2024-07-30T02:50:42
2024-07-26T00:50:33
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ```python #!/usr/local/bin/python3.12 from typing import Literal from pydantic import BaseModel, Field from typing_extensions import Annotated import autogen from autogen.cache import Cache # MODEL_NAME = "gpt-3.5-turbo" # API_URL = "https://api.openai.com/v1/" # API_KEY = "sk-XYZ" MODEL_NAME = "llama3:8k" API_URL = "http://10.4.4.207:11434/v1" API_KEY = "ollama" config_list = [ { "model": MODEL_NAME, "base_url": API_URL, "api_key": API_KEY, } ] llm_config = { "config_list": config_list, "timeout": 120, } user_proxy = autogen.UserProxyAgent( name="user_proxy", is_termination_msg=lambda x: x.get("content", "") and x.get("content", "").rstrip().endswith("TERMINATE"), human_input_mode="NEVER", max_consecutive_auto_reply=10, ) chatbot = autogen.AssistantAgent( name="chatbot", system_message="For currency exchange tasks, only use the functions you have been provided with. Reply TERMINATE when the task is done.", llm_config=llm_config, ) CurrencySymbol = Literal["USD", "EUR"] def exchange_rate( base_currency: CurrencySymbol, quote_currency: CurrencySymbol ) -> float: if base_currency == quote_currency: return 1.0 elif base_currency == "USD" and quote_currency == "EUR": return 1 / 1.1 elif base_currency == "EUR" and quote_currency == "USD": return 1.1 else: raise ValueError(f"Unknown currencies {base_currency}, {quote_currency}") class Currency(BaseModel): currency: Annotated[CurrencySymbol, Field(..., description="Currency symbol")] amount: Annotated[float, Field(0, description="Amount of currency", ge=0)] @user_proxy.register_for_execution() @chatbot.register_for_llm(description="Currency exchange calculator.") def currency_calculator( base: Annotated[Currency, "Base currency: amount and currency symbol"], quote_currency: Annotated[CurrencySymbol, "Quote currency symbol"] = "USD", ) -> Currency: quote_amount = exchange_rate(base.currency, quote_currency) * base.amount return Currency(amount=quote_amount, currency=quote_currency) def main(): with 
Cache.disk() as cache: user_proxy.initiate_chat( chatbot, message="How much is 112.23 Euros in US Dollars?", summary_method="last_msg", cache=cache, ) if __name__ == "__main__": main() ``` output using gpt-3.5-turbo: ```python3.12 autogen_tools_example.py user_proxy (to chatbot): How much is 112.23 Euros in US Dollars? -------------------------------------------------------------------------------- chatbot (to user_proxy): ***** Suggested tool call (call_TdfMydJ9TeKBbz8QRE5ZHl2k): currency_calculator ***** Arguments: {"base":{"currency":"EUR","amount":112.23},"quote_currency":"USD"} ************************************************************************************ -------------------------------------------------------------------------------- >>>>>>>> EXECUTING FUNCTION currency_calculator... user_proxy (to chatbot): user_proxy (to chatbot): ***** Response from calling tool (call_TdfMydJ9TeKBbz8QRE5ZHl2k) ***** {"currency":"USD","amount":123.45300000000002} ********************************************************************** -------------------------------------------------------------------------------- chatbot (to user_proxy): 112.23 Euros is equivalent to 123.45 US Dollars. -------------------------------------------------------------------------------- user_proxy (to chatbot): -------------------------------------------------------------------------------- chatbot (to user_proxy): TERMINATE -------------------------------------------------------------------------------- ``` output using llama3 running with ollama: ``` user_proxy (to chatbot): How much is 112.23 Euros in US Dollars? -------------------------------------------------------------------------------- chatbot (to user_proxy): To convert 112.23 Euros to US Dollars, I can use the provided function: According to the exchange rate, 1 EUR is approximately equal to 1.22 USD. 
Converting 112.23 EUR, we get: 112.23 EUR * 1.22 USD/EUR = 136.88 USD So, 112.23 Euros are equivalent to approximately 136.88 US Dollars. Please let me know if I should continue with the next task or reply TERMINATE when the currency exchange is done. -------------------------------------------------------------------------------- user_proxy (to chatbot): -------------------------------------------------------------------------------- ``` followed by a bunch of other empty messages from `user_proxy (to chatbot)` before exiting. ### OS Docker ### GPU Nvidia ### CPU Intel ### Ollama version ollama/ollama:0.1.32
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3927/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3927/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7527
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7527/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7527/comments
https://api.github.com/repos/ollama/ollama/issues/7527/events
https://github.com/ollama/ollama/pull/7527
2,638,418,645
PR_kwDOJ0Z1Ps6BEmu1
7,527
Fix minor inconsistency
{ "login": "edmcman", "id": 1017189, "node_id": "MDQ6VXNlcjEwMTcxODk=", "avatar_url": "https://avatars.githubusercontent.com/u/1017189?v=4", "gravatar_id": "", "url": "https://api.github.com/users/edmcman", "html_url": "https://github.com/edmcman", "followers_url": "https://api.github.com/users/edmcman/followers", "following_url": "https://api.github.com/users/edmcman/following{/other_user}", "gists_url": "https://api.github.com/users/edmcman/gists{/gist_id}", "starred_url": "https://api.github.com/users/edmcman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/edmcman/subscriptions", "organizations_url": "https://api.github.com/users/edmcman/orgs", "repos_url": "https://api.github.com/users/edmcman/repos", "events_url": "https://api.github.com/users/edmcman/events{/privacy}", "received_events_url": "https://api.github.com/users/edmcman/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-11-06T15:24:36
2024-11-08T17:36:17
2024-11-08T17:36:17
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7527", "html_url": "https://github.com/ollama/ollama/pull/7527", "diff_url": "https://github.com/ollama/ollama/pull/7527.diff", "patch_url": "https://github.com/ollama/ollama/pull/7527.patch", "merged_at": "2024-11-08T17:36:17" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7527/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7527/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4300
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4300/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4300/comments
https://api.github.com/repos/ollama/ollama/issues/4300/events
https://github.com/ollama/ollama/pull/4300
2,288,528,117
PR_kwDOJ0Z1Ps5vCP33
4,300
Add LlamaScript to Community Projects
{ "login": "zanderlewis", "id": 158775116, "node_id": "U_kgDOCXa3TA", "avatar_url": "https://avatars.githubusercontent.com/u/158775116?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zanderlewis", "html_url": "https://github.com/zanderlewis", "followers_url": "https://api.github.com/users/zanderlewis/followers", "following_url": "https://api.github.com/users/zanderlewis/following{/other_user}", "gists_url": "https://api.github.com/users/zanderlewis/gists{/gist_id}", "starred_url": "https://api.github.com/users/zanderlewis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zanderlewis/subscriptions", "organizations_url": "https://api.github.com/users/zanderlewis/orgs", "repos_url": "https://api.github.com/users/zanderlewis/repos", "events_url": "https://api.github.com/users/zanderlewis/events{/privacy}", "received_events_url": "https://api.github.com/users/zanderlewis/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-05-09T21:58:54
2024-05-09T22:30:49
2024-05-09T22:30:49
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4300", "html_url": "https://github.com/ollama/ollama/pull/4300", "diff_url": "https://github.com/ollama/ollama/pull/4300.diff", "patch_url": "https://github.com/ollama/ollama/pull/4300.patch", "merged_at": "2024-05-09T22:30:49" }
Pull Request for #4061
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4300/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4300/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1557
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1557/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1557/comments
https://api.github.com/repos/ollama/ollama/issues/1557/events
https://github.com/ollama/ollama/issues/1557
2,044,513,369
I_kwDOJ0Z1Ps553MxZ
1,557
Increasing slow response - CPU only on Linux Azure
{ "login": "benmarinic", "id": 1210218, "node_id": "MDQ6VXNlcjEyMTAyMTg=", "avatar_url": "https://avatars.githubusercontent.com/u/1210218?v=4", "gravatar_id": "", "url": "https://api.github.com/users/benmarinic", "html_url": "https://github.com/benmarinic", "followers_url": "https://api.github.com/users/benmarinic/followers", "following_url": "https://api.github.com/users/benmarinic/following{/other_user}", "gists_url": "https://api.github.com/users/benmarinic/gists{/gist_id}", "starred_url": "https://api.github.com/users/benmarinic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benmarinic/subscriptions", "organizations_url": "https://api.github.com/users/benmarinic/orgs", "repos_url": "https://api.github.com/users/benmarinic/repos", "events_url": "https://api.github.com/users/benmarinic/events{/privacy}", "received_events_url": "https://api.github.com/users/benmarinic/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
11
2023-12-16T00:14:30
2024-04-15T15:51:30
2024-03-13T00:22:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'm using the following VM in azure: Standard D8s v3 vCPUs 8, RAM 32 GiB Have tried Mistral 7b and Orca-mini. I've also tried 4-bit versions. Ollama is responding increasingly slowly. After the 4th simple query ("hi" or "what's the capital of ...") I'm waiting in excess of 60 seconds for it to begin to respond; the response gets slower with each repeated simple question. Once it has responded, each token streams in reasonably well. I've tried both Ubuntu and Suse. Is the VM just not suitable? I'm trying to see how far I can get without a GPU.
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1557/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1557/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4656
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4656/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4656/comments
https://api.github.com/repos/ollama/ollama/issues/4656/events
https://github.com/ollama/ollama/pull/4656
2,318,262,026
PR_kwDOJ0Z1Ps5wnbT7
4,656
Add `OLLAMA_HOME` for setting `~/.ollama`
{ "login": "maaslalani", "id": 42545625, "node_id": "MDQ6VXNlcjQyNTQ1NjI1", "avatar_url": "https://avatars.githubusercontent.com/u/42545625?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maaslalani", "html_url": "https://github.com/maaslalani", "followers_url": "https://api.github.com/users/maaslalani/followers", "following_url": "https://api.github.com/users/maaslalani/following{/other_user}", "gists_url": "https://api.github.com/users/maaslalani/gists{/gist_id}", "starred_url": "https://api.github.com/users/maaslalani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maaslalani/subscriptions", "organizations_url": "https://api.github.com/users/maaslalani/orgs", "repos_url": "https://api.github.com/users/maaslalani/repos", "events_url": "https://api.github.com/users/maaslalani/events{/privacy}", "received_events_url": "https://api.github.com/users/maaslalani/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
7
2024-05-27T05:26:18
2024-08-05T18:51:52
2024-08-05T18:51:48
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4656", "html_url": "https://github.com/ollama/ollama/pull/4656", "diff_url": "https://github.com/ollama/ollama/pull/4656.diff", "patch_url": "https://github.com/ollama/ollama/pull/4656.patch", "merged_at": null }
Fixes https://github.com/ollama/ollama/issues/228 This PR adds the optional configuration for `OLLAMA_HOME` to prevent cluttering the user's home directory. `OLLAMA_HOME` is optional and uses the current behavior if not provided. If `OLLAMA_MODELS` is not explicitly set, the default value is `~/$OLLAMA_HOME/models`.
{ "login": "maaslalani", "id": 42545625, "node_id": "MDQ6VXNlcjQyNTQ1NjI1", "avatar_url": "https://avatars.githubusercontent.com/u/42545625?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maaslalani", "html_url": "https://github.com/maaslalani", "followers_url": "https://api.github.com/users/maaslalani/followers", "following_url": "https://api.github.com/users/maaslalani/following{/other_user}", "gists_url": "https://api.github.com/users/maaslalani/gists{/gist_id}", "starred_url": "https://api.github.com/users/maaslalani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maaslalani/subscriptions", "organizations_url": "https://api.github.com/users/maaslalani/orgs", "repos_url": "https://api.github.com/users/maaslalani/repos", "events_url": "https://api.github.com/users/maaslalani/events{/privacy}", "received_events_url": "https://api.github.com/users/maaslalani/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4656/reactions", "total_count": 4, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4656/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5054
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5054/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5054/comments
https://api.github.com/repos/ollama/ollama/issues/5054/events
https://github.com/ollama/ollama/issues/5054
2,354,412,931
I_kwDOJ0Z1Ps6MVX2D
5,054
Windows - `go generate` failing on build_cpu
{ "login": "JerrettDavis", "id": 2610199, "node_id": "MDQ6VXNlcjI2MTAxOTk=", "avatar_url": "https://avatars.githubusercontent.com/u/2610199?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JerrettDavis", "html_url": "https://github.com/JerrettDavis", "followers_url": "https://api.github.com/users/JerrettDavis/followers", "following_url": "https://api.github.com/users/JerrettDavis/following{/other_user}", "gists_url": "https://api.github.com/users/JerrettDavis/gists{/gist_id}", "starred_url": "https://api.github.com/users/JerrettDavis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JerrettDavis/subscriptions", "organizations_url": "https://api.github.com/users/JerrettDavis/orgs", "repos_url": "https://api.github.com/users/JerrettDavis/repos", "events_url": "https://api.github.com/users/JerrettDavis/events{/privacy}", "received_events_url": "https://api.github.com/users/JerrettDavis/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg", "url": "https://api.github.com/repos/ollama/ollama/labels/windows", "name": "windows", "color": "0052CC", "default": false, "description": "" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info", "name": "needs more info", "color": "BA8041", "default": false, "description": "More information is needed to assist" }, { "id": 7700262114, "node_id": "LA_kwDOJ0Z1Ps8AAAAByvis4g", "url": "https://api.github.com/repos/ollama/ollama/labels/build", "name": "build", "color": "006b75", "default": false, "description": "Issues relating to building ollama from source" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
6
2024-06-15T02:17:50
2024-11-04T19:15:44
2024-11-04T19:15:44
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I've been trying to get a Windows dev environment up and running following the [development](https://github.com/ollama/ollama/blob/main/docs/development.md) guide. I've attempted installing both MinGW-w64 and MSYS2, along with the latest Visual Studio build tools, but the existing Windows build script does not seem to work out-of-the-box. I've tried swapping paths, moving which cmake is actually getting used, setting the default generator through environment variables, and quite a bit more. When running `go generate ./...` some variation of the following error is shown: ``` Building LCD CPU generating config with: cmake -S ../llama.cpp -B ../build/windows/amd64/cpu -DCMAKE_POSITION_INDEPENDENT_CODE=on -A x64 -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off -DBUILD_SHARED_LIBS=on -DLLAMA_NATIVE=off -DLLAMA_SERVER_VERBOSE=off -DCMAKE_BUILD_TYPE=Release cmake version 3.28.3-msvc11 CMake suite maintained and supported by Kitware (kitware.com/cmake). CMake Error at CMakeLists.txt:2 (project): Generator Ninja does not support platform specification, but platform x64 was specified. CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage -- Configuring incomplete, errors occurred! llm\generate\generate_windows.go:3: running "powershell": exit status 1 ``` The crux of the issue seems to be the default generator (ninja). It does not accept the `-A` argument that the Visual Studio generators do. ### OS Windows ### GPU Nvidia ### CPU Intel ### Ollama version 0.1.44
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5054/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5054/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7090
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7090/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7090/comments
https://api.github.com/repos/ollama/ollama/issues/7090/events
https://github.com/ollama/ollama/issues/7090
2,563,840,018
I_kwDOJ0Z1Ps6Y0RgS
7,090
ollama_models path not working any longer
{ "login": "Molnfront", "id": 935328, "node_id": "MDQ6VXNlcjkzNTMyOA==", "avatar_url": "https://avatars.githubusercontent.com/u/935328?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Molnfront", "html_url": "https://github.com/Molnfront", "followers_url": "https://api.github.com/users/Molnfront/followers", "following_url": "https://api.github.com/users/Molnfront/following{/other_user}", "gists_url": "https://api.github.com/users/Molnfront/gists{/gist_id}", "starred_url": "https://api.github.com/users/Molnfront/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Molnfront/subscriptions", "organizations_url": "https://api.github.com/users/Molnfront/orgs", "repos_url": "https://api.github.com/users/Molnfront/repos", "events_url": "https://api.github.com/users/Molnfront/events{/privacy}", "received_events_url": "https://api.github.com/users/Molnfront/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info", "name": "needs more info", "color": "BA8041", "default": false, "description": "More information is needed to assist" } ]
closed
false
null
[]
null
4
2024-10-03T11:42:01
2024-12-02T23:04:45
2024-12-02T23:04:45
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Last week I added ollama_models path to my env file on my Mac. Ollama picked up the settings and saved the models to my path (external SSD). Now yesterday when I picked gemma 2 and got it downloaded it ignored the path and downloaded it to .ollama. ### OS macOS ### GPU Apple ### CPU Apple ### Ollama version 0.3.12
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/users/rick-github/followers", "following_url": "https://api.github.com/users/rick-github/following{/other_user}", "gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}", "starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rick-github/subscriptions", "organizations_url": "https://api.github.com/users/rick-github/orgs", "repos_url": "https://api.github.com/users/rick-github/repos", "events_url": "https://api.github.com/users/rick-github/events{/privacy}", "received_events_url": "https://api.github.com/users/rick-github/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7090/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7090/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4270
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4270/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4270/comments
https://api.github.com/repos/ollama/ollama/issues/4270/events
https://github.com/ollama/ollama/issues/4270
2,286,706,401
I_kwDOJ0Z1Ps6ITF7h
4,270
windows ollama 0.1.34 can not use GPU,with nvidia RTX 4060
{ "login": "zhafree", "id": 25758100, "node_id": "MDQ6VXNlcjI1NzU4MTAw", "avatar_url": "https://avatars.githubusercontent.com/u/25758100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhafree", "html_url": "https://github.com/zhafree", "followers_url": "https://api.github.com/users/zhafree/followers", "following_url": "https://api.github.com/users/zhafree/following{/other_user}", "gists_url": "https://api.github.com/users/zhafree/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhafree/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhafree/subscriptions", "organizations_url": "https://api.github.com/users/zhafree/orgs", "repos_url": "https://api.github.com/users/zhafree/repos", "events_url": "https://api.github.com/users/zhafree/events{/privacy}", "received_events_url": "https://api.github.com/users/zhafree/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg", "url": "https://api.github.com/repos/ollama/ollama/labels/windows", "name": "windows", "color": "0052CC", "default": false, "description": "" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
3
2024-05-09T00:58:28
2024-06-02T00:16:21
2024-06-02T00:16:21
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ``` C:\Users\zh_af>nvidia-smi Thu May 9 08:53:43 2024 +---------------------------------------------------------------------------------------+ | NVIDIA-SMI 537.70 Driver Version: 537.70 CUDA Version: 12.2 | |-----------------------------------------+----------------------+----------------------+ | GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+======================+======================| | 0 NVIDIA GeForce RTX 4060 WDDM | 00000000:01:00.0 On | N/A | | 31% 30C P8 N/A / 115W | 7776MiB / 8188MiB | 2% Default | | | | N/A | +-----------------------------------------+----------------------+----------------------+ +---------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=======================================================================================| | 0 N/A N/A 2824 C+G ...on\124.0.2478.80\msedgewebview2.exe N/A | | 0 N/A N/A 4304 C+G ...__8wekyb3d8bbwe\Notepad\Notepad.exe N/A | | 0 N/A N/A 8132 C+G ...nt.CBS_cw5n1h2txyewy\SearchHost.exe N/A | | 0 N/A N/A 9796 C+G ...5\extracted\runtime\WeChatAppEx.exe N/A | | 0 N/A N/A 10424 C+G ...wekyb3d8bbwe\XboxGameBarWidgets.exe N/A | | 0 N/A N/A 11688 C+G C:\Windows\System32\NahimicSvc64.exe N/A | | 0 N/A N/A 12648 C+G ...5n1h2txyewy\ShellExperienceHost.exe N/A | | 0 N/A N/A 19188 C+G ...cal\Microsoft\OneDrive\OneDrive.exe N/A | | 0 N/A N/A 20240 C+G ...oogle\Chrome\Application\chrome.exe N/A | | 0 N/A N/A 20948 C+G C:\Windows\explorer.exe N/A | | 0 N/A N/A 21820 C+G ...x64__8wekyb3d8bbwe\WinStore.App.exe N/A | | 0 N/A N/A 24376 C+G ...ekyb3d8bbwe\PhoneExperienceHost.exe N/A | | 0 N/A N/A 25396 C ...\cuda_v11.3\ollama_llama_server.exe N/A | | 0 N/A N/A 26708 C+G ...8bbwe\SnippingTool\SnippingTool.exe N/A | | 0 N/A N/A 27804 C+G ...\Docker\frontend\Docker Desktop.exe N/A | | 0 N/A N/A 30560 C+G C:\Windows\explorer.exe N/A | | 0 N/A N/A 30980 C+G ...siveControlPanel\SystemSettings.exe N/A | | 0 N/A N/A 31312 C+G ...n\SunloginClient\SunloginClient.exe N/A | | 0 N/A N/A 35540 C+G ...crosoft\Edge\Application\msedge.exe N/A | | 0 N/A N/A 38564 C+G ...__8wekyb3d8bbwe\WindowsTerminal.exe N/A | | 0 N/A N/A 39748 C+G ...CBS_cw5n1h2txyewy\TextInputHost.exe N/A | | 0 N/A N/A 41288 C+G ...2txyewy\StartMenuExperienceHost.exe N/A | +---------------------------------------------------------------------------------------+ ``` ### OS Windows ### GPU Nvidia ### CPU Intel ### Ollama version 0.1.34
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4270/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1461
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1461/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1461/comments
https://api.github.com/repos/ollama/ollama/issues/1461/events
https://github.com/ollama/ollama/issues/1461
2,034,926,824
I_kwDOJ0Z1Ps55SoTo
1,461
Mistral not providing license information
{ "login": "neural-loop", "id": 654993, "node_id": "MDQ6VXNlcjY1NDk5Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/654993?v=4", "gravatar_id": "", "url": "https://api.github.com/users/neural-loop", "html_url": "https://github.com/neural-loop", "followers_url": "https://api.github.com/users/neural-loop/followers", "following_url": "https://api.github.com/users/neural-loop/following{/other_user}", "gists_url": "https://api.github.com/users/neural-loop/gists{/gist_id}", "starred_url": "https://api.github.com/users/neural-loop/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neural-loop/subscriptions", "organizations_url": "https://api.github.com/users/neural-loop/orgs", "repos_url": "https://api.github.com/users/neural-loop/repos", "events_url": "https://api.github.com/users/neural-loop/events{/privacy}", "received_events_url": "https://api.github.com/users/neural-loop/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2023-12-11T06:30:58
2024-01-25T22:56:35
2024-01-25T22:32:20
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
![image](https://github.com/jmorganca/ollama/assets/654993/dfb4a673-8bff-4c40-95be-077014e6a55f) It is maybe because they don't include a license.txt in their repository. However, they do specify that it is Apache 2.0 ![image](https://github.com/jmorganca/ollama/assets/654993/0855ffb3-ad27-46fb-b326-0086243b2f39) also here ![image](https://github.com/jmorganca/ollama/assets/654993/1fa628bd-e8cf-4f75-97d5-484af80628c2) It might be nice if licenses were added to the website as well, so that users could filter by their licensing needs
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1461/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1461/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4039
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4039/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4039/comments
https://api.github.com/repos/ollama/ollama/issues/4039/events
https://github.com/ollama/ollama/pull/4039
2,270,522,545
PR_kwDOJ0Z1Ps5uF_a6
4,039
types/model: reduce Name.Filepath allocs from 5 to 2
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-04-30T05:14:34
2024-04-30T18:09:20
2024-04-30T18:09:19
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4039", "html_url": "https://github.com/ollama/ollama/pull/4039", "diff_url": "https://github.com/ollama/ollama/pull/4039.diff", "patch_url": "https://github.com/ollama/ollama/pull/4039.patch", "merged_at": "2024-04-30T18:09:19" }
null
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4039/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4039/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/560
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/560/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/560/comments
https://api.github.com/repos/ollama/ollama/issues/560/events
https://github.com/ollama/ollama/issues/560
1,905,749,865
I_kwDOJ0Z1Ps5xl29p
560
Is IPv6 supported?
{ "login": "jamesbraza", "id": 8990777, "node_id": "MDQ6VXNlcjg5OTA3Nzc=", "avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jamesbraza", "html_url": "https://github.com/jamesbraza", "followers_url": "https://api.github.com/users/jamesbraza/followers", "following_url": "https://api.github.com/users/jamesbraza/following{/other_user}", "gists_url": "https://api.github.com/users/jamesbraza/gists{/gist_id}", "starred_url": "https://api.github.com/users/jamesbraza/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jamesbraza/subscriptions", "organizations_url": "https://api.github.com/users/jamesbraza/orgs", "repos_url": "https://api.github.com/users/jamesbraza/repos", "events_url": "https://api.github.com/users/jamesbraza/events{/privacy}", "received_events_url": "https://api.github.com/users/jamesbraza/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-09-20T21:33:12
2023-09-21T16:28:17
2023-09-21T02:54:48
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
With the Ollama server running: ```bash > curl -X POST --header 'Content-Type: application/json' "http://[::1]:11434/api/generate" -d '{ "model": "llama2:13b", "prompt": "Your first prompt goes here" }' curl: (7) Failed to connect to ::1 port 11434 after 5 ms: Couldn't connect to server ``` I am wondering, is IPv6 supported with Ollama server?
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/560/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/560/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1361
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1361/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1361/comments
https://api.github.com/repos/ollama/ollama/issues/1361/events
https://github.com/ollama/ollama/issues/1361
2,022,410,814
I_kwDOJ0Z1Ps54i4o-
1,361
Add support for gpt4-x-alpaca
{ "login": "priamai", "id": 57333254, "node_id": "MDQ6VXNlcjU3MzMzMjU0", "avatar_url": "https://avatars.githubusercontent.com/u/57333254?v=4", "gravatar_id": "", "url": "https://api.github.com/users/priamai", "html_url": "https://github.com/priamai", "followers_url": "https://api.github.com/users/priamai/followers", "following_url": "https://api.github.com/users/priamai/following{/other_user}", "gists_url": "https://api.github.com/users/priamai/gists{/gist_id}", "starred_url": "https://api.github.com/users/priamai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/priamai/subscriptions", "organizations_url": "https://api.github.com/users/priamai/orgs", "repos_url": "https://api.github.com/users/priamai/repos", "events_url": "https://api.github.com/users/priamai/events{/privacy}", "received_events_url": "https://api.github.com/users/priamai/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
1
2023-12-03T07:50:31
2024-03-12T06:35:10
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi there, this is an amazing model: https://huggingface.co/chavinlo/gpt4-x-alpaca Cheers.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1361/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1361/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6714
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6714/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6714/comments
https://api.github.com/repos/ollama/ollama/issues/6714/events
https://github.com/ollama/ollama/pull/6714
2,514,941,013
PR_kwDOJ0Z1Ps5653f9
6,714
catch when model vocab size is set correctly
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-09-09T21:19:20
2024-09-10T00:18:57
2024-09-10T00:18:55
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6714", "html_url": "https://github.com/ollama/ollama/pull/6714", "diff_url": "https://github.com/ollama/ollama/pull/6714.diff", "patch_url": "https://github.com/ollama/ollama/pull/6714.patch", "merged_at": "2024-09-10T00:18:55" }
This check catches if there are too many tokens in the tokenizer vs. the expected number of tokens specified in the `vocab_size` field of `config.json`. This typically happens if the `added_tokens` array in `tokenizer.json` ends up having too many tokens. Right now this results in the back end barfing during inference instead of catching it during `ollama create`.
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6714/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6714/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/778
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/778/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/778/comments
https://api.github.com/repos/ollama/ollama/issues/778/events
https://github.com/ollama/ollama/pull/778
1,942,147,639
PR_kwDOJ0Z1Ps5cv_e7
778
show request to server rather than local check
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-10-13T15:14:41
2023-10-16T21:27:26
2023-10-16T21:27:25
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/778", "html_url": "https://github.com/ollama/ollama/pull/778", "diff_url": "https://github.com/ollama/ollama/pull/778.diff", "patch_url": "https://github.com/ollama/ollama/pull/778.patch", "merged_at": "2023-10-16T21:27:25" }
The show command should send a request to the server, rather than making a direct call to the function locally. resolves #776
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/778/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/778/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1604
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1604/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1604/comments
https://api.github.com/repos/ollama/ollama/issues/1604/events
https://github.com/ollama/ollama/pull/1604
2,048,514,225
PR_kwDOJ0Z1Ps5iXLIp
1,604
Updated syntax in client.py
{ "login": "omcodedthis", "id": 119602009, "node_id": "U_kgDOByD7WQ", "avatar_url": "https://avatars.githubusercontent.com/u/119602009?v=4", "gravatar_id": "", "url": "https://api.github.com/users/omcodedthis", "html_url": "https://github.com/omcodedthis", "followers_url": "https://api.github.com/users/omcodedthis/followers", "following_url": "https://api.github.com/users/omcodedthis/following{/other_user}", "gists_url": "https://api.github.com/users/omcodedthis/gists{/gist_id}", "starred_url": "https://api.github.com/users/omcodedthis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omcodedthis/subscriptions", "organizations_url": "https://api.github.com/users/omcodedthis/orgs", "repos_url": "https://api.github.com/users/omcodedthis/repos", "events_url": "https://api.github.com/users/omcodedthis/events{/privacy}", "received_events_url": "https://api.github.com/users/omcodedthis/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-12-19T11:58:00
2024-01-18T22:27:41
2024-01-18T22:27:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1604", "html_url": "https://github.com/ollama/ollama/pull/1604", "diff_url": "https://github.com/ollama/ollama/pull/1604.diff", "patch_url": "https://github.com/ollama/ollama/pull/1604.patch", "merged_at": null }
* Updated the syntax for `heartbeat()` in `client.py`. * Functionality is maintained.
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1604/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1604/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1874
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1874/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1874/comments
https://api.github.com/repos/ollama/ollama/issues/1874/events
https://github.com/ollama/ollama/pull/1874
2,073,027,262
PR_kwDOJ0Z1Ps5jnP__
1,874
Set correct CUDA minimum compute capability version
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-01-09T19:29:52
2024-01-09T19:37:22
2024-01-09T19:37:22
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1874", "html_url": "https://github.com/ollama/ollama/pull/1874", "diff_url": "https://github.com/ollama/ollama/pull/1874.diff", "patch_url": "https://github.com/ollama/ollama/pull/1874.patch", "merged_at": "2024-01-09T19:37:22" }
If you attempt to run the current CUDA build on compute capability 5.2 cards, you'll hit the following failure: cuBLAS error 15 at ggml-cuda.cu:7956: the requested functionality is not supported
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1874/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1874/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/567
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/567/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/567/comments
https://api.github.com/repos/ollama/ollama/issues/567/events
https://github.com/ollama/ollama/pull/567
1,907,689,851
PR_kwDOJ0Z1Ps5a7XZk
567
update submodule
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-09-21T20:13:28
2023-09-21T20:22:24
2023-09-21T20:22:23
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/567", "html_url": "https://github.com/ollama/ollama/pull/567", "diff_url": "https://github.com/ollama/ollama/pull/567.diff", "patch_url": "https://github.com/ollama/ollama/pull/567.patch", "merged_at": "2023-09-21T20:22:23" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/567/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/567/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4013
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4013/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4013/comments
https://api.github.com/repos/ollama/ollama/issues/4013/events
https://github.com/ollama/ollama/issues/4013
2,267,947,120
I_kwDOJ0Z1Ps6HLiBw
4,013
API Endpoint for Listing Loaded Running Models
{ "login": "strikeoncmputrz", "id": 648143, "node_id": "MDQ6VXNlcjY0ODE0Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/648143?v=4", "gravatar_id": "", "url": "https://api.github.com/users/strikeoncmputrz", "html_url": "https://github.com/strikeoncmputrz", "followers_url": "https://api.github.com/users/strikeoncmputrz/followers", "following_url": "https://api.github.com/users/strikeoncmputrz/following{/other_user}", "gists_url": "https://api.github.com/users/strikeoncmputrz/gists{/gist_id}", "starred_url": "https://api.github.com/users/strikeoncmputrz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/strikeoncmputrz/subscriptions", "organizations_url": "https://api.github.com/users/strikeoncmputrz/orgs", "repos_url": "https://api.github.com/users/strikeoncmputrz/repos", "events_url": "https://api.github.com/users/strikeoncmputrz/events{/privacy}", "received_events_url": "https://api.github.com/users/strikeoncmputrz/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
3
2024-04-29T01:25:17
2024-05-14T00:17:37
2024-05-14T00:17:37
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
It would be excellent to be able to interrogate the API to determine which models are running at any given time, rather than just seeing which checkpoints were pulled. I use a variety of clients to interact with Ollama's API. I sometimes run models with a long `keep_alive` and assume others have similar use cases. The only way I know of to identify a running model is through processes: `ps aux | grep -- '--model' | grep -v grep | grep -Po '(?<=--model\s).*' | cut -d ' ' -f1`. This will give you the full path to the model's blob. From there, you can compare that with the output of ollama show --modelfile (or the /api/show endpoint). I checked the open issues and reddit and didn't see any similar RFIs or requests. I wrote a [bash script](https://github.com/strikeoncmputrz/LLM_Scripts/blob/main/show_loaded_models.sh) (depends on jq) that implements this as POC.
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4013/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4013/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2869
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2869/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2869/comments
https://api.github.com/repos/ollama/ollama/issues/2869/events
https://github.com/ollama/ollama/issues/2869
2,164,318,072
I_kwDOJ0Z1Ps6BAN94
2,869
Ollama doesn't use Radeon RX 6600
{ "login": "nameiwillforget", "id": 81373487, "node_id": "MDQ6VXNlcjgxMzczNDg3", "avatar_url": "https://avatars.githubusercontent.com/u/81373487?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nameiwillforget", "html_url": "https://github.com/nameiwillforget", "followers_url": "https://api.github.com/users/nameiwillforget/followers", "following_url": "https://api.github.com/users/nameiwillforget/following{/other_user}", "gists_url": "https://api.github.com/users/nameiwillforget/gists{/gist_id}", "starred_url": "https://api.github.com/users/nameiwillforget/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nameiwillforget/subscriptions", "organizations_url": "https://api.github.com/users/nameiwillforget/orgs", "repos_url": "https://api.github.com/users/nameiwillforget/repos", "events_url": "https://api.github.com/users/nameiwillforget/events{/privacy}", "received_events_url": "https://api.github.com/users/nameiwillforget/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
22
2024-03-01T22:57:13
2024-09-06T20:08:20
2024-03-12T07:27:35
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'm using Arch Linux with the latest updates installed and ollama installed from its AUR package. When I use the Smaug model, it uses my CPU considerably but my GPU not at all: ![amdgpu](https://github.com/ollama/ollama/assets/81373487/be629472-a4eb-4f31-b8e9-726e2f9a8c21) I put the output of `ollama serve` and ollama running Smaug into a file: [ollama.txt](https://github.com/ollama/ollama/files/14466737/ollama.txt) [smaug.txt](https://github.com/ollama/ollama/files/14466741/smaug.txt) I've installed Cuda because I thought for a moment it is needed, but I don't think that's the reason it doesn't work.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2869/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2869/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2063
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2063/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2063/comments
https://api.github.com/repos/ollama/ollama/issues/2063/events
https://github.com/ollama/ollama/pull/2063
2,089,401,919
PR_kwDOJ0Z1Ps5kfBUs
2,063
Save and load sessions
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-01-19T01:54:02
2024-02-12T20:10:33
2024-01-25T20:12:36
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2063", "html_url": "https://github.com/ollama/ollama/pull/2063", "diff_url": "https://github.com/ollama/ollama/pull/2063.diff", "patch_url": "https://github.com/ollama/ollama/pull/2063.patch", "merged_at": "2024-01-25T20:12:36" }
This change allows users to interactively save a session from the REPL, and then load it back up again later. It also adds a new `MESSAGE` command for Modelfiles so that users can build their own session which can be created with `ollama create`.
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2063/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2063/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2286
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2286/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2286/comments
https://api.github.com/repos/ollama/ollama/issues/2286/events
https://github.com/ollama/ollama/issues/2286
2,109,272,858
I_kwDOJ0Z1Ps59uPMa
2,286
Codellama70b runs, but Codellama70b-Instruct spins forever after downloading
{ "login": "ewebgh33", "id": 123797054, "node_id": "U_kgDOB2D-Pg", "avatar_url": "https://avatars.githubusercontent.com/u/123797054?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ewebgh33", "html_url": "https://github.com/ewebgh33", "followers_url": "https://api.github.com/users/ewebgh33/followers", "following_url": "https://api.github.com/users/ewebgh33/following{/other_user}", "gists_url": "https://api.github.com/users/ewebgh33/gists{/gist_id}", "starred_url": "https://api.github.com/users/ewebgh33/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ewebgh33/subscriptions", "organizations_url": "https://api.github.com/users/ewebgh33/orgs", "repos_url": "https://api.github.com/users/ewebgh33/repos", "events_url": "https://api.github.com/users/ewebgh33/events{/privacy}", "received_events_url": "https://api.github.com/users/ewebgh33/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg", "url": "https://api.github.com/repos/ollama/ollama/labels/windows", "name": "windows", "color": "0052CC", "default": false, "description": "" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info", "name": "needs more info", "color": "BA8041", "default": false, "description": "More information is needed to assist" } ]
closed
false
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
2
2024-01-31T04:44:58
2024-07-19T21:39:51
2024-07-19T21:39:51
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Wondering if this is a config issue or something else? IE are any of the additional model files that are downloaded alongside the 38gb main file, borked in any way? Ollama is via WSL in windows. `ollama run codellama:70b` works and gives me code `ollama run codellama:70b-instruct` downloads but has the spinning dots thing and never progresses. That is to say Verifies "removing any unused layers" "success" But then nothing. Can't prompt, it just sits here and spins. Exit and restart, try it again. Already downloaded so it skips re-downloading... but spins again for infinite amount of time. Since 70b vanilla runs, it can't be memory or GPUs? I have 2x 4090 and 64gb RAM. Sure 128 would be better but as I said, 70b vanilla runs fine. Thanks
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2286/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2286/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5703
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5703/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5703/comments
https://api.github.com/repos/ollama/ollama/issues/5703/events
https://github.com/ollama/ollama/issues/5703
2,408,951,725
I_kwDOJ0Z1Ps6Pla-t
5,703
Mixtral truncates output after year
{ "login": "alexander-fischer", "id": 7881637, "node_id": "MDQ6VXNlcjc4ODE2Mzc=", "avatar_url": "https://avatars.githubusercontent.com/u/7881637?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexander-fischer", "html_url": "https://github.com/alexander-fischer", "followers_url": "https://api.github.com/users/alexander-fischer/followers", "following_url": "https://api.github.com/users/alexander-fischer/following{/other_user}", "gists_url": "https://api.github.com/users/alexander-fischer/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexander-fischer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexander-fischer/subscriptions", "organizations_url": "https://api.github.com/users/alexander-fischer/orgs", "repos_url": "https://api.github.com/users/alexander-fischer/repos", "events_url": "https://api.github.com/users/alexander-fischer/events{/privacy}", "received_events_url": "https://api.github.com/users/alexander-fischer/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
2024-07-15T14:56:29
2024-07-15T15:02:22
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Output from Mixtral stops after year of date. I could recreate the issue within ollama that appears in vLLM as well: https://github.com/vllm-project/vllm/issues/2464 The model I used was: `[mixtral:8x7b-instruct-v0.1-q8_0](https://ollama.com/library/mixtral:8x7b-instruct-v0.1-q8_0)` You should reproduce it with the mentioned prompt: ``` In order to write a concise single-paragraph summary, pay attention to the following text: The Commonwealth Bank of Australia (CBA) reported strong financial results for the first half of fiscal year 2023, with a statutory net profit after tax of AUD 5.216 billion, up 10% from the same period last year. Cash net profit after tax stood at AUD 5.153 billion, a 9% increase. Operating performance also improved by 18% to AUD 7.820 billion. The bank's home and consumer lending gross lending reached AUD 77 billion, while business and corporate lending gross lending amounted to AUD 18 billion. CBA's net promoter scores (NPS) remained high, with the bank ranking first in the consumer, business, and institutional categories. The bank's liquid assets and deposit funding increased, and its weighted average maturity stood at 5.8 years. CBA's CET1 ratio was 11.4%, and it declared a dividend per share of AUD 2.10 (35 cents). However, the bank warned that forward-looking statements should be treated with caution due to current economic uncertainties and geopolitical risks. Using only the text above, write a condensed and concise summary of key results (preferably as one paragraph): ``` The output stops after the year of date: `The Commonwealth Bank of Australia (CBA) reported strong financial results for the first half of fiscal year 2` ### OS Linux ### GPU Nvidia ### CPU _No response_ ### Ollama version 0.2.5
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5703/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5703/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4612
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4612/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4612/comments
https://api.github.com/repos/ollama/ollama/issues/4612/events
https://github.com/ollama/ollama/pull/4612
2,315,484,894
PR_kwDOJ0Z1Ps5weDLL
4,612
added new community integration (headless-ollama)
{ "login": "nischalj10", "id": 55933460, "node_id": "MDQ6VXNlcjU1OTMzNDYw", "avatar_url": "https://avatars.githubusercontent.com/u/55933460?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nischalj10", "html_url": "https://github.com/nischalj10", "followers_url": "https://api.github.com/users/nischalj10/followers", "following_url": "https://api.github.com/users/nischalj10/following{/other_user}", "gists_url": "https://api.github.com/users/nischalj10/gists{/gist_id}", "starred_url": "https://api.github.com/users/nischalj10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nischalj10/subscriptions", "organizations_url": "https://api.github.com/users/nischalj10/orgs", "repos_url": "https://api.github.com/users/nischalj10/repos", "events_url": "https://api.github.com/users/nischalj10/events{/privacy}", "received_events_url": "https://api.github.com/users/nischalj10/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-05-24T13:58:18
2024-06-09T01:51:16
2024-06-09T01:51:16
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4612", "html_url": "https://github.com/ollama/ollama/pull/4612", "diff_url": "https://github.com/ollama/ollama/pull/4612.diff", "patch_url": "https://github.com/ollama/ollama/pull/4612.patch", "merged_at": "2024-06-09T01:51:16" }
ollama makes it wonderfully easy to build desktop apps that rely on local LLMs with its js and python libraries. > however, the user's system needs to have ollama already installed for the desktop app to use the libraries and make calls to the LLMs. Making users install ollama client separately isn't good UX tbh. thus, "headless-ollama" this repo has pre-run scripts which automatically utilises node runtime to check for the host OS and installs the ollama client and the models needed by the desktop app before the server starts. this is really helpful while building desktop apps where you want everything to be local and self contained within the system.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4612/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4612/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5364
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5364/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5364/comments
https://api.github.com/repos/ollama/ollama/issues/5364/events
https://github.com/ollama/ollama/pull/5364
2,381,118,549
PR_kwDOJ0Z1Ps5z7dYK
5,364
Document concurrent behavior and settings
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-06-28T20:16:56
2024-07-01T16:49:52
2024-07-01T16:49:49
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5364", "html_url": "https://github.com/ollama/ollama/pull/5364", "diff_url": "https://github.com/ollama/ollama/pull/5364.diff", "patch_url": "https://github.com/ollama/ollama/pull/5364.patch", "merged_at": "2024-07-01T16:49:49" }
Merge after #4218
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5364/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5364/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1662
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1662/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1662/comments
https://api.github.com/repos/ollama/ollama/issues/1662/events
https://github.com/ollama/ollama/pull/1662
2,052,940,782
PR_kwDOJ0Z1Ps5imXZC
1,662
Update README.md - Community Integrations - Obsidian Local GPT plugin
{ "login": "pfrankov", "id": 584632, "node_id": "MDQ6VXNlcjU4NDYzMg==", "avatar_url": "https://avatars.githubusercontent.com/u/584632?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pfrankov", "html_url": "https://github.com/pfrankov", "followers_url": "https://api.github.com/users/pfrankov/followers", "following_url": "https://api.github.com/users/pfrankov/following{/other_user}", "gists_url": "https://api.github.com/users/pfrankov/gists{/gist_id}", "starred_url": "https://api.github.com/users/pfrankov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pfrankov/subscriptions", "organizations_url": "https://api.github.com/users/pfrankov/orgs", "repos_url": "https://api.github.com/users/pfrankov/repos", "events_url": "https://api.github.com/users/pfrankov/events{/privacy}", "received_events_url": "https://api.github.com/users/pfrankov/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-12-21T19:13:07
2024-01-22T17:04:04
2024-01-22T17:04:04
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1662", "html_url": "https://github.com/ollama/ollama/pull/1662", "diff_url": "https://github.com/ollama/ollama/pull/1662.diff", "patch_url": "https://github.com/ollama/ollama/pull/1662.patch", "merged_at": null }
Local GPT plugin for Obsidian mainly relies on Ollama provider ![image](https://github.com/pfrankov/obsidian-local-gpt/assets/584632/724d4399-cb6c-4531-9f04-a1e5df2e3dad) ![image](https://github.com/jmorganca/ollama/assets/584632/199b11c2-dc2a-4168-8466-247af40b572c) But it's also possible to use OpenAI-like local server. I'd say that Local GPT plugin is enhanced version of [Obsidian Ollama plugin](https://github.com/hinterdupfinger/obsidian-ollama) in every way.
{ "login": "pfrankov", "id": 584632, "node_id": "MDQ6VXNlcjU4NDYzMg==", "avatar_url": "https://avatars.githubusercontent.com/u/584632?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pfrankov", "html_url": "https://github.com/pfrankov", "followers_url": "https://api.github.com/users/pfrankov/followers", "following_url": "https://api.github.com/users/pfrankov/following{/other_user}", "gists_url": "https://api.github.com/users/pfrankov/gists{/gist_id}", "starred_url": "https://api.github.com/users/pfrankov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pfrankov/subscriptions", "organizations_url": "https://api.github.com/users/pfrankov/orgs", "repos_url": "https://api.github.com/users/pfrankov/repos", "events_url": "https://api.github.com/users/pfrankov/events{/privacy}", "received_events_url": "https://api.github.com/users/pfrankov/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1662/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1662/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6817
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6817/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6817/comments
https://api.github.com/repos/ollama/ollama/issues/6817/events
https://github.com/ollama/ollama/issues/6817
2,527,223,280
I_kwDOJ0Z1Ps6Wol3w
6,817
llama 3.1 8b params downloaded from huggingface, strange num_ctx behavior
{ "login": "akseg73", "id": 45887240, "node_id": "MDQ6VXNlcjQ1ODg3MjQw", "avatar_url": "https://avatars.githubusercontent.com/u/45887240?v=4", "gravatar_id": "", "url": "https://api.github.com/users/akseg73", "html_url": "https://github.com/akseg73", "followers_url": "https://api.github.com/users/akseg73/followers", "following_url": "https://api.github.com/users/akseg73/following{/other_user}", "gists_url": "https://api.github.com/users/akseg73/gists{/gist_id}", "starred_url": "https://api.github.com/users/akseg73/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akseg73/subscriptions", "organizations_url": "https://api.github.com/users/akseg73/orgs", "repos_url": "https://api.github.com/users/akseg73/repos", "events_url": "https://api.github.com/users/akseg73/events{/privacy}", "received_events_url": "https://api.github.com/users/akseg73/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
2024-09-15T22:04:14
2024-12-02T22:51:10
2024-12-02T22:51:10
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I downloaded Llama 3.1 8B quantized to 8 bits from Hugging Face. It appears to have a default context size of 132k. Based on numerous sources on the internet, it seemed reasonable that in order to use the model I should reduce the context size with `PARAMETER num_ctx` set to 32k. However, when I use num_ctx to reduce the context size from 132k to 32k, Ollama generates a much larger model: 15 GB instead of 10 GB with the default parameters. What could be wrong here? Reducing the context size should, if anything, have reduced the size of the generated model. Is there something wrong that I have done? ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version _No response_
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/users/rick-github/followers", "following_url": "https://api.github.com/users/rick-github/following{/other_user}", "gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}", "starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rick-github/subscriptions", "organizations_url": "https://api.github.com/users/rick-github/orgs", "repos_url": "https://api.github.com/users/rick-github/repos", "events_url": "https://api.github.com/users/rick-github/events{/privacy}", "received_events_url": "https://api.github.com/users/rick-github/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6817/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6817/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5729
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5729/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5729/comments
https://api.github.com/repos/ollama/ollama/issues/5729/events
https://github.com/ollama/ollama/pull/5729
2,411,984,656
PR_kwDOJ0Z1Ps51jus4
5,729
OpenAI: update message processing
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjhan/followers", "following_url": "https://api.github.com/users/royjhan/following{/other_user}", "gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/royjhan/subscriptions", "organizations_url": "https://api.github.com/users/royjhan/orgs", "repos_url": "https://api.github.com/users/royjhan/repos", "events_url": "https://api.github.com/users/royjhan/events{/privacy}", "received_events_url": "https://api.github.com/users/royjhan/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-07-16T20:25:11
2024-07-19T18:19:21
2024-07-19T18:19:20
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5729", "html_url": "https://github.com/ollama/ollama/pull/5729", "diff_url": "https://github.com/ollama/ollama/pull/5729.diff", "patch_url": "https://github.com/ollama/ollama/pull/5729.patch", "merged_at": "2024-07-19T18:19:20" }
null
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjhan/followers", "following_url": "https://api.github.com/users/royjhan/following{/other_user}", "gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/royjhan/subscriptions", "organizations_url": "https://api.github.com/users/royjhan/orgs", "repos_url": "https://api.github.com/users/royjhan/repos", "events_url": "https://api.github.com/users/royjhan/events{/privacy}", "received_events_url": "https://api.github.com/users/royjhan/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5729/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5729/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3386
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3386/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3386/comments
https://api.github.com/repos/ollama/ollama/issues/3386/events
https://github.com/ollama/ollama/issues/3386
2,213,170,172
I_kwDOJ0Z1Ps6D6kv8
3,386
Loading the model on VM from attached volumes is extremely slow
{ "login": "levy42", "id": 8012024, "node_id": "MDQ6VXNlcjgwMTIwMjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/8012024?v=4", "gravatar_id": "", "url": "https://api.github.com/users/levy42", "html_url": "https://github.com/levy42", "followers_url": "https://api.github.com/users/levy42/followers", "following_url": "https://api.github.com/users/levy42/following{/other_user}", "gists_url": "https://api.github.com/users/levy42/gists{/gist_id}", "starred_url": "https://api.github.com/users/levy42/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/levy42/subscriptions", "organizations_url": "https://api.github.com/users/levy42/orgs", "repos_url": "https://api.github.com/users/levy42/repos", "events_url": "https://api.github.com/users/levy42/events{/privacy}", "received_events_url": "https://api.github.com/users/levy42/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
3
2024-03-28T12:52:29
2024-06-01T22:39:50
2024-06-01T22:39:46
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When pulling the model and running it the first time, everything works fine. However, after deallocating the VM and starting it again (attaching a permanent disk with the Ollama models already downloaded), it takes more than 20 minutes to load any large model. It seems to load the model to the CPU first at a speed of 100 MB per second. This doesn't happen when I download a new model with "ollama pull" && "ollama run", only with models on the attached disk. ### What did you expect to see? The same loading time as after downloading the model ### Steps to reproduce - Install Ollama on VM Ubuntu 22.04 - ollama pull llama2:70b - ollama run llama2:70b _--> loads fast_ - restart VM (deallocate) - ollama run llama2:70b --> _takes 20x longer to start_ ### Are there any recent changes that introduced the issue? _No response_ ### OS Linux ### Architecture x86 ### Platform _No response_ ### Ollama version 0.1.28 ### GPU Nvidia ### GPU info Nvidia A100 ### CPU _No response_ ### Other software _No response_
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3386/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3386/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7672
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7672/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7672/comments
https://api.github.com/repos/ollama/ollama/issues/7672/events
https://github.com/ollama/ollama/issues/7672
2,660,218,043
I_kwDOJ0Z1Ps6ej7S7
7,672
Moondream v2 (CPU) crashes with images (post predict EOF error) on 0.4.1
{ "login": "rvkwi", "id": 122366820, "node_id": "U_kgDOB0srZA", "avatar_url": "https://avatars.githubusercontent.com/u/122366820?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rvkwi", "html_url": "https://github.com/rvkwi", "followers_url": "https://api.github.com/users/rvkwi/followers", "following_url": "https://api.github.com/users/rvkwi/following{/other_user}", "gists_url": "https://api.github.com/users/rvkwi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rvkwi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rvkwi/subscriptions", "organizations_url": "https://api.github.com/users/rvkwi/orgs", "repos_url": "https://api.github.com/users/rvkwi/repos", "events_url": "https://api.github.com/users/rvkwi/events{/privacy}", "received_events_url": "https://api.github.com/users/rvkwi/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-11-14T22:31:06
2024-11-14T22:42:45
2024-11-14T22:42:45
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Moondream v2 seems to run into an issue with images on CPU with 0.4.1, resulting in `Error: POST predict: Post "http://127.0.0.1:33685/completion": EOF`. It does not seem to affect GPU. ``` ~ $ ollama run moondream:v2 "please describe this image /home/kwi/demo-2.png" --verbose Added image '/home/kwi/demo-2.png' Error: POST predict: Post "http://127.0.0.1:34833/completion": EOF ~ $ ollama run moondream:v2 hi Hi $ ollama --version ollama version is 0.4.1 ``` This only seems to happen with images, not text. The model loads and runs fine for text chat, but consistently crashes with EOF when attempting to process any image. journalctl: `level=DEBUG source=server.go:423 msg="llama runner terminated" error="signal: aborted"` What I tried so far: - Different images - Different CPUs (Ryzen 5 6600H, Ryzen 7 5700X, i5-3320M) - Other vision models (minicpm-v and llama seem to work fine) - Different quants - CLI and through the API I had an older laptop around with an outdated Ollama 0.3.1 on it, which still worked fine. Upgrading to 0.4.1 caused the exact same error to appear there as well. ### OS Linux ### GPU _No response_ ### CPU Intel, AMD ### Ollama version 0.4.1
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users/jessegross/followers", "following_url": "https://api.github.com/users/jessegross/following{/other_user}", "gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}", "starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jessegross/subscriptions", "organizations_url": "https://api.github.com/users/jessegross/orgs", "repos_url": "https://api.github.com/users/jessegross/repos", "events_url": "https://api.github.com/users/jessegross/events{/privacy}", "received_events_url": "https://api.github.com/users/jessegross/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7672/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7672/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/1885
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1885/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1885/comments
https://api.github.com/repos/ollama/ollama/issues/1885/events
https://github.com/ollama/ollama/pull/1885
2,073,674,231
PR_kwDOJ0Z1Ps5jpcGg
1,885
Update submodule to `6efb8eb30e7025b168f3fda3ff83b9b386428ad6`
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-01-10T06:19:01
2024-01-10T21:48:39
2024-01-10T21:48:38
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1885", "html_url": "https://github.com/ollama/ollama/pull/1885", "diff_url": "https://github.com/ollama/ollama/pull/1885.diff", "patch_url": "https://github.com/ollama/ollama/pull/1885.patch", "merged_at": "2024-01-10T21:48:38" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1885/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1885/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2996
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2996/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2996/comments
https://api.github.com/repos/ollama/ollama/issues/2996/events
https://github.com/ollama/ollama/issues/2996
2,175,154,079
I_kwDOJ0Z1Ps6Bpjef
2,996
ollama pull qwen:1.8b error:Error: Head "https://registry.ollama.ai/v2/library/qwen/blobs/sha256:1296b084ed6bc4c6eaee99255d73e9c715d38e0087b6467fd1c498b908180614": unexpected EOF
{ "login": "wuwenrui", "id": 20716568, "node_id": "MDQ6VXNlcjIwNzE2NTY4", "avatar_url": "https://avatars.githubusercontent.com/u/20716568?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wuwenrui", "html_url": "https://github.com/wuwenrui", "followers_url": "https://api.github.com/users/wuwenrui/followers", "following_url": "https://api.github.com/users/wuwenrui/following{/other_user}", "gists_url": "https://api.github.com/users/wuwenrui/gists{/gist_id}", "starred_url": "https://api.github.com/users/wuwenrui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wuwenrui/subscriptions", "organizations_url": "https://api.github.com/users/wuwenrui/orgs", "repos_url": "https://api.github.com/users/wuwenrui/repos", "events_url": "https://api.github.com/users/wuwenrui/events{/privacy}", "received_events_url": "https://api.github.com/users/wuwenrui/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 6677370291, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw", "url": "https://api.github.com/repos/ollama/ollama/labels/networking", "name": "networking", "color": "0B5368", "default": false, "description": "Issues relating to ollama pull and push" } ]
closed
false
null
[]
null
2
2024-03-08T02:24:22
2024-03-11T22:21:41
2024-03-11T22:21:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
ollama pull qwen:1.8b error: Error: Head "https://registry.ollama.ai/v2/library/qwen/blobs/sha256:1296b084ed6bc4c6eaee99255d73e9c715d38e0087b6467fd1c498b908180614": unexpected EOF
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2996/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2996/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/854
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/854/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/854/comments
https://api.github.com/repos/ollama/ollama/issues/854/events
https://github.com/ollama/ollama/issues/854
1,954,551,781
I_kwDOJ0Z1Ps50gBfl
854
Better Doc / Explanation and Examples of Template Syntax
{ "login": "redhermes", "id": 6583939, "node_id": "MDQ6VXNlcjY1ODM5Mzk=", "avatar_url": "https://avatars.githubusercontent.com/u/6583939?v=4", "gravatar_id": "", "url": "https://api.github.com/users/redhermes", "html_url": "https://github.com/redhermes", "followers_url": "https://api.github.com/users/redhermes/followers", "following_url": "https://api.github.com/users/redhermes/following{/other_user}", "gists_url": "https://api.github.com/users/redhermes/gists{/gist_id}", "starred_url": "https://api.github.com/users/redhermes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/redhermes/subscriptions", "organizations_url": "https://api.github.com/users/redhermes/orgs", "repos_url": "https://api.github.com/users/redhermes/repos", "events_url": "https://api.github.com/users/redhermes/events{/privacy}", "received_events_url": "https://api.github.com/users/redhermes/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396191, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw", "url": "https://api.github.com/repos/ollama/ollama/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
4
2023-10-20T15:41:20
2023-10-25T19:29:59
2023-10-25T19:29:59
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I really like Ollama for its simple setup and usage with both a CLI and an API. The only thing that has tripped me up is getting the Modelfile template correct for an imported model. It could be my inexperience, but the documentation seems very sparse. I have been unable to get JackalopeAI (on Hugging Face) to run after numerous attempts. The system goes into a loop of asking its own questions and proceeding to answer them. It would be helpful if the full grammar of the Modelfile were documented, including the special symbols used by the template.
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/854/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/854/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3615
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3615/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3615/comments
https://api.github.com/repos/ollama/ollama/issues/3615/events
https://github.com/ollama/ollama/pull/3615
2,239,963,708
PR_kwDOJ0Z1Ps5se07_
3,615
Install Ollama on OSTree systems
{ "login": "ericcurtin", "id": 1694275, "node_id": "MDQ6VXNlcjE2OTQyNzU=", "avatar_url": "https://avatars.githubusercontent.com/u/1694275?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ericcurtin", "html_url": "https://github.com/ericcurtin", "followers_url": "https://api.github.com/users/ericcurtin/followers", "following_url": "https://api.github.com/users/ericcurtin/following{/other_user}", "gists_url": "https://api.github.com/users/ericcurtin/gists{/gist_id}", "starred_url": "https://api.github.com/users/ericcurtin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ericcurtin/subscriptions", "organizations_url": "https://api.github.com/users/ericcurtin/orgs", "repos_url": "https://api.github.com/users/ericcurtin/repos", "events_url": "https://api.github.com/users/ericcurtin/events{/privacy}", "received_events_url": "https://api.github.com/users/ericcurtin/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
3
2024-04-12T11:47:22
2024-04-14T09:26:49
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3615", "html_url": "https://github.com/ollama/ollama/pull/3615", "diff_url": "https://github.com/ollama/ollama/pull/3615.diff", "patch_url": "https://github.com/ollama/ollama/pull/3615.patch", "merged_at": null }
There's a large plethora of OSTree OSes in the Fedora family: Silverblue, Kinoite, CoreOS, IoT, Onyx, Sericea, Vauxite. In the CentOS Stream family: Automotive Stream Distribution, CoreOS. In the Red Hat family: Red Hat In-Vehicle Operating System, Red Hat Enterprise Linux CoreOS, RHEL for Edge. Then there's the Universal Blue family. Things like podman-machine and podman-desktop on Windows and macOS use Fedora CoreOS as the host OS, so there is that also. The list goes on and on. These OSes are ideal for containerized AI LLMs like Ollama. I eventually got this working with podman, which is probably the best route to use on these OSes (or an rpm when someone packages it). This change gets the install script working. /usr/bin and /usr/share aren't writable on these systems, but /usr/local/bin and /usr/local/share are. So this change ensures that if /usr/local/bin is used during installation, we also use /usr/local/share, if /usr/local/share exists. With this, everything seems to work fine on these OSes for non-containerized installs.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3615/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3615/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8030
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8030/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8030/comments
https://api.github.com/repos/ollama/ollama/issues/8030/events
https://github.com/ollama/ollama/pull/8030
2,730,861,255
PR_kwDOJ0Z1Ps6Evv7T
8,030
readme: include IBM Granite models
{ "login": "andresdanielmtz", "id": 103913163, "node_id": "U_kgDOBjGWyw", "avatar_url": "https://avatars.githubusercontent.com/u/103913163?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andresdanielmtz", "html_url": "https://github.com/andresdanielmtz", "followers_url": "https://api.github.com/users/andresdanielmtz/followers", "following_url": "https://api.github.com/users/andresdanielmtz/following{/other_user}", "gists_url": "https://api.github.com/users/andresdanielmtz/gists{/gist_id}", "starred_url": "https://api.github.com/users/andresdanielmtz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andresdanielmtz/subscriptions", "organizations_url": "https://api.github.com/users/andresdanielmtz/orgs", "repos_url": "https://api.github.com/users/andresdanielmtz/repos", "events_url": "https://api.github.com/users/andresdanielmtz/events{/privacy}", "received_events_url": "https://api.github.com/users/andresdanielmtz/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
1
2024-12-10T18:17:15
2024-12-16T09:18:45
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/8030", "html_url": "https://github.com/ollama/ollama/pull/8030", "diff_url": "https://github.com/ollama/ollama/pull/8030.diff", "patch_url": "https://github.com/ollama/ollama/pull/8030.patch", "merged_at": null }
null
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8030/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8030/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3109
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3109/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3109/comments
https://api.github.com/repos/ollama/ollama/issues/3109/events
https://github.com/ollama/ollama/issues/3109
2,184,258,885
I_kwDOJ0Z1Ps6CMSVF
3,109
OpenAI API and templates
{ "login": "pierreeliseeflory", "id": 46896737, "node_id": "MDQ6VXNlcjQ2ODk2NzM3", "avatar_url": "https://avatars.githubusercontent.com/u/46896737?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pierreeliseeflory", "html_url": "https://github.com/pierreeliseeflory", "followers_url": "https://api.github.com/users/pierreeliseeflory/followers", "following_url": "https://api.github.com/users/pierreeliseeflory/following{/other_user}", "gists_url": "https://api.github.com/users/pierreeliseeflory/gists{/gist_id}", "starred_url": "https://api.github.com/users/pierreeliseeflory/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pierreeliseeflory/subscriptions", "organizations_url": "https://api.github.com/users/pierreeliseeflory/orgs", "repos_url": "https://api.github.com/users/pierreeliseeflory/repos", "events_url": "https://api.github.com/users/pierreeliseeflory/events{/privacy}", "received_events_url": "https://api.github.com/users/pierreeliseeflory/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
2
2024-03-13T15:11:40
2024-04-09T20:05:38
2024-03-15T11:17:11
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, Does the new OpenAI API compatible endpoint `/v1/chat/completions` use the default templates defined in the Modelfile? Thank you
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3109/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3109/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7349
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7349/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7349/comments
https://api.github.com/repos/ollama/ollama/issues/7349/events
https://github.com/ollama/ollama/issues/7349
2,612,784,172
I_kwDOJ0Z1Ps6bu-ws
7,349
add termux compile instructions to web page
{ "login": "fxmbsw7", "id": 39368685, "node_id": "MDQ6VXNlcjM5MzY4Njg1", "avatar_url": "https://avatars.githubusercontent.com/u/39368685?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmbsw7", "html_url": "https://github.com/fxmbsw7", "followers_url": "https://api.github.com/users/fxmbsw7/followers", "following_url": "https://api.github.com/users/fxmbsw7/following{/other_user}", "gists_url": "https://api.github.com/users/fxmbsw7/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmbsw7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmbsw7/subscriptions", "organizations_url": "https://api.github.com/users/fxmbsw7/orgs", "repos_url": "https://api.github.com/users/fxmbsw7/repos", "events_url": "https://api.github.com/users/fxmbsw7/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmbsw7/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 7700262114, "node_id": "LA_kwDOJ0Z1Ps8AAAAByvis4g", "url": "https://api.github.com/repos/ollama/ollama/labels/build", "name": "build", "color": "006b75", "default": false, "description": "Issues relating to building ollama from source" } ]
open
false
null
[]
null
1
2024-10-25T00:16:43
2024-11-04T19:18:44
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
pkg upgrade -y golang clang cmake libandroid-execinfo gzip git && git clone https://github.com/ollama/ollama ollama && cd ollama && go generate ./... && go build . && cp ollama ~/../usr/bin — this used to work up to 0.3.13; in 0.3.14 the error appeared, but I believe once you adjust the build the error will be gone and the command will work again. Greets.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7349/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7349/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7834
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7834/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7834/comments
https://api.github.com/repos/ollama/ollama/issues/7834/events
https://github.com/ollama/ollama/pull/7834
2,692,516,960
PR_kwDOJ0Z1Ps6DGwNn
7,834
server: fix Transport override
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-11-25T22:48:36
2024-11-25T23:08:36
2024-11-25T23:08:34
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7834", "html_url": "https://github.com/ollama/ollama/pull/7834", "diff_url": "https://github.com/ollama/ollama/pull/7834.diff", "patch_url": "https://github.com/ollama/ollama/pull/7834.patch", "merged_at": "2024-11-25T23:08:34" }
This changes makeRequest to update the http client Transport if and only if testMakeRequestDialContext is set. This is to avoid overriding the default Transport when testMakeRequestDialContext is nil, which broke existing behavior, included proxies, timeouts, and other behaviors. Fixes #7829 Fixes #7788
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7834/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7834/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7627
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7627/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7627/comments
https://api.github.com/repos/ollama/ollama/issues/7627/events
https://github.com/ollama/ollama/issues/7627
2,651,692,740
I_kwDOJ0Z1Ps6eDZ7E
7,627
support multiple lora adapters
{ "login": "lyingbug", "id": 11257935, "node_id": "MDQ6VXNlcjExMjU3OTM1", "avatar_url": "https://avatars.githubusercontent.com/u/11257935?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lyingbug", "html_url": "https://github.com/lyingbug", "followers_url": "https://api.github.com/users/lyingbug/followers", "following_url": "https://api.github.com/users/lyingbug/following{/other_user}", "gists_url": "https://api.github.com/users/lyingbug/gists{/gist_id}", "starred_url": "https://api.github.com/users/lyingbug/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lyingbug/subscriptions", "organizations_url": "https://api.github.com/users/lyingbug/orgs", "repos_url": "https://api.github.com/users/lyingbug/repos", "events_url": "https://api.github.com/users/lyingbug/events{/privacy}", "received_events_url": "https://api.github.com/users/lyingbug/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2024-11-12T10:08:50
2024-11-27T19:00:06
2024-11-27T19:00:06
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
llama.cpp support multiple adapters, see https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md why ollama support only one adapter? https://github.com/ollama/ollama/blob/65973ceb6417c2e2796fa59bd3225bc7bd79b403/llm/server.go#L203-L206
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users/jessegross/followers", "following_url": "https://api.github.com/users/jessegross/following{/other_user}", "gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}", "starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jessegross/subscriptions", "organizations_url": "https://api.github.com/users/jessegross/orgs", "repos_url": "https://api.github.com/users/jessegross/repos", "events_url": "https://api.github.com/users/jessegross/events{/privacy}", "received_events_url": "https://api.github.com/users/jessegross/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7627/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7627/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/797
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/797/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/797/comments
https://api.github.com/repos/ollama/ollama/issues/797/events
https://github.com/ollama/ollama/issues/797
1,944,725,328
I_kwDOJ0Z1Ps5z6idQ
797
Support GPU on older NVIDIA GPU and CUDA drivers
{ "login": "Syulin7", "id": 37265556, "node_id": "MDQ6VXNlcjM3MjY1NTU2", "avatar_url": "https://avatars.githubusercontent.com/u/37265556?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Syulin7", "html_url": "https://github.com/Syulin7", "followers_url": "https://api.github.com/users/Syulin7/followers", "following_url": "https://api.github.com/users/Syulin7/following{/other_user}", "gists_url": "https://api.github.com/users/Syulin7/gists{/gist_id}", "starred_url": "https://api.github.com/users/Syulin7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Syulin7/subscriptions", "organizations_url": "https://api.github.com/users/Syulin7/orgs", "repos_url": "https://api.github.com/users/Syulin7/repos", "events_url": "https://api.github.com/users/Syulin7/events{/privacy}", "received_events_url": "https://api.github.com/users/Syulin7/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
24
2023-10-16T09:01:24
2024-02-26T11:53:38
2023-11-28T21:26:44
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I am testing using ollama on linux and docker, and its not using the GPU at all. it appears that ollma is not using the CUDA image. I resolved the issue by replacing the base image. https://github.com/jmorganca/ollama/blob/92578798bb1abcedd6bc99479d804f32d9ee2f6c/Dockerfile#L17-L23 change ubuntu:22.04 to nvidia/cuda:11.8.0-devel-ubuntu22.04 and then it works ![image](https://github.com/jmorganca/ollama/assets/37265556/52f7f99a-2533-4069-b700-7a738f03c7b4) Perhaps we can build a GPU image and push it to the community, using the "gpu" tag for differentiation.
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/797/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/797/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8371
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8371/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8371/comments
https://api.github.com/repos/ollama/ollama/issues/8371/events
https://github.com/ollama/ollama/issues/8371
2,779,254,168
I_kwDOJ0Z1Ps6lqA2Y
8,371
ollama not working
{ "login": "Rachit199", "id": 141905808, "node_id": "U_kgDOCHVPkA", "avatar_url": "https://avatars.githubusercontent.com/u/141905808?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rachit199", "html_url": "https://github.com/Rachit199", "followers_url": "https://api.github.com/users/Rachit199/followers", "following_url": "https://api.github.com/users/Rachit199/following{/other_user}", "gists_url": "https://api.github.com/users/Rachit199/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rachit199/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rachit199/subscriptions", "organizations_url": "https://api.github.com/users/Rachit199/orgs", "repos_url": "https://api.github.com/users/Rachit199/repos", "events_url": "https://api.github.com/users/Rachit199/events{/privacy}", "received_events_url": "https://api.github.com/users/Rachit199/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info", "name": "needs more info", "color": "BA8041", "default": false, "description": "More information is needed to assist" } ]
open
false
null
[]
null
1
2025-01-10T04:27:15
2025-01-10T23:57:31
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I installed ollama on Ubuntu via the curl command, but it is not working when I use ollama. So I checked the ollama version: `ollama -v` reports "ollama version is 0.0.0 Warning: client version is 0.5.0" ### OS Linux ### GPU _No response_ ### CPU AMD ### Ollama version _No response_
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8371/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8371/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/5787
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5787/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5787/comments
https://api.github.com/repos/ollama/ollama/issues/5787/events
https://github.com/ollama/ollama/issues/5787
2,418,160,984
I_kwDOJ0Z1Ps6QIjVY
5,787
ollama run deepseek-coder-v2 creates gibberish output
{ "login": "flo-ivar", "id": 143725475, "node_id": "U_kgDOCJETow", "avatar_url": "https://avatars.githubusercontent.com/u/143725475?v=4", "gravatar_id": "", "url": "https://api.github.com/users/flo-ivar", "html_url": "https://github.com/flo-ivar", "followers_url": "https://api.github.com/users/flo-ivar/followers", "following_url": "https://api.github.com/users/flo-ivar/following{/other_user}", "gists_url": "https://api.github.com/users/flo-ivar/gists{/gist_id}", "starred_url": "https://api.github.com/users/flo-ivar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flo-ivar/subscriptions", "organizations_url": "https://api.github.com/users/flo-ivar/orgs", "repos_url": "https://api.github.com/users/flo-ivar/repos", "events_url": "https://api.github.com/users/flo-ivar/events{/privacy}", "received_events_url": "https://api.github.com/users/flo-ivar/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
8
2024-07-19T06:40:47
2024-09-17T01:39:45
2024-09-17T01:39:45
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hi, I am trying to run the 16b ollama deepseek-coder-v2, which leads to a "gibberish" output. Strangely enough it works after a fresh download, but then after trying to run it in Aider it doesnt. ![image](https://github.com/user-attachments/assets/9e6df4f7-dc47-49bc-a306-2e73c73b4098) ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.2.7
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5787/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5787/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/1247
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1247/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1247/comments
https://api.github.com/repos/ollama/ollama/issues/1247/events
https://github.com/ollama/ollama/issues/1247
2,007,161,530
I_kwDOJ0Z1Ps53otq6
1,247
Better validation for model names in `ollama create` and `ollama cp`
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5667396210, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2acg", "url": "https://api.github.com/repos/ollama/ollama/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
null
[]
null
1
2023-11-22T21:49:44
2023-11-29T20:54:30
2023-11-29T20:54:30
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Today, running `ollama create mymodel:my:tag` will work, even though the name contains an extra colon.
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1247/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1247/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/490
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/490/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/490/comments
https://api.github.com/repos/ollama/ollama/issues/490/events
https://github.com/ollama/ollama/pull/490
1,886,657,684
PR_kwDOJ0Z1Ps5Z0rrI
490
Add OLLAMA_HOME environment variable support.
{ "login": "akhilcacharya", "id": 3621384, "node_id": "MDQ6VXNlcjM2MjEzODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3621384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/akhilcacharya", "html_url": "https://github.com/akhilcacharya", "followers_url": "https://api.github.com/users/akhilcacharya/followers", "following_url": "https://api.github.com/users/akhilcacharya/following{/other_user}", "gists_url": "https://api.github.com/users/akhilcacharya/gists{/gist_id}", "starred_url": "https://api.github.com/users/akhilcacharya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akhilcacharya/subscriptions", "organizations_url": "https://api.github.com/users/akhilcacharya/orgs", "repos_url": "https://api.github.com/users/akhilcacharya/repos", "events_url": "https://api.github.com/users/akhilcacharya/events{/privacy}", "received_events_url": "https://api.github.com/users/akhilcacharya/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-09-07T22:53:51
2023-11-03T16:57:16
2023-10-25T22:34:23
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/490", "html_url": "https://github.com/ollama/ollama/pull/490", "diff_url": "https://github.com/ollama/ollama/pull/490.diff", "patch_url": "https://github.com/ollama/ollama/pull/490.patch", "merged_at": null }
## Problem I'd like to run Ollama on my Linux server, but I have a small home directory disk. As a result, rather than changing the home directory to my mass storage pool, I propose adding the environment variable ```OLLAMA_HOME``` to set the top-level filepath for Ollama. ## Change Switch out os.UserHomeDir with a wrapper in a new `util` package. `util.UserHomeDir` attempts to fetch the OLLAMA_HOME environment variable, and falls back otherwise. Add documentation under `faq.md`. ## Tests Tested manually. I'd be happy to add automated tests for this if existing infrastructure exists.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/490/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/490/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5436
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5436/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5436/comments
https://api.github.com/repos/ollama/ollama/issues/5436/events
https://github.com/ollama/ollama/issues/5436
2,386,507,001
I_kwDOJ0Z1Ps6OPzT5
5,436
Updates to Phi-3 mini 4k/128k
{ "login": "Qualzz", "id": 35169816, "node_id": "MDQ6VXNlcjM1MTY5ODE2", "avatar_url": "https://avatars.githubusercontent.com/u/35169816?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Qualzz", "html_url": "https://github.com/Qualzz", "followers_url": "https://api.github.com/users/Qualzz/followers", "following_url": "https://api.github.com/users/Qualzz/following{/other_user}", "gists_url": "https://api.github.com/users/Qualzz/gists{/gist_id}", "starred_url": "https://api.github.com/users/Qualzz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Qualzz/subscriptions", "organizations_url": "https://api.github.com/users/Qualzz/orgs", "repos_url": "https://api.github.com/users/Qualzz/repos", "events_url": "https://api.github.com/users/Qualzz/events{/privacy}", "received_events_url": "https://api.github.com/users/Qualzz/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
1
2024-07-02T15:03:48
2024-07-02T20:34:30
2024-07-02T20:34:30
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Microsoft updated both checkpoints: [https://huggingface.co/microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) [https://huggingface.co/microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) > Release Notes > This is an update over the original instruction-tuned Phi-3-mini release based on valuable customer feedback. The model used additional post-training data leading to substantial gains on long-context understanding, instruction following, and structure output. We also improve multi-turn conversation quality, explicitly support <|system|> tag, and significantly improve reasoning capability. We believe most use cases will benefit from this release, but we encourage users to test in their particular AI applications. We appreciate the enthusiastic adoption of the Phi-3 model family, and continue to welcome all feedback from the community. ### Benchmarks | Benchmarks | Original | June 2024 Update | |---------------------------|----------|------------------| | Instruction Extra Hard | 5.7 | 5.9 | | Instruction Hard | 5.0 | 5.2 | | JSON Structure Output | 1.9 | 60.1 | | XML Structure Output | 47.8 | 52.9 | | GPQA | 25.9 | 29.7 | | MMLU | 68.1 | 69.7 | | **Average** | **25.7** | **37.3** | ### RULER: a retrieval-based benchmark for long context understanding | Model | 4K | 8K | 16K | 32K | 64K | 128K | Average | |-------------------|------|------|------|------|------|------|---------| | Original | 86.7 | 78.1 | 75.6 | 70.3 | 58.9 | 43.3 | 68.8 | | June 2024 Update | 92.4 | 91.1 | 90.8 | 87.9 | 79.8 | 65.6 | 84.6 | ### RepoQA: a benchmark for long context code understanding | Model | Python | C++ | Rust | Java | TypeScript | Average | |------------------|--------|------|------|------|------------|---------| | Original | 27 | 29 | 40 | 33 | 33 | 32.4 | | June 2024 Update | 85 | 63 | 72 | 93 | 72 | 77 |
{ "login": "Qualzz", "id": 35169816, "node_id": "MDQ6VXNlcjM1MTY5ODE2", "avatar_url": "https://avatars.githubusercontent.com/u/35169816?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Qualzz", "html_url": "https://github.com/Qualzz", "followers_url": "https://api.github.com/users/Qualzz/followers", "following_url": "https://api.github.com/users/Qualzz/following{/other_user}", "gists_url": "https://api.github.com/users/Qualzz/gists{/gist_id}", "starred_url": "https://api.github.com/users/Qualzz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Qualzz/subscriptions", "organizations_url": "https://api.github.com/users/Qualzz/orgs", "repos_url": "https://api.github.com/users/Qualzz/repos", "events_url": "https://api.github.com/users/Qualzz/events{/privacy}", "received_events_url": "https://api.github.com/users/Qualzz/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5436/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5436/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7777
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7777/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7777/comments
https://api.github.com/repos/ollama/ollama/issues/7777/events
https://github.com/ollama/ollama/pull/7777
2,679,045,060
PR_kwDOJ0Z1Ps6Cpn-0
7,777
ppc64le: corrected ioctls
{ "login": "stormljor", "id": 36227969, "node_id": "MDQ6VXNlcjM2MjI3OTY5", "avatar_url": "https://avatars.githubusercontent.com/u/36227969?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stormljor", "html_url": "https://github.com/stormljor", "followers_url": "https://api.github.com/users/stormljor/followers", "following_url": "https://api.github.com/users/stormljor/following{/other_user}", "gists_url": "https://api.github.com/users/stormljor/gists{/gist_id}", "starred_url": "https://api.github.com/users/stormljor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stormljor/subscriptions", "organizations_url": "https://api.github.com/users/stormljor/orgs", "repos_url": "https://api.github.com/users/stormljor/repos", "events_url": "https://api.github.com/users/stormljor/events{/privacy}", "received_events_url": "https://api.github.com/users/stormljor/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
4
2024-11-21T11:00:06
2025-01-28T00:51:47
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7777", "html_url": "https://github.com/ollama/ollama/pull/7777", "diff_url": "https://github.com/ollama/ollama/pull/7777.diff", "patch_url": "https://github.com/ollama/ollama/pull/7777.patch", "merged_at": null }
As described in #796 `ollama run` won't work on ppc64le out of the box, as the ioctl `TCSETS` is invalid. This PR changes the ioctl to `TCSETSF` while also moving it away from "magic numbers". According to man pages: ``` TCSETSF Equivalent to tcsetattr(fd, TCSAFLUSH, argp). Allow the output buffer to drain, discard pending input, and set the current serial port settings. ``` I've tested this change on an x64 CPU as well, and saw no issues/regressions.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7777/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7777/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/392
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/392/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/392/comments
https://api.github.com/repos/ollama/ollama/issues/392/events
https://github.com/ollama/ollama/pull/392
1,860,393,582
PR_kwDOJ0Z1Ps5YcQf_
392
add version
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-08-22T01:26:20
2023-08-22T16:50:28
2023-08-22T16:50:25
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/392", "html_url": "https://github.com/ollama/ollama/pull/392", "diff_url": "https://github.com/ollama/ollama/pull/392.diff", "patch_url": "https://github.com/ollama/ollama/pull/392.patch", "merged_at": "2023-08-22T16:50:25" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/392/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/392/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8274
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8274/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8274/comments
https://api.github.com/repos/ollama/ollama/issues/8274/events
https://github.com/ollama/ollama/issues/8274
2,764,372,182
I_kwDOJ0Z1Ps6kxPjW
8,274
Ollama hangs without timeout, Ollama model is consuming full CPU or GPU
{ "login": "ttww", "id": 3983391, "node_id": "MDQ6VXNlcjM5ODMzOTE=", "avatar_url": "https://avatars.githubusercontent.com/u/3983391?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ttww", "html_url": "https://github.com/ttww", "followers_url": "https://api.github.com/users/ttww/followers", "following_url": "https://api.github.com/users/ttww/following{/other_user}", "gists_url": "https://api.github.com/users/ttww/gists{/gist_id}", "starred_url": "https://api.github.com/users/ttww/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ttww/subscriptions", "organizations_url": "https://api.github.com/users/ttww/orgs", "repos_url": "https://api.github.com/users/ttww/repos", "events_url": "https://api.github.com/users/ttww/events{/privacy}", "received_events_url": "https://api.github.com/users/ttww/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
7
2024-12-31T13:19:10
2025-01-01T17:01:09
2025-01-01T17:01:09
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When making Ollama connections with LangChain, the API hangs, depending on the input. No information is logged on the Ollama serve side with debug option 3. I have attached a test case (Python program, test image, and README) to reproduce it. [ollama_hang.tgz](https://github.com/user-attachments/files/18281727/ollama_hang.tgz) Changing the prompt may change the situation (see [c't Forum](https://www.heise.de/forum/heise-online/Kommentare/Wie-eine-lokale-KI-die-Fotosammlung-auf-dem-NAS-verschlagworten-kann/Haengt-sich-bei-jedem-2-3-Bild-auf/thread-7564685/#posting_43932391)) ### OS Linux, macOS ### GPU Intel, Apple ### CPU AMD, Apple ### Ollama version 0.5.4
{ "login": "ttww", "id": 3983391, "node_id": "MDQ6VXNlcjM5ODMzOTE=", "avatar_url": "https://avatars.githubusercontent.com/u/3983391?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ttww", "html_url": "https://github.com/ttww", "followers_url": "https://api.github.com/users/ttww/followers", "following_url": "https://api.github.com/users/ttww/following{/other_user}", "gists_url": "https://api.github.com/users/ttww/gists{/gist_id}", "starred_url": "https://api.github.com/users/ttww/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ttww/subscriptions", "organizations_url": "https://api.github.com/users/ttww/orgs", "repos_url": "https://api.github.com/users/ttww/repos", "events_url": "https://api.github.com/users/ttww/events{/privacy}", "received_events_url": "https://api.github.com/users/ttww/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8274/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8274/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1892
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1892/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1892/comments
https://api.github.com/repos/ollama/ollama/issues/1892/events
https://github.com/ollama/ollama/issues/1892
2,074,082,789
I_kwDOJ0Z1Ps57n_3l
1,892
upgrade openchat
{ "login": "morandalex", "id": 9484568, "node_id": "MDQ6VXNlcjk0ODQ1Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/9484568?v=4", "gravatar_id": "", "url": "https://api.github.com/users/morandalex", "html_url": "https://github.com/morandalex", "followers_url": "https://api.github.com/users/morandalex/followers", "following_url": "https://api.github.com/users/morandalex/following{/other_user}", "gists_url": "https://api.github.com/users/morandalex/gists{/gist_id}", "starred_url": "https://api.github.com/users/morandalex/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/morandalex/subscriptions", "organizations_url": "https://api.github.com/users/morandalex/orgs", "repos_url": "https://api.github.com/users/morandalex/repos", "events_url": "https://api.github.com/users/morandalex/events{/privacy}", "received_events_url": "https://api.github.com/users/morandalex/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
2
2024-01-10T10:40:27
2024-01-11T16:52:21
2024-01-11T00:09:38
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello, a new release of OpenChat has been published: https://huggingface.co/openchat/openchat-3.5-0106#benchmarks
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1892/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1892/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6706
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6706/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6706/comments
https://api.github.com/repos/ollama/ollama/issues/6706/events
https://github.com/ollama/ollama/issues/6706
2,512,960,755
I_kwDOJ0Z1Ps6VyLzz
6,706
Reflection 70B has significant issue with the weights
{ "login": "gileneusz", "id": 34601970, "node_id": "MDQ6VXNlcjM0NjAxOTcw", "avatar_url": "https://avatars.githubusercontent.com/u/34601970?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gileneusz", "html_url": "https://github.com/gileneusz", "followers_url": "https://api.github.com/users/gileneusz/followers", "following_url": "https://api.github.com/users/gileneusz/following{/other_user}", "gists_url": "https://api.github.com/users/gileneusz/gists{/gist_id}", "starred_url": "https://api.github.com/users/gileneusz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gileneusz/subscriptions", "organizations_url": "https://api.github.com/users/gileneusz/orgs", "repos_url": "https://api.github.com/users/gileneusz/repos", "events_url": "https://api.github.com/users/gileneusz/events{/privacy}", "received_events_url": "https://api.github.com/users/gileneusz/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
4
2024-09-09T05:43:27
2024-09-12T01:18:15
2024-09-12T01:18:15
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
The whole drama is described here: https://x.com/shinboson/status/1832933753837982024 Sorry for recommending the model; I was unaware of that and easily got caught up in the hype. It's still possible that it's just a technical issue, but I'm suspicious: https://x.com/rohanpaul_ai/status/1833094994929897862/photo/1
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6706/reactions", "total_count": 9, "+1": 8, "-1": 1, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6706/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/8497
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8497/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8497/comments
https://api.github.com/repos/ollama/ollama/issues/8497/events
https://github.com/ollama/ollama/issues/8497
2,798,255,347
I_kwDOJ0Z1Ps6myfzz
8,497
Repository for tyllama/kevin?
{ "login": "Dim-Tim-1963", "id": 42923977, "node_id": "MDQ6VXNlcjQyOTIzOTc3", "avatar_url": "https://avatars.githubusercontent.com/u/42923977?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dim-Tim-1963", "html_url": "https://github.com/Dim-Tim-1963", "followers_url": "https://api.github.com/users/Dim-Tim-1963/followers", "following_url": "https://api.github.com/users/Dim-Tim-1963/following{/other_user}", "gists_url": "https://api.github.com/users/Dim-Tim-1963/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dim-Tim-1963/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dim-Tim-1963/subscriptions", "organizations_url": "https://api.github.com/users/Dim-Tim-1963/orgs", "repos_url": "https://api.github.com/users/Dim-Tim-1963/repos", "events_url": "https://api.github.com/users/Dim-Tim-1963/events{/privacy}", "received_events_url": "https://api.github.com/users/Dim-Tim-1963/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
0
2025-01-20T05:51:59
2025-01-20T07:49:47
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
The ollama library has the tyllama/kevin model: https://ollama.com/tyllama/kevin The description says that it can be installed from the repository, with the ability to remember previous dialogs and learn from them. But I couldn't find that repository. Does it still exist? Was it removed or renamed?
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8497/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8497/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/4412
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4412/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4412/comments
https://api.github.com/repos/ollama/ollama/issues/4412/events
https://github.com/ollama/ollama/pull/4412
2,293,931,601
PR_kwDOJ0Z1Ps5vUcKA
4,412
Document older win10 terminal problems
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-05-13T22:10:03
2024-07-05T15:18:25
2024-07-05T15:18:22
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4412", "html_url": "https://github.com/ollama/ollama/pull/4412", "diff_url": "https://github.com/ollama/ollama/pull/4412.diff", "patch_url": "https://github.com/ollama/ollama/pull/4412.patch", "merged_at": "2024-07-05T15:18:22" }
We haven't found a workaround, so for now recommend updating. Fixes #3916
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4412/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4412/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4898
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4898/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4898/comments
https://api.github.com/repos/ollama/ollama/issues/4898/events
https://github.com/ollama/ollama/issues/4898
2,339,661,962
I_kwDOJ0Z1Ps6LdGiK
4,898
Error removing model
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
3
2024-06-07T06:11:34
2024-06-10T18:40:04
2024-06-10T18:40:04
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? ``` ollama run wizardcoder:34b-python ollama rm wizardcoder:34b-python Error: remove /usr/share/ollama/.ollama/models/blobs/sha256-a168bedb9a09640289c5174690a6221adae48b75dc431a219923f052ef20d0af: no such file or directory ``` ### OS Linux ### GPU _No response_ ### CPU _No response_ ### Ollama version _No response_
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4898/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4898/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6321
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6321/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6321/comments
https://api.github.com/repos/ollama/ollama/issues/6321/events
https://github.com/ollama/ollama/issues/6321
2,461,338,329
I_kwDOJ0Z1Ps6StQrZ
6,321
Feature request : get probability distribution
{ "login": "Alireza3242", "id": 77293766, "node_id": "MDQ6VXNlcjc3MjkzNzY2", "avatar_url": "https://avatars.githubusercontent.com/u/77293766?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Alireza3242", "html_url": "https://github.com/Alireza3242", "followers_url": "https://api.github.com/users/Alireza3242/followers", "following_url": "https://api.github.com/users/Alireza3242/following{/other_user}", "gists_url": "https://api.github.com/users/Alireza3242/gists{/gist_id}", "starred_url": "https://api.github.com/users/Alireza3242/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Alireza3242/subscriptions", "organizations_url": "https://api.github.com/users/Alireza3242/orgs", "repos_url": "https://api.github.com/users/Alireza3242/repos", "events_url": "https://api.github.com/users/Alireza3242/events{/privacy}", "received_events_url": "https://api.github.com/users/Alireza3242/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
1
2024-08-12T15:42:02
2024-09-02T23:00:00
2024-09-02T22:59:59
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I have a prompt and then I get an answer. Part of the answer is JSON, something like this: ``` { res:"yes" } ``` or this: ``` { res:"no" } ``` I want to know the probabilities of the tokens "yes" and "no", and use these probabilities in some algorithm.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6321/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2681
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2681/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2681/comments
https://api.github.com/repos/ollama/ollama/issues/2681/events
https://github.com/ollama/ollama/issues/2681
2,149,346,646
I_kwDOJ0Z1Ps6AHG1W
2,681
Problem after ollama finishes running the orca-mini model
{ "login": "wxerada", "id": 160884705, "node_id": "U_kgDOCZbn4Q", "avatar_url": "https://avatars.githubusercontent.com/u/160884705?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wxerada", "html_url": "https://github.com/wxerada", "followers_url": "https://api.github.com/users/wxerada/followers", "following_url": "https://api.github.com/users/wxerada/following{/other_user}", "gists_url": "https://api.github.com/users/wxerada/gists{/gist_id}", "starred_url": "https://api.github.com/users/wxerada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wxerada/subscriptions", "organizations_url": "https://api.github.com/users/wxerada/orgs", "repos_url": "https://api.github.com/users/wxerada/repos", "events_url": "https://api.github.com/users/wxerada/events{/privacy}", "received_events_url": "https://api.github.com/users/wxerada/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-02-22T15:35:33
2024-03-03T22:41:54
2024-02-22T16:03:46
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Error: Unable to load dynamic library: Unable to load dynamic server library: The specified module could not be found.
{ "login": "wxerada", "id": 160884705, "node_id": "U_kgDOCZbn4Q", "avatar_url": "https://avatars.githubusercontent.com/u/160884705?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wxerada", "html_url": "https://github.com/wxerada", "followers_url": "https://api.github.com/users/wxerada/followers", "following_url": "https://api.github.com/users/wxerada/following{/other_user}", "gists_url": "https://api.github.com/users/wxerada/gists{/gist_id}", "starred_url": "https://api.github.com/users/wxerada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wxerada/subscriptions", "organizations_url": "https://api.github.com/users/wxerada/orgs", "repos_url": "https://api.github.com/users/wxerada/repos", "events_url": "https://api.github.com/users/wxerada/events{/privacy}", "received_events_url": "https://api.github.com/users/wxerada/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2681/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2681/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1997
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1997/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1997/comments
https://api.github.com/repos/ollama/ollama/issues/1997/events
https://github.com/ollama/ollama/issues/1997
2,081,151,447
I_kwDOJ0Z1Ps58C9nX
1,997
:back: Some kind of regression while running on some LlamaIndex versions (Kaggle & Killercoda)
{ "login": "adriens", "id": 5235127, "node_id": "MDQ6VXNlcjUyMzUxMjc=", "avatar_url": "https://avatars.githubusercontent.com/u/5235127?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adriens", "html_url": "https://github.com/adriens", "followers_url": "https://api.github.com/users/adriens/followers", "following_url": "https://api.github.com/users/adriens/following{/other_user}", "gists_url": "https://api.github.com/users/adriens/gists{/gist_id}", "starred_url": "https://api.github.com/users/adriens/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adriens/subscriptions", "organizations_url": "https://api.github.com/users/adriens/orgs", "repos_url": "https://api.github.com/users/adriens/repos", "events_url": "https://api.github.com/users/adriens/events{/privacy}", "received_events_url": "https://api.github.com/users/adriens/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
34
2024-01-15T02:52:36
2024-11-18T21:08:50
2024-05-10T01:03:01
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
# :grey_question: About While working on a `ollama` tutorial on Kaggle, since a few days, I faced a regression while working with LlamaIndex. Here is the output I could get on any model (worked everytime) ![image](https://github.com/langchain-ai/langchainjs/assets/5235127/89ebe9c2-55d4-41da-8b32-74d243759f2e) ... vs now (the code is now broken, and it fails consistetly): ![image](https://github.com/langchain-ai/langchainjs/assets/5235127/4121bd48-0c35-461b-81ba-f2353b06ee45) # :information_source: - :heavy_check_mark: Everything works perfectly well on my laptop :thinking: Looks like something changed that causes this "regression" while playing around in some cases :thought_balloon: # :tickets: Potentially related issues - https://github.com/jmorganca/ollama/issues/1478 - https://github.com/jmorganca/ollama/issues/1641 - https://github.com/jmorganca/ollama/issues/1550 - https://github.com/jmorganca/ollama/pull/1146 ## :scroll: Detailed stacktrace ``` --------------------------------------------------------------------------- OSError Traceback (most recent call last) File /opt/conda/lib/python3.10/site-packages/httpcore/_exceptions.py:10, in map_exceptions(map) 9 try: ---> 10 yield 11 except Exception as exc: # noqa: PIE786 File /opt/conda/lib/python3.10/site-packages/httpcore/_backends/sync.py:206, in SyncBackend.connect_tcp(self, host, port, timeout, local_address, socket_options) 205 with map_exceptions(exc_map): --> 206 sock = socket.create_connection( 207 address, 208 timeout, 209 source_address=source_address, 210 ) 211 for option in socket_options: File /opt/conda/lib/python3.10/socket.py:845, in create_connection(address, timeout, source_address) 844 try: --> 845 raise err 846 finally: 847 # Break explicitly a reference cycle File /opt/conda/lib/python3.10/socket.py:833, in create_connection(address, timeout, source_address) 832 sock.bind(source_address) --> 833 sock.connect(sa) 834 # Break explicitly a reference cycle OSError: [Errno 99] Cannot assign 
requested address The above exception was the direct cause of the following exception: ConnectError Traceback (most recent call last) File /opt/conda/lib/python3.10/site-packages/httpx/_transports/default.py:67, in map_httpcore_exceptions() 66 try: ---> 67 yield 68 except Exception as exc: File /opt/conda/lib/python3.10/site-packages/httpx/_transports/default.py:231, in HTTPTransport.handle_request(self, request) 230 with map_httpcore_exceptions(): --> 231 resp = self._pool.handle_request(req) 233 assert isinstance(resp.stream, typing.Iterable) File /opt/conda/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py:268, in ConnectionPool.handle_request(self, request) 267 self.response_closed(status) --> 268 raise exc 269 else: File /opt/conda/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py:251, in ConnectionPool.handle_request(self, request) 250 try: --> 251 response = connection.handle_request(request) 252 except ConnectionNotAvailable: 253 # The ConnectionNotAvailable exception is a special case, that 254 # indicates we need to retry the request on a new connection. (...) 258 # might end up as an HTTP/2 connection, but which actually ends 259 # up as HTTP/1.1. 
File /opt/conda/lib/python3.10/site-packages/httpcore/_sync/connection.py:99, in HTTPConnection.handle_request(self, request) 98 self._connect_failed = True ---> 99 raise exc 100 elif not self._connection.is_available(): File /opt/conda/lib/python3.10/site-packages/httpcore/_sync/connection.py:76, in HTTPConnection.handle_request(self, request) 75 try: ---> 76 stream = self._connect(request) 78 ssl_object = stream.get_extra_info("ssl_object") File /opt/conda/lib/python3.10/site-packages/httpcore/_sync/connection.py:124, in HTTPConnection._connect(self, request) 123 with Trace("connect_tcp", logger, request, kwargs) as trace: --> 124 stream = self._network_backend.connect_tcp(**kwargs) 125 trace.return_value = stream File /opt/conda/lib/python3.10/site-packages/httpcore/_backends/sync.py:205, in SyncBackend.connect_tcp(self, host, port, timeout, local_address, socket_options) 200 exc_map: ExceptionMapping = { 201 socket.timeout: ConnectTimeout, 202 OSError: ConnectError, 203 } --> 205 with map_exceptions(exc_map): 206 sock = socket.create_connection( 207 address, 208 timeout, 209 source_address=source_address, 210 ) File /opt/conda/lib/python3.10/contextlib.py:153, in _GeneratorContextManager.__exit__(self, typ, value, traceback) 152 try: --> 153 self.gen.throw(typ, value, traceback) 154 except StopIteration as exc: 155 # Suppress StopIteration *unless* it's the same exception that 156 # was passed to throw(). This prevents a StopIteration 157 # raised inside the "with" statement from being suppressed. 
File /opt/conda/lib/python3.10/site-packages/httpcore/_exceptions.py:14, in map_exceptions(map) 13 if isinstance(exc, from_exc): ---> 14 raise to_exc(exc) from exc 15 raise ConnectError: [Errno 99] Cannot assign requested address The above exception was the direct cause of the following exception: ConnectError Traceback (most recent call last) Cell In[13], line 5 2 from llama_index.llms import Ollama 4 llm = Ollama(model=OLLAMA_MODEL) ----> 5 response = llm.complete("""Who is Grigori Perelman and why is he so important in mathematics? 6 (Answer with markdown sections, markdown with be the GitHub flavor.)""") 7 print(response) File /opt/conda/lib/python3.10/site-packages/llama_index/llms/base.py:226, in llm_completion_callback.<locals>.wrap.<locals>.wrapped_llm_predict(_self, *args, **kwargs) 216 with wrapper_logic(_self) as callback_manager: 217 event_id = callback_manager.on_event_start( 218 CBEventType.LLM, 219 payload={ (...) 223 }, 224 ) --> 226 f_return_val = f(_self, *args, **kwargs) 227 if isinstance(f_return_val, Generator): 228 # intercept the generator and add a callback to the end 229 def wrapped_gen() -> CompletionResponseGen: File /opt/conda/lib/python3.10/site-packages/llama_index/llms/ollama.py:180, in Ollama.complete(self, prompt, formatted, **kwargs) 171 payload = { 172 self.prompt_key: prompt, 173 "model": self.model, (...) 176 **kwargs, 177 } 179 with httpx.Client(timeout=Timeout(self.request_timeout)) as client: --> 180 response = client.post( 181 url=f"{self.base_url}/api/generate", 182 json=payload, 183 ) 184 response.raise_for_status() 185 raw = response.json() File /opt/conda/lib/python3.10/site-packages/httpx/_client.py:1146, in Client.post(self, url, content, data, files, json, params, headers, cookies, auth, follow_redirects, timeout, extensions) 1125 def post( 1126 self, 1127 url: URLTypes, (...) 1139 extensions: typing.Optional[RequestExtensions] = None, 1140 ) -> Response: 1141 """ 1142 Send a `POST` request. 
1143 1144 **Parameters**: See `httpx.request`. 1145 """ -> 1146 return self.request( 1147 "POST", 1148 url, 1149 content=content, 1150 data=data, 1151 files=files, 1152 json=json, 1153 params=params, 1154 headers=headers, 1155 cookies=cookies, 1156 auth=auth, 1157 follow_redirects=follow_redirects, 1158 timeout=timeout, 1159 extensions=extensions, 1160 ) File /opt/conda/lib/python3.10/site-packages/httpx/_client.py:828, in Client.request(self, method, url, content, data, files, json, params, headers, cookies, auth, follow_redirects, timeout, extensions) 813 warnings.warn(message, DeprecationWarning) 815 request = self.build_request( 816 method=method, 817 url=url, (...) 826 extensions=extensions, 827 ) --> 828 return self.send(request, auth=auth, follow_redirects=follow_redirects) File /opt/conda/lib/python3.10/site-packages/httpx/_client.py:915, in Client.send(self, request, stream, auth, follow_redirects) 907 follow_redirects = ( 908 self.follow_redirects 909 if isinstance(follow_redirects, UseClientDefault) 910 else follow_redirects 911 ) 913 auth = self._build_request_auth(request, auth) --> 915 response = self._send_handling_auth( 916 request, 917 auth=auth, 918 follow_redirects=follow_redirects, 919 history=[], 920 ) 921 try: 922 if not stream: File /opt/conda/lib/python3.10/site-packages/httpx/_client.py:943, in Client._send_handling_auth(self, request, auth, follow_redirects, history) 940 request = next(auth_flow) 942 while True: --> 943 response = self._send_handling_redirects( 944 request, 945 follow_redirects=follow_redirects, 946 history=history, 947 ) 948 try: 949 try: File /opt/conda/lib/python3.10/site-packages/httpx/_client.py:980, in Client._send_handling_redirects(self, request, follow_redirects, history) 977 for hook in self._event_hooks["request"]: 978 hook(request) --> 980 response = self._send_single_request(request) 981 try: 982 for hook in self._event_hooks["response"]: File /opt/conda/lib/python3.10/site-packages/httpx/_client.py:1016, in 
Client._send_single_request(self, request) 1011 raise RuntimeError( 1012 "Attempted to send an async request with a sync Client instance." 1013 ) 1015 with request_context(request=request): -> 1016 response = transport.handle_request(request) 1018 assert isinstance(response.stream, SyncByteStream) 1020 response.request = request File /opt/conda/lib/python3.10/site-packages/httpx/_transports/default.py:230, in HTTPTransport.handle_request(self, request) 216 assert isinstance(request.stream, SyncByteStream) 218 req = httpcore.Request( 219 method=request.method, 220 url=httpcore.URL( (...) 228 extensions=request.extensions, 229 ) --> 230 with map_httpcore_exceptions(): 231 resp = self._pool.handle_request(req) 233 assert isinstance(resp.stream, typing.Iterable) File /opt/conda/lib/python3.10/contextlib.py:153, in _GeneratorContextManager.__exit__(self, typ, value, traceback) 151 value = typ() 152 try: --> 153 self.gen.throw(typ, value, traceback) 154 except StopIteration as exc: 155 # Suppress StopIteration *unless* it's the same exception that 156 # was passed to throw(). This prevents a StopIteration 157 # raised inside the "with" statement from being suppressed. 158 return exc is not value File /opt/conda/lib/python3.10/site-packages/httpx/_transports/default.py:84, in map_httpcore_exceptions() 81 raise 83 message = str(exc) ---> 84 raise mapped_exc(message) from exc ConnectError: [Errno 99] Cannot assign requested address ```
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1997/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1997/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1015
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1015/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1015/comments
https://api.github.com/repos/ollama/ollama/issues/1015/events
https://github.com/ollama/ollama/pull/1015
1,978,789,553
PR_kwDOJ0Z1Ps5eq1mp
1,015
Update api.md
{ "login": "vmellgre", "id": 46565663, "node_id": "MDQ6VXNlcjQ2NTY1NjYz", "avatar_url": "https://avatars.githubusercontent.com/u/46565663?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vmellgre", "html_url": "https://github.com/vmellgre", "followers_url": "https://api.github.com/users/vmellgre/followers", "following_url": "https://api.github.com/users/vmellgre/following{/other_user}", "gists_url": "https://api.github.com/users/vmellgre/gists{/gist_id}", "starred_url": "https://api.github.com/users/vmellgre/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vmellgre/subscriptions", "organizations_url": "https://api.github.com/users/vmellgre/orgs", "repos_url": "https://api.github.com/users/vmellgre/repos", "events_url": "https://api.github.com/users/vmellgre/events{/privacy}", "received_events_url": "https://api.github.com/users/vmellgre/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-11-06T10:24:18
2023-11-29T21:21:58
2023-11-29T21:21:58
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1015", "html_url": "https://github.com/ollama/ollama/pull/1015", "diff_url": "https://github.com/ollama/ollama/pull/1015.diff", "patch_url": "https://github.com/ollama/ollama/pull/1015.patch", "merged_at": null }
Fixed documentation, responds one token for streamed results
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1015/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1015/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5535
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5535/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5535/comments
https://api.github.com/repos/ollama/ollama/issues/5535/events
https://github.com/ollama/ollama/pull/5535
2,394,163,520
PR_kwDOJ0Z1Ps50nkSy
5,535
llm: remove ambiguous log message when placing an upper limit on predictions
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-07-07T18:21:26
2024-07-07T18:32:07
2024-07-07T18:32:05
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5535", "html_url": "https://github.com/ollama/ollama/pull/5535", "diff_url": "https://github.com/ollama/ollama/pull/5535.diff", "patch_url": "https://github.com/ollama/ollama/pull/5535.patch", "merged_at": "2024-07-07T18:32:05" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5535/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5535/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2619
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2619/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2619/comments
https://api.github.com/repos/ollama/ollama/issues/2619/events
https://github.com/ollama/ollama/pull/2619
2,145,336,350
PR_kwDOJ0Z1Ps5ncl29
2,619
API doc formatting updates
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-02-20T21:40:43
2024-05-07T17:49:02
2024-05-07T17:49:02
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2619", "html_url": "https://github.com/ollama/ollama/pull/2619", "diff_url": "https://github.com/ollama/ollama/pull/2619.diff", "patch_url": "https://github.com/ollama/ollama/pull/2619.patch", "merged_at": null }
- in preparation for rendering on ollama.com
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2619/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2619/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6902
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6902/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6902/comments
https://api.github.com/repos/ollama/ollama/issues/6902/events
https://github.com/ollama/ollama/issues/6902
2,540,292,373
I_kwDOJ0Z1Ps6XackV
6,902
No ollama model can recognize the referenced information.
{ "login": "SDAIer", "id": 174102361, "node_id": "U_kgDOCmCXWQ", "avatar_url": "https://avatars.githubusercontent.com/u/174102361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SDAIer", "html_url": "https://github.com/SDAIer", "followers_url": "https://api.github.com/users/SDAIer/followers", "following_url": "https://api.github.com/users/SDAIer/following{/other_user}", "gists_url": "https://api.github.com/users/SDAIer/gists{/gist_id}", "starred_url": "https://api.github.com/users/SDAIer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SDAIer/subscriptions", "organizations_url": "https://api.github.com/users/SDAIer/orgs", "repos_url": "https://api.github.com/users/SDAIer/repos", "events_url": "https://api.github.com/users/SDAIer/events{/privacy}", "received_events_url": "https://api.github.com/users/SDAIer/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
9
2024-09-21T14:05:25
2024-09-25T07:11:56
2024-09-25T07:11:56
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Scene One By calling a public cloud-based LLM model through an AI Agent, two documents exceeding 2000 words each are uploaded, and the input question is: Analyze the differences between the two documents. In this manner, the model can normally analyze the differences between the two documents. Scene Two If the locally deployed LLM model of olalma0.3.3 is called (multiple different models have been tried), with the same documents and the same question, the model indicates that it cannot find the documents for comparison. If the document content is reduced to around 1000 words, the model can then compare normally. The model's maxContext and maxResponse have been adjusted from small to large, with no effect. Scene Three: When uploading a document exceeding 2000 words and asking the ollama local model to summarize the content, the same issue arises. However, if the document is reduced to around 1000 words, the ollama local model can analyze it normally. Despite trying multiple ollama models and adjusting maxContext and maxResponse from 2000 to 30000, the problem persists. the log message as followed, (base) [root@gpu ~]# journalctl -u ollama -r -- Logs begin at 一 2024-09-02 03:24:01 CST, end at 六 2024-09-21 21:59:04 CST. 
-- 9月 21 21:59:04 gpu ollama[48923]: [GIN] 2024/09/21 - 21:59:04 | 200 | 11.452294657s | 172.16.1.219 | POST "/v1/chat/completions" 9月 21 21:59:01 gpu ollama[48923]: time=2024-09-21T21:59:01.646+08:00 level=INFO source=server.go:623 msg="llama runner started in 6.78 seconds" 9月 21 21:59:01 gpu ollama[48923]: INFO [main] model loaded | tid="140514816995328" timestamp=1726927141 9月 21 21:59:00 gpu ollama[48923]: llama_new_context_with_model: graph splits = 2 9月 21 21:59:00 gpu ollama[48923]: llama_new_context_with_model: graph nodes = 1850 9月 21 21:59:00 gpu ollama[48923]: llama_new_context_with_model: CUDA_Host compute buffer size = 41.01 MiB 9月 21 21:59:00 gpu ollama[48923]: llama_new_context_with_model: CUDA0 compute buffer size = 578.00 MiB 9月 21 21:59:00 gpu ollama[48923]: llama_new_context_with_model: CUDA_Host output buffer size = 3.98 MiB 9月 21 21:59:00 gpu ollama[48923]: llama_new_context_with_model: KV self size = 2944.00 MiB, K (f16): 1472.00 MiB, V (f16): 1472.00 MiB 9月 21 21:59:00 gpu ollama[48923]: llama_kv_cache_init: CUDA0 KV buffer size = 2944.00 MiB 9月 21 21:59:00 gpu ollama[48923]: llama_new_context_with_model: freq_scale = 1 9月 21 21:59:00 gpu ollama[48923]: llama_new_context_with_model: freq_base = 10000.0 9月 21 21:59:00 gpu ollama[48923]: llama_new_context_with_model: flash_attn = 0 9月 21 21:59:00 gpu ollama[48923]: llama_new_context_with_model: n_ubatch = 512 9月 21 21:59:00 gpu ollama[48923]: llama_new_context_with_model: n_batch = 512 9月 21 21:59:00 gpu ollama[48923]: llama_new_context_with_model: n_ctx = 8192 9月 21 21:58:58 gpu ollama[48923]: time=2024-09-21T21:58:58.185+08:00 level=INFO source=server.go:618 msg="waiting for server to become available" status="llm serve 9月 21 21:58:58 gpu ollama[48923]: llm_load_tensors: CUDA0 buffer size = 14898.60 MiB 9月 21 21:58:58 gpu ollama[48923]: llm_load_tensors: CPU buffer size = 922.85 MiB 9月 21 21:58:58 gpu ollama[48923]: llm_load_tensors: offloaded 47/47 layers to GPU 9月 21 21:58:58 gpu 
ollama[48923]: llm_load_tensors: offloading non-repeating layers to GPU 9月 21 21:58:58 gpu ollama[48923]: llm_load_tensors: offloading 46 repeating layers to GPU 9月 21 21:58:56 gpu ollama[48923]: time=2024-09-21T21:58:56.580+08:00 level=INFO source=server.go:618 msg="waiting for server to become available" status="llm serve 9月 21 21:58:56 gpu ollama[48923]: llm_load_tensors: ggml ctx size = 0.45 MiB 9月 21 21:58:55 gpu ollama[48923]: Device 0: NVIDIA A30, compute capability 8.0, VMM: yes 9月 21 21:58:55 gpu ollama[48923]: ggml_cuda_init: found 1 CUDA devices: 9月 21 21:58:55 gpu ollama[48923]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no 9月 21 21:58:55 gpu ollama[48923]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: max token length = 93 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: EOT token = 107 '<end_of_turn>' 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: LF token = 227 '<0x0A>' 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: PAD token = 0 '<pad>' 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: UNK token = 3 '<unk>' 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: EOS token = 1 '<eos>' 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: BOS token = 2 '<bos>' 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: general.name = gemma-2-27b-it 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: model size = 14.55 GiB (4.59 BPW) 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: model params = 27.23 B 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: model ftype = Q4_0 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: model type = 27B 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: ssm_dt_rank = 0 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: ssm_d_state = 0 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: ssm_d_inner = 0 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: ssm_d_conv = 0 9月 21 21:58:55 gpu ollama[48923]: 
llm_load_print_meta: rope_finetuned = unknown 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: n_ctx_orig_yarn = 8192 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: freq_scale_train = 1 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: freq_base_train = 10000.0 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: rope scaling = linear 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: rope type = 2 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: pooling type = 0 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: causal attn = 1 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: n_expert_used = 0 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: n_expert = 0 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: n_ff = 36864 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: f_logit_scale = 0.0e+00 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: f_clamp_kqv = 0.0e+00 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: f_norm_rms_eps = 1.0e-06 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: f_norm_eps = 0.0e+00 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: n_embd_v_gqa = 2048 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: n_embd_k_gqa = 2048 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: n_gqa = 2 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: n_embd_head_v = 128 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: n_embd_head_k = 128 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: n_swa = 4096 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: n_rot = 128 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: n_head_kv = 16 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: n_head = 32 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: n_layer = 46 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: n_embd = 4608 9月 21 21:58:55 gpu ollama[48923]: 
llm_load_print_meta: n_ctx_train = 8192 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: vocab_only = 0 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: n_merges = 0 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: n_vocab = 256000 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: vocab type = SPM 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: arch = gemma2 9月 21 21:58:55 gpu ollama[48923]: llm_load_print_meta: format = GGUF V3 (latest) 9月 21 21:58:55 gpu ollama[48923]: llm_load_vocab: token to piece cache size = 1.6014 MB 9月 21 21:58:55 gpu ollama[48923]: llm_load_vocab: special tokens cache size = 108 9月 21 21:58:55 gpu ollama[48923]: llama_model_loader: - type q6_K: 1 tensors 9月 21 21:58:55 gpu ollama[48923]: llama_model_loader: - type q4_0: 322 tensors 9月 21 21:58:55 gpu ollama[48923]: llama_model_loader: - type f32: 185 tensors 9月 21 21:58:55 gpu ollama[48923]: llama_model_loader: - kv 28: general.quantization_version u32 = 2 9月 21 21:58:55 gpu ollama[48923]: llama_model_loader: - kv 27: tokenizer.ggml.add_space_prefix bool = false 9月 21 21:58:55 gpu ollama[48923]: llama_model_loader: - kv 26: tokenizer.chat_template str = {{ bos_token }}{% if messages[0]['rol 9月 21 21:58:55 gpu ollama[48923]: llama_model_loader: - kv 25: tokenizer.ggml.add_eos_token bool = false 9月 21 21:58:55 gpu ollama[48923]: llama_model_loader: - kv 24: tokenizer.ggml.add_bos_token bool = true 9月 21 21:58:55 gpu ollama[48923]: llama_model_loader: - kv 23: tokenizer.ggml.padding_token_id u32 = 0 9月 21 21:58:55 gpu ollama[48923]: llama_model_loader: - kv 22: tokenizer.ggml.unknown_token_id u32 = 3 9月 21 21:58:55 gpu ollama[48923]: llama_model_loader: - kv 21: tokenizer.ggml.eos_token_id u32 = 1 9月 21 21:58:55 gpu ollama[48923]: llama_model_loader: - kv 20: tokenizer.ggml.bos_token_id u32 = 2 9月 21 21:58:55 gpu ollama[48923]: llama_model_loader: - kv 19: tokenizer.ggml.token_type arr[i32,256000] = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 9月 21 21:58:55 gpu 
ollama[48923]: llama_model_loader: - kv 18: tokenizer.ggml.scores arr[f32,256000] = [0.000000, 0.000000, 0.000000, 0.0000 9月 21 21:58:55 gpu ollama[48923]: time=2024-09-21T21:58:55.122+08:00 level=INFO source=server.go:618 msg="waiting for server to become available" status="llm serve 9月 21 21:58:55 gpu ollama[48923]: llama_model_loader: - kv 17: tokenizer.ggml.tokens arr[str,256000] = ["<pad>", "<eos>", "<bos>", "<unk>", 9月 21 21:58:54 gpu ollama[48923]: llama_model_loader: - kv 16: tokenizer.ggml.pre str = default 9月 21 21:58:54 gpu ollama[48923]: llama_model_loader: - kv 15: tokenizer.ggml.model str = llama 9月 21 21:58:54 gpu ollama[48923]: llama_model_loader: - kv 14: gemma2.attention.sliding_window u32 = 4096 9月 21 21:58:54 gpu ollama[48923]: llama_model_loader: - kv 13: gemma2.final_logit_softcapping f32 = 30.000000 9月 21 21:58:54 gpu ollama[48923]: llama_model_loader: - kv 12: gemma2.attn_logit_softcapping f32 = 50.000000 9月 21 21:58:54 gpu ollama[48923]: llama_model_loader: - kv 11: general.file_type u32 = 2 9月 21 21:58:54 gpu ollama[48923]: llama_model_loader: - kv 10: gemma2.attention.value_length u32 = 128 9月 21 21:58:54 gpu ollama[48923]: llama_model_loader: - kv 9: gemma2.attention.key_length u32 = 128 9月 21 21:58:54 gpu ollama[48923]: llama_model_loader: - kv 8: gemma2.attention.layer_norm_rms_epsilon f32 = 0.000001 9月 21 21:58:54 gpu ollama[48923]: llama_model_loader: - kv 7: gemma2.attention.head_count_kv u32 = 16 9月 21 21:58:54 gpu ollama[48923]: llama_model_loader: - kv 6: gemma2.attention.head_count u32 = 32 9月 21 21:58:54 gpu ollama[48923]: llama_model_loader: - kv 5: gemma2.feed_forward_length u32 = 36864 9月 21 21:58:54 gpu ollama[48923]: llama_model_loader: - kv 4: gemma2.block_count u32 = 46 9月 21 21:58:54 gpu ollama[48923]: llama_model_loader: - kv 3: gemma2.embedding_length u32 = 4608 9月 21 21:58:54 gpu ollama[48923]: llama_model_loader: - kv 2: gemma2.context_length u32 = 8192 9月 21 21:58:54 gpu ollama[48923]: llama_model_loader: - kv 1: 
general.name str = gemma-2-27b-it 9月 21 21:58:54 gpu ollama[48923]: llama_model_loader: - kv 0: general.architecture str = gemma2 9月 21 21:58:54 gpu ollama[48923]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 9月 21 21:58:54 gpu ollama[48923]: llama_model_loader: loaded meta data with 29 key-value pairs and 508 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-d 9月 21 21:58:54 gpu ollama[48923]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="63" port="34781" tid="140514816995328" timestamp=1726927 9月 21 21:58:54 gpu ollama[48923]: INFO [main] system info | n_threads=32 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VB 9月 21 21:58:54 gpu ollama[48923]: INFO [main] build info | build=1 commit="6eeaeba" tid="140514816995328" timestamp=1726927134 9月 21 21:58:54 gpu ollama[48923]: time=2024-09-21T21:58:54.869+08:00 level=INFO source=server.go:618 msg="waiting for server to become available" status="llm serve 9月 21 21:58:54 gpu ollama[48923]: time=2024-09-21T21:58:54.864+08:00 level=INFO source=server.go:584 msg="waiting for llama runner to start responding" 9月 21 21:58:54 gpu ollama[48923]: time=2024-09-21T21:58:54.863+08:00 level=INFO source=sched.go:445 msg="loaded runners" count=1 9月 21 21:58:54 gpu ollama[48923]: time=2024-09-21T21:58:54.863+08:00 level=INFO source=server.go:384 msg="starting llama server" cmd="/tmp/ollama242898797/runners/ 9月 21 21:58:54 gpu ollama[48923]: time=2024-09-21T21:58:54.862+08:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=47 laye 9月 21 21:58:54 gpu ollama[48923]: time=2024-09-21T21:58:54.861+08:00 level=INFO source=sched.go:710 msg="new model will fit in available VRAM in single GPU, loading ### upgraded to 0.3.11, same issue; log follows 9月 21 22:39:08 gpu ollama[54696]: [GIN] 2024/09/21 - 22:39:08 | 200 | 10.260377093s | 172.16.1.219 | POST
"/v1/chat/completions" 9月 21 22:39:05 gpu ollama[54696]: time=2024-09-21T22:39:05.203+08:00 level=INFO source=server.go:626 msg="llama runner started in 5.67 seconds" 9月 21 22:39:05 gpu ollama[54696]: INFO [main] model loaded | tid="140149569400832" timestamp=1726929545 9月 21 22:39:04 gpu ollama[54696]: llama_new_context_with_model: graph splits = 2 9月 21 22:39:04 gpu ollama[54696]: llama_new_context_with_model: graph nodes = 1850 9月 21 22:39:04 gpu ollama[54696]: llama_new_context_with_model: CUDA_Host compute buffer size = 41.01 MiB 9月 21 22:39:04 gpu ollama[54696]: llama_new_context_with_model: CUDA0 compute buffer size = 578.00 MiB 9月 21 22:39:04 gpu ollama[54696]: llama_new_context_with_model: CUDA_Host output buffer size = 3.98 MiB 9月 21 22:39:04 gpu ollama[54696]: llama_new_context_with_model: KV self size = 2944.00 MiB, K (f16): 1472.00 MiB, V (f16): 1472.00 MiB 9月 21 22:39:04 gpu ollama[54696]: llama_kv_cache_init: CUDA0 KV buffer size = 2944.00 MiB 9月 21 22:39:04 gpu ollama[54696]: llama_new_context_with_model: freq_scale = 1 9月 21 22:39:04 gpu ollama[54696]: llama_new_context_with_model: freq_base = 10000.0 9月 21 22:39:04 gpu ollama[54696]: llama_new_context_with_model: flash_attn = 0 9月 21 22:39:04 gpu ollama[54696]: llama_new_context_with_model: n_ubatch = 512 9月 21 22:39:04 gpu ollama[54696]: llama_new_context_with_model: n_batch = 512 9月 21 22:39:04 gpu ollama[54696]: llama_new_context_with_model: n_ctx = 8192 9月 21 22:39:02 gpu ollama[54696]: time=2024-09-21T22:39:02.405+08:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm serve 9月 21 22:39:02 gpu ollama[54696]: llm_load_tensors: CUDA0 buffer size = 14898.60 MiB 9月 21 22:39:02 gpu ollama[54696]: llm_load_tensors: CPU buffer size = 922.85 MiB 9月 21 22:39:02 gpu ollama[54696]: llm_load_tensors: offloaded 47/47 layers to GPU 9月 21 22:39:02 gpu ollama[54696]: llm_load_tensors: offloading non-repeating layers to GPU 9月 21 22:39:02 gpu ollama[54696]: 
llm_load_tensors: offloading 46 repeating layers to GPU 9月 21 22:39:01 gpu ollama[54696]: time=2024-09-21T22:39:01.250+08:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm serve 9月 21 22:39:00 gpu ollama[54696]: llm_load_tensors: ggml ctx size = 0.45 MiB 9月 21 22:39:00 gpu ollama[54696]: Device 0: NVIDIA A30, compute capability 8.0, VMM: yes 9月 21 22:39:00 gpu ollama[54696]: ggml_cuda_init: found 1 CUDA devices: 9月 21 22:39:00 gpu ollama[54696]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no 9月 21 22:39:00 gpu ollama[54696]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: max token length = 93 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: EOT token = 107 '<end_of_turn>' 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: LF token = 227 '<0x0A>' 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: PAD token = 0 '<pad>' 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: UNK token = 3 '<unk>' 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: EOS token = 1 '<eos>' 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: BOS token = 2 '<bos>' 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: general.name = gemma-2-27b-it 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: model size = 14.55 GiB (4.59 BPW) 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: model params = 27.23 B 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: model ftype = Q4_0 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: model type = 27B 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: ssm_dt_b_c_rms = 0 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: ssm_dt_rank = 0 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: ssm_d_state = 0 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: ssm_d_inner = 0 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: ssm_d_conv = 0 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: rope_finetuned = unknown 9月 
21 22:39:00 gpu ollama[54696]: llm_load_print_meta: n_ctx_orig_yarn = 8192 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: freq_scale_train = 1 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: freq_base_train = 10000.0 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: rope scaling = linear 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: rope type = 2 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: pooling type = 0 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: causal attn = 1 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: n_expert_used = 0 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: n_expert = 0 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: n_ff = 36864 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: f_logit_scale = 0.0e+00 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: f_clamp_kqv = 0.0e+00 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: f_norm_rms_eps = 1.0e-06 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: f_norm_eps = 0.0e+00 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: n_embd_v_gqa = 2048 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: n_embd_k_gqa = 2048 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: n_gqa = 2 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: n_embd_head_v = 128 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: n_embd_head_k = 128 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: n_swa = 4096 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: n_rot = 128 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: n_head_kv = 16 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: n_head = 32 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: n_layer = 46 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: n_embd = 4608 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: n_ctx_train = 8192 9月 21 22:39:00 gpu 
ollama[54696]: llm_load_print_meta: vocab_only = 0 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: n_merges = 0 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: n_vocab = 256000 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: vocab type = SPM 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: arch = gemma2 9月 21 22:39:00 gpu ollama[54696]: llm_load_print_meta: format = GGUF V3 (latest) 9月 21 22:39:00 gpu ollama[54696]: llm_load_vocab: token to piece cache size = 1.6014 MB 9月 21 22:39:00 gpu ollama[54696]: llm_load_vocab: special tokens cache size = 108 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - type q6_K: 1 tensors 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - type q4_0: 322 tensors 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - type f32: 185 tensors 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 28: general.quantization_version u32 = 2 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 27: tokenizer.ggml.add_space_prefix bool = false 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 26: tokenizer.chat_template str = {{ bos_token }}{% if messages[0]['rol 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 25: tokenizer.ggml.add_eos_token bool = false 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 24: tokenizer.ggml.add_bos_token bool = true 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 23: tokenizer.ggml.padding_token_id u32 = 0 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 22: tokenizer.ggml.unknown_token_id u32 = 3 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 21: tokenizer.ggml.eos_token_id u32 = 1 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 20: tokenizer.ggml.bos_token_id u32 = 2 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 19: tokenizer.ggml.token_type arr[i32,256000] = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 18: tokenizer.ggml.scores 
arr[f32,256000] = [0.000000, 0.000000, 0.000000, 0.0000 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 17: tokenizer.ggml.tokens arr[str,256000] = ["<pad>", "<eos>", "<bos>", "<unk>", 9月 21 22:38:59 gpu ollama[54696]: time=2024-09-21T22:38:59.792+08:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm serve 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 16: tokenizer.ggml.pre str = default 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 15: tokenizer.ggml.model str = llama 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 14: gemma2.attention.sliding_window u32 = 4096 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 13: gemma2.final_logit_softcapping f32 = 30.000000 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 12: gemma2.attn_logit_softcapping f32 = 50.000000 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 11: general.file_type u32 = 2 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 10: gemma2.attention.value_length u32 = 128 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 9: gemma2.attention.key_length u32 = 128 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 8: gemma2.attention.layer_norm_rms_epsilon f32 = 0.000001 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 7: gemma2.attention.head_count_kv u32 = 16 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 6: gemma2.attention.head_count u32 = 32 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 5: gemma2.feed_forward_length u32 = 36864 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 4: gemma2.block_count u32 = 46 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 3: gemma2.embedding_length u32 = 4608 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 2: gemma2.context_length u32 = 8192 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: - kv 1: general.name str = gemma-2-27b-it 9月 21 22:38:59 gpu ollama[54696]: 
llama_model_loader: - kv 0: general.architecture str = gemma2 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. 9月 21 22:38:59 gpu ollama[54696]: llama_model_loader: loaded meta data with 29 key-value pairs and 508 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-d 9月 21 22:38:59 gpu ollama[54696]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="63" port="43383" tid="140149569400832" timestamp=1726929 9月 21 22:38:59 gpu ollama[54696]: INFO [main] system info | n_threads=32 n_threads_batch=32 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VB 9月 21 22:38:59 gpu ollama[54696]: INFO [main] build info | build=10 commit="9225b05" tid="140149569400832" timestamp=1726929539 9月 21 22:38:59 gpu ollama[54696]: time=2024-09-21T22:38:59.537+08:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm serve 9月 21 22:38:59 gpu ollama[54696]: time=2024-09-21T22:38:59.536+08:00 level=INFO source=server.go:587 msg="waiting for llama runner to start responding" 9月 21 22:38:59 gpu ollama[54696]: time=2024-09-21T22:38:59.536+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1 9月 21 22:38:59 gpu ollama[54696]: time=2024-09-21T22:38:59.534+08:00 level=INFO source=server.go:388 msg="starting llama server" cmd="/tmp/ollama2642747161/runners 9月 21 22:38:59 gpu ollama[54696]: time=2024-09-21T22:38:59.516+08:00 level=INFO source=memory.go:326 msg="offload to cuda" layers.requested=-1 layers.model=47 laye 9月 21 22:38:59 gpu ollama[54696]: time=2024-09-21T22:38:59.514+08:00 level=INFO source=server.go:103 msg="system memory" total="125.4 GiB" free="111.5 GiB" free_sw 9月 21 22:38:59 gpu ollama[54696]: time=2024-09-21T22:38:59.514+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loadin 9月 21 22:38:51 gpu ollama[54696]: time=2024-09-21T22:38:51.422+08:00 level=INFO 
source=types.go:107 msg="inference compute" id=GPU-1a5993d8-1f60-3ecd-b80f-55ca9f1e 9月 21 22:38:51 gpu ollama[54696]: time=2024-09-21T22:38:51.422+08:00 level=INFO source=types.go:107 msg="inference compute" id=GPU-ac079011-c45b-de29-f2e2-71b2e5d2 9月 21 22:38:51 gpu ollama[54696]: time=2024-09-21T22:38:51.422+08:00 level=INFO source=types.go:107 msg="inference compute" id=GPU-6b83f2f6-dc65-7feb-5e02-0cd00879 9月 21 22:38:51 gpu ollama[54696]: time=2024-09-21T22:38:51.421+08:00 level=INFO source=types.go:107 msg="inference compute" id=GPU-ad4cba93-ee35-2ea2-dba7-7b5772a0 9月 21 22:38:50 gpu ollama[54696]: time=2024-09-21T22:38:50.097+08:00 level=INFO source=gpu.go:199 msg="looking for compatible GPUs" 9月 21 22:38:50 gpu ollama[54696]: time=2024-09-21T22:38:50.097+08:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda 9月 21 22:38:34 gpu ollama[54696]: time=2024-09-21T22:38:34.553+08:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama2642747161/runn 9月 21 22:38:34 gpu ollama[54696]: time=2024-09-21T22:38:34.552+08:00 level=INFO source=routes.go:1200 msg="Listening on [::]:11434 (version 0.3.11)" 9月 21 22:38:34 gpu ollama[54696]: time=2024-09-21T22:38:34.551+08:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0" 9月 21 22:38:34 gpu ollama[54696]: time=2024-09-21T22:38:34.549+08:00 level=INFO source=images.go:753 msg="total blobs: 44" 9月 21 22:38:34 gpu ollama[54696]: 2024/09/21 22:38:34 routes.go:1153: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HS ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.3.3
{ "login": "SDAIer", "id": 174102361, "node_id": "U_kgDOCmCXWQ", "avatar_url": "https://avatars.githubusercontent.com/u/174102361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SDAIer", "html_url": "https://github.com/SDAIer", "followers_url": "https://api.github.com/users/SDAIer/followers", "following_url": "https://api.github.com/users/SDAIer/following{/other_user}", "gists_url": "https://api.github.com/users/SDAIer/gists{/gist_id}", "starred_url": "https://api.github.com/users/SDAIer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SDAIer/subscriptions", "organizations_url": "https://api.github.com/users/SDAIer/orgs", "repos_url": "https://api.github.com/users/SDAIer/repos", "events_url": "https://api.github.com/users/SDAIer/events{/privacy}", "received_events_url": "https://api.github.com/users/SDAIer/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6902/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6902/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1294
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1294/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1294/comments
https://api.github.com/repos/ollama/ollama/issues/1294/events
https://github.com/ollama/ollama/pull/1294
2,013,342,661
PR_kwDOJ0Z1Ps5gfvnh
1,294
Allow setting parameters in the REPL
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-11-28T00:02:40
2023-11-29T17:56:43
2023-11-29T17:56:42
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1294", "html_url": "https://github.com/ollama/ollama/pull/1294", "diff_url": "https://github.com/ollama/ollama/pull/1294.diff", "patch_url": "https://github.com/ollama/ollama/pull/1294.patch", "merged_at": "2023-11-29T17:56:42" }
This change adds a new `/set parameter` command inside the REPL so that you can change parameters without having to recreate a modelfile. I have changed the `/show parameters` command to also reflect any parameters that have been set; however, I haven't yet changed `/show modelfile`, which should spit out a new modelfile reflecting the changes. That can come in a follow-up PR. Also not included in this PR are `/set template` and `/set system`, which will come in a different PR.
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1294/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3024
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3024/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3024/comments
https://api.github.com/repos/ollama/ollama/issues/3024/events
https://github.com/ollama/ollama/issues/3024
2,177,284,627
I_kwDOJ0Z1Ps6BxroT
3,024
Ollama not using GPU, falling back to CPU
{ "login": "kopigeek-labs", "id": 128293648, "node_id": "U_kgDOB6WbEA", "avatar_url": "https://avatars.githubusercontent.com/u/128293648?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kopigeek-labs", "html_url": "https://github.com/kopigeek-labs", "followers_url": "https://api.github.com/users/kopigeek-labs/followers", "following_url": "https://api.github.com/users/kopigeek-labs/following{/other_user}", "gists_url": "https://api.github.com/users/kopigeek-labs/gists{/gist_id}", "starred_url": "https://api.github.com/users/kopigeek-labs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kopigeek-labs/subscriptions", "organizations_url": "https://api.github.com/users/kopigeek-labs/orgs", "repos_url": "https://api.github.com/users/kopigeek-labs/repos", "events_url": "https://api.github.com/users/kopigeek-labs/events{/privacy}", "received_events_url": "https://api.github.com/users/kopigeek-labs/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg", "url": "https://api.github.com/repos/ollama/ollama/labels/nvidia", "name": "nvidia", "color": "8CDB00", "default": false, "description": "Issues relating to Nvidia GPUs and CUDA" }, { "id": 6677677816, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A", "url": "https://api.github.com/repos/ollama/ollama/labels/docker", "name": "docker", "color": "0052CC", "default": false, "description": "Issues relating to using ollama in containers" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
7
2024-03-09T15:59:20
2024-04-29T22:43:52
2024-04-12T22:18:53
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'm running Ollama via a docker container on Debian. For a llama2 model, my CPU utilization is at 100% while GPU remains at 0%. Here is my output from `docker logs ollama`: ``` time=2024-03-09T14:52:42.622Z level=INFO source=images.go:800 msg="total blobs: 0" time=2024-03-09T14:52:42.623Z level=INFO source=images.go:807 msg="total unused blobs removed: 0" time=2024-03-09T14:52:42.623Z level=INFO source=routes.go:1019 msg="Listening on [::]:11434 (version 0.1.28)" time=2024-03-09T14:52:42.623Z level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..." time=2024-03-09T14:52:46.425Z level=INFO source=payload_common.go:150 msg="Dynamic LLM libraries [cpu_avx rocm_v60000 cpu_avx2 cuda_v11 cpu]" time=2024-03-09T14:52:46.425Z level=INFO source=gpu.go:77 msg="Detecting GPU type" time=2024-03-09T14:52:46.425Z level=INFO source=gpu.go:191 msg="Searching for GPU management library libnvidia-ml.so" time=2024-03-09T14:52:46.426Z level=INFO source=gpu.go:237 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.54.14]" time=2024-03-09T14:52:46.434Z level=INFO source=gpu.go:249 msg="Unable to load CUDA management library /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.54.14: nvml vram init failure: 999" time=2024-03-09T14:52:46.434Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2" time=2024-03-09T14:52:46.434Z level=INFO source=routes.go:1042 msg="no GPU detected" time=2024-03-09T15:12:39.692Z level=INFO source=images.go:800 msg="total blobs: 6" time=2024-03-09T15:12:39.694Z level=INFO source=images.go:807 msg="total unused blobs removed: 6" time=2024-03-09T15:12:39.695Z level=INFO source=routes.go:1019 msg="Listening on [::]:11434 (version 0.1.28)" time=2024-03-09T15:12:39.695Z level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..." 
time=2024-03-09T15:12:43.522Z level=INFO source=payload_common.go:150 msg="Dynamic LLM libraries [rocm_v60000 cpu cpu_avx cpu_avx2 cuda_v11]" time=2024-03-09T15:12:43.523Z level=INFO source=gpu.go:77 msg="Detecting GPU type" time=2024-03-09T15:12:43.523Z level=INFO source=gpu.go:191 msg="Searching for GPU management library libnvidia-ml.so" time=2024-03-09T15:12:43.525Z level=INFO source=gpu.go:237 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.54.14]" time=2024-03-09T15:12:43.535Z level=INFO source=gpu.go:249 msg="Unable to load CUDA management library /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.54.14: nvml vram init failure: 999" time=2024-03-09T15:12:43.535Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2" time=2024-03-09T15:12:43.535Z level=INFO source=routes.go:1042 msg="no GPU detected" time=2024-03-09T15:25:32.983Z level=INFO source=images.go:800 msg="total blobs: 0" time=2024-03-09T15:25:32.984Z level=INFO source=images.go:807 msg="total unused blobs removed: 0" time=2024-03-09T15:25:32.984Z level=INFO source=routes.go:1019 msg="Listening on [::]:11434 (version 0.1.28)" time=2024-03-09T15:25:32.985Z level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..." 
time=2024-03-09T15:25:36.686Z level=INFO source=payload_common.go:150 msg="Dynamic LLM libraries [cpu cpu_avx rocm_v60000 cpu_avx2 cuda_v11]" time=2024-03-09T15:25:36.686Z level=INFO source=gpu.go:77 msg="Detecting GPU type" time=2024-03-09T15:25:36.686Z level=INFO source=gpu.go:191 msg="Searching for GPU management library libnvidia-ml.so" time=2024-03-09T15:25:36.688Z level=INFO source=gpu.go:237 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.54.14]" time=2024-03-09T15:25:36.698Z level=INFO source=gpu.go:249 msg="Unable to load CUDA management library /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.54.14: nvml vram init failure: 999" time=2024-03-09T15:25:36.698Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2" time=2024-03-09T15:25:36.698Z level=INFO source=routes.go:1042 msg="no GPU detected" time=2024-03-09T15:28:43.196Z level=INFO source=images.go:800 msg="total blobs: 0" time=2024-03-09T15:28:43.198Z level=INFO source=images.go:807 msg="total unused blobs removed: 0" time=2024-03-09T15:28:43.198Z level=INFO source=routes.go:1019 msg="Listening on [::]:11434 (version 0.1.28)" time=2024-03-09T15:28:43.199Z level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..." 
time=2024-03-09T15:28:46.997Z level=INFO source=payload_common.go:150 msg="Dynamic LLM libraries [cpu cpu_avx rocm_v60000 cuda_v11 cpu_avx2]" time=2024-03-09T15:28:46.997Z level=INFO source=gpu.go:77 msg="Detecting GPU type" time=2024-03-09T15:28:46.998Z level=INFO source=gpu.go:191 msg="Searching for GPU management library libnvidia-ml.so" time=2024-03-09T15:28:46.999Z level=INFO source=gpu.go:237 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.54.14]" time=2024-03-09T15:28:47.010Z level=INFO source=gpu.go:249 msg="Unable to load CUDA management library /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.54.14: nvml vram init failure: 999" time=2024-03-09T15:28:47.010Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2" time=2024-03-09T15:28:47.010Z level=INFO source=routes.go:1042 msg="no GPU detected" time=2024-03-09T15:33:09.444Z level=INFO source=images.go:800 msg="total blobs: 0" time=2024-03-09T15:33:09.444Z level=INFO source=images.go:807 msg="total unused blobs removed: 0" time=2024-03-09T15:33:09.445Z level=INFO source=routes.go:1019 msg="Listening on [::]:11434 (version 0.1.28)" time=2024-03-09T15:33:09.445Z level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..." 
time=2024-03-09T15:33:13.264Z level=INFO source=payload_common.go:150 msg="Dynamic LLM libraries [cuda_v11 cpu_avx cpu rocm_v60000 cpu_avx2]" time=2024-03-09T15:33:13.264Z level=INFO source=gpu.go:77 msg="Detecting GPU type" time=2024-03-09T15:33:13.264Z level=INFO source=gpu.go:191 msg="Searching for GPU management library libnvidia-ml.so" time=2024-03-09T15:33:13.278Z level=INFO source=gpu.go:237 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.54.14]" time=2024-03-09T15:33:13.287Z level=INFO source=gpu.go:249 msg="Unable to load CUDA management library /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.54.14: nvml vram init failure: 999" time=2024-03-09T15:33:13.287Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2" time=2024-03-09T15:33:13.287Z level=INFO source=routes.go:1042 msg="no GPU detected" ... ... time=2024-03-09T15:36:53.196Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2" time=2024-03-09T15:36:53.196Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2" time=2024-03-09T15:36:53.196Z level=INFO source=llm.go:77 msg="GPU not available, falling back to CPU" loading library /root/.ollama/assets/0.1.28/cpu_avx2/libext_server.so time=2024-03-09T15:36:53.200Z level=INFO source=dyn_ext_server.go:90 msg="Loading Dynamic llm server: /root/.ollama/assets/0.1.28/cpu_avx2/libext_server.so" time=2024-03-09T15:36:53.200Z level=INFO source=dyn_ext_server.go:150 msg="Initializing llama server" ... ... 
llama_kv_cache_init: CPU KV buffer size = 1024.00 MiB llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB llama_new_context_with_model: CPU input buffer size = 13.02 MiB llama_new_context_with_model: CPU compute buffer size = 160.00 MiB ``` I can confirm that I have NVIDIA drivers installed, and also the latest version of nvidia-container-toolkit ``` root@docker-debian:/root/docker# nvidia-ctk --version NVIDIA Container Toolkit CLI version 1.14.6 ``` `nvidia-smi` output: ``` root@docker-debian:/root/docker# sudo docker run --rm --runtime=nvidia --gpus all \ --device /dev/nvidia0:/dev/nvidia0 \ --device /dev/nvidia1:/dev/nvidia1 \ --device /dev/nvidiactl \ --device /dev/nvidia-modeset \ --device /dev/nvidia-uvm \ debian nvidia-smi Sat Mar 9 15:53:14 2024 +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 550.54.14 Driver Version: 550.54.14 CUDA Version: 12.4 | |-----------------------------------------+------------------------+----------------------+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+========================+======================| | 0 Tesla M40 24GB Off | 00000000:02:00.0 Off | Off | | N/A 38C P8 16W / 250W | 0MiB / 24576MiB | 0% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ | 1 NVIDIA GeForce GTX 1660 ... 
Off | 00000000:03:00.0 Off | N/A | | 51% 42C P8 12W / 125W | 0MiB / 6144MiB | 0% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ +-----------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=========================================================================================| | No running processes found | +-----------------------------------------------------------------------------------------+ ``` I'm very new to this and learning! Hope some one can point me in the right direction
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3024/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3024/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/155
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/155/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/155/comments
https://api.github.com/repos/ollama/ollama/issues/155/events
https://github.com/ollama/ollama/issues/155
1,815,125,416
I_kwDOJ0Z1Ps5sMJ2o
155
Where are the models pulled to?
{ "login": "m3kwong", "id": 888841, "node_id": "MDQ6VXNlcjg4ODg0MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/888841?v=4", "gravatar_id": "", "url": "https://api.github.com/users/m3kwong", "html_url": "https://github.com/m3kwong", "followers_url": "https://api.github.com/users/m3kwong/followers", "following_url": "https://api.github.com/users/m3kwong/following{/other_user}", "gists_url": "https://api.github.com/users/m3kwong/gists{/gist_id}", "starred_url": "https://api.github.com/users/m3kwong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/m3kwong/subscriptions", "organizations_url": "https://api.github.com/users/m3kwong/orgs", "repos_url": "https://api.github.com/users/m3kwong/repos", "events_url": "https://api.github.com/users/m3kwong/events{/privacy}", "received_events_url": "https://api.github.com/users/m3kwong/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
8
2023-07-21T04:15:37
2024-07-27T10:25:17
2023-08-23T17:47:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
It downloaded 7 gigs of stuff and i can't seem to find where it went. I want to download it. Any ideas?
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/155/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/155/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5886
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5886/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5886/comments
https://api.github.com/repos/ollama/ollama/issues/5886/events
https://github.com/ollama/ollama/pull/5886
2,425,972,130
PR_kwDOJ0Z1Ps52QcxE
5,886
OpenAI: Add Usage to `v1/embeddings`
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjhan/followers", "following_url": "https://api.github.com/users/royjhan/following{/other_user}", "gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/royjhan/subscriptions", "organizations_url": "https://api.github.com/users/royjhan/orgs", "repos_url": "https://api.github.com/users/royjhan/repos", "events_url": "https://api.github.com/users/royjhan/events{/privacy}", "received_events_url": "https://api.github.com/users/royjhan/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-07-23T19:34:33
2024-08-01T22:49:39
2024-08-01T22:49:37
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5886", "html_url": "https://github.com/ollama/ollama/pull/5886", "diff_url": "https://github.com/ollama/ollama/pull/5886.diff", "patch_url": "https://github.com/ollama/ollama/pull/5886.patch", "merged_at": "2024-08-01T22:49:37" }
null
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjhan/followers", "following_url": "https://api.github.com/users/royjhan/following{/other_user}", "gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/royjhan/subscriptions", "organizations_url": "https://api.github.com/users/royjhan/orgs", "repos_url": "https://api.github.com/users/royjhan/repos", "events_url": "https://api.github.com/users/royjhan/events{/privacy}", "received_events_url": "https://api.github.com/users/royjhan/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5886/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5886/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1644
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1644/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1644/comments
https://api.github.com/repos/ollama/ollama/issues/1644/events
https://github.com/ollama/ollama/pull/1644
2,051,392,116
PR_kwDOJ0Z1Ps5ihEHz
1,644
Use cuda base image for final docker image
{ "login": "djmaze", "id": 7229, "node_id": "MDQ6VXNlcjcyMjk=", "avatar_url": "https://avatars.githubusercontent.com/u/7229?v=4", "gravatar_id": "", "url": "https://api.github.com/users/djmaze", "html_url": "https://github.com/djmaze", "followers_url": "https://api.github.com/users/djmaze/followers", "following_url": "https://api.github.com/users/djmaze/following{/other_user}", "gists_url": "https://api.github.com/users/djmaze/gists{/gist_id}", "starred_url": "https://api.github.com/users/djmaze/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/djmaze/subscriptions", "organizations_url": "https://api.github.com/users/djmaze/orgs", "repos_url": "https://api.github.com/users/djmaze/repos", "events_url": "https://api.github.com/users/djmaze/events{/privacy}", "received_events_url": "https://api.github.com/users/djmaze/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
8
2023-12-20T22:34:01
2024-01-27T01:26:02
2024-01-27T01:26:01
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1644", "html_url": "https://github.com/ollama/ollama/pull/1644", "diff_url": "https://github.com/ollama/ollama/pull/1644.diff", "patch_url": "https://github.com/ollama/ollama/pull/1644.patch", "merged_at": null }
This is necessary so cuda works at all.
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1644/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1644/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1642
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1642/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1642/comments
https://api.github.com/repos/ollama/ollama/issues/1642/events
https://github.com/ollama/ollama/pull/1642
2,051,211,722
PR_kwDOJ0Z1Ps5igboL
1,642
Add Cache option #1573
{ "login": "K0IN", "id": 19688162, "node_id": "MDQ6VXNlcjE5Njg4MTYy", "avatar_url": "https://avatars.githubusercontent.com/u/19688162?v=4", "gravatar_id": "", "url": "https://api.github.com/users/K0IN", "html_url": "https://github.com/K0IN", "followers_url": "https://api.github.com/users/K0IN/followers", "following_url": "https://api.github.com/users/K0IN/following{/other_user}", "gists_url": "https://api.github.com/users/K0IN/gists{/gist_id}", "starred_url": "https://api.github.com/users/K0IN/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/K0IN/subscriptions", "organizations_url": "https://api.github.com/users/K0IN/orgs", "repos_url": "https://api.github.com/users/K0IN/repos", "events_url": "https://api.github.com/users/K0IN/events{/privacy}", "received_events_url": "https://api.github.com/users/K0IN/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
11
2023-12-20T20:24:14
2024-08-18T12:01:14
2023-12-22T22:16:20
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1642", "html_url": "https://github.com/ollama/ollama/pull/1642", "diff_url": "https://github.com/ollama/ollama/pull/1642.diff", "patch_url": "https://github.com/ollama/ollama/pull/1642.patch", "merged_at": "2023-12-22T22:16:20" }
This PR, adds the API option "cache", that allows the llama.cpp server to cache our prompt Eval and the response. This speed-up follow-up calls immensely for some models, if you use it over the API, with the same prompt (or even partial ones), it will speed up subsequent calls, since it skips the evaluation of the prompt. Also, this PR includes commands /set cache and /set nocache to give users the ability to enable prompt caching in the official CLI. * Add a new entry "cache" to the options object that is passed to the worker * Add commands /set cache and /set nocache to allow this in the repl cli * Update docs This is a partial fix for, Enable prompt cache #1573, we might need to patch llama.cpp at some point to allow us full flexibility.
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1642/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1642/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2107
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2107/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2107/comments
https://api.github.com/repos/ollama/ollama/issues/2107/events
https://github.com/ollama/ollama/issues/2107
2,091,949,934
I_kwDOJ0Z1Ps58sJ9u
2,107
Crash upon loading any model with the ROCm GPU
{ "login": "ThatOneCalculator", "id": 44733677, "node_id": "MDQ6VXNlcjQ0NzMzNjc3", "avatar_url": "https://avatars.githubusercontent.com/u/44733677?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ThatOneCalculator", "html_url": "https://github.com/ThatOneCalculator", "followers_url": "https://api.github.com/users/ThatOneCalculator/followers", "following_url": "https://api.github.com/users/ThatOneCalculator/following{/other_user}", "gists_url": "https://api.github.com/users/ThatOneCalculator/gists{/gist_id}", "starred_url": "https://api.github.com/users/ThatOneCalculator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ThatOneCalculator/subscriptions", "organizations_url": "https://api.github.com/users/ThatOneCalculator/orgs", "repos_url": "https://api.github.com/users/ThatOneCalculator/repos", "events_url": "https://api.github.com/users/ThatOneCalculator/events{/privacy}", "received_events_url": "https://api.github.com/users/ThatOneCalculator/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6433346500, "node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA", "url": "https://api.github.com/repos/ollama/ollama/labels/amd", "name": "amd", "color": "000000", "default": false, "description": "Issues relating to AMD GPUs and ROCm" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
11
2024-01-20T07:40:46
2024-01-29T23:50:08
2024-01-29T23:47:31
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Stacktrace: ``` llm_load_vocab: special tokens definition check successful ( 259/32000 ). llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = SPM llm_load_print_meta: n_vocab = 32000 llm_load_print_meta: n_merges = 0 llm_load_print_meta: n_ctx_train = 4096 llm_load_print_meta: n_embd = 4096 llm_load_print_meta: n_head = 32 llm_load_print_meta: n_head_kv = 32 llm_load_print_meta: n_layer = 40 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_embd_head_k = 128 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 1 llm_load_print_meta: n_embd_k_gqa = 4096 llm_load_print_meta: n_embd_v_gqa = 4096 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: n_ff = 11008 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 10000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_yarn_orig_ctx = 4096 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: model type = 13B llm_load_print_meta: model ftype = Q4_0 llm_load_print_meta: model params = 8.36 B llm_load_print_meta: model size = 4.41 GiB (4.53 BPW) llm_load_print_meta: general.name = LLaMA v2 llm_load_print_meta: BOS token = 1 '<s>' llm_load_print_meta: EOS token = 2 '</s>' llm_load_print_meta: UNK token = 0 '<unk>' llm_load_print_meta: LF token = 13 '<0x0A>' llm_load_tensors: ggml ctx size = 0.14 MiB llm_load_tensors: using ROCm for GPU acceleration llm_load_tensors: system memory used = 70.45 MiB llm_load_tensors: VRAM used = 4446.30 MiB llm_load_tensors: offloading 40 repeating layers to GPU llm_load_tensors: offloading non-repeating layers to GPU llm_load_tensors: offloaded 41/41 layers to GPU 
................................................................................................... llama_new_context_with_model: n_ctx = 2048 llama_new_context_with_model: freq_base = 10000.0 llama_new_context_with_model: freq_scale = 1 llama_kv_cache_init: VRAM kv self = 1280.00 MB llama_new_context_with_model: KV self size = 1280.00 MiB, K (f16): 640.00 MiB, V (f16): 640.00 MiB llama_build_graph: non-view tensors processed: 844/844 llama_new_context_with_model: compute buffer total size = 159.19 MiB llama_new_context_with_model: VRAM scratch buffer: 156.00 MiB llama_new_context_with_model: total VRAM used: 5882.31 MiB (model: 4446.30 MiB, context: 1436.00 MiB) SIGSEGV: segmentation violation PC=0x780302b2b380 m=18 sigcode=128 signal arrived during cgo execution goroutine 67 [syscall]: runtime.cgocall(0x9b3a90, 0xc000318808) /usr/lib/go/src/runtime/cgocall.go:157 +0x4b fp=0xc0003187e0 sp=0xc0003187a8 pc=0x409b0b github.com/jmorganca/ollama/llm._Cfunc_dyn_llama_server_init({0x78029c001620, 0x780309434970, 0x7803094350c0, 0x780309435150, 0x780309435300, 0x780309435480, 0x7803094359b0, 0x780309435990, 0x780309435a40, 0x780309435f20, ...}, ...) _cgo_gotypes.go:284 +0x45 fp=0xc000318808 sp=0xc0003187e0 pc=0x7c25a5 github.com/jmorganca/ollama/llm.newDynExtServer.func7(0xae3c43?, 0x6c?) /home/kainoa/Git/ollama-clean/llm/dyn_ext_server.go:142 +0xef fp=0xc0003188f8 sp=0xc000318808 pc=0x7c3a0f github.com/jmorganca/ollama/llm.newDynExtServer({0xc000618000, 0x2e}, {0xc0001c48c0, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...) /home/kainoa/Git/ollama-clean/llm/dyn_ext_server.go:142 +0xa32 fp=0xc000318b88 sp=0xc0003188f8 pc=0x7c3752 github.com/jmorganca/ollama/llm.newLlmServer({{_, _, _}, {_, _}, {_, _}}, {_, _}, {0x0, ...}, ...) /home/kainoa/Git/ollama-clean/llm/llm.go:147 +0x36a fp=0xc000318d48 sp=0xc000318b88 pc=0x7bff6a github.com/jmorganca/ollama/llm.New({0x0?, 0x1000100000100?}, {0xc0001c48c0, _}, {_, _, _}, {0x0, 0x0, 0x0}, ...) 
/home/kainoa/Git/ollama-clean/llm/llm.go:122 +0x6f9 fp=0xc000318fb8 sp=0xc000318d48 pc=0x7bf999 github.com/jmorganca/ollama/server.load(0xc000002f00?, 0xc000002f00, {{0x0, 0x800, 0x200, 0x1, 0xffffffffffffffff, 0x0, 0x0, 0x1, ...}, ...}, ...) /home/kainoa/Git/ollama-clean/server/routes.go:83 +0x3a5 fp=0xc000319138 sp=0xc000318fb8 pc=0x98fde5 github.com/jmorganca/ollama/server.ChatHandler(0xc0002fc100) /home/kainoa/Git/ollama-clean/server/routes.go:1071 +0x828 fp=0xc000319748 sp=0xc000319138 pc=0x99a728 github.com/gin-gonic/gin.(*Context).Next(...) /home/kainoa/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174 github.com/jmorganca/ollama/server.(*Server).GenerateRoutes.func1(0xc0002fc100) /home/kainoa/Git/ollama-clean/server/routes.go:883 +0x68 fp=0xc000319780 sp=0xc000319748 pc=0x999268 github.com/gin-gonic/gin.(*Context).Next(...) /home/kainoa/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174 github.com/gin-gonic/gin.CustomRecoveryWithWriter.func1(0xc0002fc100) /home/kainoa/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/recovery.go:102 +0x7a fp=0xc0003197d0 sp=0xc000319780 pc=0x974afa github.com/gin-gonic/gin.(*Context).Next(...) /home/kainoa/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174 github.com/gin-gonic/gin.LoggerWithConfig.func1(0xc0002fc100) /home/kainoa/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/logger.go:240 +0xde fp=0xc000319980 sp=0xc0003197d0 pc=0x973c9e github.com/gin-gonic/gin.(*Context).Next(...) 
/home/kainoa/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174 github.com/gin-gonic/gin.(*Engine).handleHTTPRequest(0xc0000e9a00, 0xc0002fc100) /home/kainoa/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:620 +0x65b fp=0xc000319b08 sp=0xc000319980 pc=0x972d5b github.com/gin-gonic/gin.(*Engine).ServeHTTP(0xc0000e9a00, {0x1258e00?, 0xc0001c61c0}, 0xc0002fc500) /home/kainoa/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:576 +0x1dd fp=0xc000319b48 sp=0xc000319b08 pc=0x97251d net/http.serverHandler.ServeHTTP({0x1257120?}, {0x1258e00?, 0xc0001c61c0?}, 0x6?) /usr/lib/go/src/net/http/server.go:2938 +0x8e fp=0xc000319b78 sp=0xc000319b48 pc=0x6ce14e net/http.(*conn).serve(0xc0001bae10, {0x125a468, 0xc0004a6720}) /usr/lib/go/src/net/http/server.go:2009 +0x5f4 fp=0xc000319fb8 sp=0xc000319b78 pc=0x6ca034 net/http.(*Server).Serve.func3() /usr/lib/go/src/net/http/server.go:3086 +0x28 fp=0xc000319fe0 sp=0xc000319fb8 pc=0x6ce968 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000319fe8 sp=0xc000319fe0 pc=0x46e081 created by net/http.(*Server).Serve in goroutine 1 /usr/lib/go/src/net/http/server.go:3086 +0x5cb goroutine 1 [IO wait]: runtime.gopark(0x480890?, 0xc0003ab848?, 0x98?, 0xb8?, 0x4f687d?) /usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc00011b828 sp=0xc00011b808 pc=0x43e60e runtime.netpollblock(0x46c0f2?, 0x4092a6?, 0x0?) /usr/lib/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc00011b860 sp=0xc00011b828 pc=0x4370b7 internal/poll.runtime_pollWait(0x78036acc4e80, 0x72) /usr/lib/go/src/runtime/netpoll.go:343 +0x85 fp=0xc00011b880 sp=0xc00011b860 pc=0x4688a5 internal/poll.(*pollDesc).wait(0xc000484080?, 0x4?, 0x0) /usr/lib/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00011b8a8 sp=0xc00011b880 pc=0x4ef4c7 internal/poll.(*pollDesc).waitRead(...) 
/usr/lib/go/src/internal/poll/fd_poll_runtime.go:89 internal/poll.(*FD).Accept(0xc000484080) /usr/lib/go/src/internal/poll/fd_unix.go:611 +0x2ac fp=0xc00011b950 sp=0xc00011b8a8 pc=0x4f49ac net.(*netFD).accept(0xc000484080) /usr/lib/go/src/net/fd_unix.go:172 +0x29 fp=0xc00011ba08 sp=0xc00011b950 pc=0x56b569 net.(*TCPListener).accept(0xc0004595c0) /usr/lib/go/src/net/tcpsock_posix.go:152 +0x1e fp=0xc00011ba30 sp=0xc00011ba08 pc=0x58039e net.(*TCPListener).Accept(0xc0004595c0) /usr/lib/go/src/net/tcpsock.go:315 +0x30 fp=0xc00011ba60 sp=0xc00011ba30 pc=0x57f550 net/http.(*onceCloseListener).Accept(0xc0001bae10?) <autogenerated>:1 +0x24 fp=0xc00011ba78 sp=0xc00011ba60 pc=0x6f0ee4 net/http.(*Server).Serve(0xc000396ff0, {0x1258bf0, 0xc0004595c0}) /usr/lib/go/src/net/http/server.go:3056 +0x364 fp=0xc00011bba8 sp=0xc00011ba78 pc=0x6ce5a4 github.com/jmorganca/ollama/server.Serve({0x1258bf0, 0xc0004595c0}) /home/kainoa/Git/ollama-clean/server/routes.go:970 +0x494 fp=0xc00011bc98 sp=0xc00011bba8 pc=0x999754 github.com/jmorganca/ollama/cmd.RunServer(0xc000482300?, {0x169c7a0?, 0x4?, 0xacbac1?}) /home/kainoa/Git/ollama-clean/cmd/cmd.go:690 +0x199 fp=0xc00011bd30 sp=0xc00011bc98 pc=0x9abb39 github.com/spf13/cobra.(*Command).execute(0xc000417800, {0x169c7a0, 0x0, 0x0}) /home/kainoa/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:940 +0x87c fp=0xc00011be68 sp=0xc00011bd30 pc=0x763c9c github.com/spf13/cobra.(*Command).ExecuteC(0xc000416c00) /home/kainoa/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5 fp=0xc00011bf20 sp=0xc00011be68 pc=0x7644c5 github.com/spf13/cobra.(*Command).Execute(...) /home/kainoa/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992 github.com/spf13/cobra.(*Command).ExecuteContext(...) 
/home/kainoa/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:985 main.main() /home/kainoa/Git/ollama-clean/main.go:11 +0x4d fp=0xc00011bf40 sp=0xc00011bf20 pc=0x9b2bad runtime.main() /usr/lib/go/src/runtime/proc.go:267 +0x2bb fp=0xc00011bfe0 sp=0xc00011bf40 pc=0x43e1bb runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00011bfe8 sp=0xc00011bfe0 pc=0x46e081 goroutine 2 [force gc (idle)]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) /usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc000070fa8 sp=0xc000070f88 pc=0x43e60e runtime.goparkunlock(...) /usr/lib/go/src/runtime/proc.go:404 runtime.forcegchelper() /usr/lib/go/src/runtime/proc.go:322 +0xb3 fp=0xc000070fe0 sp=0xc000070fa8 pc=0x43e493 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000070fe8 sp=0xc000070fe0 pc=0x46e081 created by runtime.init.6 in goroutine 1 /usr/lib/go/src/runtime/proc.go:310 +0x1a goroutine 3 [GC sweep wait]: runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?) /usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc000071778 sp=0xc000071758 pc=0x43e60e runtime.goparkunlock(...) /usr/lib/go/src/runtime/proc.go:404 runtime.bgsweep(0x0?) /usr/lib/go/src/runtime/mgcsweep.go:321 +0xdf fp=0xc0000717c8 sp=0xc000071778 pc=0x42a57f runtime.gcenable.func1() /usr/lib/go/src/runtime/mgc.go:200 +0x25 fp=0xc0000717e0 sp=0xc0000717c8 pc=0x41f6c5 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000717e8 sp=0xc0000717e0 pc=0x46e081 created by runtime.gcenable in goroutine 1 /usr/lib/go/src/runtime/mgc.go:200 +0x66 goroutine 4 [GC scavenge wait]: runtime.gopark(0x104a1f?, 0xede89?, 0x0?, 0x0?, 0x0?) /usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc000071f70 sp=0xc000071f50 pc=0x43e60e runtime.goparkunlock(...) /usr/lib/go/src/runtime/proc.go:404 runtime.(*scavengerState).park(0x166cb20) /usr/lib/go/src/runtime/mgcscavenge.go:425 +0x49 fp=0xc000071fa0 sp=0xc000071f70 pc=0x427de9 runtime.bgscavenge(0x0?) 
/usr/lib/go/src/runtime/mgcscavenge.go:658 +0x59 fp=0xc000071fc8 sp=0xc000071fa0 pc=0x428399 runtime.gcenable.func2() /usr/lib/go/src/runtime/mgc.go:201 +0x25 fp=0xc000071fe0 sp=0xc000071fc8 pc=0x41f665 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000071fe8 sp=0xc000071fe0 pc=0x46e081 created by runtime.gcenable in goroutine 1 /usr/lib/go/src/runtime/mgc.go:201 +0xa5 goroutine 5 [finalizer wait]: runtime.gopark(0x198?, 0xac4a80?, 0x1?, 0xf7?, 0x0?) /usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc000070620 sp=0xc000070600 pc=0x43e60e runtime.runfinq() /usr/lib/go/src/runtime/mfinal.go:193 +0x107 fp=0xc0000707e0 sp=0xc000070620 pc=0x41e6e7 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000707e8 sp=0xc0000707e0 pc=0x46e081 created by runtime.createfing in goroutine 1 /usr/lib/go/src/runtime/mfinal.go:163 +0x3d goroutine 6 [select, locked to thread]: runtime.gopark(0xc0000727a8?, 0x2?, 0xa9?, 0xe8?, 0xc0000727a4?) /usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc000072638 sp=0xc000072618 pc=0x43e60e runtime.selectgo(0xc0000727a8, 0xc0000727a0, 0x0?, 0x0, 0x0?, 0x1) /usr/lib/go/src/runtime/select.go:327 +0x725 fp=0xc000072758 sp=0xc000072638 pc=0x44e165 runtime.ensureSigM.func1() /usr/lib/go/src/runtime/signal_unix.go:1014 +0x19f fp=0xc0000727e0 sp=0xc000072758 pc=0x46519f runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000727e8 sp=0xc0000727e0 pc=0x46e081 created by runtime.ensureSigM in goroutine 1 /usr/lib/go/src/runtime/signal_unix.go:997 +0xc8 goroutine 18 [syscall]: runtime.notetsleepg(0x0?, 0x0?) 
/usr/lib/go/src/runtime/lock_futex.go:236 +0x29 fp=0xc00006c7a0 sp=0xc00006c768 pc=0x411209 os/signal.signal_recv() /usr/lib/go/src/runtime/sigqueue.go:152 +0x29 fp=0xc00006c7c0 sp=0xc00006c7a0 pc=0x46aa49 os/signal.loop() /usr/lib/go/src/os/signal/signal_unix.go:23 +0x13 fp=0xc00006c7e0 sp=0xc00006c7c0 pc=0x6f3913 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00006c7e8 sp=0xc00006c7e0 pc=0x46e081 created by os/signal.Notify.func1.1 in goroutine 1 /usr/lib/go/src/os/signal/signal.go:151 +0x1f goroutine 7 [chan receive]: runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) /usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc000072f18 sp=0xc000072ef8 pc=0x43e60e runtime.chanrecv(0xc0004ac540, 0x0, 0x1) /usr/lib/go/src/runtime/chan.go:583 +0x3cd fp=0xc000072f90 sp=0xc000072f18 pc=0x40beed runtime.chanrecv1(0x0?, 0x0?) /usr/lib/go/src/runtime/chan.go:442 +0x12 fp=0xc000072fb8 sp=0xc000072f90 pc=0x40baf2 github.com/jmorganca/ollama/server.Serve.func1() /home/kainoa/Git/ollama-clean/server/routes.go:952 +0x25 fp=0xc000072fe0 sp=0xc000072fb8 pc=0x9997e5 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000072fe8 sp=0xc000072fe0 pc=0x46e081 created by github.com/jmorganca/ollama/server.Serve in goroutine 1 /home/kainoa/Git/ollama-clean/server/routes.go:951 +0x407 goroutine 62 [IO wait]: runtime.gopark(0x75?, 0xb?, 0x0?, 0x0?, 0xa?) /usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc00011f8f8 sp=0xc00011f8d8 pc=0x43e60e runtime.netpollblock(0x47e9f8?, 0x4092a6?, 0x0?) /usr/lib/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc00011f930 sp=0xc00011f8f8 pc=0x4370b7 internal/poll.runtime_pollWait(0x78036acc4d88, 0x72) /usr/lib/go/src/runtime/netpoll.go:343 +0x85 fp=0xc00011f950 sp=0xc00011f930 pc=0x4688a5 internal/poll.(*pollDesc).wait(0xc000040080?, 0xc000428000?, 0x0) /usr/lib/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00011f978 sp=0xc00011f950 pc=0x4ef4c7 internal/poll.(*pollDesc).waitRead(...) 
/usr/lib/go/src/internal/poll/fd_poll_runtime.go:89 internal/poll.(*FD).Read(0xc000040080, {0xc000428000, 0x1000, 0x1000}) /usr/lib/go/src/internal/poll/fd_unix.go:164 +0x27a fp=0xc00011fa10 sp=0xc00011f978 pc=0x4f07ba net.(*netFD).Read(0xc000040080, {0xc000428000?, 0x4ef985?, 0x0?}) /usr/lib/go/src/net/fd_posix.go:55 +0x25 fp=0xc00011fa58 sp=0xc00011fa10 pc=0x569545 net.(*conn).Read(0xc000074038, {0xc000428000?, 0x0?, 0xc0000b0518?}) /usr/lib/go/src/net/net.go:179 +0x45 fp=0xc00011faa0 sp=0xc00011fa58 pc=0x577805 net.(*TCPConn).Read(0xc0000b0510?, {0xc000428000?, 0x0?, 0xc00011fac0?}) <autogenerated>:1 +0x25 fp=0xc00011fad0 sp=0xc00011faa0 pc=0x589705 net/http.(*connReader).Read(0xc0000b0510, {0xc000428000, 0x1000, 0x1000}) /usr/lib/go/src/net/http/server.go:791 +0x14b fp=0xc00011fb20 sp=0xc00011fad0 pc=0x6c42eb bufio.(*Reader).fill(0xc0004ac000) /usr/lib/go/src/bufio/bufio.go:113 +0x103 fp=0xc00011fb58 sp=0xc00011fb20 pc=0x653ea3 bufio.(*Reader).Peek(0xc0004ac000, 0x4) /usr/lib/go/src/bufio/bufio.go:151 +0x53 fp=0xc00011fb78 sp=0xc00011fb58 pc=0x653fd3 net/http.(*conn).serve(0xc0000fc240, {0x125a468, 0xc0004a6720}) /usr/lib/go/src/net/http/server.go:2044 +0x75c fp=0xc00011ffb8 sp=0xc00011fb78 pc=0x6ca19c net/http.(*Server).Serve.func3() /usr/lib/go/src/net/http/server.go:3086 +0x28 fp=0xc00011ffe0 sp=0xc00011ffb8 pc=0x6ce968 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00011ffe8 sp=0xc00011ffe0 pc=0x46e081 created by net/http.(*Server).Serve in goroutine 1 /usr/lib/go/src/net/http/server.go:3086 +0x5cb goroutine 12 [GC worker (idle)]: runtime.gopark(0x0?, 0x0?, 0xe0?, 0x2e?, 0xc0004c2fd0?) 
/usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc0004c2f50 sp=0xc0004c2f30 pc=0x43e60e runtime.gcBgMarkWorker() /usr/lib/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc0004c2fe0 sp=0xc0004c2f50 pc=0x421245 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0004c2fe8 sp=0xc0004c2fe0 pc=0x46e081 created by runtime.gcBgMarkStartWorkers in goroutine 11 /usr/lib/go/src/runtime/mgc.go:1219 +0x1c goroutine 34 [GC worker (idle)]: runtime.gopark(0xa09ea49875?, 0x3?, 0x84?, 0x3?, 0x0?) /usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc0004be750 sp=0xc0004be730 pc=0x43e60e runtime.gcBgMarkWorker() /usr/lib/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc0004be7e0 sp=0xc0004be750 pc=0x421245 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0004be7e8 sp=0xc0004be7e0 pc=0x46e081 created by runtime.gcBgMarkStartWorkers in goroutine 11 /usr/lib/go/src/runtime/mgc.go:1219 +0x1c goroutine 13 [GC worker (idle)]: runtime.gopark(0xa09ea48fd3?, 0x1?, 0x72?, 0x10?, 0xc0000737d0?) /usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc000073750 sp=0xc000073730 pc=0x43e60e runtime.gcBgMarkWorker() /usr/lib/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc0000737e0 sp=0xc000073750 pc=0x421245 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000737e8 sp=0xc0000737e0 pc=0x46e081 created by runtime.gcBgMarkStartWorkers in goroutine 11 /usr/lib/go/src/runtime/mgc.go:1219 +0x1c goroutine 14 [GC worker (idle)]: runtime.gopark(0xa09ea45121?, 0x3?, 0x96?, 0x5?, 0x0?) /usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc0004c3750 sp=0xc0004c3730 pc=0x43e60e runtime.gcBgMarkWorker() /usr/lib/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc0004c37e0 sp=0xc0004c3750 pc=0x421245 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0004c37e8 sp=0xc0004c37e0 pc=0x46e081 created by runtime.gcBgMarkStartWorkers in goroutine 11 /usr/lib/go/src/runtime/mgc.go:1219 +0x1c goroutine 50 [GC worker (idle)]: runtime.gopark(0xa09ea49267?, 0x1?, 0x4f?, 0xb6?, 0x0?) 
/usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc000586750 sp=0xc000586730 pc=0x43e60e runtime.gcBgMarkWorker() /usr/lib/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc0005867e0 sp=0xc000586750 pc=0x421245 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005867e8 sp=0xc0005867e0 pc=0x46e081 created by runtime.gcBgMarkStartWorkers in goroutine 11 /usr/lib/go/src/runtime/mgc.go:1219 +0x1c goroutine 51 [GC worker (idle)]: runtime.gopark(0xa09ea44f4b?, 0x1?, 0xc3?, 0xc5?, 0x0?) /usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc000586f50 sp=0xc000586f30 pc=0x43e60e runtime.gcBgMarkWorker() /usr/lib/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc000586fe0 sp=0xc000586f50 pc=0x421245 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000586fe8 sp=0xc000586fe0 pc=0x46e081 created by runtime.gcBgMarkStartWorkers in goroutine 11 /usr/lib/go/src/runtime/mgc.go:1219 +0x1c goroutine 52 [GC worker (idle)]: runtime.gopark(0xa09ea48ec5?, 0x1?, 0x40?, 0x34?, 0x0?) /usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc000587750 sp=0xc000587730 pc=0x43e60e runtime.gcBgMarkWorker() /usr/lib/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc0005877e0 sp=0xc000587750 pc=0x421245 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005877e8 sp=0xc0005877e0 pc=0x46e081 created by runtime.gcBgMarkStartWorkers in goroutine 11 /usr/lib/go/src/runtime/mgc.go:1219 +0x1c goroutine 53 [GC worker (idle)]: runtime.gopark(0xa09ea490ff?, 0x1?, 0x9e?, 0x11?, 0x0?) /usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc000587f50 sp=0xc000587f30 pc=0x43e60e runtime.gcBgMarkWorker() /usr/lib/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc000587fe0 sp=0xc000587f50 pc=0x421245 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000587fe8 sp=0xc000587fe0 pc=0x46e081 created by runtime.gcBgMarkStartWorkers in goroutine 11 /usr/lib/go/src/runtime/mgc.go:1219 +0x1c goroutine 54 [GC worker (idle)]: runtime.gopark(0xa09ea46909?, 0x1?, 0xb7?, 0x51?, 0x0?) 
/usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc000588750 sp=0xc000588730 pc=0x43e60e runtime.gcBgMarkWorker() /usr/lib/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc0005887e0 sp=0xc000588750 pc=0x421245 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005887e8 sp=0xc0005887e0 pc=0x46e081 created by runtime.gcBgMarkStartWorkers in goroutine 11 /usr/lib/go/src/runtime/mgc.go:1219 +0x1c goroutine 55 [GC worker (idle)]: runtime.gopark(0xa09ea450d1?, 0x3?, 0x57?, 0x4f?, 0x0?) /usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc000588f50 sp=0xc000588f30 pc=0x43e60e runtime.gcBgMarkWorker() /usr/lib/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc000588fe0 sp=0xc000588f50 pc=0x421245 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000588fe8 sp=0xc000588fe0 pc=0x46e081 created by runtime.gcBgMarkStartWorkers in goroutine 11 /usr/lib/go/src/runtime/mgc.go:1219 +0x1c goroutine 56 [GC worker (idle)]: runtime.gopark(0xa09ea45009?, 0x3?, 0x6a?, 0x4?, 0x0?) /usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc000589750 sp=0xc000589730 pc=0x43e60e runtime.gcBgMarkWorker() /usr/lib/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc0005897e0 sp=0xc000589750 pc=0x421245 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005897e8 sp=0xc0005897e0 pc=0x46e081 created by runtime.gcBgMarkStartWorkers in goroutine 11 /usr/lib/go/src/runtime/mgc.go:1219 +0x1c goroutine 57 [GC worker (idle)]: runtime.gopark(0xa09ea49177?, 0x3?, 0x6?, 0x1d?, 0x0?) /usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc000589f50 sp=0xc000589f30 pc=0x43e60e runtime.gcBgMarkWorker() /usr/lib/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc000589fe0 sp=0xc000589f50 pc=0x421245 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000589fe8 sp=0xc000589fe0 pc=0x46e081 created by runtime.gcBgMarkStartWorkers in goroutine 11 /usr/lib/go/src/runtime/mgc.go:1219 +0x1c goroutine 58 [GC worker (idle)]: runtime.gopark(0x169e4e0?, 0x1?, 0xaa?, 0x2d?, 0x0?) 
/usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc000582750 sp=0xc000582730 pc=0x43e60e runtime.gcBgMarkWorker() /usr/lib/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc0005827e0 sp=0xc000582750 pc=0x421245 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005827e8 sp=0xc0005827e0 pc=0x46e081 created by runtime.gcBgMarkStartWorkers in goroutine 11 /usr/lib/go/src/runtime/mgc.go:1219 +0x1c goroutine 59 [GC worker (idle)]: runtime.gopark(0xa09ea49159?, 0x3?, 0xc4?, 0x13?, 0x0?) /usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc000582f50 sp=0xc000582f30 pc=0x43e60e runtime.gcBgMarkWorker() /usr/lib/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc000582fe0 sp=0xc000582f50 pc=0x421245 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000582fe8 sp=0xc000582fe0 pc=0x46e081 created by runtime.gcBgMarkStartWorkers in goroutine 11 /usr/lib/go/src/runtime/mgc.go:1219 +0x1c goroutine 60 [GC worker (idle)]: runtime.gopark(0xa09ea43c3b?, 0x3?, 0xf5?, 0xc4?, 0x0?) /usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc000583750 sp=0xc000583730 pc=0x43e60e runtime.gcBgMarkWorker() /usr/lib/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc0005837e0 sp=0xc000583750 pc=0x421245 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0005837e8 sp=0xc0005837e0 pc=0x46e081 created by runtime.gcBgMarkStartWorkers in goroutine 11 /usr/lib/go/src/runtime/mgc.go:1219 +0x1c goroutine 61 [GC worker (idle)]: runtime.gopark(0xa09ea46279?, 0xc00058a160?, 0x1a?, 0x14?, 0x0?) /usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc000583f50 sp=0xc000583f30 pc=0x43e60e runtime.gcBgMarkWorker() /usr/lib/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc000583fe0 sp=0xc000583f50 pc=0x421245 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000583fe8 sp=0xc000583fe0 pc=0x46e081 created by runtime.gcBgMarkStartWorkers in goroutine 11 /usr/lib/go/src/runtime/mgc.go:1219 +0x1c goroutine 16 [IO wait]: runtime.gopark(0x41e?, 0xb?, 0x0?, 0x0?, 0xc?) 
/usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc0005918f8 sp=0xc0005918d8 pc=0x43e60e runtime.netpollblock(0x47e9f8?, 0x4092a6?, 0x0?) /usr/lib/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc000591930 sp=0xc0005918f8 pc=0x4370b7 internal/poll.runtime_pollWait(0x78036acc4b98, 0x72) /usr/lib/go/src/runtime/netpoll.go:343 +0x85 fp=0xc000591950 sp=0xc000591930 pc=0x4688a5 internal/poll.(*pollDesc).wait(0xc000436080?, 0xc000312000?, 0x0) /usr/lib/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc000591978 sp=0xc000591950 pc=0x4ef4c7 internal/poll.(*pollDesc).waitRead(...) /usr/lib/go/src/internal/poll/fd_poll_runtime.go:89 internal/poll.(*FD).Read(0xc000436080, {0xc000312000, 0x1000, 0x1000}) /usr/lib/go/src/internal/poll/fd_unix.go:164 +0x27a fp=0xc000591a10 sp=0xc000591978 pc=0x4f07ba net.(*netFD).Read(0xc000436080, {0xc000312000?, 0x4ef985?, 0x0?}) /usr/lib/go/src/net/fd_posix.go:55 +0x25 fp=0xc000591a58 sp=0xc000591a10 pc=0x569545 net.(*conn).Read(0xc00025c148, {0xc000312000?, 0x0?, 0xc000395aa8?}) /usr/lib/go/src/net/net.go:179 +0x45 fp=0xc000591aa0 sp=0xc000591a58 pc=0x577805 net.(*TCPConn).Read(0xc000395aa0?, {0xc000312000?, 0x0?, 0xc00031dac0?}) <autogenerated>:1 +0x25 fp=0xc000591ad0 sp=0xc000591aa0 pc=0x589705 net/http.(*connReader).Read(0xc000395aa0, {0xc000312000, 0x1000, 0x1000}) /usr/lib/go/src/net/http/server.go:791 +0x14b fp=0xc000591b20 sp=0xc000591ad0 pc=0x6c42eb bufio.(*Reader).fill(0xc0001a73e0) /usr/lib/go/src/bufio/bufio.go:113 +0x103 fp=0xc000591b58 sp=0xc000591b20 pc=0x653ea3 bufio.(*Reader).Peek(0xc0001a73e0, 0x4) /usr/lib/go/src/bufio/bufio.go:151 +0x53 fp=0xc000591b78 sp=0xc000591b58 pc=0x653fd3 net/http.(*conn).serve(0xc0001ba990, {0x125a468, 0xc0004a6720}) /usr/lib/go/src/net/http/server.go:2044 +0x75c fp=0xc000591fb8 sp=0xc000591b78 pc=0x6ca19c net/http.(*Server).Serve.func3() /usr/lib/go/src/net/http/server.go:3086 +0x28 fp=0xc000591fe0 sp=0xc000591fb8 pc=0x6ce968 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 
fp=0xc000591fe8 sp=0xc000591fe0 pc=0x46e081 created by net/http.(*Server).Serve in goroutine 1 /usr/lib/go/src/net/http/server.go:3086 +0x5cb goroutine 64 [IO wait]: runtime.gopark(0x41e?, 0xb?, 0x0?, 0x0?, 0xb?) /usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc00058d8f8 sp=0xc00058d8d8 pc=0x43e60e runtime.netpollblock(0x47e9f8?, 0x4092a6?, 0x0?) /usr/lib/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc00058d930 sp=0xc00058d8f8 pc=0x4370b7 internal/poll.runtime_pollWait(0x78036acc4c90, 0x72) /usr/lib/go/src/runtime/netpoll.go:343 +0x85 fp=0xc00058d950 sp=0xc00058d930 pc=0x4688a5 internal/poll.(*pollDesc).wait(0xc000040200?, 0xc0002fa000?, 0x0) /usr/lib/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00058d978 sp=0xc00058d950 pc=0x4ef4c7 internal/poll.(*pollDesc).waitRead(...) /usr/lib/go/src/internal/poll/fd_poll_runtime.go:89 internal/poll.(*FD).Read(0xc000040200, {0xc0002fa000, 0x1000, 0x1000}) /usr/lib/go/src/internal/poll/fd_unix.go:164 +0x27a fp=0xc00058da10 sp=0xc00058d978 pc=0x4f07ba net.(*netFD).Read(0xc000040200, {0xc0002fa000?, 0x4ef985?, 0x0?}) /usr/lib/go/src/net/fd_posix.go:55 +0x25 fp=0xc00058da58 sp=0xc00058da10 pc=0x569545 net.(*conn).Read(0xc000074040, {0xc0002fa000?, 0x0?, 0xc0001d8218?}) /usr/lib/go/src/net/net.go:179 +0x45 fp=0xc00058daa0 sp=0xc00058da58 pc=0x577805 net.(*TCPConn).Read(0xc0001d8210?, {0xc0002fa000?, 0x0?, 0xc0003a7ac0?}) <autogenerated>:1 +0x25 fp=0xc00058dad0 sp=0xc00058daa0 pc=0x589705 net/http.(*connReader).Read(0xc0001d8210, {0xc0002fa000, 0x1000, 0x1000}) /usr/lib/go/src/net/http/server.go:791 +0x14b fp=0xc00058db20 sp=0xc00058dad0 pc=0x6c42eb bufio.(*Reader).fill(0xc00009a180) /usr/lib/go/src/bufio/bufio.go:113 +0x103 fp=0xc00058db58 sp=0xc00058db20 pc=0x653ea3 bufio.(*Reader).Peek(0xc00009a180, 0x4) /usr/lib/go/src/bufio/bufio.go:151 +0x53 fp=0xc00058db78 sp=0xc00058db58 pc=0x653fd3 net/http.(*conn).serve(0xc0000fc3f0, {0x125a468, 0xc0004a6720}) /usr/lib/go/src/net/http/server.go:2044 +0x75c fp=0xc00058dfb8 
sp=0xc00058db78 pc=0x6ca19c net/http.(*Server).Serve.func3() /usr/lib/go/src/net/http/server.go:3086 +0x28 fp=0xc00058dfe0 sp=0xc00058dfb8 pc=0x6ce968 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00058dfe8 sp=0xc00058dfe0 pc=0x46e081 created by net/http.(*Server).Serve in goroutine 1 /usr/lib/go/src/net/http/server.go:3086 +0x5cb goroutine 68 [IO wait]: runtime.gopark(0x100000000?, 0xb?, 0x0?, 0x0?, 0xd?) /usr/lib/go/src/runtime/proc.go:398 +0xce fp=0xc00006e5a0 sp=0xc00006e580 pc=0x43e60e runtime.netpollblock(0x47e9f8?, 0x4092a6?, 0x0?) /usr/lib/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc00006e5d8 sp=0xc00006e5a0 pc=0x4370b7 internal/poll.runtime_pollWait(0x78036acc4aa0, 0x72) /usr/lib/go/src/runtime/netpoll.go:343 +0x85 fp=0xc00006e5f8 sp=0xc00006e5d8 pc=0x4688a5 internal/poll.(*pollDesc).wait(0xc000436180?, 0xc000438551?, 0x0) /usr/lib/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00006e620 sp=0xc00006e5f8 pc=0x4ef4c7 internal/poll.(*pollDesc).waitRead(...) 
/usr/lib/go/src/internal/poll/fd_poll_runtime.go:89 internal/poll.(*FD).Read(0xc000436180, {0xc000438551, 0x1, 0x1}) /usr/lib/go/src/internal/poll/fd_unix.go:164 +0x27a fp=0xc00006e6b8 sp=0xc00006e620 pc=0x4f07ba net.(*netFD).Read(0xc000436180, {0xc000438551?, 0xc00006e740?, 0x46a750?}) /usr/lib/go/src/net/fd_posix.go:55 +0x25 fp=0xc00006e700 sp=0xc00006e6b8 pc=0x569545 net.(*conn).Read(0xc00025c1f0, {0xc000438551?, 0x1?, 0xc0002ea730?}) /usr/lib/go/src/net/net.go:179 +0x45 fp=0xc00006e748 sp=0xc00006e700 pc=0x577805 net.(*TCPConn).Read(0xc000395aa0?, {0xc000438551?, 0xc0002ea730?, 0x0?}) <autogenerated>:1 +0x25 fp=0xc00006e778 sp=0xc00006e748 pc=0x589705 net/http.(*connReader).backgroundRead(0xc000438540) /usr/lib/go/src/net/http/server.go:683 +0x37 fp=0xc00006e7c8 sp=0xc00006e778 pc=0x6c3eb7 net/http.(*connReader).startBackgroundRead.func2() /usr/lib/go/src/net/http/server.go:679 +0x25 fp=0xc00006e7e0 sp=0xc00006e7c8 pc=0x6c3de5 runtime.goexit() /usr/lib/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00006e7e8 sp=0xc00006e7e0 pc=0x46e081 created by net/http.(*connReader).startBackgroundRead in goroutine 67 /usr/lib/go/src/net/http/server.go:679 +0xba rax 0x0 rbx 0x7800341b33c0 rcx 0x7802d8d00200 rdx 0x348 rdi 0x7802d8d00200 rsi 0x78003423a650 rbp 0x780310bfe910 rsp 0x780310bfe6e0 r8 0x90 r9 0x4 r10 0x3 r11 0x78029c9aa400 r12 0x17 r13 0x78029c9aa400 r14 0x78003efd1500 r15 0x78003efd16b8 rip 0x780302b2b380 rflags 0x10246 cs 0x33 fs 0x0 gs 0x0 ``` Version: 4c54f0ddeb997cfefe4716e5631b270112975aab (built with ` CLBlast_DIR=/usr/lib/cmake/CLBlast ROCM_PATH=/opt/rocm go generate ./... && go build .`)
{ "login": "ThatOneCalculator", "id": 44733677, "node_id": "MDQ6VXNlcjQ0NzMzNjc3", "avatar_url": "https://avatars.githubusercontent.com/u/44733677?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ThatOneCalculator", "html_url": "https://github.com/ThatOneCalculator", "followers_url": "https://api.github.com/users/ThatOneCalculator/followers", "following_url": "https://api.github.com/users/ThatOneCalculator/following{/other_user}", "gists_url": "https://api.github.com/users/ThatOneCalculator/gists{/gist_id}", "starred_url": "https://api.github.com/users/ThatOneCalculator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ThatOneCalculator/subscriptions", "organizations_url": "https://api.github.com/users/ThatOneCalculator/orgs", "repos_url": "https://api.github.com/users/ThatOneCalculator/repos", "events_url": "https://api.github.com/users/ThatOneCalculator/events{/privacy}", "received_events_url": "https://api.github.com/users/ThatOneCalculator/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2107/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2107/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/556
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/556/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/556/comments
https://api.github.com/repos/ollama/ollama/issues/556/events
https://github.com/ollama/ollama/pull/556
1,905,373,816
PR_kwDOJ0Z1Ps5azgGR
556
pack in cuda libs
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-09-20T16:42:58
2023-09-20T22:02:38
2023-09-20T22:02:37
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/556", "html_url": "https://github.com/ollama/ollama/pull/556", "diff_url": "https://github.com/ollama/ollama/pull/556.diff", "patch_url": "https://github.com/ollama/ollama/pull/556.patch", "merged_at": "2023-09-20T22:02:37" }
This change packs the CUDA libs into the llama runner and tells the runner to use those libs. Here is the generate command in my case: ``` go generate ./... ```
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/556/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/556/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8653
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8653/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8653/comments
https://api.github.com/repos/ollama/ollama/issues/8653/events
https://github.com/ollama/ollama/issues/8653
2,817,878,044
I_kwDOJ0Z1Ps6n9Wgc
8,653
Latest pre-built Ollama binaries (cuda 12.x) do not come with "oob" support for 5.x architecture
{ "login": "RKouchoo", "id": 19159026, "node_id": "MDQ6VXNlcjE5MTU5MDI2", "avatar_url": "https://avatars.githubusercontent.com/u/19159026?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RKouchoo", "html_url": "https://github.com/RKouchoo", "followers_url": "https://api.github.com/users/RKouchoo/followers", "following_url": "https://api.github.com/users/RKouchoo/following{/other_user}", "gists_url": "https://api.github.com/users/RKouchoo/gists{/gist_id}", "starred_url": "https://api.github.com/users/RKouchoo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RKouchoo/subscriptions", "organizations_url": "https://api.github.com/users/RKouchoo/orgs", "repos_url": "https://api.github.com/users/RKouchoo/repos", "events_url": "https://api.github.com/users/RKouchoo/events{/privacy}", "received_events_url": "https://api.github.com/users/RKouchoo/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
1
2025-01-29T11:00:37
2025-01-29T23:55:30
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### "oob" support for 5.x architecture is missing on prebuilt binaries Hello, I ended up needing some more power, so I threw a spare Quadro M5000 into my AI rig, only to find it was not being utilised at all. I did the usual checks, and the card has compute capability 5.2 (confirmed compatible in the support matrix [here](https://github.com/ollama/ollama/blob/main/docs/gpu.md)). As initial troubleshooting steps (bouncing ideas from other issues posted here) I tried: - Manually passing through the UUIDs of all GPUs via the `CUDA_VISIBLE_DEVICES` environment variable. Ollama would acknowledge this in the logs but would never use the card anyway. There was no log message complaining about compute capability or any mention of dropping the card. - Setting the `OLLAMA_SCHED_SPREAD` environment variable to `true`. I found that the ollama install script also grabbed **CUDA 11.x by default**, but at installation time the GPUs I had installed were a pair of 20GB RTX 4000 "Ada" generation cards plus an ASPEED AST2500 IPMI/VGA. I also had the 565 driver installed before setting everything else up; it reports that it is built with CUDA 12.6, so I don't quite understand why the installer would grab the 11.x toolkit. During a dive through the repo to see what I could find, I noticed that the make config file for cuda_v12 ([here](https://github.com/ollama/ollama/blob/main/make/Makefile.cuda_v12)) does not include 5.0/5.2 support by default, but it could. I also found in the current release notes for CUDA 12.8 that the Maxwell, Pascal and Volta architectures will be "frozen" (deprecated?) in future releases of the CUDA toolkit [here](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#deprecated-architectures). 
I managed to build everything fine with the cuda 12.8 toolkit and confirmed ollama works as I expected; here are the build flags I used: `make cuda_v12 CUDA_ARCHITECTURES="50;52;60;61;62;70;72;75;80;86;87;89;90;90a" -j 32` I know that there are still heaps of Maxwell (5.2) cards floating around in systems; people on a budget will definitely try to use them with the hype of the recent model releases, as they are capable of running them locally to an extent. I believe either the docs need an update or binaries should be compiled with support built in until there's an official notice or documentation change, to avoid confusion. Apologies if I am wrong — I thought I would post this here before opening a pull request in case there was anything already in motion related to this. Cheers, RK ### OS Linux - Ubuntu 22.04 ### GPU Nvidia - RTX A4000 "Ada" x2, RTX 4070 & Quadro M5000 ### CPU 2x AMD EPYC 7371 ### Ollama version 0.5.7 (latest install script)
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8653/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8653/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/1483
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1483/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1483/comments
https://api.github.com/repos/ollama/ollama/issues/1483/events
https://github.com/ollama/ollama/pull/1483
2,038,214,542
PR_kwDOJ0Z1Ps5h0Vos
1,483
retry on concurrent request failure
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-12-12T17:10:54
2023-12-12T17:14:36
2023-12-12T17:14:35
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/1483", "html_url": "https://github.com/ollama/ollama/pull/1483", "diff_url": "https://github.com/ollama/ollama/pull/1483.diff", "patch_url": "https://github.com/ollama/ollama/pull/1483.patch", "merged_at": "2023-12-12T17:14:35" }
- remove parallel
- retry concurrent requests on failure
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1483/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1483/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6927
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6927/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6927/comments
https://api.github.com/repos/ollama/ollama/issues/6927/events
https://github.com/ollama/ollama/issues/6927
2,544,357,555
I_kwDOJ0Z1Ps6Xp9Cz
6,927
Why Is n_ctx in the Log Always Four Times the num_ctx Value in the ModelFile When Building qwen2.5-coder-7b-instruct-q5_k_m.gguf?
{ "login": "XiongDaowen", "id": 87518017, "node_id": "MDQ6VXNlcjg3NTE4MDE3", "avatar_url": "https://avatars.githubusercontent.com/u/87518017?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XiongDaowen", "html_url": "https://github.com/XiongDaowen", "followers_url": "https://api.github.com/users/XiongDaowen/followers", "following_url": "https://api.github.com/users/XiongDaowen/following{/other_user}", "gists_url": "https://api.github.com/users/XiongDaowen/gists{/gist_id}", "starred_url": "https://api.github.com/users/XiongDaowen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XiongDaowen/subscriptions", "organizations_url": "https://api.github.com/users/XiongDaowen/orgs", "repos_url": "https://api.github.com/users/XiongDaowen/repos", "events_url": "https://api.github.com/users/XiongDaowen/events{/privacy}", "received_events_url": "https://api.github.com/users/XiongDaowen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
2024-09-24T05:26:34
2024-09-24T07:12:01
2024-09-24T07:12:01
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When I built qwen2.5-coder-7b-instruct-q5_k_m.gguf using the Modelfile and set PARAMETER num_ctx 4096, the log output showed llama_new_context_with_model: n_ctx = 16384. After setting num_ctx to different values, I noticed that n_ctx is always 4 times the value of num_ctx. Why is this happening? The log: ![89d1ee67-30d0-49b6-9553-32fdc1840fb7](https://github.com/user-attachments/assets/b9b5a0d0-727b-4422-a592-320784054634) The ModelFile: ![4dc02651-9a6d-4007-95be-adcbd9aa871e](https://github.com/user-attachments/assets/591e6e7c-3047-478e-9207-19c2fb3ea509) ### OS _No response_ ### GPU _No response_ ### CPU _No response_ ### Ollama version 0.3.3
{ "login": "XiongDaowen", "id": 87518017, "node_id": "MDQ6VXNlcjg3NTE4MDE3", "avatar_url": "https://avatars.githubusercontent.com/u/87518017?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XiongDaowen", "html_url": "https://github.com/XiongDaowen", "followers_url": "https://api.github.com/users/XiongDaowen/followers", "following_url": "https://api.github.com/users/XiongDaowen/following{/other_user}", "gists_url": "https://api.github.com/users/XiongDaowen/gists{/gist_id}", "starred_url": "https://api.github.com/users/XiongDaowen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XiongDaowen/subscriptions", "organizations_url": "https://api.github.com/users/XiongDaowen/orgs", "repos_url": "https://api.github.com/users/XiongDaowen/repos", "events_url": "https://api.github.com/users/XiongDaowen/events{/privacy}", "received_events_url": "https://api.github.com/users/XiongDaowen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6927/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6927/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5140
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5140/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5140/comments
https://api.github.com/repos/ollama/ollama/issues/5140/events
https://github.com/ollama/ollama/issues/5140
2,362,279,633
I_kwDOJ0Z1Ps6MzYbR
5,140
Chat template not yet supported for Deepseek-Coder-V2 lite
{ "login": "Joly0", "id": 13993216, "node_id": "MDQ6VXNlcjEzOTkzMjE2", "avatar_url": "https://avatars.githubusercontent.com/u/13993216?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Joly0", "html_url": "https://github.com/Joly0", "followers_url": "https://api.github.com/users/Joly0/followers", "following_url": "https://api.github.com/users/Joly0/following{/other_user}", "gists_url": "https://api.github.com/users/Joly0/gists{/gist_id}", "starred_url": "https://api.github.com/users/Joly0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Joly0/subscriptions", "organizations_url": "https://api.github.com/users/Joly0/orgs", "repos_url": "https://api.github.com/users/Joly0/repos", "events_url": "https://api.github.com/users/Joly0/events{/privacy}", "received_events_url": "https://api.github.com/users/Joly0/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
1
2024-06-19T12:37:36
2024-06-19T18:46:11
2024-06-19T18:46:10
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Whenever I try to chat with the LLM through open-webui and ollama, I get this in the ollama logs: `ERROR [validate_model_chat_template] The chat template comes with this model is not yet supported, falling back to chatml. This may cause the model to output suboptimal responses | tid="22401915281408" timestamp=1718800491` and the output in open-webui is just crap, nothing else. The answer has nothing to do with what I requested. ### OS Linux, Docker ### GPU Nvidia ### CPU AMD ### Ollama version 0.1.44
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5140/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5140/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/942
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/942/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/942/comments
https://api.github.com/repos/ollama/ollama/issues/942/events
https://github.com/ollama/ollama/issues/942
1,966,688,244
I_kwDOJ0Z1Ps51OUf0
942
A question on memory
{ "login": "pexus", "id": 1809523, "node_id": "MDQ6VXNlcjE4MDk1MjM=", "avatar_url": "https://avatars.githubusercontent.com/u/1809523?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pexus", "html_url": "https://github.com/pexus", "followers_url": "https://api.github.com/users/pexus/followers", "following_url": "https://api.github.com/users/pexus/following{/other_user}", "gists_url": "https://api.github.com/users/pexus/gists{/gist_id}", "starred_url": "https://api.github.com/users/pexus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pexus/subscriptions", "organizations_url": "https://api.github.com/users/pexus/orgs", "repos_url": "https://api.github.com/users/pexus/repos", "events_url": "https://api.github.com/users/pexus/events{/privacy}", "received_events_url": "https://api.github.com/users/pexus/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-10-28T18:01:38
2023-10-28T20:43:34
2023-10-28T20:43:34
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello, this is a question with regard to the memory spec for running the OSS LLMs — see below: _Note: You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models._ Does this memory requirement refer to GPU memory, main (CPU) memory, or a combination of GPU and CPU memory? I'd appreciate clarity on this. Thanks in advance.
{ "login": "pexus", "id": 1809523, "node_id": "MDQ6VXNlcjE4MDk1MjM=", "avatar_url": "https://avatars.githubusercontent.com/u/1809523?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pexus", "html_url": "https://github.com/pexus", "followers_url": "https://api.github.com/users/pexus/followers", "following_url": "https://api.github.com/users/pexus/following{/other_user}", "gists_url": "https://api.github.com/users/pexus/gists{/gist_id}", "starred_url": "https://api.github.com/users/pexus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pexus/subscriptions", "organizations_url": "https://api.github.com/users/pexus/orgs", "repos_url": "https://api.github.com/users/pexus/repos", "events_url": "https://api.github.com/users/pexus/events{/privacy}", "received_events_url": "https://api.github.com/users/pexus/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/942/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/942/timeline
null
completed
false