| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/5936
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5936/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5936/comments
|
https://api.github.com/repos/ollama/ollama/issues/5936/events
|
https://github.com/ollama/ollama/issues/5936
| 2,428,801,856
|
I_kwDOJ0Z1Ps6QxJNA
| 5,936
|
Where is the Ollama web UI chat data stored?
|
{
"login": "mywwq",
"id": 133221105,
"node_id": "U_kgDOB_DK8Q",
"avatar_url": "https://avatars.githubusercontent.com/u/133221105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mywwq",
"html_url": "https://github.com/mywwq",
"followers_url": "https://api.github.com/users/mywwq/followers",
"following_url": "https://api.github.com/users/mywwq/following{/other_user}",
"gists_url": "https://api.github.com/users/mywwq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mywwq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mywwq/subscriptions",
"organizations_url": "https://api.github.com/users/mywwq/orgs",
"repos_url": "https://api.github.com/users/mywwq/repos",
"events_url": "https://api.github.com/users/mywwq/events{/privacy}",
"received_events_url": "https://api.github.com/users/mywwq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
| null |
[] | null | 6
| 2024-07-25T02:10:03
| 2024-07-28T17:13:39
| 2024-07-26T20:57:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Where is the chat data from the Ollama web UI stored by default — in the browser cache, or in some default path?
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5936/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4549
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4549/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4549/comments
|
https://api.github.com/repos/ollama/ollama/issues/4549/events
|
https://github.com/ollama/ollama/pull/4549
| 2,307,023,013
|
PR_kwDOJ0Z1Ps5wA-NA
| 4,549
|
working on integration of multi-byte and multi-width runes
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396210,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2acg",
"url": "https://api.github.com/repos/ollama/ollama/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
},
{
"id": 6960960225,
"node_id": "LA_kwDOJ0Z1Ps8AAAABnufS4Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/cli",
"name": "cli",
"color": "5319e7",
"default": false,
"description": "Issues related to the Ollama CLI"
}
] |
closed
| false
| null |
[] | null | 0
| 2024-05-21T00:25:57
| 2024-05-28T19:04:04
| 2024-05-28T19:04:03
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4549",
"html_url": "https://github.com/ollama/ollama/pull/4549",
"diff_url": "https://github.com/ollama/ollama/pull/4549.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4549.patch",
"merged_at": "2024-05-28T19:04:03"
}
|
Fixed most of the reported issues regarding multi-width runes.
Some notable issues still exist with combinations of `insert` and `remove` commands.
Resolves: https://github.com/ollama/ollama/issues/3432 and resolves: https://github.com/ollama/ollama/issues/4156
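A minimal illustration (in Python, not the PR's Go code) of why multi-byte and multi-width runes break naive cursor math in a line editor: the byte length a buffer indexes by differs from the character count the user perceives.

```python
# The sample string mixes 1-byte ASCII, a 2-byte accented rune, and
# 3-byte CJK runes; moving the cursor "one character" is not "one byte".
s = "héllo, 世界"

byte_len = len(s.encode("utf-8"))  # what a byte-indexed buffer sees
char_len = len(s)                  # what the user perceives as characters

print(byte_len, char_len)  # → 14 9
```

CJK runes add a further wrinkle the PR title alludes to: each occupies two terminal columns, so display width differs from both byte length and character count.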
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4549/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4549/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2320
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2320/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2320/comments
|
https://api.github.com/repos/ollama/ollama/issues/2320/events
|
https://github.com/ollama/ollama/issues/2320
| 2,114,220,632
|
I_kwDOJ0Z1Ps5-BHJY
| 2,320
|
AMD ROCm problem: GPU is constantly running at 100%
|
{
"login": "MichaelFomenko",
"id": 12229584,
"node_id": "MDQ6VXNlcjEyMjI5NTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/12229584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MichaelFomenko",
"html_url": "https://github.com/MichaelFomenko",
"followers_url": "https://api.github.com/users/MichaelFomenko/followers",
"following_url": "https://api.github.com/users/MichaelFomenko/following{/other_user}",
"gists_url": "https://api.github.com/users/MichaelFomenko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MichaelFomenko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MichaelFomenko/subscriptions",
"organizations_url": "https://api.github.com/users/MichaelFomenko/orgs",
"repos_url": "https://api.github.com/users/MichaelFomenko/repos",
"events_url": "https://api.github.com/users/MichaelFomenko/events{/privacy}",
"received_events_url": "https://api.github.com/users/MichaelFomenko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-02-02T06:49:23
| 2024-04-11T04:00:46
| 2024-02-03T00:24:20
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Problem:
When I run "ollama run Mistral", the GPU constantly runs at 100% and consumes 100 watts, even though chat itself works fine without any problems.
**The GPU is behaving strangely:**

| State | GPU util | Power | VRAM | GPU clock | Memory clock |
|---|---|---|---|---|---|
| Before "ollama run Mistral" | 0% | 0 W | 0 MB | 50 MHz | 90 MHz |
| After "ollama run Mistral" (idle) | 100% | 100 W | 5,000 MB | 3,000 MHz | 90 MHz |
| During a chat prompt | 100% | 300 W | 5,000 MB | 3,000 MHz | 1,200 MHz |
| After closing the ollama chat | 100% | 100 W | 5,000 MB | 3,000 MHz | 90 MHz |
| After closing ollama serve | 0% | 0 W | 0 MB | 50 MHz | 90 MHz |
ollama version: 0.1.22
ROCm Version: 6.0
GPU: 7900 XTX
System: Ubuntu 22.04
CPU: 7950X
RAM: 64GB
When I start ollama serve:
**ollama serve**
```
2024/02/02 05:11:24 images.go:857: INFO total blobs: 7
2024/02/02 05:11:24 images.go:864: INFO total unused blobs removed: 0
2024/02/02 05:11:24 routes.go:950: INFO Listening on 127.0.0.1:11434 (version 0.1.22)
2024/02/02 05:11:24 payload_common.go:106: INFO Extracting dynamic libraries...
2024/02/02 05:11:25 payload_common.go:145: INFO Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11 rocm_v5 rocm_v6 cpu]
2024/02/02 05:11:25 gpu.go:94: INFO Detecting GPU type
2024/02/02 05:11:25 gpu.go:236: INFO Searching for GPU management library libnvidia-ml.so
2024/02/02 05:11:25 gpu.go:282: INFO Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.525.147.05]
2024/02/02 05:11:25 gpu.go:294: INFO Unable to load CUDA management library /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.525.147.05: nvml vram init failure: 9
2024/02/02 05:11:25 gpu.go:236: INFO Searching for GPU management library librocm_smi64.so
2024/02/02 05:11:25 gpu.go:282: INFO Discovered GPU libraries: [/opt/rocm/lib/librocm_smi64.so.6.0.60000 /opt/rocm-6.0.0/lib/librocm_smi64.so.6.0.60000]
2024/02/02 05:11:25 gpu.go:109: INFO Radeon GPU detected
```
**ollama run Mistral**
```
[GIN] 2024/02/02 - 07:36:56 | 200 | 32.421µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/02/02 - 07:36:56 | 200 | 723.312µs | 127.0.0.1 | POST "/api/show"
[GIN] 2024/02/02 - 07:36:56 | 200 | 284.482µs | 127.0.0.1 | POST "/api/show"
2024/02/02 07:36:56 cpu_common.go:11: INFO CPU has AVX2
loading library /tmp/ollama726758615/rocm_v6/libext_server.so
2024/02/02 07:36:56 dyn_ext_server.go:90: INFO Loading Dynamic llm server: /tmp/ollama726758615/rocm_v6/libext_server.so
2024/02/02 07:36:56 dyn_ext_server.go:145: INFO Initializing llama server
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 ROCm devices:
Device 0: Radeon RX 7900 XTX, compute capability 11.0, VMM: no
llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from /home/user/.ollama/models/blobs/sha256:e8a35b5937a5e6d5c35d1f2a15f161e07eefe5e5bb0a3cdd42998ee79b057730 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = mistralai
llama_model_loader: - kv 2: llama.context_length u32 = 32768
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 11: general.file_type u32 = 2
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,58980] = ["▁ t", "i n", "e r", "▁ a", "h e...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 22: tokenizer.chat_template str = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv 23: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 7.24 B
llm_load_print_meta: model size = 3.83 GiB (4.54 BPW)
llm_load_print_meta: general.name = mistralai
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.22 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: ROCm0 buffer size = 3847.55 MiB
llm_load_tensors: CPU buffer size = 70.31 MiB
..................................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: ROCm0 KV buffer size = 256.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: ROCm_Host input buffer size = 12.01 MiB
llama_new_context_with_model: ROCm0 compute buffer size = 156.00 MiB
llama_new_context_with_model: ROCm_Host compute buffer size = 8.00 MiB
llama_new_context_with_model: graph splits (measure): 3
2024/02/02 07:37:15 dyn_ext_server.go:156: INFO Starting llama main loop
[GIN] 2024/02/02 - 07:37:15 | 200 | 18.899618958s | 127.0.0.1 | POST "/api/chat"
```
**Same behavior when I run the llama2 model.**
When I run Mistral in [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) using **Transformers** there, everything works fine: the GPU is only at 100% while I chat, otherwise at 0%. But when I use **llama.cpp** there, the GPU behaves the same as in ollama.
It seems to be a **llama.cpp** problem.
|
{
"login": "MichaelFomenko",
"id": 12229584,
"node_id": "MDQ6VXNlcjEyMjI5NTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/12229584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MichaelFomenko",
"html_url": "https://github.com/MichaelFomenko",
"followers_url": "https://api.github.com/users/MichaelFomenko/followers",
"following_url": "https://api.github.com/users/MichaelFomenko/following{/other_user}",
"gists_url": "https://api.github.com/users/MichaelFomenko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MichaelFomenko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MichaelFomenko/subscriptions",
"organizations_url": "https://api.github.com/users/MichaelFomenko/orgs",
"repos_url": "https://api.github.com/users/MichaelFomenko/repos",
"events_url": "https://api.github.com/users/MichaelFomenko/events{/privacy}",
"received_events_url": "https://api.github.com/users/MichaelFomenko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2320/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2320/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/ollama/ollama/issues/5351
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5351/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5351/comments
|
https://api.github.com/repos/ollama/ollama/issues/5351/events
|
https://github.com/ollama/ollama/issues/5351
| 2,379,421,225
|
I_kwDOJ0Z1Ps6N0xYp
| 5,351
|
GGUF create succeeds, but running the model errors
|
{
"login": "enryteam",
"id": 20081090,
"node_id": "MDQ6VXNlcjIwMDgxMDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/20081090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enryteam",
"html_url": "https://github.com/enryteam",
"followers_url": "https://api.github.com/users/enryteam/followers",
"following_url": "https://api.github.com/users/enryteam/following{/other_user}",
"gists_url": "https://api.github.com/users/enryteam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enryteam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enryteam/subscriptions",
"organizations_url": "https://api.github.com/users/enryteam/orgs",
"repos_url": "https://api.github.com/users/enryteam/repos",
"events_url": "https://api.github.com/users/enryteam/events{/privacy}",
"received_events_url": "https://api.github.com/users/enryteam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-06-28T02:15:24
| 2024-07-08T23:06:04
| 2024-07-08T23:06:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
**ollama create glm-4-9b-chat -f ./Modelfile-glm**
transferring model data
using existing layer sha256:d7cd056b858a46ad875a4abb7b0d7cf8cde26ac1f975c18b97175fbfdb809acb
using existing layer sha256:821004920baf42135ce3fd33c72eb1022fc0215a569ea7b90337a9bf92f23294
creating new layer sha256:2dcaf84fc5b358793d9451b614605bcb5fec302c14166644fd77e3503ca5dcf4
creating new layer sha256:f15127bec7ed18496fc730bea8d4f052b33089396bbbf88603968323d2f1f88f
writing manifest
success
**ollama run glm-4-9b-chat**
Error: llama runner process has terminated: signal: aborted (core dumped)
GGUF download:
https://modelscope.cn/api/v1/models/LLM-Research/glm-4-9b-chat-GGUF/repo?Revision=master&FilePath=glm-4-9b-chat.Q6_K.gguf
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.36
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5351/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5351/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7588
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7588/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7588/comments
|
https://api.github.com/repos/ollama/ollama/issues/7588/events
|
https://github.com/ollama/ollama/pull/7588
| 2,645,913,909
|
PR_kwDOJ0Z1Ps6BZOqG
| 7,588
|
Enable JSON Schema support
|
{
"login": "hieunguyen1053",
"id": 41591244,
"node_id": "MDQ6VXNlcjQxNTkxMjQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/41591244?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hieunguyen1053",
"html_url": "https://github.com/hieunguyen1053",
"followers_url": "https://api.github.com/users/hieunguyen1053/followers",
"following_url": "https://api.github.com/users/hieunguyen1053/following{/other_user}",
"gists_url": "https://api.github.com/users/hieunguyen1053/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hieunguyen1053/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hieunguyen1053/subscriptions",
"organizations_url": "https://api.github.com/users/hieunguyen1053/orgs",
"repos_url": "https://api.github.com/users/hieunguyen1053/repos",
"events_url": "https://api.github.com/users/hieunguyen1053/events{/privacy}",
"received_events_url": "https://api.github.com/users/hieunguyen1053/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 7
| 2024-11-09T10:35:49
| 2024-12-04T02:38:55
| 2024-12-04T02:37:45
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7588",
"html_url": "https://github.com/ollama/ollama/pull/7588",
"diff_url": "https://github.com/ollama/ollama/pull/7588.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7588.patch",
"merged_at": null
}
|
This pull request adds support for `response_format`, based on llama.cpp's grammar guide. The feature makes response formats more flexible, and it has been tested and works with both the openai library and langchain.
Please review the code changes and test the feature to confirm compatibility with any additional components or configurations specific to your setup.
Let me know if you need further adjustments!
<img width="950" alt="Screenshot 2024-11-09 at 5 32 06 PM" src="https://github.com/user-attachments/assets/7fe1b87c-f8f2-4601-8148-df69452ba8d0">
<img width="752" alt="Screenshot 2024-11-09 at 5 32 48 PM" src="https://github.com/user-attachments/assets/bc871b4b-9e57-450d-84a4-099dd40b734d">
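A sketch of the kind of request this feature enables: a chat payload whose `format` field carries a JSON Schema to constrain decoding. The field names and model name here are illustrative assumptions about the API shape, not the PR's confirmed final interface.

```python
import json

# Hypothetical JSON Schema the caller wants the model's reply to satisfy.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
}

# Hypothetical /api/chat request body: "format" carries the schema,
# which would be compiled to a llama.cpp grammar server-side.
payload = {
    "model": "llama3",
    "messages": [{"role": "user", "content": "Extract: Alice is 30."}],
    "format": schema,
    "stream": False,
}

body = json.dumps(payload)
print(sorted(payload))  # → ['format', 'messages', 'model', 'stream']
```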
|
{
"login": "hieunguyen1053",
"id": 41591244,
"node_id": "MDQ6VXNlcjQxNTkxMjQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/41591244?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hieunguyen1053",
"html_url": "https://github.com/hieunguyen1053",
"followers_url": "https://api.github.com/users/hieunguyen1053/followers",
"following_url": "https://api.github.com/users/hieunguyen1053/following{/other_user}",
"gists_url": "https://api.github.com/users/hieunguyen1053/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hieunguyen1053/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hieunguyen1053/subscriptions",
"organizations_url": "https://api.github.com/users/hieunguyen1053/orgs",
"repos_url": "https://api.github.com/users/hieunguyen1053/repos",
"events_url": "https://api.github.com/users/hieunguyen1053/events{/privacy}",
"received_events_url": "https://api.github.com/users/hieunguyen1053/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7588/reactions",
"total_count": 8,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
}
|
https://api.github.com/repos/ollama/ollama/issues/7588/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7209
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7209/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7209/comments
|
https://api.github.com/repos/ollama/ollama/issues/7209/events
|
https://github.com/ollama/ollama/pull/7209
| 2,587,865,669
|
PR_kwDOJ0Z1Ps5-oYGr
| 7,209
|
cmd: add "stop all" to stop all running models
|
{
"login": "famiu",
"id": 29580810,
"node_id": "MDQ6VXNlcjI5NTgwODEw",
"avatar_url": "https://avatars.githubusercontent.com/u/29580810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/famiu",
"html_url": "https://github.com/famiu",
"followers_url": "https://api.github.com/users/famiu/followers",
"following_url": "https://api.github.com/users/famiu/following{/other_user}",
"gists_url": "https://api.github.com/users/famiu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/famiu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/famiu/subscriptions",
"organizations_url": "https://api.github.com/users/famiu/orgs",
"repos_url": "https://api.github.com/users/famiu/repos",
"events_url": "https://api.github.com/users/famiu/events{/privacy}",
"received_events_url": "https://api.github.com/users/famiu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 2
| 2024-10-15T07:15:53
| 2025-01-09T19:34:56
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7209",
"html_url": "https://github.com/ollama/ollama/pull/7209",
"diff_url": "https://github.com/ollama/ollama/pull/7209.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7209.patch",
"merged_at": null
}
|
Allow using `ollama stop all` to stop all running models.
Closes #6987
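The fan-out this PR implies can be sketched as: list the loaded models (as `ollama ps` would show them), then issue one stop per model. The model names below are made up for illustration; this is not the PR's actual Go implementation.

```python
def stop_commands(running):
    # Build the per-model invocations that a "stop all" would fan out to.
    return [f"ollama stop {name}" for name in running]

print(stop_commands(["llama3:8b", "mistral:7b"]))
# → ['ollama stop llama3:8b', 'ollama stop mistral:7b']
```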
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7209/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7209/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1358
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1358/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1358/comments
|
https://api.github.com/repos/ollama/ollama/issues/1358/events
|
https://github.com/ollama/ollama/issues/1358
| 2,022,305,170
|
I_kwDOJ0Z1Ps54ie2S
| 1,358
|
What is the minimum requirement for a significant improvement in performance?
|
{
"login": "oliverbob",
"id": 23272429,
"node_id": "MDQ6VXNlcjIzMjcyNDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/23272429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oliverbob",
"html_url": "https://github.com/oliverbob",
"followers_url": "https://api.github.com/users/oliverbob/followers",
"following_url": "https://api.github.com/users/oliverbob/following{/other_user}",
"gists_url": "https://api.github.com/users/oliverbob/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oliverbob/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oliverbob/subscriptions",
"organizations_url": "https://api.github.com/users/oliverbob/orgs",
"repos_url": "https://api.github.com/users/oliverbob/repos",
"events_url": "https://api.github.com/users/oliverbob/events{/privacy}",
"received_events_url": "https://api.github.com/users/oliverbob/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2023-12-03T03:08:49
| 2023-12-04T22:05:38
| 2023-12-04T22:05:37
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi everyone, I have been trying Ollama across multiple servers with various specs, including the highest RAM/CPU package at DigitalOcean. I also tested it on my desktop and on my HPE DL380 Gen9 server with 64 GB of RAM and the following specs:
`lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
Stepping: 2
CPU MHz: 1400.000
CPU max MHz: 3200.0000
CPU min MHz: 1200.0000
BogoMIPS: 4794.74
Virtualization: VT-x
L1d cache: 192 KiB
L1i cache: 192 KiB
L2 cache: 1.5 MiB
L3 cache: 15 MiB
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional c
ache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerabl
e
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerabl
e
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disable
d via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __u
ser pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IB
RS_FW, STIBP conditional, RSB filling, PBRSB
-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep
mtrr pge mca cmov pat pse36 clflush dts acpi
mmx fxsr sse sse2 ss ht tm pbe syscall nx p
dpe1gb rdtscp lm constant_tsc arch_perfmon p
ebs bts rep_good nopl xtopology nonstop_tsc
cpuid aperfmperf pni pclmulqdq dtes64 monito
r ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16
xtpr pdcm pcid dca sse4_1 sse4_2 x2apic mov
be popcnt tsc_deadline_timer aes xsave avx f
16c rdrand lahf_lm abm cpuid_fault epb invpc
id_single pti intel_ppin ssbd ibrs ibpb stib
p tpr_shadow vnmi flexpriority ept vpid ept_
ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 e
rms invpcid cqm xsaveopt cqm_llc cqm_occup_l
lc dtherm ida arat pln pts md_clear flush_l1
d`
However, I don't see "any significant" improvement over the performance of this Intel i5 desktop:
`lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i5-7400 CPU @ 3.00GHz
CPU family: 6
Model: 158
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Stepping: 9
CPU(s) scaling MHz: 91%
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mc
a cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss
ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art
arch_perfmon pebs bts rep_good nopl xtopology nonstop_
tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cp
l vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1
sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsav
e avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault
invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnm
i flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1
avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflus
hopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida
arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_
clear flush_l1d arch_capabilities
Virtualization features:
Virtualization: VT-x
Caches (sum of all):
L1d: 128 KiB (4 instances)
L1i: 128 KiB (4 instances)
L2: 1 MiB (4 instances)
L3: 6 MiB (1 instance)
NUMA:
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerabilities:
Itlb multihit: KVM: Mitigation: VMX disabled
L1tf: Mitigation; PTE Inversion; VMX conditional cache flushe
s, SMT disabled
Mds: Mitigation; Clear CPU buffers; SMT disabled
Meltdown: Mitigation; PTI
Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
and seccomp
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer
sanitization
Spectre v2: Mitigation; Full generic retpoline, IBPB conditional, I
BRS_FW, STIBP disabled, RSB filling
Srbds: Mitigation; Microcode
Tsx async abort: Not affected
`
All results showed "very poor" performance on 7B-parameter models, so I began to conclude that I cannot use this in a production environment unless I find a solution. This concerns me a lot because I have production-level implementations against its API that are mission-critical for my clients. My company is also purchasing servers in the "hope" that Ollama will reach the required speed, maybe even "half the speed" of ChatGPT (for free users). We have several Modelfile implementations for each client, but all of them frustrate our clients, who are losing hope and are very, very upset about the situation. I hope it will not end with them suing my company. So I came here hoping to save face.
My question:
How can we possibly improve the performance of Ollama with the minimum required hardware? Would an upgrade to the latest HPE DL380a Gen11 bring a significant enough increase to achieve half the performance of OpenAI's ChatGPT? For instance, if I fill all its memory, processor, and GPU slots to maximum capacity, will that solve this issue? If that will not increase performance, what SPECIFIC HARDWARE is PROVEN COMPATIBLE without performance issues?
I like Ollama's simplicity of interfacing with the API. Are there any "live" samples we can check performance against? Are there any Ollama APIs HOSTED online that are PERFORMANCE-PROVEN that I can use, even PAID ones (just as a temporary fix), while we're looking for solutions?
Will running it on a GPU-based cloud solution like AWS, GCP or Azure be worth the investment against the required performance of at least half the speed of ChatGPT demanded by our clients?
Or to simplify my question, what is the minimum required and TESTED hardware configuration to compete with the response speed of ChatGPT?
Any help from anyone on this active community will be appreciated.
Thank you very much.
|
{
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1358/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2810
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2810/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2810/comments
|
https://api.github.com/repos/ollama/ollama/issues/2810/events
|
https://github.com/ollama/ollama/issues/2810
| 2,159,225,905
|
I_kwDOJ0Z1Ps6Asywx
| 2,810
|
Sending twice an empty prompt to the embedding API stalls ollama.
|
{
"login": "dstruck",
"id": 2195318,
"node_id": "MDQ6VXNlcjIxOTUzMTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2195318?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dstruck",
"html_url": "https://github.com/dstruck",
"followers_url": "https://api.github.com/users/dstruck/followers",
"following_url": "https://api.github.com/users/dstruck/following{/other_user}",
"gists_url": "https://api.github.com/users/dstruck/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dstruck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dstruck/subscriptions",
"organizations_url": "https://api.github.com/users/dstruck/orgs",
"repos_url": "https://api.github.com/users/dstruck/repos",
"events_url": "https://api.github.com/users/dstruck/events{/privacy}",
"received_events_url": "https://api.github.com/users/dstruck/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0
| 2024-02-28T15:25:04
| 2024-03-01T01:40:57
| 2024-03-01T01:40:57
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
"ollama version is 0.1.27" running on "Debian GNU/Linux 12 (bookworm)".
Running the API call `curl http://localhost:11434/api/embeddings -d '{"model": "llama2", "prompt": ""}'` returns `{"embedding":null}` as expected.
Running the same API call a second time stalls Ollama completely. Restarting the service does not work; you have to kill the process with `-9`.
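Until a server-side fix lands, one client-side mitigation is to never send an empty prompt in the first place. A minimal Go sketch of that guard (the `safeEmbed` and `embedFn` names are mine, not part of the ollama API):

```go
package main

import (
	"fmt"
	"strings"
)

// embedFn stands in for whatever embeddings call the client makes
// (hypothetical name, not part of the ollama API).
type embedFn func(prompt string) []float64

// safeEmbed skips empty or whitespace-only prompts client-side so the
// server never receives one -- a workaround sketch until the bug is fixed.
func safeEmbed(prompt string, f embedFn) []float64 {
	if strings.TrimSpace(prompt) == "" {
		return nil
	}
	return f(prompt)
}

func main() {
	f := func(p string) []float64 { return []float64{1} }
	fmt.Println(safeEmbed("", f))   // prints []
	fmt.Println(safeEmbed("hi", f)) // prints [1]
}
```

Wrapping the real HTTP call in a guard like this avoids triggering the stall at all, at the cost of the caller having to treat `nil` as "no embedding".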
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2810/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5853
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5853/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5853/comments
|
https://api.github.com/repos/ollama/ollama/issues/5853/events
|
https://github.com/ollama/ollama/issues/5853
| 2,423,028,921
|
I_kwDOJ0Z1Ps6QbHy5
| 5,853
|
1.38 works the best
|
{
"login": "perpendicularai",
"id": 146530480,
"node_id": "U_kgDOCLvgsA",
"avatar_url": "https://avatars.githubusercontent.com/u/146530480?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/perpendicularai",
"html_url": "https://github.com/perpendicularai",
"followers_url": "https://api.github.com/users/perpendicularai/followers",
"following_url": "https://api.github.com/users/perpendicularai/following{/other_user}",
"gists_url": "https://api.github.com/users/perpendicularai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/perpendicularai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/perpendicularai/subscriptions",
"organizations_url": "https://api.github.com/users/perpendicularai/orgs",
"repos_url": "https://api.github.com/users/perpendicularai/repos",
"events_url": "https://api.github.com/users/perpendicularai/events{/privacy}",
"received_events_url": "https://api.github.com/users/perpendicularai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-07-22T14:30:23
| 2024-07-22T14:34:23
| 2024-07-22T14:34:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Version 0.1.38 works the best. Anything from 0.1.33 through 0.1.38 works great!
### OS
Windows
### GPU
_No response_
### CPU
Intel
### Ollama version
0.1.38
|
{
"login": "perpendicularai",
"id": 146530480,
"node_id": "U_kgDOCLvgsA",
"avatar_url": "https://avatars.githubusercontent.com/u/146530480?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/perpendicularai",
"html_url": "https://github.com/perpendicularai",
"followers_url": "https://api.github.com/users/perpendicularai/followers",
"following_url": "https://api.github.com/users/perpendicularai/following{/other_user}",
"gists_url": "https://api.github.com/users/perpendicularai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/perpendicularai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/perpendicularai/subscriptions",
"organizations_url": "https://api.github.com/users/perpendicularai/orgs",
"repos_url": "https://api.github.com/users/perpendicularai/repos",
"events_url": "https://api.github.com/users/perpendicularai/events{/privacy}",
"received_events_url": "https://api.github.com/users/perpendicularai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5853/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7625
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7625/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7625/comments
|
https://api.github.com/repos/ollama/ollama/issues/7625/events
|
https://github.com/ollama/ollama/issues/7625
| 2,651,155,333
|
I_kwDOJ0Z1Ps6eBWuF
| 7,625
|
Embedded struct in `ToolFunction`
|
{
"login": "NatoBoram",
"id": 10495562,
"node_id": "MDQ6VXNlcjEwNDk1NTYy",
"avatar_url": "https://avatars.githubusercontent.com/u/10495562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NatoBoram",
"html_url": "https://github.com/NatoBoram",
"followers_url": "https://api.github.com/users/NatoBoram/followers",
"following_url": "https://api.github.com/users/NatoBoram/following{/other_user}",
"gists_url": "https://api.github.com/users/NatoBoram/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NatoBoram/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NatoBoram/subscriptions",
"organizations_url": "https://api.github.com/users/NatoBoram/orgs",
"repos_url": "https://api.github.com/users/NatoBoram/repos",
"events_url": "https://api.github.com/users/NatoBoram/events{/privacy}",
"received_events_url": "https://api.github.com/users/NatoBoram/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2024-11-12T06:14:26
| 2024-11-12T06:14:26
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
https://github.com/ollama/ollama/blob/65973ceb6417c2e2796fa59bd3225bc7bd79b403/api/types.go#L165-L177
This makes creating tools really annoying.
```go
package main
import (
ollama "github.com/ollama/ollama/api"
)
var modFunctions = []ollama.Tool{{
Type: "function",
Function: ollama.ToolFunction{
Name: "remove",
Description: "Remove a post when it violates a rule",
Parameters: struct {
Type string `json:"type"`
Required []string `json:"required"`
Properties map[string]struct {
Type string `json:"type"`
Description string `json:"description"`
Enum []string `json:"enum,omitempty"`
} `json:"properties"`
}{
Type: "object",
Required: []string{"reason"},
Properties: map[string]struct {
Type string `json:"type"`
Description string `json:"description"`
Enum []string `json:"enum,omitempty"`
}{
"reason": {
Type: "string",
Description: "These are the rules of the subreddit. If the post violates one of these rules, remove it.",
Enum: []string{
"actual_animal_attack",
"bad_explanatory_comment",
"direct_link_to_other_subreddit",
"does_not_fit_the_subreddit",
"leopard_in_title_or_explanatory_comment",
"no_explanatory_comment",
"uncivil_behaviour",
},
},
},
},
},
}, {
Type: "function",
Function: ollama.ToolFunction{
Name: "approve",
Description: "Approve a post when the explanatory comment explains how someone is suffering consequences from something they voted for, supported or wanted to impose on other people",
Parameters: struct {
Type string `json:"type"`
Required []string `json:"required"`
Properties map[string]struct {
Type string `json:"type"`
Description string `json:"description"`
Enum []string `json:"enum,omitempty"`
} `json:"properties"`
}{
Type: "object",
Required: []string{"someone", "something", "consequences"},
Properties: map[string]struct {
Type string `json:"type"`
Description string `json:"description"`
Enum []string `json:"enum,omitempty"`
}{
"someone": {
Type: "string",
Description: "The name of the person who voted for, supported or wanted to impose something on other people.",
},
"something": {
Type: "string",
Description: "The thing that the person voted for, supported or wanted to impose on other people.",
},
"consequences": {
Type: "string",
Description: "The consequences of the thing that the person voted for, supported or wanted to impose on other people.",
},
},
},
},
}}
```
Please make a separate struct for parameters and properties :(
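Until named types exist in the API, Go type aliases can give these literals readable names without any API change, because an alias is identical to the anonymous struct it names (including struct tags). A self-contained sketch using a local mock of the field's shape (`toolFunction`, `toolParameters`, and `toolProperty` are my names, not ollama's):

```go
package main

import "fmt"

// toolFunction mirrors the shape of api.ToolFunction's inline
// Parameters field (a local mock for demonstration; the real type
// lives in github.com/ollama/ollama/api).
type toolFunction struct {
	Name       string
	Parameters struct {
		Type       string   `json:"type"`
		Required   []string `json:"required"`
		Properties map[string]struct {
			Type        string   `json:"type"`
			Description string   `json:"description"`
			Enum        []string `json:"enum,omitempty"`
		} `json:"properties"`
	}
}

// Type aliases (note the "="), not new named types: each alias is
// identical to the anonymous struct it names, so literals built from
// them assign directly to the inline field.
type toolProperty = struct {
	Type        string   `json:"type"`
	Description string   `json:"description"`
	Enum        []string `json:"enum,omitempty"`
}

type toolParameters = struct {
	Type       string                  `json:"type"`
	Required   []string                `json:"required"`
	Properties map[string]toolProperty `json:"properties"`
}

func main() {
	var f toolFunction
	f.Name = "remove"
	f.Parameters = toolParameters{
		Type:     "object",
		Required: []string{"reason"},
		Properties: map[string]toolProperty{
			"reason": {Type: "string", Description: "rule that was violated"},
		},
	}
	fmt.Println(f.Parameters.Properties["reason"].Type) // prints string
}
```

The catch is that the alias declarations must be kept byte-for-byte in sync with the upstream anonymous struct (tags included), which is exactly why exported named types in the API itself would be the better fix.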
### OS
Linux
### GPU
Nvidia, AMD
### CPU
Intel, AMD
### Ollama version
Docker
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7625/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7625/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7319
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7319/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7319/comments
|
https://api.github.com/repos/ollama/ollama/issues/7319/events
|
https://github.com/ollama/ollama/issues/7319
| 2,605,788,402
|
I_kwDOJ0Z1Ps6bUSzy
| 7,319
|
support LLaMA-Omni
|
{
"login": "chaoqunxie",
"id": 44899524,
"node_id": "MDQ6VXNlcjQ0ODk5NTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/44899524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chaoqunxie",
"html_url": "https://github.com/chaoqunxie",
"followers_url": "https://api.github.com/users/chaoqunxie/followers",
"following_url": "https://api.github.com/users/chaoqunxie/following{/other_user}",
"gists_url": "https://api.github.com/users/chaoqunxie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chaoqunxie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chaoqunxie/subscriptions",
"organizations_url": "https://api.github.com/users/chaoqunxie/orgs",
"repos_url": "https://api.github.com/users/chaoqunxie/repos",
"events_url": "https://api.github.com/users/chaoqunxie/events{/privacy}",
"received_events_url": "https://api.github.com/users/chaoqunxie/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-10-22T15:26:57
| 2024-10-23T17:17:14
| 2024-10-23T17:17:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Here is the GitHub link: https://github.com/ictnlp/LLaMA-Omni
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7319/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3667
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3667/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3667/comments
|
https://api.github.com/repos/ollama/ollama/issues/3667/events
|
https://github.com/ollama/ollama/issues/3667
| 2,245,049,802
|
I_kwDOJ0Z1Ps6F0L3K
| 3,667
|
exception create_tensor: tensor 'blk.0.ffn_gate.0.weight' not found
|
{
"login": "nkeilar",
"id": 325430,
"node_id": "MDQ6VXNlcjMyNTQzMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/325430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nkeilar",
"html_url": "https://github.com/nkeilar",
"followers_url": "https://api.github.com/users/nkeilar/followers",
"following_url": "https://api.github.com/users/nkeilar/following{/other_user}",
"gists_url": "https://api.github.com/users/nkeilar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nkeilar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nkeilar/subscriptions",
"organizations_url": "https://api.github.com/users/nkeilar/orgs",
"repos_url": "https://api.github.com/users/nkeilar/repos",
"events_url": "https://api.github.com/users/nkeilar/events{/privacy}",
"received_events_url": "https://api.github.com/users/nkeilar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-04-16T04:10:08
| 2024-04-16T04:20:46
| 2024-04-16T04:15:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Getting this error when trying to use wizardlm2:8x22b-q2_K on a dual 3090 system.

ollama version is 0.1.31
Someone else is having the same issue in this thread, but I think it's a new issue:
https://github.com/ollama/ollama/issues/3032#issuecomment-2058129280
### What did you expect to see?
Model loads into memory
### Steps to reproduce
Install the latest Ollama and try to load the wizardlm2:8x22b-q2_K model
### Are there any recent changes that introduced the issue?
_No response_
### OS
Linux
### Architecture
amd64
### Platform
_No response_
### Ollama version
0.1.31
### GPU
Nvidia
### GPU info
Tue Apr 16 14:09:48 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.15 Driver Version: 550.54.15 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 3090 Off | 00000000:01:00.0 On | N/A |
| 33% 43C P5 48W / 350W | 1429MiB / 24576MiB | 23% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA GeForce RTX 3090 Off | 00000000:08:00.0 Off | N/A |
| 30% 35C P8 25W / 200W | 276MiB / 24576MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 3657 G /usr/lib/xorg/Xorg 654MiB |
| 0 N/A N/A 4314 G /usr/bin/gnome-shell 82MiB |
| 0 N/A N/A 10313 G /usr/bin/nextcloud 82MiB |
| 0 N/A N/A 911826 C /usr/local/bin/ollama 260MiB |
| 0 N/A N/A 1525117 G ...onEnabled --variations-seed-version 36MiB |
| 0 N/A N/A 3518670 G /usr/lib/firefox/firefox 198MiB |
| 1 N/A N/A 3657 G /usr/lib/xorg/Xorg 4MiB |
| 1 N/A N/A 911826 C /usr/local/bin/ollama 260MiB |
+-----------------------------------------------------------------------------------------+
### CPU
Intel
### Other software
_No response_
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3667/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3667/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5931
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5931/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5931/comments
|
https://api.github.com/repos/ollama/ollama/issues/5931/events
|
https://github.com/ollama/ollama/pull/5931
| 2,428,540,961
|
PR_kwDOJ0Z1Ps52ZIIr
| 5,931
|
Add llm-axe to Community Libraries in ReadMe
|
{
"login": "emirsahin1",
"id": 50391065,
"node_id": "MDQ6VXNlcjUwMzkxMDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/50391065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emirsahin1",
"html_url": "https://github.com/emirsahin1",
"followers_url": "https://api.github.com/users/emirsahin1/followers",
"following_url": "https://api.github.com/users/emirsahin1/following{/other_user}",
"gists_url": "https://api.github.com/users/emirsahin1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emirsahin1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emirsahin1/subscriptions",
"organizations_url": "https://api.github.com/users/emirsahin1/orgs",
"repos_url": "https://api.github.com/users/emirsahin1/repos",
"events_url": "https://api.github.com/users/emirsahin1/events{/privacy}",
"received_events_url": "https://api.github.com/users/emirsahin1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-24T21:57:22
| 2024-11-20T18:53:14
| 2024-11-20T18:53:14
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5931",
"html_url": "https://github.com/ollama/ollama/pull/5931",
"diff_url": "https://github.com/ollama/ollama/pull/5931.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5931.patch",
"merged_at": "2024-11-20T18:53:14"
}
| null |
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5931/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3152
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3152/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3152/comments
|
https://api.github.com/repos/ollama/ollama/issues/3152/events
|
https://github.com/ollama/ollama/issues/3152
| 2,187,212,523
|
I_kwDOJ0Z1Ps6CXjbr
| 3,152
|
Multilanguage support
|
{
"login": "jaimecoj",
"id": 9117697,
"node_id": "MDQ6VXNlcjkxMTc2OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9117697?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaimecoj",
"html_url": "https://github.com/jaimecoj",
"followers_url": "https://api.github.com/users/jaimecoj/followers",
"following_url": "https://api.github.com/users/jaimecoj/following{/other_user}",
"gists_url": "https://api.github.com/users/jaimecoj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaimecoj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaimecoj/subscriptions",
"organizations_url": "https://api.github.com/users/jaimecoj/orgs",
"repos_url": "https://api.github.com/users/jaimecoj/repos",
"events_url": "https://api.github.com/users/jaimecoj/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaimecoj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw",
"url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com",
"name": "ollama.com",
"color": "ffffff",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 3
| 2024-03-14T20:27:18
| 2024-09-28T00:59:55
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
There is no info about supported languages in the README. Do any of the models support languages other than English? I'm looking for an open-source model that supports Spanish.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3152/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/ollama/ollama/issues/3152/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8112
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8112/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8112/comments
|
https://api.github.com/repos/ollama/ollama/issues/8112/events
|
https://github.com/ollama/ollama/issues/8112
| 2,741,393,703
|
I_kwDOJ0Z1Ps6jZlkn
| 8,112
|
qwen2.5-vl is now supported
|
{
"login": "Rakhsan",
"id": 94316113,
"node_id": "U_kgDOBZ8mUQ",
"avatar_url": "https://avatars.githubusercontent.com/u/94316113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rakhsan",
"html_url": "https://github.com/Rakhsan",
"followers_url": "https://api.github.com/users/Rakhsan/followers",
"following_url": "https://api.github.com/users/Rakhsan/following{/other_user}",
"gists_url": "https://api.github.com/users/Rakhsan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rakhsan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rakhsan/subscriptions",
"organizations_url": "https://api.github.com/users/Rakhsan/orgs",
"repos_url": "https://api.github.com/users/Rakhsan/repos",
"events_url": "https://api.github.com/users/Rakhsan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rakhsan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-12-16T05:45:38
| 2024-12-17T19:28:51
| 2024-12-17T19:28:51
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I know this is a duplicate, but hear me out: I saw that llama.cpp now supports qwen2.5-vl. Can you just support it in Ollama?
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8112/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7974
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7974/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7974/comments
|
https://api.github.com/repos/ollama/ollama/issues/7974/events
|
https://github.com/ollama/ollama/pull/7974
| 2,723,676,325
|
PR_kwDOJ0Z1Ps6EXEjK
| 7,974
|
Update default model from Llama 3.2 to Llama 3.3
|
{
"login": "nwithan8",
"id": 17054780,
"node_id": "MDQ6VXNlcjE3MDU0Nzgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17054780?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nwithan8",
"html_url": "https://github.com/nwithan8",
"followers_url": "https://api.github.com/users/nwithan8/followers",
"following_url": "https://api.github.com/users/nwithan8/following{/other_user}",
"gists_url": "https://api.github.com/users/nwithan8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nwithan8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nwithan8/subscriptions",
"organizations_url": "https://api.github.com/users/nwithan8/orgs",
"repos_url": "https://api.github.com/users/nwithan8/repos",
"events_url": "https://api.github.com/users/nwithan8/events{/privacy}",
"received_events_url": "https://api.github.com/users/nwithan8/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-12-06T18:26:40
| 2024-12-10T21:33:40
| 2024-12-10T21:33:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7974",
"html_url": "https://github.com/ollama/ollama/pull/7974",
"diff_url": "https://github.com/ollama/ollama/pull/7974.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7974.patch",
"merged_at": null
}
|
Ref: https://github.com/ollama/ollama/pull/6959
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7974/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5148
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5148/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5148/comments
|
https://api.github.com/repos/ollama/ollama/issues/5148/events
|
https://github.com/ollama/ollama/pull/5148
| 2,363,065,876
|
PR_kwDOJ0Z1Ps5zACoO
| 5,148
|
docs: Add content about "OLLAMA_NUM_PARALLEL" and "OLLAMA_MAX_LOADED_MODELS" to the FAQ.
|
{
"login": "mili-tan",
"id": 24996957,
"node_id": "MDQ6VXNlcjI0OTk2OTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/24996957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mili-tan",
"html_url": "https://github.com/mili-tan",
"followers_url": "https://api.github.com/users/mili-tan/followers",
"following_url": "https://api.github.com/users/mili-tan/following{/other_user}",
"gists_url": "https://api.github.com/users/mili-tan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mili-tan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mili-tan/subscriptions",
"organizations_url": "https://api.github.com/users/mili-tan/orgs",
"repos_url": "https://api.github.com/users/mili-tan/repos",
"events_url": "https://api.github.com/users/mili-tan/events{/privacy}",
"received_events_url": "https://api.github.com/users/mili-tan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-06-19T20:15:35
| 2024-07-04T14:25:46
| 2024-07-04T14:25:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5148",
"html_url": "https://github.com/ollama/ollama/pull/5148",
"diff_url": "https://github.com/ollama/ollama/pull/5148.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5148.patch",
"merged_at": null
}
| null |
{
"login": "mili-tan",
"id": 24996957,
"node_id": "MDQ6VXNlcjI0OTk2OTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/24996957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mili-tan",
"html_url": "https://github.com/mili-tan",
"followers_url": "https://api.github.com/users/mili-tan/followers",
"following_url": "https://api.github.com/users/mili-tan/following{/other_user}",
"gists_url": "https://api.github.com/users/mili-tan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mili-tan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mili-tan/subscriptions",
"organizations_url": "https://api.github.com/users/mili-tan/orgs",
"repos_url": "https://api.github.com/users/mili-tan/repos",
"events_url": "https://api.github.com/users/mili-tan/events{/privacy}",
"received_events_url": "https://api.github.com/users/mili-tan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5148/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7922
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7922/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7922/comments
|
https://api.github.com/repos/ollama/ollama/issues/7922/events
|
https://github.com/ollama/ollama/pull/7922
| 2,716,166,945
|
PR_kwDOJ0Z1Ps6D9Fto
| 7,922
|
Add CI full build capability
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 7835709391,
"node_id": "LA_kwDOJ0Z1Ps8AAAAB0wtvzw",
"url": "https://api.github.com/repos/ollama/ollama/labels/pr%20full%20build",
"name": "pr full build",
"color": "CAF4CA",
"default": false,
"description": "trigger CI to build Ollama for all platforms"
}
] |
open
| false
| null |
[] | null | 0
| 2024-12-03T23:19:32
| 2024-12-04T23:04:06
| null |
COLLABORATOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7922",
"html_url": "https://github.com/ollama/ollama/pull/7922",
"diff_url": "https://github.com/ollama/ollama/pull/7922.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7922.patch",
"merged_at": null
}
|
For labeled PRs, generate a full build for testing.
Container builds are skipped for now, but all other supported platforms are generated. A helper script is included to easily download the artifacts, as long as you have the GitHub CLI installed.
```
% ./scripts/download_pr.sh 7922
Downloading artifacts for PR 7922 with commit 1ab41a6605139fb75aa283d284545405adf61b5f
dist-windows
############################################## 100.0%
Archive: dist-windows.zip
inflating: ollama-windows-amd64.zip
inflating: ollama-windows-arm64.zip
inflating: OllamaSetup.exe
dist-darwin
############################################## 100.0%
Archive: dist-darwin.zip
inflating: Ollama-darwin.zip
inflating: ollama-darwin
dist-linux-amd64
############################################## 100.0%
Archive: dist-linux-amd64.zip
inflating: ollama-linux-amd64-rocm.tgz
inflating: ollama-linux-amd64.tgz
dist-linux-arm64
############################################## 100.0%
Archive: dist-linux-arm64.zip
inflating: ollama-linux-arm64-jetpack5.tgz
inflating: ollama-linux-arm64-jetpack6.tgz
inflating: ollama-linux-arm64.tg
```
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7922/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7210
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7210/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7210/comments
|
https://api.github.com/repos/ollama/ollama/issues/7210/events
|
https://github.com/ollama/ollama/issues/7210
| 2,588,174,745
|
I_kwDOJ0Z1Ps6aRGmZ
| 7,210
|
Intel Arc + NVIDIA Docker Setup
|
{
"login": "blunweon",
"id": 169746148,
"node_id": "U_kgDOCh4e5A",
"avatar_url": "https://avatars.githubusercontent.com/u/169746148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/blunweon",
"html_url": "https://github.com/blunweon",
"followers_url": "https://api.github.com/users/blunweon/followers",
"following_url": "https://api.github.com/users/blunweon/following{/other_user}",
"gists_url": "https://api.github.com/users/blunweon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/blunweon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/blunweon/subscriptions",
"organizations_url": "https://api.github.com/users/blunweon/orgs",
"repos_url": "https://api.github.com/users/blunweon/repos",
"events_url": "https://api.github.com/users/blunweon/events{/privacy}",
"received_events_url": "https://api.github.com/users/blunweon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-10-15T09:27:03
| 2024-10-29T23:38:58
| 2024-10-29T23:38:58
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi all,
I have recently gotten a new Intel Arc A380 and hoped to make it work with Ollama. I have read issues on GitHub to get a sense of how well Intel dGPUs are supported, and it seemed to be working for some users.
I am currently running it in Docker under a Proxmox LXC.
This is my compose.yaml file:
```
version: "3.3"
services:
ollama:
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities:
- gpu
volumes:
- ollama:/root/.ollama
- /mnt/nas-hdd0/models:/root/models
ports:
- 11434:11434
environment:
OLLAMA_ORIGINS: "*"
OLLAMA_HOST: 0.0.0.0
OLLAMA_KEEP_ALIVE: 15
OLLAMA_SCHED_SPREAD: 1
OLLAMA_DEBUG: 1
OLLAMA_INTEL_GPU: 1
NEOReadDebugKeys: 1
OverrideGpuAddressSpace: 48
container_name: ollama
image: ollama/ollama
restart: always
volumes:
ollama: {}
networks: {}
```
My Docker logs:
```
2024/10/14 16:44:12 routes.go:1158: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:true OLLAMA_KEEP_ALIVE:15s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_SCHED_SPREAD:true OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-10-14T16:44:12.616Z level=INFO source=images.go:754 msg="total blobs: 74"
time=2024-10-14T16:44:12.618Z level=INFO source=images.go:761 msg="total unused blobs removed: 0"
time=2024-10-14T16:44:12.618Z level=INFO source=routes.go:1205 msg="Listening on [::]:11434 (version 0.3.13)"
time=2024-10-14T16:44:12.619Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu/ollama_llama_server
time=2024-10-14T16:44:12.619Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx/ollama_llama_server
time=2024-10-14T16:44:12.619Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cpu_avx2/ollama_llama_server
time=2024-10-14T16:44:12.619Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v11/ollama_llama_server
time=2024-10-14T16:44:12.619Z level=DEBUG source=common.go:294 msg="availableServers : found" file=/usr/lib/ollama/runners/cuda_v12/ollama_llama_server
time=2024-10-14T16:44:12.619Z level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12]"
time=2024-10-14T16:44:12.619Z level=DEBUG source=common.go:50 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-10-14T16:44:12.619Z level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2024-10-14T16:44:12.619Z level=INFO source=gpu.go:199 msg="looking for compatible GPUs"
time=2024-10-14T16:44:12.619Z level=DEBUG source=gpu.go:86 msg="searching for GPU discovery libraries for NVIDIA"
time=2024-10-14T16:44:12.619Z level=DEBUG source=gpu.go:468 msg="Searching for GPU library" name=libcuda.so*
time=2024-10-14T16:44:12.619Z level=DEBUG source=gpu.go:491 msg="gpu library search" globs="[/usr/lib/ollama/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2024-10-14T16:44:12.620Z level=DEBUG source=gpu.go:525 msg="discovered GPU libraries" paths=[/usr/lib/x86_64-linux-gnu/libcuda.so.550.90.07]
CUDA driver version: 12.4
time=2024-10-14T16:44:12.635Z level=DEBUG source=gpu.go:118 msg="detected GPUs" count=1 library=/usr/lib/x86_64-linux-gnu/libcuda.so.550.90.07
[GPU-71904de4-2b69-b701-99e0-5d39fa30f86c] CUDA totalMem 5924 mb
[GPU-71904de4-2b69-b701-99e0-5d39fa30f86c] CUDA freeMem 5851 mb
[GPU-71904de4-2b69-b701-99e0-5d39fa30f86c] Compute Capability 7.5
time=2024-10-14T16:44:12.854Z level=DEBUG source=gpu.go:468 msg="Searching for GPU library" name=libze_intel_gpu.so*
time=2024-10-14T16:44:12.854Z level=DEBUG source=gpu.go:491 msg="gpu library search" globs="[/usr/lib/ollama/libze_intel_gpu.so* /usr/local/nvidia/lib/libze_intel_gpu.so* /usr/local/nvidia/lib64/libze_intel_gpu.so* /usr/lib/x86_64-linux-gnu/libze_intel_gpu.so* /usr/lib*/libze_intel_gpu.so*]"
time=2024-10-14T16:44:12.855Z level=DEBUG source=gpu.go:525 msg="discovered GPU libraries" paths=[]
time=2024-10-14T16:44:12.855Z level=DEBUG source=amd_linux.go:376 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2024-10-14T16:44:12.855Z level=INFO source=types.go:107 msg="inference compute" id=GPU-71904de4-2b69-b701-99e0-5d39fa30f86c library=cuda variant=v12 compute=7.5 driver=12.4 name="NVIDIA GeForce GTX 1660 Ti" total="5.8 GiB" available="5.7 GiB"
```
Based on the above logs, it seems to hit a bug(?) after searching for Intel GPU libraries, since the discovered paths list comes back empty. I do have Intel drivers installed within the LXC, and other transcoding/ML tasks in Immich work.
I am able to find the expected GPU library, so technically it should work:
```
> ls /usr/lib/x86_64-linux-gnu/libze_intel*
/usr/lib/x86_64-linux-gnu/libze_intel_gpu.so.1
/usr/lib/x86_64-linux-gnu/libze_intel_gpu.so.1.3.29735.20
```
Using the Linux installation script and running `OLLAMA_INTEL_GPU=1 ollama serve` managed to get both the Intel Arc and NVIDIA GPUs detected by Ollama.
However, there seems to be an issue with the Docker image, as it should still be able to detect the Intel GPU.
### OS
Linux, Docker
### GPU
Nvidia, Intel
### CPU
Intel
### Ollama version
0.3.13
|
{
"login": "blunweon",
"id": 169746148,
"node_id": "U_kgDOCh4e5A",
"avatar_url": "https://avatars.githubusercontent.com/u/169746148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/blunweon",
"html_url": "https://github.com/blunweon",
"followers_url": "https://api.github.com/users/blunweon/followers",
"following_url": "https://api.github.com/users/blunweon/following{/other_user}",
"gists_url": "https://api.github.com/users/blunweon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/blunweon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/blunweon/subscriptions",
"organizations_url": "https://api.github.com/users/blunweon/orgs",
"repos_url": "https://api.github.com/users/blunweon/repos",
"events_url": "https://api.github.com/users/blunweon/events{/privacy}",
"received_events_url": "https://api.github.com/users/blunweon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7210/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7210/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3342
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3342/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3342/comments
|
https://api.github.com/repos/ollama/ollama/issues/3342/events
|
https://github.com/ollama/ollama/issues/3342
| 2,205,684,754
|
I_kwDOJ0Z1Ps6DeBQS
| 3,342
|
Support eGPU on Intel Macs
|
{
"login": "noomorph",
"id": 1962469,
"node_id": "MDQ6VXNlcjE5NjI0Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1962469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/noomorph",
"html_url": "https://github.com/noomorph",
"followers_url": "https://api.github.com/users/noomorph/followers",
"following_url": "https://api.github.com/users/noomorph/following{/other_user}",
"gists_url": "https://api.github.com/users/noomorph/gists{/gist_id}",
"starred_url": "https://api.github.com/users/noomorph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/noomorph/subscriptions",
"organizations_url": "https://api.github.com/users/noomorph/orgs",
"repos_url": "https://api.github.com/users/noomorph/repos",
"events_url": "https://api.github.com/users/noomorph/events{/privacy}",
"received_events_url": "https://api.github.com/users/noomorph/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-03-25T12:52:44
| 2024-03-25T13:41:33
| 2024-03-25T13:41:33
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
I'm trying to run Ollama with an AMD Radeon 5700XT (eGPU) on a Mac Mini 2018 (Intel).

I see that only my CPU is busy, not the GPU.

I suspect that this is the culprit:

### How should we solve this?
Try to detect an eGPU nevertheless.
### What is the impact of not solving this?
Low performance of models.
### Anything else?
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3342/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3342/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6691
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6691/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6691/comments
|
https://api.github.com/repos/ollama/ollama/issues/6691/events
|
https://github.com/ollama/ollama/issues/6691
| 2,511,997,331
|
I_kwDOJ0Z1Ps6VugmT
| 6,691
|
Is everything fine with `phi3` model?
|
{
"login": "eirnym",
"id": 485399,
"node_id": "MDQ6VXNlcjQ4NTM5OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/485399?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eirnym",
"html_url": "https://github.com/eirnym",
"followers_url": "https://api.github.com/users/eirnym/followers",
"following_url": "https://api.github.com/users/eirnym/following{/other_user}",
"gists_url": "https://api.github.com/users/eirnym/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eirnym/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eirnym/subscriptions",
"organizations_url": "https://api.github.com/users/eirnym/orgs",
"repos_url": "https://api.github.com/users/eirnym/repos",
"events_url": "https://api.github.com/users/eirnym/events{/privacy}",
"received_events_url": "https://api.github.com/users/eirnym/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 5
| 2024-09-07T18:06:24
| 2024-09-13T12:18:12
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I downloaded the model 3 months ago and it worked fine, but now it doesn't work at all.
My query is `generate 20 non-existing random English-sounding nouns, less than 6 sylables`. Previously it generated just the words, without descriptions, as expected; now it adds descriptions.
When I substitute "English" with "Polish", it goes into an infinite loop, and when I put "German", it starts to spill out UUIDs.
Example of Polish output:
```
1. Krzeszinski
2. Szmaragdowa
3. Złotyka
4. Pomocnicza
5. Wesołeńca
6. Jędrzejki
7. Kartwinka
8. Chrobotnica
9. Skrępijny
1 end. 20 nouns generated successfully! Now, let's shuffle them:
Shuffled List (Randomized):
4. Pomocnicza
6. Jędrzejki
7. Kartwinka
3. Złotyka
9. Skrępijny
1 end. 20 nouns generated successfully! Now, let's shuffle them:
Shuffled List (Randomized):
(and it repeats forever)
```
Example of German output:
```
1. Torgelichtweisenheit
... (8 more words correctly generated)
10. Sonnenfinsternistränenqualm
1de25af6-bb4a-3c17-bf8a-9d6e989e3ecc_GermanSoundingNouns=nonExistingWordsList=[Torgelichtweisenheit,Fuchsbärennachtfrost,Himmelspechvogelzunge,...,Sonnenfinsternistränenqualm1de25af6-bb4a-3c17-bf8a-9d6e989e3ecc_GermanSoundingNouns=nonExistingWordsList=[Torgelichtweisenheit,Fuchsbärennachtfrost,Himmelspechvogelzunge,...,de25af6-bb4a-3c17-bf8a-9d6e989e3ecc]
```
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
ollama version is 0.3.9
previous ollama version was 0.2.3
logs: [ollama.log](https://github.com/user-attachments/files/16919885/ollama.log)
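Comparing model behaviour across Ollama versions is tricky because sampling is stochastic; pinning the seed and temperature in the request options (both are documented `/api/generate` parameters) makes runs reproducible. A minimal request-body sketch, with the prompt taken from the report above:

```json
{
  "model": "phi3",
  "prompt": "generate 20 non-existing random English-sounding nouns, less than 6 sylables",
  "options": {
    "seed": 42,
    "temperature": 0
  }
}
```

With a fixed seed and zero temperature, identical requests should yield identical output on a given model version, which makes version-to-version regressions easier to demonstrate.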
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6691/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/346
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/346/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/346/comments
|
https://api.github.com/repos/ollama/ollama/issues/346/events
|
https://github.com/ollama/ollama/pull/346
| 1,850,315,856
|
PR_kwDOJ0Z1Ps5X6SaR
| 346
|
Add context to api docs
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-14T18:23:31
| 2023-08-15T14:43:23
| 2023-08-15T14:43:22
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/346",
"html_url": "https://github.com/ollama/ollama/pull/346",
"diff_url": "https://github.com/ollama/ollama/pull/346.diff",
"patch_url": "https://github.com/ollama/ollama/pull/346.patch",
"merged_at": "2023-08-15T14:43:22"
}
| null |
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/346/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3357
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3357/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3357/comments
|
https://api.github.com/repos/ollama/ollama/issues/3357/events
|
https://github.com/ollama/ollama/issues/3357
| 2,207,541,782
|
I_kwDOJ0Z1Ps6DlGoW
| 3,357
|
Run GGUF files directly
|
{
"login": "Dampfinchen",
"id": 59751859,
"node_id": "MDQ6VXNlcjU5NzUxODU5",
"avatar_url": "https://avatars.githubusercontent.com/u/59751859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dampfinchen",
"html_url": "https://github.com/Dampfinchen",
"followers_url": "https://api.github.com/users/Dampfinchen/followers",
"following_url": "https://api.github.com/users/Dampfinchen/following{/other_user}",
"gists_url": "https://api.github.com/users/Dampfinchen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dampfinchen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dampfinchen/subscriptions",
"organizations_url": "https://api.github.com/users/Dampfinchen/orgs",
"repos_url": "https://api.github.com/users/Dampfinchen/repos",
"events_url": "https://api.github.com/users/Dampfinchen/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dampfinchen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] |
closed
| false
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3
| 2024-03-26T08:14:18
| 2024-05-30T01:12:04
| 2024-03-27T23:22:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
Why is the GGUF file converted instead of being run directly, as all the other inference engines do (llama.cpp, KoboldCpp, Oobabooga, LM Studio, etc.)?
### How should we solve this?
Let the GGUF file run directly, without conversion.
### What is the impact of not solving this?
_No response_
### Anything else?
_No response_
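For context, Ollama can already wrap a local GGUF file without re-quantizing it: per the Modelfile documentation, a one-line Modelfile pointing at the file is enough (the path below is a placeholder):

```
FROM ./my-model.gguf
```

Then `ollama create my-model -f Modelfile` registers it and `ollama run my-model` runs it; the import copies the weights into Ollama's blob store rather than executing the GGUF in place.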
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3357/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3357/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2853
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2853/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2853/comments
|
https://api.github.com/repos/ollama/ollama/issues/2853/events
|
https://github.com/ollama/ollama/issues/2853
| 2,162,557,814
|
I_kwDOJ0Z1Ps6A5gN2
| 2,853
|
does not work on 1080 Ti GPU
|
{
"login": "basakamars",
"id": 9200486,
"node_id": "MDQ6VXNlcjkyMDA0ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9200486?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/basakamars",
"html_url": "https://github.com/basakamars",
"followers_url": "https://api.github.com/users/basakamars/followers",
"following_url": "https://api.github.com/users/basakamars/following{/other_user}",
"gists_url": "https://api.github.com/users/basakamars/gists{/gist_id}",
"starred_url": "https://api.github.com/users/basakamars/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/basakamars/subscriptions",
"organizations_url": "https://api.github.com/users/basakamars/orgs",
"repos_url": "https://api.github.com/users/basakamars/repos",
"events_url": "https://api.github.com/users/basakamars/events{/privacy}",
"received_events_url": "https://api.github.com/users/basakamars/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 6
| 2024-03-01T04:01:34
| 2024-03-08T21:25:49
| 2024-03-07T17:49:20
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
```
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 551.61                 Driver Version: 551.61         CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                     TCC/WDDM  | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce GTX 1080 Ti   WDDM  |   00000000:04:00.0  On |                  N/A |
| 30%   23C    P2             56W /  250W |     286MiB /  11264MiB |      2%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
```
**It does not run on the GPU, only on the CPU. Why?**
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2853/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6796
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6796/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6796/comments
|
https://api.github.com/repos/ollama/ollama/issues/6796/events
|
https://github.com/ollama/ollama/issues/6796
| 2,525,693,343
|
I_kwDOJ0Z1Ps6WiwWf
| 6,796
|
Model Library per api call
|
{
"login": "Leon-Sander",
"id": 72946124,
"node_id": "MDQ6VXNlcjcyOTQ2MTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/72946124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Leon-Sander",
"html_url": "https://github.com/Leon-Sander",
"followers_url": "https://api.github.com/users/Leon-Sander/followers",
"following_url": "https://api.github.com/users/Leon-Sander/following{/other_user}",
"gists_url": "https://api.github.com/users/Leon-Sander/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Leon-Sander/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Leon-Sander/subscriptions",
"organizations_url": "https://api.github.com/users/Leon-Sander/orgs",
"repos_url": "https://api.github.com/users/Leon-Sander/repos",
"events_url": "https://api.github.com/users/Leon-Sander/events{/privacy}",
"received_events_url": "https://api.github.com/users/Leon-Sander/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-09-13T21:01:22
| 2024-09-17T17:41:27
| 2024-09-17T17:41:27
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It would be nice to have an API endpoint that lists all available models, as seen on [ollama.com/library](https://ollama.com/library)
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6796/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3276
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3276/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3276/comments
|
https://api.github.com/repos/ollama/ollama/issues/3276/events
|
https://github.com/ollama/ollama/issues/3276
| 2,198,676,395
|
I_kwDOJ0Z1Ps6DDSOr
| 3,276
|
Running Ollama with Zluda on AMD GPU for CUDA support
|
{
"login": "drnushooz",
"id": 10852951,
"node_id": "MDQ6VXNlcjEwODUyOTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/10852951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drnushooz",
"html_url": "https://github.com/drnushooz",
"followers_url": "https://api.github.com/users/drnushooz/followers",
"following_url": "https://api.github.com/users/drnushooz/following{/other_user}",
"gists_url": "https://api.github.com/users/drnushooz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drnushooz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drnushooz/subscriptions",
"organizations_url": "https://api.github.com/users/drnushooz/orgs",
"repos_url": "https://api.github.com/users/drnushooz/repos",
"events_url": "https://api.github.com/users/drnushooz/events{/privacy}",
"received_events_url": "https://api.github.com/users/drnushooz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-03-20T22:16:24
| 2024-03-21T05:10:07
| 2024-03-21T05:10:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
AMD has an official build of the CUDA API on top of ROCm, which is called ZLUDA. This is a placeholder for how Ollama runs on various platforms with AMD Radeon GPUs. Here is the link to the ZLUDA project: https://github.com/vosen/ZLUDA
### How should we solve this?
Try installing ZLUDA on Linux and running Ollama and TensorFlow on it. See whether it detects the GPU as CUDA-compatible.
### What is the impact of not solving this?
_No response_
### Anything else?
_No response_
|
{
"login": "drnushooz",
"id": 10852951,
"node_id": "MDQ6VXNlcjEwODUyOTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/10852951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drnushooz",
"html_url": "https://github.com/drnushooz",
"followers_url": "https://api.github.com/users/drnushooz/followers",
"following_url": "https://api.github.com/users/drnushooz/following{/other_user}",
"gists_url": "https://api.github.com/users/drnushooz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drnushooz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drnushooz/subscriptions",
"organizations_url": "https://api.github.com/users/drnushooz/orgs",
"repos_url": "https://api.github.com/users/drnushooz/repos",
"events_url": "https://api.github.com/users/drnushooz/events{/privacy}",
"received_events_url": "https://api.github.com/users/drnushooz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3276/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3276/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8419
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8419/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8419/comments
|
https://api.github.com/repos/ollama/ollama/issues/8419/events
|
https://github.com/ollama/ollama/issues/8419
| 2,786,791,763
|
I_kwDOJ0Z1Ps6mGxFT
| 8,419
|
Does ollama support video as input?
|
{
"login": "papandadj",
"id": 25424898,
"node_id": "MDQ6VXNlcjI1NDI0ODk4",
"avatar_url": "https://avatars.githubusercontent.com/u/25424898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/papandadj",
"html_url": "https://github.com/papandadj",
"followers_url": "https://api.github.com/users/papandadj/followers",
"following_url": "https://api.github.com/users/papandadj/following{/other_user}",
"gists_url": "https://api.github.com/users/papandadj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/papandadj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/papandadj/subscriptions",
"organizations_url": "https://api.github.com/users/papandadj/orgs",
"repos_url": "https://api.github.com/users/papandadj/repos",
"events_url": "https://api.github.com/users/papandadj/events{/privacy}",
"received_events_url": "https://api.github.com/users/papandadj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 3
| 2025-01-14T10:33:55
| 2025-01-28T21:16:01
| 2025-01-28T21:16:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
For example, models like minicpm already support video. Can these models directly take video as input?
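Ollama's chat API takes still images, not video streams, so a common workaround (not a built-in feature) is to sample a handful of frames from the clip and pass them in the `images` field. A sketch of choosing evenly spaced frame indices; the actual frame extraction would need a tool such as ffmpeg, and the frame counts here are illustrative:

```python
def sample_frame_indices(total_frames: int, n: int) -> list[int]:
    """Pick n evenly spaced frame indices from a clip of total_frames frames."""
    if total_frames <= 0 or n <= 0:
        return []
    n = min(n, total_frames)
    # Center each sample inside its segment so the clip's start/end aren't over-weighted.
    step = total_frames / n
    return [int(step * i + step / 2) for i in range(n)]

# Four representative frames from a 300-frame clip.
print(sample_frame_indices(300, 4))  # → [37, 112, 187, 262]
```

Each sampled frame would then be encoded (e.g. base64) and sent as one entry of the `images` list in a normal multimodal request.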
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8419/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3166
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3166/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3166/comments
|
https://api.github.com/repos/ollama/ollama/issues/3166/events
|
https://github.com/ollama/ollama/issues/3166
| 2,188,122,254
|
I_kwDOJ0Z1Ps6CbBiO
| 3,166
|
Please add, for each model in the model list, the estimated memory requirement when run on CPU and the VRAM requirement when run on GPU.
|
{
"login": "JerryYao75",
"id": 35689526,
"node_id": "MDQ6VXNlcjM1Njg5NTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/35689526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JerryYao75",
"html_url": "https://github.com/JerryYao75",
"followers_url": "https://api.github.com/users/JerryYao75/followers",
"following_url": "https://api.github.com/users/JerryYao75/following{/other_user}",
"gists_url": "https://api.github.com/users/JerryYao75/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JerryYao75/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JerryYao75/subscriptions",
"organizations_url": "https://api.github.com/users/JerryYao75/orgs",
"repos_url": "https://api.github.com/users/JerryYao75/repos",
"events_url": "https://api.github.com/users/JerryYao75/events{/privacy}",
"received_events_url": "https://api.github.com/users/JerryYao75/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 3
| 2024-03-15T09:49:52
| 2024-11-02T06:45:09
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
I want to know if my computer can support the model or not, but currently no one can tell me.
### How should we solve this?
Add the memory needed for each model tag when run on CPU.
Add the VRAM needed for each model tag when run on GPU.
### What is the impact of not solving this?
Every user has to download each model and test it themselves, which is a big waste of resources.
### Anything else?
_No response_
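Until such estimates are published, a rough back-of-the-envelope rule works (this is an illustration, not an official Ollama formula): weight memory is roughly parameter count times bits-per-weight, plus some headroom for the KV cache and runtime buffers. The 20% overhead and the ~4.5 bits/weight for Q4_K_M are assumptions:

```python
def estimate_model_memory_gb(params_billions: float, bits_per_weight: float,
                             overhead: float = 1.2) -> float:
    """Very rough estimate: weight bytes plus ~20% headroom for KV cache/buffers."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return round(weight_bytes * overhead / 2**30, 1)

# A 7B model at Q4_K_M (~4.5 bits/weight) lands around 4-5 GiB; at fp16, ~15-16 GiB.
print(estimate_model_memory_gb(7, 4.5))   # → 4.4
print(estimate_model_memory_gb(7, 16))    # → 15.6
```

Real usage also grows with context length (the KV cache scales with `num_ctx`), so any published figure would need to state the context size it assumes.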
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3166/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3166/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/80
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/80/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/80/comments
|
https://api.github.com/repos/ollama/ollama/issues/80/events
|
https://github.com/ollama/ollama/pull/80
| 1,805,273,022
|
PR_kwDOJ0Z1Ps5VioNa
| 80
|
ollama app welcome screen for first time run
|
{
"login": "hoyyeva",
"id": 63033505,
"node_id": "MDQ6VXNlcjYzMDMzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/63033505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoyyeva",
"html_url": "https://github.com/hoyyeva",
"followers_url": "https://api.github.com/users/hoyyeva/followers",
"following_url": "https://api.github.com/users/hoyyeva/following{/other_user}",
"gists_url": "https://api.github.com/users/hoyyeva/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hoyyeva/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hoyyeva/subscriptions",
"organizations_url": "https://api.github.com/users/hoyyeva/orgs",
"repos_url": "https://api.github.com/users/hoyyeva/repos",
"events_url": "https://api.github.com/users/hoyyeva/events{/privacy}",
"received_events_url": "https://api.github.com/users/hoyyeva/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-14T17:56:08
| 2023-07-21T00:35:20
| 2023-07-14T23:34:25
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/80",
"html_url": "https://github.com/ollama/ollama/pull/80",
"diff_url": "https://github.com/ollama/ollama/pull/80.diff",
"patch_url": "https://github.com/ollama/ollama/pull/80.patch",
"merged_at": "2023-07-14T23:34:25"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/80/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/80/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8611
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8611/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8611/comments
|
https://api.github.com/repos/ollama/ollama/issues/8611/events
|
https://github.com/ollama/ollama/issues/8611
| 2,813,726,762
|
I_kwDOJ0Z1Ps6nthAq
| 8,611
|
/clear not actually clearing
|
{
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/iplayfast/followers",
"following_url": "https://api.github.com/users/iplayfast/following{/other_user}",
"gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions",
"organizations_url": "https://api.github.com/users/iplayfast/orgs",
"repos_url": "https://api.github.com/users/iplayfast/repos",
"events_url": "https://api.github.com/users/iplayfast/events{/privacy}",
"received_events_url": "https://api.github.com/users/iplayfast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 4
| 2025-01-27T18:15:01
| 2025-01-29T21:05:05
| 2025-01-29T21:05:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Steps I've taken that show the bug. There might be a simpler sequence, but this is mine.
1. ollama run hf.co/mradermacher/DS-R1-Distill-Q2.5-7B-RP-GGUF:latest
2. /set parameter num_ctx 16384
3. /save chrisdeepseek
4. /bye
5. ollama run chrisdeepseek
6. create flappybird.py code.
7. (do some testing extra)
8. /bye
9. ollama run chrisdeepseek
10. flappy bird code comes back!
11. /clear
12. /bye
13. ollama run chrisdeepseek
14. flappy bird code comes back!
Seems like the larger context doesn't actually get cleared.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.7
|
{
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/iplayfast/followers",
"following_url": "https://api.github.com/users/iplayfast/following{/other_user}",
"gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions",
"organizations_url": "https://api.github.com/users/iplayfast/orgs",
"repos_url": "https://api.github.com/users/iplayfast/repos",
"events_url": "https://api.github.com/users/iplayfast/events{/privacy}",
"received_events_url": "https://api.github.com/users/iplayfast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8611/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8611/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3903
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3903/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3903/comments
|
https://api.github.com/repos/ollama/ollama/issues/3903/events
|
https://github.com/ollama/ollama/issues/3903
| 2,262,726,380
|
I_kwDOJ0Z1Ps6G3nbs
| 3,903
|
index 0 is out of range for type 'uvm_gpu_chunk_t *[*]'
|
{
"login": "jferments",
"id": 158022198,
"node_id": "U_kgDOCWs6Ng",
"avatar_url": "https://avatars.githubusercontent.com/u/158022198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jferments",
"html_url": "https://github.com/jferments",
"followers_url": "https://api.github.com/users/jferments/followers",
"following_url": "https://api.github.com/users/jferments/following{/other_user}",
"gists_url": "https://api.github.com/users/jferments/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jferments/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jferments/subscriptions",
"organizations_url": "https://api.github.com/users/jferments/orgs",
"repos_url": "https://api.github.com/users/jferments/repos",
"events_url": "https://api.github.com/users/jferments/events{/privacy}",
"received_events_url": "https://api.github.com/users/jferments/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-04-25T05:51:38
| 2024-10-17T19:01:25
| 2024-05-21T17:43:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I am using Ollama on Ubuntu 24.04, and am getting the following error showing up in dmesg:
[ 48.005440] ------------[ cut here ]------------
[ 48.005441] UBSAN: array-index-out-of-bounds in build/nvidia/535.171.04/build/nvidia-uvm/uvm_pmm_gpu.c:2038:44
[ 48.005443] index 0 is out of range for type 'uvm_gpu_chunk_t *[*]'
[ 48.005445] CPU: 24 PID: 3014 Comm: ollama Tainted: P O 6.8.0-31-generic #31-Ubuntu
[ 48.005447] Hardware name: ASUS System Product Name/Pro WS WRX90E-SAGE SE, BIOS 0404 12/20/2023
[ 48.005448] Call Trace:
[ 48.005449] <TASK>
[ 48.005450] dump_stack_lvl+0x48/0x70
[ 48.005453] dump_stack+0x10/0x20
[ 48.005455] __ubsan_handle_out_of_bounds+0xc6/0x110
[ 48.005458] uvm_pmm_gpu_alloc+0x2f5/0x6d0 [nvidia_uvm]
[ 48.005490] phys_mem_allocate+0xac/0x230 [nvidia_uvm]
[ 48.005521] allocate_directory+0xb4/0x130 [nvidia_uvm]
[ 48.005548] ? allocate_directory+0xb4/0x130 [nvidia_uvm]
[ 48.005577] uvm_page_tree_init+0x133/0x450 [nvidia_uvm]
[ 48.005607] uvm_gpu_retain_by_uuid+0x19df/0x2b80 [nvidia_uvm]
[ 48.005639] uvm_va_space_register_gpu+0x47/0x740 [nvidia_uvm]
[ 48.005669] uvm_api_register_gpu+0x5a/0x90 [nvidia_uvm]
[ 48.005696] uvm_ioctl+0x1a26/0x1cd0 [nvidia_uvm]
[ 48.005724] ? srso_alias_return_thunk+0x5/0xfbef5
[ 48.005726] ? xas_find+0x74/0x1e0
[ 48.005728] ? srso_alias_return_thunk+0x5/0xfbef5
[ 48.005731] ? next_uptodate_folio+0xa9/0x320
[ 48.005734] ? srso_alias_return_thunk+0x5/0xfbef5
[ 48.005736] ? filemap_map_pages+0x2fe/0x4c0
[ 48.005739] ? srso_alias_return_thunk+0x5/0xfbef5
[ 48.005741] ? list_lru_add+0xd1/0x140
[ 48.005744] ? srso_alias_return_thunk+0x5/0xfbef5
[ 48.005746] ? _raw_spin_lock_irqsave+0xe/0x20
[ 48.005748] ? srso_alias_return_thunk+0x5/0xfbef5
[ 48.005750] ? thread_context_non_interrupt_add+0x13a/0x250 [nvidia_uvm]
[ 48.005780] uvm_unlocked_ioctl_entry.part.0+0x7b/0xf0 [nvidia_uvm]
[ 48.005808] ? srso_alias_return_thunk+0x5/0xfbef5
[ 48.005811] ? srso_alias_return_thunk+0x5/0xfbef5
[ 48.005813] ? handle_pte_fault+0x114/0x1d0
[ 48.005815] ? srso_alias_return_thunk+0x5/0xfbef5
[ 48.005817] ? __handle_mm_fault+0x653/0x790
[ 48.005820] uvm_unlocked_ioctl_entry+0x6b/0x90 [nvidia_uvm]
[ 48.005847] __x64_sys_ioctl+0xa0/0xf0
[ 48.005850] x64_sys_call+0x143b/0x25c0
[ 48.005853] do_syscall_64+0x7f/0x180
[ 48.005855] ? srso_alias_return_thunk+0x5/0xfbef5
[ 48.005857] ? handle_mm_fault+0xad/0x380
[ 48.005860] ? srso_alias_return_thunk+0x5/0xfbef5
[ 48.005862] ? do_user_addr_fault+0x338/0x6b0
[ 48.005864] ? srso_alias_return_thunk+0x5/0xfbef5
[ 48.005866] ? irqentry_exit_to_user_mode+0x7b/0x260
[ 48.005869] ? srso_alias_return_thunk+0x5/0xfbef5
[ 48.005871] ? irqentry_exit+0x43/0x50
[ 48.005874] ? srso_alias_return_thunk+0x5/0xfbef5
[ 48.005876] ? exc_page_fault+0x94/0x1b0
[ 48.005879] entry_SYSCALL_64_after_hwframe+0x73/0x7b
[ 48.005882] RIP: 0033:0x74615dd24ded
[ 48.005887] Code: 04 25 28 00 00 00 48 89 45 c8 31 c0 48 8d 45 10 c7 45 b0 10 00 00 00 48 89 45 b8 48 8d 45 d0 48 89 45 c0 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 1a 48 8b 45 c8 64 48 2b 04 25 28 00 00 00
[ 48.005889] RSP: 002b:00007460f5fff2d0 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[ 48.005891] RAX: ffffffffffffffda RBX: 00007460ebd00860 RCX: 000074615dd24ded
[ 48.005892] RDX: 00007460f5fff370 RSI: 0000000000000025 RDI: 0000000000000008
[ 48.005894] RBP: 00007460f5fff320 R08: 00007460ebd008f0 R09: 0000000000000000
[ 48.005895] R10: 000074609c02dab0 R11: 0000000000000246 R12: 000074609c0370f6
[ 48.005896] R13: 00007460ebd008f0 R14: 00007460f5fff370 R15: 0000000000000008
[ 48.005900] </TASK>
[ 48.005901] ---[ end trace ]---
I am using an AMD 7965WX CPU, 2 x RTX 4090 GPUs, and an ASUS WRX90E-SAGE motherboard.
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.30
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3903/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/779
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/779/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/779/comments
|
https://api.github.com/repos/ollama/ollama/issues/779/events
|
https://github.com/ollama/ollama/issues/779
| 1,942,202,920
|
I_kwDOJ0Z1Ps5zw6oo
| 779
|
API stream false doesn't seem to work
|
{
"login": "jgunzelman88",
"id": 25258421,
"node_id": "MDQ6VXNlcjI1MjU4NDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/25258421?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jgunzelman88",
"html_url": "https://github.com/jgunzelman88",
"followers_url": "https://api.github.com/users/jgunzelman88/followers",
"following_url": "https://api.github.com/users/jgunzelman88/following{/other_user}",
"gists_url": "https://api.github.com/users/jgunzelman88/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jgunzelman88/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jgunzelman88/subscriptions",
"organizations_url": "https://api.github.com/users/jgunzelman88/orgs",
"repos_url": "https://api.github.com/users/jgunzelman88/repos",
"events_url": "https://api.github.com/users/jgunzelman88/events{/privacy}",
"received_events_url": "https://api.github.com/users/jgunzelman88/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-10-13T15:51:31
| 2023-10-16T18:17:46
| 2023-10-13T16:53:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am trying to use the REST API and am posting the following:
`{
"model": "mistral",
"prompt":"tell me a fancy joke",
"stream": false
}`
And I get the following response. Doesn't `"stream": false` disable partial responses? I am using the [0.1.2](https://hub.docker.com/layers/ollama/ollama/0.1.2/images/sha256-465621f7398c2a51ea1b4a377f70e97905c3605e7ee93cde7a39aa7d7eaec26f?context=explore) image from Docker Hub.
`{
"model": "mistral",
"created_at": "2023-10-13T15:23:36.463078827Z",
"response": "\n",
"done": false
}{
"model": "mistral",
"created_at": "2023-10-13T15:23:36.63979395Z",
"response": "Why",
"done": false
}{
"model": "mistral",
"created_at": "2023-10-13T15:23:36.818460356Z",
"response": " did",
"done": false
}{
"model": "mistral",
"created_at": "2023-10-13T15:23:36.995619125Z",
"response": " the",
"done": false
}{
"model": "mistral",
"created_at": "2023-10-13T15:23:37.168520786Z",
"response": " tom",
"done": false
}{
"model": "mistral",
"created_at": "2023-10-13T15:23:37.33740151Z",
"response": "ato",
"done": false
}{
"model": "mistral",
"created_at": "2023-10-13T15:23:37.512231358Z",
"response": " turn",
"done": false
}{
"model": "mistral",
"created_at": "2023-10-13T15:23:37.682994442Z",
"response": " red",
"done": false
}{
"model": "mistral",
"created_at": "2023-10-13T15:23:37.855768719Z",
"response": "?",
"done": false
}{
"model": "mistral",
"created_at": "2023-10-13T15:23:38.03340991Z",
"response": "\n",
"done": false
}{
"model": "mistral",
"created_at": "2023-10-13T15:23:38.204668394Z",
"response": "\n",
"done": false
}{
"model": "mistral",
"created_at": "2023-10-13T15:23:38.380301847Z",
"response": "Because",
"done": false
}{
"model": "mistral",
"created_at": "2023-10-13T15:23:38.558119569Z",
"response": " it",
"done": false
}{
"model": "mistral",
"created_at": "2023-10-13T15:23:38.737065138Z",
"response": " saw",
"done": false
}{
"model": "mistral",
"created_at": "2023-10-13T15:23:38.910857111Z",
"response": " the",
"done": false
}{
"model": "mistral",
"created_at": "2023-10-13T15:23:39.082354702Z",
"response": " salad",
"done": false
}{
"model": "mistral",
"created_at": "2023-10-13T15:23:39.257027457Z",
"response": " dressing",
"done": false
}{
"model": "mistral",
"created_at": "2023-10-13T15:23:39.434786137Z",
"response": "!",
"done": false
}{
"model": "mistral",
"created_at": "2023-10-13T15:23:39.60969303Z",
"done": true,
"context": [
733,
16289,
28793,
1912,
528,
264,
19602,
13015,
733,
28748,
16289,
28793,
13,
13,
7638,
863,
272,
6679,
1827,
1527,
2760,
28804,
13,
13,
17098,
378,
2672,
272,
25256,
21993,
28808
],
"total_duration": 3323485053,
"load_duration": 792966,
"prompt_eval_count": 1,
"eval_count": 19,
"eval_duration": 3300021000
}`
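As a workaround until `"stream": false` behaves as expected, the streamed chunks can be reassembled client-side; a minimal sketch (the chunk shape is taken from the response above, the parsing approach is my own):

```python
import json

def join_stream(raw: str) -> str:
    """Reassemble the full response text from a string of concatenated
    streaming JSON objects like the ones above, decoding them one at a time."""
    decoder = json.JSONDecoder()
    parts, idx = [], 0
    while idx < len(raw):
        obj, end = decoder.raw_decode(raw, idx)
        if not obj.get("done"):
            parts.append(obj["response"])
        idx = end
        # skip any whitespace between objects
        while idx < len(raw) and raw[idx].isspace():
            idx += 1
    return "".join(parts)

chunks = '{"response": "Why", "done": false}{"response": " not?", "done": false}{"done": true}'
print(join_stream(chunks))  # Why not?
```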
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/779/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5655
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5655/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5655/comments
|
https://api.github.com/repos/ollama/ollama/issues/5655/events
|
https://github.com/ollama/ollama/pull/5655
| 2,406,435,330
|
PR_kwDOJ0Z1Ps51RFtk
| 5,655
|
remove template
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-07-12T22:37:10
| 2024-07-15T20:13:20
| 2024-07-14T03:56:24
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5655",
"html_url": "https://github.com/ollama/ollama/pull/5655",
"diff_url": "https://github.com/ollama/ollama/pull/5655.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5655.patch",
"merged_at": "2024-07-14T03:56:24"
}
|
Remove the broken `/set template` command in the CLI. This is an alternative to #5613 that doesn't add another parameter to the `/api/chat` endpoint.
Given the `/set template` command was broken for 6 months and only one person noticed (thank you @protosam) I think it's probably safe to remove this.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5655/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1624
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1624/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1624/comments
|
https://api.github.com/repos/ollama/ollama/issues/1624/events
|
https://github.com/ollama/ollama/issues/1624
| 2,050,099,050
|
I_kwDOJ0Z1Ps56Mgdq
| 1,624
|
Some questions about embedding api
|
{
"login": "lingen",
"id": 2062865,
"node_id": "MDQ6VXNlcjIwNjI4NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2062865?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lingen",
"html_url": "https://github.com/lingen",
"followers_url": "https://api.github.com/users/lingen/followers",
"following_url": "https://api.github.com/users/lingen/following{/other_user}",
"gists_url": "https://api.github.com/users/lingen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lingen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lingen/subscriptions",
"organizations_url": "https://api.github.com/users/lingen/orgs",
"repos_url": "https://api.github.com/users/lingen/repos",
"events_url": "https://api.github.com/users/lingen/events{/privacy}",
"received_events_url": "https://api.github.com/users/lingen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 8
| 2023-12-20T08:41:11
| 2024-06-18T15:32:19
| 2024-01-11T07:50:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, I have some questions about Ollama's embedding API.
As the Ollama documentation describes, we can use the embedding API like so:
```shell
curl http://localhost:11434/api/embeddings -d '{
"model": "llama2",
"prompt": "Here is an article about llamas..."
}'
```
But this API seems strange to me.
I know that some LLMs such as 'llama2' are not embedding models but text-generation models. There are many models built specifically for embedding, for example the BGE embedding models.
I also know that every embedding model has its own maximum token length and embedding dimension.
So what does Ollama's embedding API actually do?
If I use llama2 through Ollama's embedding API, what are the differences compared to a BGE embedding model? And what is the maximum token length of Ollama's embeddings?
If anyone can answer my questions, I would be very grateful.
|
{
"login": "lingen",
"id": 2062865,
"node_id": "MDQ6VXNlcjIwNjI4NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2062865?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lingen",
"html_url": "https://github.com/lingen",
"followers_url": "https://api.github.com/users/lingen/followers",
"following_url": "https://api.github.com/users/lingen/following{/other_user}",
"gists_url": "https://api.github.com/users/lingen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lingen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lingen/subscriptions",
"organizations_url": "https://api.github.com/users/lingen/orgs",
"repos_url": "https://api.github.com/users/lingen/repos",
"events_url": "https://api.github.com/users/lingen/events{/privacy}",
"received_events_url": "https://api.github.com/users/lingen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1624/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1624/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4033
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4033/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4033/comments
|
https://api.github.com/repos/ollama/ollama/issues/4033/events
|
https://github.com/ollama/ollama/issues/4033
| 2,269,784,675
|
I_kwDOJ0Z1Ps6HSipj
| 4,033
|
incomprehensible answers from Gemma:7b
|
{
"login": "kukidevs",
"id": 113847173,
"node_id": "U_kgDOBskrhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/113847173?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kukidevs",
"html_url": "https://github.com/kukidevs",
"followers_url": "https://api.github.com/users/kukidevs/followers",
"following_url": "https://api.github.com/users/kukidevs/following{/other_user}",
"gists_url": "https://api.github.com/users/kukidevs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kukidevs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kukidevs/subscriptions",
"organizations_url": "https://api.github.com/users/kukidevs/orgs",
"repos_url": "https://api.github.com/users/kukidevs/repos",
"events_url": "https://api.github.com/users/kukidevs/events{/privacy}",
"received_events_url": "https://api.github.com/users/kukidevs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 10
| 2024-04-29T19:20:41
| 2024-05-01T13:13:17
| 2024-05-01T13:13:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
<img width="627" alt="image" src="https://github.com/ollama/ollama/assets/113847173/34b3d3e0-70b2-4695-a86f-f824178e1b68">
Mistral:7b works fine, so I suppose that the issue is related to the model
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.32
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4033/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3451
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3451/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3451/comments
|
https://api.github.com/repos/ollama/ollama/issues/3451/events
|
https://github.com/ollama/ollama/issues/3451
| 2,220,024,227
|
I_kwDOJ0Z1Ps6EUuGj
| 3,451
|
Community based github repo ollama development using agents
|
{
"login": "hemangjoshi37a",
"id": 12392345,
"node_id": "MDQ6VXNlcjEyMzkyMzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/12392345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hemangjoshi37a",
"html_url": "https://github.com/hemangjoshi37a",
"followers_url": "https://api.github.com/users/hemangjoshi37a/followers",
"following_url": "https://api.github.com/users/hemangjoshi37a/following{/other_user}",
"gists_url": "https://api.github.com/users/hemangjoshi37a/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hemangjoshi37a/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hemangjoshi37a/subscriptions",
"organizations_url": "https://api.github.com/users/hemangjoshi37a/orgs",
"repos_url": "https://api.github.com/users/hemangjoshi37a/repos",
"events_url": "https://api.github.com/users/hemangjoshi37a/events{/privacy}",
"received_events_url": "https://api.github.com/users/hemangjoshi37a/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2024-04-02T09:34:47
| 2024-04-19T15:41:26
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
I want to have a system in which users are allowed to donate their GPU resources to the development of this repo using an agent-type framework, where a particular user's system is given a name and a task with a particular agent type (for example, a developer agent or a tester agent). In return, users can gain contributor level in this GitHub repo, plus other perks such as access to new features.
### How should we solve this?
We could add a tab in the settings with a checkbox labeled "donate GPU time". Through it, users could give the developers of this repo access to their GPUs by allotting agents to them. The agents would be allowed to develop code in the context of the existing code and the docs; the contributors of this repo would then manually review the code, and if an agent produced good code, they approve and merge its PR.
### What is the impact of not solving this?
Slow development of this repo.
### Anything else?
N/A
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3451/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3451/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5016
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5016/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5016/comments
|
https://api.github.com/repos/ollama/ollama/issues/5016/events
|
https://github.com/ollama/ollama/issues/5016
| 2,350,516,131
|
I_kwDOJ0Z1Ps6MGgej
| 5,016
|
Integration with MLFlow
|
{
"login": "ulhaqi12",
"id": 44068298,
"node_id": "MDQ6VXNlcjQ0MDY4Mjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/44068298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ulhaqi12",
"html_url": "https://github.com/ulhaqi12",
"followers_url": "https://api.github.com/users/ulhaqi12/followers",
"following_url": "https://api.github.com/users/ulhaqi12/following{/other_user}",
"gists_url": "https://api.github.com/users/ulhaqi12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ulhaqi12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ulhaqi12/subscriptions",
"organizations_url": "https://api.github.com/users/ulhaqi12/orgs",
"repos_url": "https://api.github.com/users/ulhaqi12/repos",
"events_url": "https://api.github.com/users/ulhaqi12/events{/privacy}",
"received_events_url": "https://api.github.com/users/ulhaqi12/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 2
| 2024-06-13T08:25:25
| 2024-11-22T09:05:26
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hey,
Currently, Ollama saves models locally in a cache. To maintain different versions of LLMs (or finetuned ones), and for extensive monitoring, it would be a good idea to provide an integration with MLflow, where all experiments can be logged for better monitoring of the system. I propose integrating Ollama with MLflow to enhance ML lifecycle management, leveraging Ollama's model serving capabilities.
BR,
Ikram
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5016/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5016/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/8674
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8674/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8674/comments
|
https://api.github.com/repos/ollama/ollama/issues/8674/events
|
https://github.com/ollama/ollama/issues/8674
| 2,819,400,345
|
I_kwDOJ0Z1Ps6oDKKZ
| 8,674
|
No compatible GPUs were discovered
|
{
"login": "mikedolx",
"id": 15738117,
"node_id": "MDQ6VXNlcjE1NzM4MTE3",
"avatar_url": "https://avatars.githubusercontent.com/u/15738117?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mikedolx",
"html_url": "https://github.com/mikedolx",
"followers_url": "https://api.github.com/users/mikedolx/followers",
"following_url": "https://api.github.com/users/mikedolx/following{/other_user}",
"gists_url": "https://api.github.com/users/mikedolx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mikedolx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mikedolx/subscriptions",
"organizations_url": "https://api.github.com/users/mikedolx/orgs",
"repos_url": "https://api.github.com/users/mikedolx/repos",
"events_url": "https://api.github.com/users/mikedolx/events{/privacy}",
"received_events_url": "https://api.github.com/users/mikedolx/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 1
| 2025-01-29T21:47:22
| 2025-01-29T22:06:33
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hi,
i'm currently trying to set up ollama within docker. I am using the following `docker-compose.yml`:
```yaml
services:
ollama:
container_name: ollama
restart: unless-stopped
image: ollama/ollama:latest
ports:
- 11434:11434
environment:
- OLLAMA_KEEP_ALIVE=24h
networks:
- ollama-docker
volumes:
- ollama:/root/.ollama
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: "1"
capabilities: [gpu]
ollama-webui:
image: ghcr.io/open-webui/open-webui:main
container_name: ollama-webui
volumes:
- webui:/app/backend/data
depends_on:
- ollama
ports:
- 11080:8080
environment: # https://docs.openwebui.com/getting-started/env-configuration#default_models
- OLLAMA_BASE_URLS=http://host.docker.internal:7869 #comma separated ollama hosts
- ENV=dev
- WEBUI_AUTH=False
- WEBUI_NAME=valiantlynx AI
- WEBUI_URL=http://localhost:8080
- WEBUI_SECRET_KEY=t0p-s3cr3t
extra_hosts:
- host.docker.internal:host-gateway
restart: unless-stopped
networks:
- ollama-docker
volumes:
webui:
ollama:
networks:
ollama-docker:
external: false
```
When i start the containers and check the logs of the ollama container i can see the following logs.
```
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)
2025/01/29 21:41:11 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:24h0m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
time=2025-01-29T21:41:11.597Z level=INFO source=images.go:432 msg="total blobs: 0"
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
time=2025-01-29T21:41:11.597Z level=INFO source=images.go:439 msg="total unused blobs removed: 0"
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
time=2025-01-29T21:41:11.597Z level=INFO source=routes.go:1238 msg="Listening on [::]:11434 (version 0.5.7-0-ga420a45-dirty)"
[GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
time=2025-01-29T21:41:11.598Z level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx]"
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
time=2025-01-29T21:41:11.598Z level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
time=2025-01-29T21:41:11.601Z level=WARN source=gpu.go:623 msg="unknown error initializing cuda driver library /usr/lib/x86_64-linux-gnu/libcuda.so.535.216.03: cuda driver library init failure: 999. see https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md for more information"
```
Apparently, ollama is unable to recognize my GPU.
I can run `nvidia-smi` on the host and get the following result (which tells me that at least on the host everything is correctly installed):
```bash
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.216.03 Driver Version: 535.216.03 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 3060 On | 00000000:00:10.0 Off | N/A |
| 0% 47C P5 18W / 170W | 1MiB / 12288MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| No running processes found |
+---------------------------------------------------------------------------------------+
```
I can run the same command within the ollama container and get this result:
```bash
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.216.03 Driver Version: 535.216.03 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 3060 On | 00000000:00:10.0 Off | N/A |
| 0% 50C P5 18W / 170W | 1MiB / 12288MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| No running processes found |
+---------------------------------------------------------------------------------------+
```
On the host machine i have installed the nvidia-driver using this method: https://ubuntu.com/server/docs/nvidia-drivers-installation.
I have also installed the cuda toolkit following these instructions: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
I have also installed the nvidia-container-toolkit and set it up in the `/etc/docker/daemon.json` accordingly.
I have read all the troubleshooting tips here: https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md and tried the hints. (Yes, i've reloaded the nvidia_uvm module several times).
I have passed through an ASUS RTX 3060 12GB via IOMMU from my proxmox host (v7.x) to the docker-vm, which runs on ubuntu. Apparently, the GPU is working correctly, but fails to be recognized by the ollama container.
This is also how my `/etc/docker/daemon.json` looks like:
```json
{
"runtimes": {
"nvidia": {
"args": [],
"path": "nvidia-container-runtime"
}
},
"exec-opts": ["native.cgroupdriver=cgroupfs"]
}
```
Any ideas what i could try?
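One thing that might be worth trying (an assumption based on common NVIDIA container setups, not a confirmed fix for this report): the `daemon.json` above defines the `nvidia` runtime but does not make it the default, so a container started without an explicit runtime may not get the CUDA driver libraries mounted. A sketch of a `daemon.json` that sets it as the default:

```json
{
  "runtimes": {
    "nvidia": {
      "args": [],
      "path": "nvidia-container-runtime"
    }
  },
  "default-runtime": "nvidia",
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
```

After editing, Docker needs a restart (`systemctl restart docker`). Alternatively, `runtime: nvidia` can be set per-service in the compose file instead of changing the daemon-wide default.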
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
ollama version is 0.5.7-0-ga420a45-dirty
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8674/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/2890
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2890/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2890/comments
|
https://api.github.com/repos/ollama/ollama/issues/2890/events
|
https://github.com/ollama/ollama/pull/2890
| 2,165,257,399
|
PR_kwDOJ0Z1Ps5ogZzD
| 2,890
|
Add instructions for installing via Brew on Mac
|
{
"login": "imthath-m",
"id": 46041492,
"node_id": "MDQ6VXNlcjQ2MDQxNDky",
"avatar_url": "https://avatars.githubusercontent.com/u/46041492?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imthath-m",
"html_url": "https://github.com/imthath-m",
"followers_url": "https://api.github.com/users/imthath-m/followers",
"following_url": "https://api.github.com/users/imthath-m/following{/other_user}",
"gists_url": "https://api.github.com/users/imthath-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imthath-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imthath-m/subscriptions",
"organizations_url": "https://api.github.com/users/imthath-m/orgs",
"repos_url": "https://api.github.com/users/imthath-m/repos",
"events_url": "https://api.github.com/users/imthath-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/imthath-m/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-03-03T08:50:37
| 2024-11-21T09:43:17
| 2024-11-21T09:43:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2890",
"html_url": "https://github.com/ollama/ollama/pull/2890",
"diff_url": "https://github.com/ollama/ollama/pull/2890.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2890.patch",
"merged_at": null
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2890/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2890/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6565
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6565/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6565/comments
|
https://api.github.com/repos/ollama/ollama/issues/6565/events
|
https://github.com/ollama/ollama/issues/6565
| 2,496,427,470
|
I_kwDOJ0Z1Ps6UzHXO
| 6,565
|
Does ollama have a feature to save model responses in the log file?
|
{
"login": "keezen",
"id": 14137944,
"node_id": "MDQ6VXNlcjE0MTM3OTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/14137944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/keezen",
"html_url": "https://github.com/keezen",
"followers_url": "https://api.github.com/users/keezen/followers",
"following_url": "https://api.github.com/users/keezen/following{/other_user}",
"gists_url": "https://api.github.com/users/keezen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/keezen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keezen/subscriptions",
"organizations_url": "https://api.github.com/users/keezen/orgs",
"repos_url": "https://api.github.com/users/keezen/repos",
"events_url": "https://api.github.com/users/keezen/events{/privacy}",
"received_events_url": "https://api.github.com/users/keezen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-08-30T07:12:32
| 2024-12-02T21:57:50
| 2024-12-02T21:57:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
OS: Linux
ollama version: 0.3.7-rc5
model: starcoder2:3b
I am deploying ollama for code completion and set OLLAMA_DEBUG=1, but the log file only saves the model request, not the model response for the completion.
Does ollama have a feature to save model responses in the log file?
Here are the log fragments from ollama serve:
--------------------------------------------------
[GIN] 2024/08/30 - 10:28:57 | 200 | 82.985212ms | 127.0.0.1 | POST "/v1/completions"
time=2024-08-30T10:28:57.412+08:00 level=DEBUG source=sched.go:403 msg="context for request finished"
time=2024-08-30T10:28:57.414+08:00 level=DEBUG source=sched.go:334 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/kas/.ollama/models/blobs/sha256-28bfdfaeba9f51611c00ed322ba684ce6db076756dbc46643f98a8a748c5199e duration=5m0s
time=2024-08-30T10:28:57.414+08:00 level=DEBUG source=sched.go:352 msg="after processing request finished event" modelPath=/home/kas/.ollama/models/blobs/sha256-28bfdfaeba9f51611c00ed322ba684ce6db076756dbc46643f98a8a748c5199e refCount=0
time=2024-08-30T10:28:59.157+08:00 level=DEBUG source=sched.go:571 msg="evaluating already loaded" model=/home/kas/.ollama/models/blobs/sha256-28bfdfaeba9f51611c00ed322ba684ce6db076756dbc46643f98a8a748c5199e
DEBUG [process_single_task] slot data | n_idle_slots=4 n_processing_slots=0 task_id=619 tid="140186734161920" timestamp=1724984939
time=2024-08-30T10:28:59.160+08:00 level=DEBUG source=routes.go:211 msg="generate request" prompt="xxxx"
time=2024-08-30T10:34:20.685+08:00 level=DEBUG source=sched.go:334 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/kas/.ollama/models/blobs/sha256-28bfdfaeba9f51611c00ed322ba684ce6db076756dbc46643f98a8a748c5199e duration=5m0s
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6565/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6565/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7069
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7069/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7069/comments
|
https://api.github.com/repos/ollama/ollama/issues/7069/events
|
https://github.com/ollama/ollama/issues/7069
| 2,560,261,217
|
I_kwDOJ0Z1Ps6Ymnxh
| 7,069
|
Support to loading multiple LLM models on the same GPU
|
{
"login": "DenisMontes",
"id": 92817003,
"node_id": "U_kgDOBYhGaw",
"avatar_url": "https://avatars.githubusercontent.com/u/92817003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DenisMontes",
"html_url": "https://github.com/DenisMontes",
"followers_url": "https://api.github.com/users/DenisMontes/followers",
"following_url": "https://api.github.com/users/DenisMontes/following{/other_user}",
"gists_url": "https://api.github.com/users/DenisMontes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DenisMontes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DenisMontes/subscriptions",
"organizations_url": "https://api.github.com/users/DenisMontes/orgs",
"repos_url": "https://api.github.com/users/DenisMontes/repos",
"events_url": "https://api.github.com/users/DenisMontes/events{/privacy}",
"received_events_url": "https://api.github.com/users/DenisMontes/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-10-01T21:40:33
| 2024-10-01T22:49:42
| 2024-10-01T22:49:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hello! Is it possible to include support for loading multiple LLM models on the same GPU?
I'm trying to create an AI team that will automate tasks, but currently I can only run one inference at a time. I'm using small models around 700MB in size on a GPU with 8GB of VRAM. I know I need space for the context window beyond the neural network layers. So if it's possible to run two models simultaneously in this configuration, I'll double my performance without needing to buy another GPU, which would require a larger power supply and potentially a new motherboard and case.
I have some ideas on how to implement this, and I'm willing to discuss a more in-depth way to make this feature possible.
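For reference, the server log elsewhere in this thread shows `OLLAMA_MAX_LOADED_MODELS` and `OLLAMA_NUM_PARALLEL` among the server config env vars, which appear to control this behavior. A minimal sketch (the values `2`/`2` are an assumption to tune for an 8 GB GPU, not verified on this hardware):

```shell
# Allow up to two models resident in memory at once:
export OLLAMA_MAX_LOADED_MODELS=2
# Allow two parallel requests per loaded model:
export OLLAMA_NUM_PARALLEL=2
# Then start the server with these settings in effect:
# ollama serve
```

Whether both models actually fit on the GPU still depends on their sizes plus context-window memory, as noted above.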
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7069/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7219
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7219/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7219/comments
|
https://api.github.com/repos/ollama/ollama/issues/7219/events
|
https://github.com/ollama/ollama/pull/7219
| 2,590,501,896
|
PR_kwDOJ0Z1Ps5-wyJ9
| 7,219
|
FEAT: add rerank support
|
{
"login": "liuy",
"id": 1192888,
"node_id": "MDQ6VXNlcjExOTI4ODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1192888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liuy",
"html_url": "https://github.com/liuy",
"followers_url": "https://api.github.com/users/liuy/followers",
"following_url": "https://api.github.com/users/liuy/following{/other_user}",
"gists_url": "https://api.github.com/users/liuy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liuy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liuy/subscriptions",
"organizations_url": "https://api.github.com/users/liuy/orgs",
"repos_url": "https://api.github.com/users/liuy/repos",
"events_url": "https://api.github.com/users/liuy/events{/privacy}",
"received_events_url": "https://api.github.com/users/liuy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 37
| 2024-10-16T03:37:43
| 2025-01-14T20:06:47
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/7219",
"html_url": "https://github.com/ollama/ollama/pull/7219",
"diff_url": "https://github.com/ollama/ollama/pull/7219.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7219.patch",
"merged_at": null
}
|
This patch set is trying to solve #3368 by adding reranking support in ollama based on llama.cpp (edc26566), which got reranking support recently.
Basically:
patch 1 - bump llm/llama.cpp to 17bb9280
patch 2 - add rerank support
patch 3 - allow passing extra commands to the llama server before starting a new llmserver
patch 4 - go runner: add rerank support
### TODOs
1. ~~**'--reranking' flag is set and it can only runs rerank model. need to find a way to add '--reranking' flag to the llama server on the fly.**~~
2. ~~only ext_server is supported, need to rebase llama/* for go runner later~~
### Updates
**10.16**
patch 3 - allow passing extra commands to the llama server before starting a new llmserver
solved problem 1.
**10.18**
rebased on the latest branch main
**10.29**
add go runner support.
**Now both llama ext_server and go runner have rerank support in the same way** :)
**10.30**
removed the server.cpp-related patch as requested
### How to test it
#### Download the source code
```
git clone https://github.com/liuy/ollama.git
```
#### Prepare the rerank model (Note: llama.cpp only supports [this](https://huggingface.co/BAAI/bge-reranker-v2-m3) model right now, which is the top multilingual reranking model)
```
git clone git@hf.co:BAAI/bge-reranker-v2-m3
python3 ollama/llm/llama.cpp/convert_hf_to_gguf.py bge-reranker-v2-m3/ --outfile bge-reranker-v2-m3-f16.gguf --outtype f16
echo 'FROM ./bge-reranker-v2-m3-f16.gguf' > modelfile
ollama create bge-reranker-v2-m3 -f modelfile
```
#### Compile: make sure you have go, cmake, gcc installed
```
go generate ./...
go build .
```
#### Run the compiled binary
```
OLLAMA_HOST=127.0.0.1:11435 ./ollama serve # use port 11435 to avoid interfering with a running ollama server
```
#### Open a new terminal
```
curl http://127.0.0.1:11435/api/rerank \
-H "Content-Type: application/json" \
-d '{
"model": "bge-reranker-v2-m3",
"query": "Organic skincare products for sensitive skin",
"top_n": 3,
"documents": [
"Organic skincare for sensitive skin with aloe vera and chamomile: Imagine the soothing embrace of nature with our organic skincare range, crafted specifically for sensitive skin. Infused with the calming properties of aloe vera and chamomile, each product provides gentle nourishment and protection. Say goodbye to irritation and hello to a glowing, healthy complexion.",
"New makeup trends focus on bold colors and innovative techniques: Step into the world of cutting-edge beauty with this seasons makeup trends. Bold, vibrant colors and groundbreaking techniques are redefining the art of makeup. From neon eyeliners to holographic highlighters, unleash your creativity and make a statement with every look.",
"Bio-Hautpflege für empfindliche Haut mit Aloe Vera und Kamille: Erleben Sie die wohltuende Wirkung unserer Bio-Hautpflege, speziell für empfindliche Haut entwickelt. Mit den beruhigenden Eigenschaften von Aloe Vera und Kamille pflegen und schützen unsere Produkte Ihre Haut auf natürliche Weise. Verabschieden Sie sich von Hautirritationen und genießen Sie einen strahlenden Teint.",
"Neue Make-up-Trends setzen auf kräftige Farben und innovative Techniken: Tauchen Sie ein in die Welt der modernen Schönheit mit den neuesten Make-up-Trends. Kräftige, lebendige Farben und innovative Techniken setzen neue Maßstäbe. Von auffälligen Eyelinern bis hin zu holografischen Highlightern – lassen Sie Ihrer Kreativität freien Lauf und setzen Sie jedes Mal ein Statement.",
"Cuidado de la piel orgánico para piel sensible con aloe vera y manzanilla: Descubre el poder de la naturaleza con nuestra línea de cuidado de la piel orgánico, diseñada especialmente para pieles sensibles. Enriquecidos con aloe vera y manzanilla, estos productos ofrecen una hidratación y protección suave. Despídete de las irritaciones y saluda a una piel radiante y saludable.",
"Las nuevas tendencias de maquillaje se centran en colores vivos y técnicas innovadoras: Entra en el fascinante mundo del maquillaje con las tendencias más actuales. Colores vivos y técnicas innovadoras están revolucionando el arte del maquillaje. Desde delineadores neón hasta iluminadores holográficos, desata tu creatividad y destaca en cada look.",
"针对敏感肌专门设计的天然有机护肤产品:体验由芦荟和洋甘菊提取物带来的自然呵护。我们的护肤产品特别为敏感肌设计,温和滋润,保护您的肌肤不受刺激。让您的肌肤告别不适,迎来健康光彩。",
"新的化妆趋势注重鲜艳的颜色和创新的技巧:进入化妆艺术的新纪元,本季的化妆趋势以大胆的颜色和创新的技巧为主。无论是霓虹眼线还是全息高光,每一款妆容都能让您脱颖而出,展现独特魅力。",
"敏感肌のために特別に設計された天然有機スキンケア製品: アロエベラとカモミールのやさしい力で、自然の抱擁を感じてください。敏感肌用に特別に設計された私たちのスキンケア製品は、肌に優しく栄養を与え、保護します。肌トラブルにさようなら、輝く健康な肌にこんにちは。",
"新しいメイクのトレンドは鮮やかな色と革新的な技術に焦点を当てています: 今シーズンのメイクアップトレンドは、大胆な色彩と革新的な技術に注目しています。ネオンアイライナーからホログラフィックハイライターまで、クリエイティビティを解き放ち、毎回ユニークなルックを演出しましょう。"
]
}' | jq
```
#### If you are lucky, you'll get the reranked results in descending order of relevance, like the following:
```
{
"model": "bge-reranker-v2-m3",
"results": [
{
"document": "敏感肌のために特別に設計された天然有機スキンケア製品: アロエベラとカモミールのやさしい力で、自然の抱擁を感じてください。 敏感肌用に特別に設計された私たちのスキンケア製品は、肌に優しく栄養を与え、保護します。肌トラブルにさようなら、輝く健康な肌にこんにちは。",
"relevance_score": 6.4258623
},
{
"document": "Organic skincare for sensitive skin with aloe vera and chamomile: Imagine the soothing embrace of nature with our organic skincare range, crafted specifically for sensitive skin. Infused with the calming properties of aloe vera and chamomile, each product provides gentle nourishment and protection. Say goodbye to irritation and hello to a glowing, healthy complexion.",
"relevance_score": 6.3774652
},
{
"document": "针对敏感肌专门设计的天然有机护肤产品:体验由芦荟和洋甘菊提取物带来的自然呵护。我们的护肤产品特别为敏感肌设计,温和滋润, 保护您的肌肤不受刺激。让您的肌肤告别不适,迎来健康光彩。",
"relevance_score": 5.556752
}
]
}
```
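Independent of the order the server returns, the results can be re-sorted client-side with `jq` (a sketch using the `results`/`relevance_score` field names from the response above; the inline JSON here is sample data, not real model output):

```shell
# Re-sort reranker results by relevance_score, highest first, on the client
echo '{"results":[{"relevance_score":1.2},{"relevance_score":6.4}]}' \
  | jq '.results |= sort_by(-.relevance_score)'
```

This way the descending order does not depend on luck or on server-side behavior.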
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7219/reactions",
"total_count": 39,
"+1": 20,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 8,
"rocket": 11,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7219/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1736
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1736/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1736/comments
|
https://api.github.com/repos/ollama/ollama/issues/1736/events
|
https://github.com/ollama/ollama/issues/1736
| 2,059,061,392
|
I_kwDOJ0Z1Ps56usiQ
| 1,736
|
Download slows to a crawl at 99%
|
{
"login": "Pugio",
"id": 286180,
"node_id": "MDQ6VXNlcjI4NjE4MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/286180?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pugio",
"html_url": "https://github.com/Pugio",
"followers_url": "https://api.github.com/users/Pugio/followers",
"following_url": "https://api.github.com/users/Pugio/following{/other_user}",
"gists_url": "https://api.github.com/users/Pugio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pugio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pugio/subscriptions",
"organizations_url": "https://api.github.com/users/Pugio/orgs",
"repos_url": "https://api.github.com/users/Pugio/repos",
"events_url": "https://api.github.com/users/Pugio/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pugio/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
},
{
"id": 6896227207,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmwwThw",
"url": "https://api.github.com/repos/ollama/ollama/labels/registry",
"name": "registry",
"color": "0052cc",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
},
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 88
| 2023-12-29T04:47:12
| 2025-01-30T06:44:32
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
For every model I've downloaded, the speed saturates my bandwidth (~13MB/sec) until it hits 98/99%. Then the download slows to a few tens of KB/s and takes hour(s) to finish.
<img width="884" alt="image" src="https://github.com/jmorganca/ollama/assets/286180/e47037e1-aea8-4a13-a6fc-7841baa0db6c">
I've tried multiple models and this behavior happens each time. Happy to debug, but I'm not sure what to try.
I'm in Australia, in case that matters.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1736/reactions",
"total_count": 51,
"+1": 47,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 4
}
|
https://api.github.com/repos/ollama/ollama/issues/1736/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/45
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/45/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/45/comments
|
https://api.github.com/repos/ollama/ollama/issues/45/events
|
https://github.com/ollama/ollama/pull/45
| 1,792,042,633
|
PR_kwDOJ0Z1Ps5U1jF3
| 45
|
embed templates
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-07-06T18:34:05
| 2023-07-06T18:36:29
| 2023-07-06T18:36:26
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/45",
"html_url": "https://github.com/ollama/ollama/pull/45",
"diff_url": "https://github.com/ollama/ollama/pull/45.diff",
"patch_url": "https://github.com/ollama/ollama/pull/45.patch",
"merged_at": "2023-07-06T18:36:26"
}
| null |
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/45/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/45/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/385
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/385/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/385/comments
|
https://api.github.com/repos/ollama/ollama/issues/385/events
|
https://github.com/ollama/ollama/issues/385
| 1,857,685,058
|
I_kwDOJ0Z1Ps5uugZC
| 385
|
How to delete a downloaded model file? I can't find the files locally. Thanks
|
{
"login": "bookandlover",
"id": 61039415,
"node_id": "MDQ6VXNlcjYxMDM5NDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/61039415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bookandlover",
"html_url": "https://github.com/bookandlover",
"followers_url": "https://api.github.com/users/bookandlover/followers",
"following_url": "https://api.github.com/users/bookandlover/following{/other_user}",
"gists_url": "https://api.github.com/users/bookandlover/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bookandlover/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bookandlover/subscriptions",
"organizations_url": "https://api.github.com/users/bookandlover/orgs",
"repos_url": "https://api.github.com/users/bookandlover/repos",
"events_url": "https://api.github.com/users/bookandlover/events{/privacy}",
"received_events_url": "https://api.github.com/users/bookandlover/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-19T11:45:54
| 2023-08-19T11:50:11
| 2023-08-19T11:50:11
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null | null |
{
"login": "bookandlover",
"id": 61039415,
"node_id": "MDQ6VXNlcjYxMDM5NDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/61039415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bookandlover",
"html_url": "https://github.com/bookandlover",
"followers_url": "https://api.github.com/users/bookandlover/followers",
"following_url": "https://api.github.com/users/bookandlover/following{/other_user}",
"gists_url": "https://api.github.com/users/bookandlover/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bookandlover/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bookandlover/subscriptions",
"organizations_url": "https://api.github.com/users/bookandlover/orgs",
"repos_url": "https://api.github.com/users/bookandlover/repos",
"events_url": "https://api.github.com/users/bookandlover/events{/privacy}",
"received_events_url": "https://api.github.com/users/bookandlover/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/385/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6065
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6065/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6065/comments
|
https://api.github.com/repos/ollama/ollama/issues/6065/events
|
https://github.com/ollama/ollama/pull/6065
| 2,436,611,067
|
PR_kwDOJ0Z1Ps52z6pv
| 6,065
|
Update and Fix example models
|
{
"login": "thinkverse",
"id": 2221746,
"node_id": "MDQ6VXNlcjIyMjE3NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2221746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thinkverse",
"html_url": "https://github.com/thinkverse",
"followers_url": "https://api.github.com/users/thinkverse/followers",
"following_url": "https://api.github.com/users/thinkverse/following{/other_user}",
"gists_url": "https://api.github.com/users/thinkverse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thinkverse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thinkverse/subscriptions",
"organizations_url": "https://api.github.com/users/thinkverse/orgs",
"repos_url": "https://api.github.com/users/thinkverse/repos",
"events_url": "https://api.github.com/users/thinkverse/events{/privacy}",
"received_events_url": "https://api.github.com/users/thinkverse/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-29T23:53:24
| 2024-07-30T06:56:37
| 2024-07-30T06:56:37
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/6065",
"html_url": "https://github.com/ollama/ollama/pull/6065",
"diff_url": "https://github.com/ollama/ollama/pull/6065.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6065.patch",
"merged_at": "2024-07-30T06:56:37"
}
|
I updated the llama2/3 models to llama3.1, and the gemma models to gemma2, in the examples where needed and where I could test that the examples work. I fixed the dockerit example, which was pointing to a non-existent model. Lastly, I removed a blank, unused README.md file in one of the Go examples.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6065/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3421
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3421/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3421/comments
|
https://api.github.com/repos/ollama/ollama/issues/3421/events
|
https://github.com/ollama/ollama/pull/3421
| 2,216,727,182
|
PR_kwDOJ0Z1Ps5rPMCe
| 3,421
|
add link to chat-ollama UI
|
{
"login": "wgong",
"id": 329928,
"node_id": "MDQ6VXNlcjMyOTkyOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/329928?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wgong",
"html_url": "https://github.com/wgong",
"followers_url": "https://api.github.com/users/wgong/followers",
"following_url": "https://api.github.com/users/wgong/following{/other_user}",
"gists_url": "https://api.github.com/users/wgong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wgong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wgong/subscriptions",
"organizations_url": "https://api.github.com/users/wgong/orgs",
"repos_url": "https://api.github.com/users/wgong/repos",
"events_url": "https://api.github.com/users/wgong/events{/privacy}",
"received_events_url": "https://api.github.com/users/wgong/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-03-31T01:12:19
| 2024-03-31T02:47:02
| 2024-03-31T02:47:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/3421",
"html_url": "https://github.com/ollama/ollama/pull/3421",
"diff_url": "https://github.com/ollama/ollama/pull/3421.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3421.patch",
"merged_at": null
}
|
`ChatOllama` is an open source chatbot based on LLMs. It supports a wide range of language models, including:
- Ollama local models
- OpenAI
- Azure OpenAI
- Anthropic

ChatOllama supports multiple chat types (including RAG). Feature list:
- Ollama models management
- Knowledge bases management
- Chat
- Commercial LLMs API keys management
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3421/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3421/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5501
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5501/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5501/comments
|
https://api.github.com/repos/ollama/ollama/issues/5501/events
|
https://github.com/ollama/ollama/pull/5501
| 2,393,042,851
|
PR_kwDOJ0Z1Ps50j7pY
| 5,501
|
fix typo in cgo directives in `llm.go`
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-05T19:18:20
| 2024-07-05T19:20:11
| 2024-07-05T19:18:37
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5501",
"html_url": "https://github.com/ollama/ollama/pull/5501",
"diff_url": "https://github.com/ollama/ollama/pull/5501.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5501.patch",
"merged_at": "2024-07-05T19:18:37"
}
| null |
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5501/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5501/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5825
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5825/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5825/comments
|
https://api.github.com/repos/ollama/ollama/issues/5825/events
|
https://github.com/ollama/ollama/pull/5825
| 2,421,204,697
|
PR_kwDOJ0Z1Ps52AN-U
| 5,825
|
Remove out of space test temporarily
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-21T04:16:51
| 2024-07-21T04:28:15
| 2024-07-21T04:22:12
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5825",
"html_url": "https://github.com/ollama/ollama/pull/5825",
"diff_url": "https://github.com/ollama/ollama/pull/5825.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5825.patch",
"merged_at": "2024-07-21T04:22:12"
}
|
Removes the out-of-space test, which won't trigger on CI. I wasn't sure if there was a good way to actually test this, since it would involve creating a subprocess in the unit tests.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5825/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5825/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5441
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5441/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5441/comments
|
https://api.github.com/repos/ollama/ollama/issues/5441/events
|
https://github.com/ollama/ollama/pull/5441
| 2,386,975,347
|
PR_kwDOJ0Z1Ps50PPjv
| 5,441
|
cmd: createBlob with copy on disk if local server
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-07-02T19:16:45
| 2024-08-28T18:36:57
| 2024-08-28T18:36:56
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5441",
"html_url": "https://github.com/ollama/ollama/pull/5441",
"diff_url": "https://github.com/ollama/ollama/pull/5441.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5441.patch",
"merged_at": null
}
|
This PR lets users with a local server bypass the blob upload and copy the blob directly into the server's models directory.
Resolves: https://github.com/ollama/ollama/issues/4600
Changes:
- added `Authorization` to the api package to pass in Authorization headers
- changed the `KeyPath` and `PublicKey` methods to return objects instead of strings
TODO:
- cleanup
|
{
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5441/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2458
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2458/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2458/comments
|
https://api.github.com/repos/ollama/ollama/issues/2458/events
|
https://github.com/ollama/ollama/pull/2458
| 2,129,246,838
|
PR_kwDOJ0Z1Ps5mlp-f
| 2,458
|
Add support for running llama.cpp with SYCL for Intel GPUs
|
{
"login": "felipeagc",
"id": 17355488,
"node_id": "MDQ6VXNlcjE3MzU1NDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/17355488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/felipeagc",
"html_url": "https://github.com/felipeagc",
"followers_url": "https://api.github.com/users/felipeagc/followers",
"following_url": "https://api.github.com/users/felipeagc/following{/other_user}",
"gists_url": "https://api.github.com/users/felipeagc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/felipeagc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/felipeagc/subscriptions",
"organizations_url": "https://api.github.com/users/felipeagc/orgs",
"repos_url": "https://api.github.com/users/felipeagc/repos",
"events_url": "https://api.github.com/users/felipeagc/events{/privacy}",
"received_events_url": "https://api.github.com/users/felipeagc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 55
| 2024-02-12T00:26:06
| 2025-01-03T15:17:56
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2458",
"html_url": "https://github.com/ollama/ollama/pull/2458",
"diff_url": "https://github.com/ollama/ollama/pull/2458.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2458.patch",
"merged_at": null
}
|
This is my attempt at adding SYCL support to ollama. ~~It's not working yet, and there are still some parts marked as TODO.~~
~~If anyone wants to take a crack at finishing this PR, I'm currently stuck on this error:~~
```
No kernel named _ZTSZZL17rms_norm_f32_syclPKfPfiifPN4sycl3_V15queueEENKUlRNS3_7handlerEE0_clES7_EUlNS3_7nd_itemILi3EEEE_ was found -46 (PI_ERROR_INVALID_KERNEL_NAME)Exception caught at file:/home/felipe/Code/go/ollama/llm/llama.cpp/ggml-sycl.cpp, line:12708
```
~~It's probably due to the way ollama builds the C++ parts and Intel's compiler not expecting it to be done in this way. The kernels are probably getting eliminated from the binary in some build step.~~
~~I'm not sure when I'm going to have more time to work on this PR, so I'll just leave it here as a draft for now.~~
EDIT: it works now :)
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2458/reactions",
"total_count": 33,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 12,
"confused": 0,
"heart": 21,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2458/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/6266
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6266/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6266/comments
|
https://api.github.com/repos/ollama/ollama/issues/6266/events
|
https://github.com/ollama/ollama/issues/6266
| 2,456,761,463
|
I_kwDOJ0Z1Ps6SbzR3
| 6,266
|
MSBUILD : error MSB1009: Arquivo de projeto não existe. Opção: ollama_llama_server.vcxproj llm\generate\generate_windows.go:3: running "powershell": exit status 1
|
{
"login": "insinfo",
"id": 12227024,
"node_id": "MDQ6VXNlcjEyMjI3MDI0",
"avatar_url": "https://avatars.githubusercontent.com/u/12227024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/insinfo",
"html_url": "https://github.com/insinfo",
"followers_url": "https://api.github.com/users/insinfo/followers",
"following_url": "https://api.github.com/users/insinfo/following{/other_user}",
"gists_url": "https://api.github.com/users/insinfo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/insinfo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/insinfo/subscriptions",
"organizations_url": "https://api.github.com/users/insinfo/orgs",
"repos_url": "https://api.github.com/users/insinfo/repos",
"events_url": "https://api.github.com/users/insinfo/events{/privacy}",
"received_events_url": "https://api.github.com/users/insinfo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-08-08T22:44:56
| 2024-10-16T02:19:58
| 2024-10-16T02:19:58
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I'm trying to compile on Windows and I'm getting this error:
```
PS C:\my_cpp_projects\ollama> Install-Module -Name ThreadJob -Scope CurrentUser
Untrusted repository
You are installing the modules from an untrusted repository. If you trust this repository, change its
InstallationPolicy value by running the Set-PSRepository cmdlet. Are you sure you want to install the modules from
'PSGallery'?
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "N"): a
PS C:\my_cpp_projects\ollama> go generate ./...
Already on 'minicpm-v2.5'
Your branch is up to date with 'origin/minicpm-v2.5'.
Submodule path '../llama.cpp': checked out '65f7455cea443bd9b6fd8546ef53440d6f6d963f'
Checking for MinGW...
CommandType Name Version Source
----------- ---- ------- ------
Application gcc.exe 0.0.0.0 C:\mingw64\bin\gcc.exe
Application mingw32-make.exe 0.0.0.0 C:\mingw64\bin\mingw32-make.exe
Building static library
generating config with: cmake -S ../llama.cpp -B ../build/windows/amd64_static -G MinGW Makefiles -DCMAKE_C_COMPILER=gcc.exe -DCMAKE_CXX_COMPILER=g++.exe -DBUILD_SHARED_LIBS=off -DLLAMA_NATIVE=off -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_F16C=off -DLLAMA_FMA=off
cmake version 3.30.2
CMake suite maintained and supported by Kitware (kitware.com/cmake).
-- OpenMP found
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: AMD64
-- x86 detected
-- Configuring done (0.6s)
-- Generating done (4.9s)
-- Build files have been written to: C:/my_cpp_projects/ollama/llm/build/windows/amd64_static
building with: cmake --build ../build/windows/amd64_static --config Release --target llama --target ggml
[ 0%] Building C object CMakeFiles/ggml.dir/ggml.c.obj
C:\my_cpp_projects\ollama\llm\llama.cpp\ggml.c:84:8: warning: type qualifiers ignored on function return type [-Wignored-qualifiers]
84 | static atomic_bool atomic_flag_test_and_set(atomic_flag * ptr) {
| ^~~~~~~~~~~
[ 16%] Building C object CMakeFiles/ggml.dir/ggml-alloc.c.obj
[ 16%] Building C object CMakeFiles/ggml.dir/ggml-backend.c.obj
[ 33%] Building C object CMakeFiles/ggml.dir/ggml-quants.c.obj
[ 50%] Building CXX object CMakeFiles/ggml.dir/sgemm.cpp.obj
[ 50%] Built target ggml
[ 66%] Building CXX object CMakeFiles/llama.dir/llama.cpp.obj
C:\my_cpp_projects\ollama\llm\llama.cpp\llama.cpp: In member function 'std::string llama_file::GetErrorMessageWin32(DWORD) const':
C:\my_cpp_projects\ollama\llm\llama.cpp\llama.cpp:1319:46: warning: format '%s' expects argument of type 'char*', but argument 2 has type 'DWORD' {aka 'long unsigned int'} [-Wformat=]
1319 | ret = format("Win32 error code: %s", error_code);
| ~^ ~~~~~~~~~~
| | |
| | DWORD {aka long unsigned int}
| char*
| %ld
C:\my_cpp_projects\ollama\llm\llama.cpp\llama.cpp: In constructor 'llama_mmap::llama_mmap(llama_file*, size_t, bool)':
C:\my_cpp_projects\ollama\llm\llama.cpp\llama.cpp:1657:38: warning: cast between incompatible function types from 'FARPROC' {aka 'long long int (*)()'} to 'BOOL (*)(HANDLE, ULONG_PTR, PWIN32_MEMORY_RANGE_ENTRY, ULONG)' {aka 'int (*)(void*, long long unsigned int, _WIN32_MEMORY_RANGE_ENTRY*, long unsigned int)'} [-Wcast-function-type]
1657 | pPrefetchVirtualMemory = reinterpret_cast<decltype(pPrefetchVirtualMemory)> (GetProcAddress(hKernel32, "PrefetchVirtualMemory"));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
C:\my_cpp_projects\ollama\llm\llama.cpp\llama.cpp: In function 'float* llama_get_logits_ith(llama_context*, int32_t)':
C:\my_cpp_projects\ollama\llm\llama.cpp\llama.cpp:18512:65: warning: format '%lu' expects argument of type 'long unsigned int', but argument 2 has type 'std::vector<int>::size_type' {aka 'long long unsigned int'} [-Wformat=]
18512 | throw std::runtime_error(format("out of range [0, %lu)", ctx->output_ids.size()));
| ~~^ ~~~~~~~~~~~~~~~~~~~~~~
| | |
| long unsigned int std::vector<int>::size_type {aka long long unsigned int}
| %llu
C:\my_cpp_projects\ollama\llm\llama.cpp\llama.cpp: In function 'float* llama_get_embeddings_ith(llama_context*, int32_t)':
C:\my_cpp_projects\ollama\llm\llama.cpp\llama.cpp:18557:65: warning: format '%lu' expects argument of type 'long unsigned int', but argument 2 has type 'std::vector<int>::size_type' {aka 'long long unsigned int'} [-Wformat=]
18557 | throw std::runtime_error(format("out of range [0, %lu)", ctx->output_ids.size()));
| ~~^ ~~~~~~~~~~~~~~~~~~~~~~
| | |
| long unsigned int std::vector<int>::size_type {aka long long unsigned int}
| %llu
[ 83%] Building CXX object CMakeFiles/llama.dir/unicode.cpp.obj
[ 83%] Building CXX object CMakeFiles/llama.dir/unicode-data.cpp.obj
[100%] Linking CXX static library libllama.a
[100%] Built target llama
[100%] Built target ggml
Building LCD CPU
generating config with: cmake -S ../llama.cpp -B ../build/windows/amd64/cpu -DCMAKE_POSITION_INDEPENDENT_CODE=on -A x64 -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off -DBUILD_SHARED_LIBS=on -DLLAMA_NATIVE=off -DLLAMA_SERVER_VERBOSE=off -DCMAKE_BUILD_TYPE=Release
cmake version 3.30.2
CMake suite maintained and supported by Kitware (kitware.com/cmake).
-- Building for: Visual Studio 17 2022
-- Selecting Windows SDK version 10.0.22621.0 to target Windows 10.0.19045.
-- The C compiler identification is MSVC 19.40.33813.0
-- The CXX compiler identification is MSVC 19.40.33813.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.40.33807/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.40.33807/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: C:/Program Files/Git/cmd/git.exe (found version "2.46.0.windows.1")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - not found
-- Found Threads: TRUE
-- Found OpenMP_C: -openmp (found version "2.0")
-- Found OpenMP_CXX: -openmp (found version "2.0")
-- Found OpenMP: TRUE (found version "2.0")
-- OpenMP found
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: AMD64
-- CMAKE_GENERATOR_PLATFORM: x64
-- x86 detected
-- Configuring done (28.0s)
-- Generating done (1.1s)
CMake Warning:
Manually-specified variables were not used by the project:
LLAMA_F16C
-- Build files have been written to: C:/my_cpp_projects/ollama/llm/build/windows/amd64/cpu
building with: cmake --build ../build/windows/amd64/cpu --config Release --target ollama_llama_server
Versão do MSBuild 17.10.4+10fbfbf2e para .NET Framework
MSBUILD : error MSB1009: Arquivo de projeto não existe.
Opção: ollama_llama_server.vcxproj
llm\generate\generate_windows.go:3: running "powershell": exit status 1
PS C:\my_cpp_projects\ollama>
```
### OS
Windows
### GPU
Intel
### CPU
Intel
### Ollama version
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6266/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6266/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1883
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1883/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1883/comments
|
https://api.github.com/repos/ollama/ollama/issues/1883/events
|
https://github.com/ollama/ollama/issues/1883
| 2,073,518,651
|
I_kwDOJ0Z1Ps57l2I7
| 1,883
|
/api/tags open to extension without setting OLLAMA_ORIGINS
|
{
"login": "sublimator",
"id": 525211,
"node_id": "MDQ6VXNlcjUyNTIxMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/525211?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sublimator",
"html_url": "https://github.com/sublimator",
"followers_url": "https://api.github.com/users/sublimator/followers",
"following_url": "https://api.github.com/users/sublimator/following{/other_user}",
"gists_url": "https://api.github.com/users/sublimator/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sublimator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sublimator/subscriptions",
"organizations_url": "https://api.github.com/users/sublimator/orgs",
"repos_url": "https://api.github.com/users/sublimator/repos",
"events_url": "https://api.github.com/users/sublimator/events{/privacy}",
"received_events_url": "https://api.github.com/users/sublimator/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-01-10T03:31:54
| 2024-01-11T06:28:42
| 2024-01-11T06:28:41
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I'm not sure what's going on here. I could have sworn that before 0.1.19, ALL endpoints were restricted from chrome extensions. But it now seems I can access /api/tags, a GET request, from an extension without setting OLLAMA_ORIGINS?

Opening this issue as a reminder.
Will investigate more.
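For context on why a GET might slip through: CORS never stops the browser from *sending* a simple GET request; it only governs whether the caller may read the response, and the server decides that by matching the request's Origin header against its allowlist (OLLAMA_ORIGINS). A hypothetical sketch of such a check, not Ollama's actual implementation, with a made-up extension ID:

```python
# Hypothetical origin-allowlist check (illustrative only, not Ollama's code).
# The extension ID below is made up; real IDs come from chrome-extension:// origins.
def origin_allowed(origin: str, allowlist: list[str]) -> bool:
    """Return True if the request's Origin header matches an allowlist entry."""
    return origin in allowlist

allowlist = ["http://localhost", "chrome-extension://example-extension-id"]
```

If the server only applies this gate when emitting CORS headers (rather than when answering the request), a GET still reaches the endpoint; what differs is whether the extension can read the body.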
|
{
"login": "sublimator",
"id": 525211,
"node_id": "MDQ6VXNlcjUyNTIxMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/525211?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sublimator",
"html_url": "https://github.com/sublimator",
"followers_url": "https://api.github.com/users/sublimator/followers",
"following_url": "https://api.github.com/users/sublimator/following{/other_user}",
"gists_url": "https://api.github.com/users/sublimator/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sublimator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sublimator/subscriptions",
"organizations_url": "https://api.github.com/users/sublimator/orgs",
"repos_url": "https://api.github.com/users/sublimator/repos",
"events_url": "https://api.github.com/users/sublimator/events{/privacy}",
"received_events_url": "https://api.github.com/users/sublimator/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1883/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3928
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3928/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3928/comments
|
https://api.github.com/repos/ollama/ollama/issues/3928/events
|
https://github.com/ollama/ollama/issues/3928
| 2,264,856,389
|
I_kwDOJ0Z1Ps6G_vdF
| 3,928
|
rocm crash with 4 gfx900 GPUs
|
{
"login": "ZanMax",
"id": 1073721,
"node_id": "MDQ6VXNlcjEwNzM3MjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1073721?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZanMax",
"html_url": "https://github.com/ZanMax",
"followers_url": "https://api.github.com/users/ZanMax/followers",
"following_url": "https://api.github.com/users/ZanMax/following{/other_user}",
"gists_url": "https://api.github.com/users/ZanMax/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZanMax/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZanMax/subscriptions",
"organizations_url": "https://api.github.com/users/ZanMax/orgs",
"repos_url": "https://api.github.com/users/ZanMax/repos",
"events_url": "https://api.github.com/users/ZanMax/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZanMax/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 10
| 2024-04-26T02:50:38
| 2024-05-06T20:19:18
| 2024-05-06T20:19:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
My system:
Ubuntu: 22.04
CPU: E5 2620
GPU: WX 9100
I have installed drivers and ROCm.
But when I try to run ollama I receive:
> time=2024-04-26T02:45:47.779Z level=INFO source=routes.go:1063 msg="Listening on 127.0.0.1:11434 (version 0.0.0)"
> time=2024-04-26T02:45:47.780Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama3693243001/runners
> time=2024-04-26T02:45:47.780Z level=DEBUG source=payload.go:180 msg=extracting variant=cpu file=build/linux/x86_64/cpu/bin/ollama_llama_server.gz
> time=2024-04-26T02:45:47.780Z level=DEBUG source=payload.go:180 msg=extracting variant=rocm_v0 file=build/linux/x86_64/rocm_v0/bin/deps.txt.gz
> time=2024-04-26T02:45:47.780Z level=DEBUG source=payload.go:180 msg=extracting variant=rocm_v0 file=build/linux/x86_64/rocm_v0/bin/ollama_llama_server.gz
> time=2024-04-26T02:45:48.242Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama3693243001/runners/cpu
> time=2024-04-26T02:45:48.242Z level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama3693243001/runners/rocm_v0
> time=2024-04-26T02:45:48.242Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu rocm_v0]"
> time=2024-04-26T02:45:48.242Z level=DEBUG source=payload.go:45 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
> time=2024-04-26T02:45:48.242Z level=DEBUG source=sched.go:101 msg="starting llm scheduler"
> time=2024-04-26T02:45:48.242Z level=INFO source=gpu.go:96 msg="Detecting GPUs"
> time=2024-04-26T02:45:48.242Z level=DEBUG source=gpu.go:203 msg="Searching for GPU library" name=libcudart.so*
> time=2024-04-26T02:45:48.242Z level=DEBUG source=gpu.go:221 msg="gpu library search" globs="[/tmp/ollama3693243001/runners/cuda*/libcudart.so* /usr/local/cuda/lib64/libcudart.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/x86_64-linux-gnu/libcudart.so* /usr/lib/wsl/lib/libcudart.so* /usr/lib/wsl/drivers/*/libcudart.so* /opt/cuda/lib64/libcudart.so* /usr/local/cuda*/targets/aarch64-linux/lib/libcudart.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libcudart.so* /usr/lib/aarch64-linux-gnu/libcudart.so* /usr/local/cuda/lib*/libcudart.so* /usr/lib*/libcudart.so* /usr/local/lib*/libcudart.so* /opt/rocm/lib/libcudart.so** /home/dev/libcudart.so**]"
> time=2024-04-26T02:45:48.244Z level=DEBUG source=gpu.go:249 msg="discovered GPU libraries" paths=[]
> time=2024-04-26T02:45:48.244Z level=INFO source=cpu_common.go:15 msg="CPU has AVX"
> time=2024-04-26T02:45:48.244Z level=INFO source=amd_linux.go:46 msg="AMD Driver: 6.2.4"
> time=2024-04-26T02:45:48.244Z level=DEBUG source=amd_linux.go:78 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/0/properties"
> time=2024-04-26T02:45:48.244Z level=DEBUG source=amd_linux.go:102 msg="detected CPU /sys/class/kfd/kfd/topology/nodes/0/properties"
> time=2024-04-26T02:45:48.244Z level=DEBUG source=amd_linux.go:78 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/1/properties"
> time=2024-04-26T02:45:48.244Z level=INFO source=amd_linux.go:217 msg="amdgpu memory" gpu=0 total="16368.0 MiB"
> time=2024-04-26T02:45:48.245Z level=INFO source=amd_linux.go:218 msg="amdgpu memory" gpu=0 available="5697.0 MiB"
> time=2024-04-26T02:45:48.245Z level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /opt/rocm/lib"
> time=2024-04-26T02:45:48.245Z level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /home/dev"
> time=2024-04-26T02:45:48.245Z level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /opt/rocm/lib"
> time=2024-04-26T02:45:48.245Z level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /usr/bin/rocm"
> time=2024-04-26T02:45:48.245Z level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /usr/share/ollama/lib/rocm"
> time=2024-04-26T02:45:48.245Z level=WARN source=amd_linux.go:321 msg="amdgpu detected, but no compatible rocm library found. Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
> time=2024-04-26T02:45:48.245Z level=WARN source=amd_linux.go:253 msg="unable to verify rocm library, will use cpu" error="no suitable rocm found, falling back to CPU"
### OS
Linux
### GPU
AMD
### CPU
Intel
### Ollama version
0.1.32
|
{
"login": "ZanMax",
"id": 1073721,
"node_id": "MDQ6VXNlcjEwNzM3MjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1073721?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZanMax",
"html_url": "https://github.com/ZanMax",
"followers_url": "https://api.github.com/users/ZanMax/followers",
"following_url": "https://api.github.com/users/ZanMax/following{/other_user}",
"gists_url": "https://api.github.com/users/ZanMax/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZanMax/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZanMax/subscriptions",
"organizations_url": "https://api.github.com/users/ZanMax/orgs",
"repos_url": "https://api.github.com/users/ZanMax/repos",
"events_url": "https://api.github.com/users/ZanMax/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZanMax/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3928/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4370
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4370/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4370/comments
|
https://api.github.com/repos/ollama/ollama/issues/4370/events
|
https://github.com/ollama/ollama/issues/4370
| 2,291,149,796
|
I_kwDOJ0Z1Ps6IkCvk
| 4,370
|
Ollama’s speed in generating chat content slowed down by tenfold when switching the chat format to JSON
|
{
"login": "XDesktopSoft",
"id": 126927865,
"node_id": "U_kgDOB5DD-Q",
"avatar_url": "https://avatars.githubusercontent.com/u/126927865?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XDesktopSoft",
"html_url": "https://github.com/XDesktopSoft",
"followers_url": "https://api.github.com/users/XDesktopSoft/followers",
"following_url": "https://api.github.com/users/XDesktopSoft/following{/other_user}",
"gists_url": "https://api.github.com/users/XDesktopSoft/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XDesktopSoft/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XDesktopSoft/subscriptions",
"organizations_url": "https://api.github.com/users/XDesktopSoft/orgs",
"repos_url": "https://api.github.com/users/XDesktopSoft/repos",
"events_url": "https://api.github.com/users/XDesktopSoft/events{/privacy}",
"received_events_url": "https://api.github.com/users/XDesktopSoft/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng",
"url": "https://api.github.com/repos/ollama/ollama/labels/performance",
"name": "performance",
"color": "A5B5C6",
"default": false,
"description": ""
},
{
"id": 7706482389,
"node_id": "LA_kwDOJ0Z1Ps8AAAABy1eW1Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/api",
"name": "api",
"color": "bfdadc",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 12
| 2024-05-12T03:31:46
| 2024-12-05T00:36:56
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
As soon as I set the chat format to JSON, Ollama's speed in generating chat content slowed down by roughly tenfold.
For example, when I use the gemma7b model with no chat format set, I get a chat reply in about 0.5s to 1s.
But if I set the chat format to JSON, it usually takes 6-15 seconds to get a chat reply.
Almost every LLM model behaves like this.
Is there any solution to this? Thanks.
code example:
```
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "prompt": "What color is the sky at different times of the day? Respond using JSON",
  "format": "json",
  "stream": false
}'
```
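One thing worth noting about the curl example above: per the Ollama API docs, `/api/chat` expects a `messages` array, while a bare `prompt` string belongs to `/api/generate`. A minimal Python sketch of the two payload shapes (field names only; no request is actually sent):

```python
import json

# Sketch of the two request bodies, assuming the documented Ollama API fields.
# No HTTP request is made here; this only builds and serializes the payloads.
generate_payload = {
    "model": "llama3",
    "prompt": "What color is the sky at different times of the day? Respond using JSON",
    "format": "json",
    "stream": False,
}

chat_payload = {
    "model": "llama3",
    "messages": [
        {"role": "user",
         "content": "What color is the sky at different times of the day? Respond using JSON"}
    ],
    "format": "json",
    "stream": False,
}

# Serialized bodies, ready to POST to /api/generate and /api/chat respectively.
generate_body = json.dumps(generate_payload)
chat_body = json.dumps(chat_payload)
```

Either way, the reported slowdown with `"format": "json"` would still apply; this only separates the two endpoints' request shapes.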
### OS
Windows 10
### GPU
Nvidia RTX4060Ti 16GB VRAM
### CPU
Intel
### Ollama version
0.1.37
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4370/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4370/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1042
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1042/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1042/comments
|
https://api.github.com/repos/ollama/ollama/issues/1042/events
|
https://github.com/ollama/ollama/pull/1042
| 1,983,420,500
|
PR_kwDOJ0Z1Ps5e6tpA
| 1,042
|
progressbar: make start and end seamless
|
{
"login": "mpldr",
"id": 33086936,
"node_id": "MDQ6VXNlcjMzMDg2OTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/33086936?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mpldr",
"html_url": "https://github.com/mpldr",
"followers_url": "https://api.github.com/users/mpldr/followers",
"following_url": "https://api.github.com/users/mpldr/following{/other_user}",
"gists_url": "https://api.github.com/users/mpldr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mpldr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mpldr/subscriptions",
"organizations_url": "https://api.github.com/users/mpldr/orgs",
"repos_url": "https://api.github.com/users/mpldr/repos",
"events_url": "https://api.github.com/users/mpldr/events{/privacy}",
"received_events_url": "https://api.github.com/users/mpldr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2023-11-08T11:50:36
| 2023-11-09T00:42:40
| 2023-11-09T00:42:40
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1042",
"html_url": "https://github.com/ollama/ollama/pull/1042",
"diff_url": "https://github.com/ollama/ollama/pull/1042.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1042.patch",
"merged_at": "2023-11-09T00:42:40"
}
|
This just makes the bars that delimit the progress bar's width hug the bar itself, because it looks nicer.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1042/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1042/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/409
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/409/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/409/comments
|
https://api.github.com/repos/ollama/ollama/issues/409/events
|
https://github.com/ollama/ollama/issues/409
| 1,866,634,833
|
I_kwDOJ0Z1Ps5vQpZR
| 409
|
Custom model based on codellama just outputs blank lines
|
{
"login": "tomduncalf",
"id": 5458070,
"node_id": "MDQ6VXNlcjU0NTgwNzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5458070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomduncalf",
"html_url": "https://github.com/tomduncalf",
"followers_url": "https://api.github.com/users/tomduncalf/followers",
"following_url": "https://api.github.com/users/tomduncalf/following{/other_user}",
"gists_url": "https://api.github.com/users/tomduncalf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomduncalf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomduncalf/subscriptions",
"organizations_url": "https://api.github.com/users/tomduncalf/orgs",
"repos_url": "https://api.github.com/users/tomduncalf/repos",
"events_url": "https://api.github.com/users/tomduncalf/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomduncalf/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-08-25T08:52:15
| 2023-08-30T16:44:23
| 2023-08-30T16:39:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, super cool project, impressed how easy it was to get started!
I created a custom model based on codellama with a system message explaining its task, but when I run it with my input, the model just infinitely outputs blank lines.
I saw on a [comment on Hacker News](https://news.ycombinator.com/item?id=37252690) that this was a more general problem with codellama until you fixed it, so I wondered if you had any specific thoughts or advice on what might be causing this?
Thanks!
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/409/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5610
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5610/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5610/comments
|
https://api.github.com/repos/ollama/ollama/issues/5610/events
|
https://github.com/ollama/ollama/issues/5610
| 2,401,424,638
|
I_kwDOJ0Z1Ps6PItT-
| 5,610
|
/clear - clears the terminal
|
{
"login": "dannyoo",
"id": 13410082,
"node_id": "MDQ6VXNlcjEzNDEwMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/13410082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dannyoo",
"html_url": "https://github.com/dannyoo",
"followers_url": "https://api.github.com/users/dannyoo/followers",
"following_url": "https://api.github.com/users/dannyoo/following{/other_user}",
"gists_url": "https://api.github.com/users/dannyoo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dannyoo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dannyoo/subscriptions",
"organizations_url": "https://api.github.com/users/dannyoo/orgs",
"repos_url": "https://api.github.com/users/dannyoo/repos",
"events_url": "https://api.github.com/users/dannyoo/events{/privacy}",
"received_events_url": "https://api.github.com/users/dannyoo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 7
| 2024-07-10T18:42:15
| 2024-09-06T16:01:26
| 2024-07-12T16:07:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Greetings, I love using ollama.
How can we make the /clear command clear the terminal as well?
Thanks
|
{
"login": "dannyoo",
"id": 13410082,
"node_id": "MDQ6VXNlcjEzNDEwMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/13410082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dannyoo",
"html_url": "https://github.com/dannyoo",
"followers_url": "https://api.github.com/users/dannyoo/followers",
"following_url": "https://api.github.com/users/dannyoo/following{/other_user}",
"gists_url": "https://api.github.com/users/dannyoo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dannyoo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dannyoo/subscriptions",
"organizations_url": "https://api.github.com/users/dannyoo/orgs",
"repos_url": "https://api.github.com/users/dannyoo/repos",
"events_url": "https://api.github.com/users/dannyoo/events{/privacy}",
"received_events_url": "https://api.github.com/users/dannyoo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5610/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5610/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3640
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3640/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3640/comments
|
https://api.github.com/repos/ollama/ollama/issues/3640/events
|
https://github.com/ollama/ollama/issues/3640
| 2,242,234,223
|
I_kwDOJ0Z1Ps6Fpcdv
| 3,640
|
Offering to help with readme PRs
|
{
"login": "mrdjohnson",
"id": 6767910,
"node_id": "MDQ6VXNlcjY3Njc5MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6767910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrdjohnson",
"html_url": "https://github.com/mrdjohnson",
"followers_url": "https://api.github.com/users/mrdjohnson/followers",
"following_url": "https://api.github.com/users/mrdjohnson/following{/other_user}",
"gists_url": "https://api.github.com/users/mrdjohnson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrdjohnson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrdjohnson/subscriptions",
"organizations_url": "https://api.github.com/users/mrdjohnson/orgs",
"repos_url": "https://api.github.com/users/mrdjohnson/repos",
"events_url": "https://api.github.com/users/mrdjohnson/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrdjohnson/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-04-14T15:58:56
| 2024-04-16T10:33:41
| 2024-04-16T10:33:41
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
There are a lot of PRs that come in for new UIs and similar additions; I'd like to offer to review and accept them as they come in.
### How should we solve this?
Just keep being great
### What is the impact of not solving this?
None; my UI is already in the list 😅, but I know there is a lot for y'all to work on. This isn't much, but I'd like to help.
### Anything else?
Just wanting to say thank you again.
|
{
"login": "mrdjohnson",
"id": 6767910,
"node_id": "MDQ6VXNlcjY3Njc5MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6767910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrdjohnson",
"html_url": "https://github.com/mrdjohnson",
"followers_url": "https://api.github.com/users/mrdjohnson/followers",
"following_url": "https://api.github.com/users/mrdjohnson/following{/other_user}",
"gists_url": "https://api.github.com/users/mrdjohnson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrdjohnson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrdjohnson/subscriptions",
"organizations_url": "https://api.github.com/users/mrdjohnson/orgs",
"repos_url": "https://api.github.com/users/mrdjohnson/repos",
"events_url": "https://api.github.com/users/mrdjohnson/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrdjohnson/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3640/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6764
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6764/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6764/comments
|
https://api.github.com/repos/ollama/ollama/issues/6764/events
|
https://github.com/ollama/ollama/issues/6764
| 2,520,919,217
|
I_kwDOJ0Z1Ps6WQiyx
| 6,764
|
llama3.1:70B fp16 not working on nvidia H100
|
{
"login": "AliAhmedNada",
"id": 17008257,
"node_id": "MDQ6VXNlcjE3MDA4MjU3",
"avatar_url": "https://avatars.githubusercontent.com/u/17008257?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AliAhmedNada",
"html_url": "https://github.com/AliAhmedNada",
"followers_url": "https://api.github.com/users/AliAhmedNada/followers",
"following_url": "https://api.github.com/users/AliAhmedNada/following{/other_user}",
"gists_url": "https://api.github.com/users/AliAhmedNada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AliAhmedNada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AliAhmedNada/subscriptions",
"organizations_url": "https://api.github.com/users/AliAhmedNada/orgs",
"repos_url": "https://api.github.com/users/AliAhmedNada/repos",
"events_url": "https://api.github.com/users/AliAhmedNada/events{/privacy}",
"received_events_url": "https://api.github.com/users/AliAhmedNada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
},
{
"id": 6849881759,
"node_id": "LA_kwDOJ0Z1Ps8AAAABmEjmnw",
"url": "https://api.github.com/repos/ollama/ollama/labels/memory",
"name": "memory",
"color": "5017EA",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4
| 2024-09-11T22:44:08
| 2024-09-15T06:08:40
| 2024-09-15T06:08:40
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Hello
I am trying to run llama3.1 70B fp16, and it does not seem to work properly for some reason:
```
root@xxxxx:/home/ollama/models# ollama run llama3.1:70b-instruct-fp16
Error: llama runner process no longer running: -1
```
How can I investigate this?
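For what it's worth, a rough back-of-envelope calculation suggests memory is the first thing to check: fp16 stores two bytes per parameter, so the weights of a 70B model alone are around 140 GB, beyond the 80 GB of a single H100. This sketch only estimates weight storage (KV cache and activations need additional memory, and the figures are approximations, not measurements):

```python
def fp16_weights_gb(params_billion: float) -> float:
    # fp16 uses 2 bytes per parameter: 1e9 params * 2 bytes ≈ 2 GB per billion
    return params_billion * 2.0

model_gb = fp16_weights_gb(70)   # ≈ 140 GB for the weights alone
h100_vram_gb = 80.0              # a single H100 has 80 GB of HBM
print(model_gb, model_gb > h100_vram_gb)  # prints: 140.0 True
```

If the weights alone exceed available VRAM, the runner process can die at load time with exactly this kind of "no longer running" error; the server log is the place to confirm.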
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.10
|
{
"login": "AliAhmedNada",
"id": 17008257,
"node_id": "MDQ6VXNlcjE3MDA4MjU3",
"avatar_url": "https://avatars.githubusercontent.com/u/17008257?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AliAhmedNada",
"html_url": "https://github.com/AliAhmedNada",
"followers_url": "https://api.github.com/users/AliAhmedNada/followers",
"following_url": "https://api.github.com/users/AliAhmedNada/following{/other_user}",
"gists_url": "https://api.github.com/users/AliAhmedNada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AliAhmedNada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AliAhmedNada/subscriptions",
"organizations_url": "https://api.github.com/users/AliAhmedNada/orgs",
"repos_url": "https://api.github.com/users/AliAhmedNada/repos",
"events_url": "https://api.github.com/users/AliAhmedNada/events{/privacy}",
"received_events_url": "https://api.github.com/users/AliAhmedNada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6764/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5273
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5273/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5273/comments
|
https://api.github.com/repos/ollama/ollama/issues/5273/events
|
https://github.com/ollama/ollama/issues/5273
| 2,372,371,704
|
I_kwDOJ0Z1Ps6NZ4T4
| 5,273
|
2024-June-25 conda-forge ollama v0.1.17 is too old
|
{
"login": "polySugar",
"id": 156925923,
"node_id": "U_kgDOCVp_4w",
"avatar_url": "https://avatars.githubusercontent.com/u/156925923?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polySugar",
"html_url": "https://github.com/polySugar",
"followers_url": "https://api.github.com/users/polySugar/followers",
"following_url": "https://api.github.com/users/polySugar/following{/other_user}",
"gists_url": "https://api.github.com/users/polySugar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polySugar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polySugar/subscriptions",
"organizations_url": "https://api.github.com/users/polySugar/orgs",
"repos_url": "https://api.github.com/users/polySugar/repos",
"events_url": "https://api.github.com/users/polySugar/events{/privacy}",
"received_events_url": "https://api.github.com/users/polySugar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-06-25T10:55:05
| 2024-07-02T21:27:31
| 2024-07-02T21:27:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Please update the version in conda-forge after normal security checks.
https://anaconda.org/conda-forge/ollama
Current versions as of 2024-June-25:
linux-64 v0.1.17
osx-64 v0.1.17
osx-arm64 v0.1.17
win-64 v0.1.17
This is too old and lacks compatibility with recent models. Please note that I appreciate your efforts; this feature request is just a reminder.
https://stackoverflow.com/a/67134507
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5273/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2842
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2842/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2842/comments
|
https://api.github.com/repos/ollama/ollama/issues/2842/events
|
https://github.com/ollama/ollama/issues/2842
| 2,161,904,474
|
I_kwDOJ0Z1Ps6A3Ata
| 2,842
|
Official Desktop GUI app
|
{
"login": "trymeouteh",
"id": 31172274,
"node_id": "MDQ6VXNlcjMxMTcyMjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/31172274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trymeouteh",
"html_url": "https://github.com/trymeouteh",
"followers_url": "https://api.github.com/users/trymeouteh/followers",
"following_url": "https://api.github.com/users/trymeouteh/following{/other_user}",
"gists_url": "https://api.github.com/users/trymeouteh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trymeouteh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trymeouteh/subscriptions",
"organizations_url": "https://api.github.com/users/trymeouteh/orgs",
"repos_url": "https://api.github.com/users/trymeouteh/repos",
"events_url": "https://api.github.com/users/trymeouteh/events{/privacy}",
"received_events_url": "https://api.github.com/users/trymeouteh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 7
| 2024-02-29T18:54:56
| 2025-01-27T10:58:46
| 2024-03-12T00:24:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Please consider making an official GUI app for Ollama that runs on Windows, MacOS and Linux.
The official GUI app will install the Ollama CLI and the Ollama GUI.
The GUI will let you do what can be done with the Ollama CLI, which is mostly managing models and configuring Ollama, essentially making the Ollama GUI a user-friendly settings app for Ollama.
Or even perhaps a desktop and mobile GUI app written in Dart/Flutter?
https://github.com/ollama/ollama/issues/2843
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2842/reactions",
"total_count": 12,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 6,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2842/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7215
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7215/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7215/comments
|
https://api.github.com/repos/ollama/ollama/issues/7215/events
|
https://github.com/ollama/ollama/issues/7215
| 2,589,952,465
|
I_kwDOJ0Z1Ps6aX4nR
| 7,215
|
Model change hash detection feature
|
{
"login": "jasonculligan",
"id": 15697557,
"node_id": "MDQ6VXNlcjE1Njk3NTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/15697557?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jasonculligan",
"html_url": "https://github.com/jasonculligan",
"followers_url": "https://api.github.com/users/jasonculligan/followers",
"following_url": "https://api.github.com/users/jasonculligan/following{/other_user}",
"gists_url": "https://api.github.com/users/jasonculligan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jasonculligan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jasonculligan/subscriptions",
"organizations_url": "https://api.github.com/users/jasonculligan/orgs",
"repos_url": "https://api.github.com/users/jasonculligan/repos",
"events_url": "https://api.github.com/users/jasonculligan/events{/privacy}",
"received_events_url": "https://api.github.com/users/jasonculligan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 0
| 2024-10-15T21:17:57
| 2024-10-15T21:27:16
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When I download a model and then manually flip a bit or two in the blob, there is no apparent check or complaint that the hash has changed. I've tried both quitting a running ollama prompt that uses the model and restarting it, and stopping and restarting the service. I can of course confirm that the SHA256 does indeed change. This leads me to believe that there are no checksum tests to ensure the model has not been changed on disk, either accidentally or maliciously, and therefore no protection against, or detection of, compromised models.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7215/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7215/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/7356
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7356/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7356/comments
|
https://api.github.com/repos/ollama/ollama/issues/7356/events
|
https://github.com/ollama/ollama/issues/7356
| 2,613,937,482
|
I_kwDOJ0Z1Ps6bzYVK
| 7,356
|
Console shows formatting backslash chars
|
{
"login": "lsalamon",
"id": 235938,
"node_id": "MDQ6VXNlcjIzNTkzOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/235938?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lsalamon",
"html_url": "https://github.com/lsalamon",
"followers_url": "https://api.github.com/users/lsalamon/followers",
"following_url": "https://api.github.com/users/lsalamon/following{/other_user}",
"gists_url": "https://api.github.com/users/lsalamon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lsalamon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lsalamon/subscriptions",
"organizations_url": "https://api.github.com/users/lsalamon/orgs",
"repos_url": "https://api.github.com/users/lsalamon/repos",
"events_url": "https://api.github.com/users/lsalamon/events{/privacy}",
"received_events_url": "https://api.github.com/users/lsalamon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 10
| 2024-10-25T12:12:16
| 2024-10-29T13:39:27
| 2024-10-29T13:39:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Why does the answer to any question show unformatted text in the console, like this:
<code>A number \\( a \\) has a multiplicative inverse modulo \\( n \\) if there exists an integer \\( b \\) such that:
\\[ (a \\cdot b) \\equiv 1 \\pmod{n} \\]</code>
Ollama Win64 0.3.14.0; the older version 0.3.10.0 has the same issue.
I've noticed that in daily use the problem doesn't always appear.
### OS
Windows
### GPU
Other
### CPU
AMD
### Ollama version
0.3.14.0
|
{
"login": "lsalamon",
"id": 235938,
"node_id": "MDQ6VXNlcjIzNTkzOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/235938?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lsalamon",
"html_url": "https://github.com/lsalamon",
"followers_url": "https://api.github.com/users/lsalamon/followers",
"following_url": "https://api.github.com/users/lsalamon/following{/other_user}",
"gists_url": "https://api.github.com/users/lsalamon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lsalamon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lsalamon/subscriptions",
"organizations_url": "https://api.github.com/users/lsalamon/orgs",
"repos_url": "https://api.github.com/users/lsalamon/repos",
"events_url": "https://api.github.com/users/lsalamon/events{/privacy}",
"received_events_url": "https://api.github.com/users/lsalamon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7356/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7356/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1970
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1970/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1970/comments
|
https://api.github.com/repos/ollama/ollama/issues/1970/events
|
https://github.com/ollama/ollama/pull/1970
| 2,079,806,496
|
PR_kwDOJ0Z1Ps5j-hzw
| 1,970
|
feat: add flag for specifying port number
|
{
"login": "P3rtang",
"id": 51847616,
"node_id": "MDQ6VXNlcjUxODQ3NjE2",
"avatar_url": "https://avatars.githubusercontent.com/u/51847616?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/P3rtang",
"html_url": "https://github.com/P3rtang",
"followers_url": "https://api.github.com/users/P3rtang/followers",
"following_url": "https://api.github.com/users/P3rtang/following{/other_user}",
"gists_url": "https://api.github.com/users/P3rtang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/P3rtang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/P3rtang/subscriptions",
"organizations_url": "https://api.github.com/users/P3rtang/orgs",
"repos_url": "https://api.github.com/users/P3rtang/repos",
"events_url": "https://api.github.com/users/P3rtang/events{/privacy}",
"received_events_url": "https://api.github.com/users/P3rtang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-01-12T22:19:48
| 2024-01-21T17:32:25
| 2024-01-18T23:23:36
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | true
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1970",
"html_url": "https://github.com/ollama/ollama/pull/1970",
"diff_url": "https://github.com/ollama/ollama/pull/1970.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1970.patch",
"merged_at": null
}
|
I haven't opened an issue about this since it is already possible to change the default port Ollama uses with an environment variable.
But in my opinion it would be more convenient to have the port as a flag as well, mostly because I often end up running two instances of Ollama: one with GPU acceleration and one without.
The thing I'm most unsure about is having to modify the ```ClientFromEnvironment``` function to accept the cobra cmd in order to read the port flag (this might be the very reason it's currently done only via the env variable).
This is more of a concept pull request, and I would love an opinion on the idea.
|
{
"login": "P3rtang",
"id": 51847616,
"node_id": "MDQ6VXNlcjUxODQ3NjE2",
"avatar_url": "https://avatars.githubusercontent.com/u/51847616?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/P3rtang",
"html_url": "https://github.com/P3rtang",
"followers_url": "https://api.github.com/users/P3rtang/followers",
"following_url": "https://api.github.com/users/P3rtang/following{/other_user}",
"gists_url": "https://api.github.com/users/P3rtang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/P3rtang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/P3rtang/subscriptions",
"organizations_url": "https://api.github.com/users/P3rtang/orgs",
"repos_url": "https://api.github.com/users/P3rtang/repos",
"events_url": "https://api.github.com/users/P3rtang/events{/privacy}",
"received_events_url": "https://api.github.com/users/P3rtang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1970/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/1313
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1313/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1313/comments
|
https://api.github.com/repos/ollama/ollama/issues/1313/events
|
https://github.com/ollama/ollama/issues/1313
| 2,016,590,913
|
I_kwDOJ0Z1Ps54MrxB
| 1,313
|
Publish checksums of release binaries
|
{
"login": "davlgd",
"id": 1110600,
"node_id": "MDQ6VXNlcjExMTA2MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1110600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davlgd",
"html_url": "https://github.com/davlgd",
"followers_url": "https://api.github.com/users/davlgd/followers",
"following_url": "https://api.github.com/users/davlgd/following{/other_user}",
"gists_url": "https://api.github.com/users/davlgd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davlgd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davlgd/subscriptions",
"organizations_url": "https://api.github.com/users/davlgd/orgs",
"repos_url": "https://api.github.com/users/davlgd/repos",
"events_url": "https://api.github.com/users/davlgd/events{/privacy}",
"received_events_url": "https://api.github.com/users/davlgd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1
| 2023-11-29T13:26:11
| 2024-03-21T08:17:32
| 2024-03-21T08:17:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
If we download binaries for manual installation, there is no sha256/sha512 sum available to check integrity. It would be great to have them in the release assets.
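For reference, verifying a downloaded asset against a published checksum is straightforward; a minimal Go sketch (the expected hash below is the well-known SHA-256 test vector for the string "hello", used as a stand-in for a real release checksum):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// verify compares the SHA-256 digest of data against a hex-encoded
// expected checksum, as would be published alongside a release asset.
func verify(data []byte, expectedHex string) bool {
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:]) == expectedHex
}

func main() {
	ok := verify([]byte("hello"),
		"2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824")
	fmt.Println(ok) // true
}
```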
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1313/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1313/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3570
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3570/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3570/comments
|
https://api.github.com/repos/ollama/ollama/issues/3570/events
|
https://github.com/ollama/ollama/issues/3570
| 2,234,799,590
|
I_kwDOJ0Z1Ps6FNFXm
| 3,570
|
More details in Model Info
|
{
"login": "corani",
"id": 480775,
"node_id": "MDQ6VXNlcjQ4MDc3NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/480775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/corani",
"html_url": "https://github.com/corani",
"followers_url": "https://api.github.com/users/corani/followers",
"following_url": "https://api.github.com/users/corani/following{/other_user}",
"gists_url": "https://api.github.com/users/corani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/corani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/corani/subscriptions",
"organizations_url": "https://api.github.com/users/corani/orgs",
"repos_url": "https://api.github.com/users/corani/repos",
"events_url": "https://api.github.com/users/corani/events{/privacy}",
"received_events_url": "https://api.github.com/users/corani/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2
| 2024-04-10T05:44:26
| 2024-06-19T21:19:03
| 2024-06-19T21:19:03
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What are you trying to do?
Would it be possible to expose more information in the model info (https://github.com/ollama/ollama/blob/main/docs/api.md#show-model-information) API, such as the context length, embedding length, etc.? Basically, the more the merrier 😄
This would be useful to provide additional information to the user when building a service that wraps Ollama.
### How should we solve this?
_No response_
### What is the impact of not solving this?
_No response_
### Anything else?
_No response_
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3570/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3570/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4811
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4811/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4811/comments
|
https://api.github.com/repos/ollama/ollama/issues/4811/events
|
https://github.com/ollama/ollama/issues/4811
| 2,333,132,365
|
I_kwDOJ0Z1Ps6LEMZN
| 4,811
|
ollama qwen long text problem
|
{
"login": "kaka2008",
"id": 211139,
"node_id": "MDQ6VXNlcjIxMTEzOQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/211139?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaka2008",
"html_url": "https://github.com/kaka2008",
"followers_url": "https://api.github.com/users/kaka2008/followers",
"following_url": "https://api.github.com/users/kaka2008/following{/other_user}",
"gists_url": "https://api.github.com/users/kaka2008/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaka2008/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaka2008/subscriptions",
"organizations_url": "https://api.github.com/users/kaka2008/orgs",
"repos_url": "https://api.github.com/users/kaka2008/repos",
"events_url": "https://api.github.com/users/kaka2008/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaka2008/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 4
| 2024-06-04T10:16:02
| 2024-10-17T16:28:19
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
When I use Ollama to deploy the [qwen:110b-chat-v1.5-q6_K](https://ollama.com/library/qwen:110b-chat-v1.5-q5_K_M) model,
and the context (Chinese) exceeds around 3000 characters (not a precise figure), it fails to recognize the system prompt.
I tried increasing num_ctx to 32768 or max_tokens to 32768, both by calling Ollama directly and via the OpenAI API, but neither had any effect.
I saw someone mention setting a dynamic factor, but couldn't find where to set it in Ollama.
How can I resolve this issue? Thank you.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.33
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4811/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/379
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/379/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/379/comments
|
https://api.github.com/repos/ollama/ollama/issues/379/events
|
https://github.com/ollama/ollama/issues/379
| 1,856,674,284
|
I_kwDOJ0Z1Ps5uqpns
| 379
|
Windows usage broken
|
{
"login": "FairyTail2000",
"id": 22645621,
"node_id": "MDQ6VXNlcjIyNjQ1NjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/22645621?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FairyTail2000",
"html_url": "https://github.com/FairyTail2000",
"followers_url": "https://api.github.com/users/FairyTail2000/followers",
"following_url": "https://api.github.com/users/FairyTail2000/following{/other_user}",
"gists_url": "https://api.github.com/users/FairyTail2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FairyTail2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FairyTail2000/subscriptions",
"organizations_url": "https://api.github.com/users/FairyTail2000/orgs",
"repos_url": "https://api.github.com/users/FairyTail2000/repos",
"events_url": "https://api.github.com/users/FairyTail2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/FairyTail2000/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null | 3
| 2023-08-18T12:51:40
| 2023-08-30T21:16:26
| 2023-08-30T21:16:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
While testing my own frontend for Ollama on Windows with the newest version, I noticed that Ollama seems to be a bit broken:
- While loading tags, the `filepath.Walk` in server/routes.go:340 returns null. Why? Because on line 355, `slashIndex := strings.LastIndex(path, "/")` returns -1, since a Windows path contains `\` rather than `/`. The fix for Windows is to replace the character in the code and recompile.
- In server/modelpath.go:82, the next problem is on line 87: `path := filepath.Join(home, ".ollama", "models", "manifests", mp.Registry, mp.Namespace, mp.Repository, mp.Tag)` is the problem here, at least for the codeup model.
The parts that make up the path look like this: 'C:\Users\<censored> .ollama models manifests registry.ollama.ai library registry.ollama.ai\library\codeup latest' for codeup, which obviously won't work, as the path gets joined together as 'C:\Users\<censored>\.ollama\models\manifests\registry.ollama.ai\library\registry.ollama.ai\library\codeup\latest'.
The fix, at least for the codeup model, is:
`path := filepath.Join(home, ".ollama", "models", "manifests", mp.Repository, mp.Tag)`
I haven't tested this with other models, however, as I don't have lightning speeds and have little to no Go experience.
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/379/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5551
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5551/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5551/comments
|
https://api.github.com/repos/ollama/ollama/issues/5551/events
|
https://github.com/ollama/ollama/pull/5551
| 2,396,639,589
|
PR_kwDOJ0Z1Ps50v8-D
| 5,551
|
OpenAI v1/completions: allow stop token list
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-08T21:51:21
| 2024-07-09T21:01:28
| 2024-07-09T21:01:27
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5551",
"html_url": "https://github.com/ollama/ollama/pull/5551",
"diff_url": "https://github.com/ollama/ollama/pull/5551.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5551.patch",
"merged_at": "2024-07-09T21:01:27"
}
|
Resolves #5545
|
{
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5551/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5551/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2170
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2170/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2170/comments
|
https://api.github.com/repos/ollama/ollama/issues/2170/events
|
https://github.com/ollama/ollama/issues/2170
| 2,098,012,279
|
I_kwDOJ0Z1Ps59DSB3
| 2,170
|
Question: Are `qwen:72b-chat` and `qwen:72b-text` about to be added to `ollama.ai`?
|
{
"login": "jukofyork",
"id": 69222624,
"node_id": "MDQ6VXNlcjY5MjIyNjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/69222624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jukofyork",
"html_url": "https://github.com/jukofyork",
"followers_url": "https://api.github.com/users/jukofyork/followers",
"following_url": "https://api.github.com/users/jukofyork/following{/other_user}",
"gists_url": "https://api.github.com/users/jukofyork/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jukofyork/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jukofyork/subscriptions",
"organizations_url": "https://api.github.com/users/jukofyork/orgs",
"repos_url": "https://api.github.com/users/jukofyork/repos",
"events_url": "https://api.github.com/users/jukofyork/events{/privacy}",
"received_events_url": "https://api.github.com/users/jukofyork/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-01-24T11:00:40
| 2024-01-30T05:03:06
| 2024-01-24T17:23:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I was just about to download/quantize the transformer models from Hugging Face, but noticed `qwen` was added to `ollama.ai` and wondered if `qwen:72b-chat` and `qwen:72b-text` were about to be added?
It says this on the 'Overview' page:
>This model is offered in four different parameter size tags:
>
>- `qwen:1.8b`
>- `qwen:7b (default)`
>- `qwen:14b`
>- `qwen:72b`
But there are no 72b variants listed on the 'Tags' page. I tried `ollama pull qwen:72b-chat-q8_0` to see if it might just be unlisted, but it returns `Error: pull model manifest: file does not exist`.
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2170/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/77
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/77/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/77/comments
|
https://api.github.com/repos/ollama/ollama/issues/77/events
|
https://github.com/ollama/ollama/pull/77
| 1,803,557,331
|
PR_kwDOJ0Z1Ps5Vcwlo
| 77
|
continue conversation
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-07-13T18:30:47
| 2023-07-14T21:57:47
| 2023-07-14T21:57:42
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/77",
"html_url": "https://github.com/ollama/ollama/pull/77",
"diff_url": "https://github.com/ollama/ollama/pull/77.diff",
"patch_url": "https://github.com/ollama/ollama/pull/77.patch",
"merged_at": "2023-07-14T21:57:42"
}
|
feed responses back into the llm
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/77/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/77/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/2274
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2274/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2274/comments
|
https://api.github.com/repos/ollama/ollama/issues/2274/events
|
https://github.com/ollama/ollama/issues/2274
| 2,107,737,683
|
I_kwDOJ0Z1Ps59oYZT
| 2,274
|
EDIT: `codellama-70b-instruct` is so censored it's basically useless, but there's useful info in the thread so will leave it open...
|
{
"login": "jukofyork",
"id": 69222624,
"node_id": "MDQ6VXNlcjY5MjIyNjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/69222624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jukofyork",
"html_url": "https://github.com/jukofyork",
"followers_url": "https://api.github.com/users/jukofyork/followers",
"following_url": "https://api.github.com/users/jukofyork/following{/other_user}",
"gists_url": "https://api.github.com/users/jukofyork/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jukofyork/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jukofyork/subscriptions",
"organizations_url": "https://api.github.com/users/jukofyork/orgs",
"repos_url": "https://api.github.com/users/jukofyork/repos",
"events_url": "https://api.github.com/users/jukofyork/events{/privacy}",
"received_events_url": "https://api.github.com/users/jukofyork/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 17
| 2024-01-30T12:23:02
| 2024-03-11T22:20:20
| 2024-03-11T22:20:20
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I pulled the 8-bit quant overnight using `ollama pull codellama:70b-instruct-q8_0` and seem to be having problems with it.
I've tried the default Ollama modelfile and also what I think is the correct prompt template, based on the `tokenizer_config.json` that was added overnight:
```
TEMPLATE """{{ if .First }}<s>{{ end }}{{ if and .First .System }}Source: system
{{ .System }} <step> {{ end }}Source: user
{{ .Prompt }} <step> Source: assistant
Destination: user
{{ .Response }}"""
```
but both just give me this:
```
I cannot fulfill your request as it goes against ethical and moral principles, and may potentially violate laws and regulations.
```
when I ask it to refactor some very SFW (lol!) Java code?
Is there some chance the base and instruct models have got mixed up? I don't want to pull another 70GB just to find the same problem...
Anybody else having any luck with running `codellama-70b-instruct`?
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2274/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2274/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1630
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1630/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1630/comments
|
https://api.github.com/repos/ollama/ollama/issues/1630/events
|
https://github.com/ollama/ollama/issues/1630
| 2,050,516,738
|
I_kwDOJ0Z1Ps56OGcC
| 1,630
|
cant type
|
{
"login": "RootnuII",
"id": 66104474,
"node_id": "MDQ6VXNlcjY2MTA0NDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/66104474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RootnuII",
"html_url": "https://github.com/RootnuII",
"followers_url": "https://api.github.com/users/RootnuII/followers",
"following_url": "https://api.github.com/users/RootnuII/following{/other_user}",
"gists_url": "https://api.github.com/users/RootnuII/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RootnuII/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RootnuII/subscriptions",
"organizations_url": "https://api.github.com/users/RootnuII/orgs",
"repos_url": "https://api.github.com/users/RootnuII/repos",
"events_url": "https://api.github.com/users/RootnuII/events{/privacy}",
"received_events_url": "https://api.github.com/users/RootnuII/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 12
| 2023-12-20T13:06:28
| 2024-11-13T19:37:40
| 2023-12-23T09:59:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Help, I'm running it and can't type.

|
{
"login": "RootnuII",
"id": 66104474,
"node_id": "MDQ6VXNlcjY2MTA0NDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/66104474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RootnuII",
"html_url": "https://github.com/RootnuII",
"followers_url": "https://api.github.com/users/RootnuII/followers",
"following_url": "https://api.github.com/users/RootnuII/following{/other_user}",
"gists_url": "https://api.github.com/users/RootnuII/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RootnuII/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RootnuII/subscriptions",
"organizations_url": "https://api.github.com/users/RootnuII/orgs",
"repos_url": "https://api.github.com/users/RootnuII/repos",
"events_url": "https://api.github.com/users/RootnuII/events{/privacy}",
"received_events_url": "https://api.github.com/users/RootnuII/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1630/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1630/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/333
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/333/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/333/comments
|
https://api.github.com/repos/ollama/ollama/issues/333/events
|
https://github.com/ollama/ollama/pull/333
| 1,847,205,158
|
PR_kwDOJ0Z1Ps5XvzuE
| 333
|
ggml: fix off by one error
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-08-11T17:45:56
| 2023-08-11T17:51:08
| 2023-08-11T17:51:07
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/333",
"html_url": "https://github.com/ollama/ollama/pull/333",
"diff_url": "https://github.com/ollama/ollama/pull/333.diff",
"patch_url": "https://github.com/ollama/ollama/pull/333.patch",
"merged_at": "2023-08-11T17:51:07"
}
|
remove unused Unknown FileType
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/333/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/333/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7172
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7172/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7172/comments
|
https://api.github.com/repos/ollama/ollama/issues/7172/events
|
https://github.com/ollama/ollama/issues/7172
| 2,581,066,146
|
I_kwDOJ0Z1Ps6Z1_Gi
| 7,172
|
Llama 3.2 11B and 90B
|
{
"login": "DewiarQR",
"id": 64423698,
"node_id": "MDQ6VXNlcjY0NDIzNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/64423698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DewiarQR",
"html_url": "https://github.com/DewiarQR",
"followers_url": "https://api.github.com/users/DewiarQR/followers",
"following_url": "https://api.github.com/users/DewiarQR/following{/other_user}",
"gists_url": "https://api.github.com/users/DewiarQR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DewiarQR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DewiarQR/subscriptions",
"organizations_url": "https://api.github.com/users/DewiarQR/orgs",
"repos_url": "https://api.github.com/users/DewiarQR/repos",
"events_url": "https://api.github.com/users/DewiarQR/events{/privacy}",
"received_events_url": "https://api.github.com/users/DewiarQR/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-10-11T10:49:59
| 2024-10-11T23:11:11
| 2024-10-11T23:11:11
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
In your blog post https://ollama.com/blog/llama3.2 you wrote that you would soon add the multimodal Llama 3.2 11B and 90B Vision models. Today I see that you have added two new models to your system, but the promised Llama 3.2 models are not there yet. Can you roughly estimate when they will appear? We are really looking forward to them and have been checking their availability on your site constantly for two weeks now!)))
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7172/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/1932
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1932/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1932/comments
|
https://api.github.com/repos/ollama/ollama/issues/1932/events
|
https://github.com/ollama/ollama/pull/1932
| 2,077,695,946
|
PR_kwDOJ0Z1Ps5j3S_0
| 1,932
|
api: add model for all requests
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-01-11T22:16:22
| 2024-01-18T22:56:52
| 2024-01-18T22:56:51
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/1932",
"html_url": "https://github.com/ollama/ollama/pull/1932",
"diff_url": "https://github.com/ollama/ollama/pull/1932.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1932.patch",
"merged_at": "2024-01-18T22:56:51"
}
|
Prefer using `req.Model` and fall back to `req.Name`. `req.Model` is already the field name for generate and chat, which are by far the most popular endpoints. This change aligns the other requests.
Also update `CopyRequest.Destination` to `CopyRequest.Target`, which better describes the field.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1932/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/5747
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5747/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5747/comments
|
https://api.github.com/repos/ollama/ollama/issues/5747/events
|
https://github.com/ollama/ollama/issues/5747
| 2,413,549,255
|
I_kwDOJ0Z1Ps6P29bH
| 5,747
|
Support to Intel NPU by Intel NPU Acceleration Library
|
{
"login": "lordpba",
"id": 40633120,
"node_id": "MDQ6VXNlcjQwNjMzMTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/40633120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lordpba",
"html_url": "https://github.com/lordpba",
"followers_url": "https://api.github.com/users/lordpba/followers",
"following_url": "https://api.github.com/users/lordpba/following{/other_user}",
"gists_url": "https://api.github.com/users/lordpba/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lordpba/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lordpba/subscriptions",
"organizations_url": "https://api.github.com/users/lordpba/orgs",
"repos_url": "https://api.github.com/users/lordpba/repos",
"events_url": "https://api.github.com/users/lordpba/events{/privacy}",
"received_events_url": "https://api.github.com/users/lordpba/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6677491450,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgJu-g",
"url": "https://api.github.com/repos/ollama/ollama/labels/intel",
"name": "intel",
"color": "226E5B",
"default": false,
"description": "issues relating to Intel GPUs"
}
] |
open
| false
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-07-17T12:52:16
| 2025-01-29T13:15:06
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It would be great to add support for the new and future Intel Neural Processing Units (NPUs).
There is already a library for this, https://github.com/intel/intel-npu-acceleration-library, and it works well with e.g. Phi-3.
I am sure that NPUs will be everywhere, and they will be a viable alternative to CUDA and Nvidia GPUs, imho.
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5747/reactions",
"total_count": 51,
"+1": 51,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5747/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/1402
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/1402/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/1402/comments
|
https://api.github.com/repos/ollama/ollama/issues/1402/events
|
https://github.com/ollama/ollama/issues/1402
| 2,028,973,688
|
I_kwDOJ0Z1Ps547654
| 1,402
|
Optimum-NVIDIA - Unlock blazingly fast LLM inference in just 1 line of code
|
{
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/iplayfast/followers",
"following_url": "https://api.github.com/users/iplayfast/following{/other_user}",
"gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions",
"organizations_url": "https://api.github.com/users/iplayfast/orgs",
"repos_url": "https://api.github.com/users/iplayfast/repos",
"events_url": "https://api.github.com/users/iplayfast/events{/privacy}",
"received_events_url": "https://api.github.com/users/iplayfast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2023-12-06T16:45:20
| 2024-01-11T03:18:38
| 2024-01-11T03:18:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
This article makes it look worthwhile to implement a check for whether Nvidia is installed and add that line of code. It looks like it's quantizing to an 8-bit float.
https://huggingface.co/blog/optimum-nvidia
|
{
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/iplayfast/followers",
"following_url": "https://api.github.com/users/iplayfast/following{/other_user}",
"gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions",
"organizations_url": "https://api.github.com/users/iplayfast/orgs",
"repos_url": "https://api.github.com/users/iplayfast/repos",
"events_url": "https://api.github.com/users/iplayfast/events{/privacy}",
"received_events_url": "https://api.github.com/users/iplayfast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/1402/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/1402/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/7907
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7907/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7907/comments
|
https://api.github.com/repos/ollama/ollama/issues/7907/events
|
https://github.com/ollama/ollama/issues/7907
| 2,711,199,763
|
I_kwDOJ0Z1Ps6hmaAT
| 7,907
|
llama3.2:3b-instruct-fp16 - truncating input prompt limit=2048 prompt=17624 keep=5 new=2048
|
{
"login": "Arslan-Mehmood1",
"id": 51626734,
"node_id": "MDQ6VXNlcjUxNjI2NzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/51626734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arslan-Mehmood1",
"html_url": "https://github.com/Arslan-Mehmood1",
"followers_url": "https://api.github.com/users/Arslan-Mehmood1/followers",
"following_url": "https://api.github.com/users/Arslan-Mehmood1/following{/other_user}",
"gists_url": "https://api.github.com/users/Arslan-Mehmood1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Arslan-Mehmood1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arslan-Mehmood1/subscriptions",
"organizations_url": "https://api.github.com/users/Arslan-Mehmood1/orgs",
"repos_url": "https://api.github.com/users/Arslan-Mehmood1/repos",
"events_url": "https://api.github.com/users/Arslan-Mehmood1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Arslan-Mehmood1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-12-02T09:33:28
| 2024-12-14T15:37:22
| 2024-12-14T15:37:22
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
**Platform: Google Colab**
**GPU : Nvidia T4**
**RAM : 12.7 GB**
**Python: 3.10.12**
**Why is the input prompt getting truncated to 2048?**
```
time=2024-12-02T09:25:57.024Z level=INFO source=sched.go:507 msg="updated VRAM based on existing loaded models" gpu=GPU-cf31ce6b-6d8b-ba6f-ba51-6655c750dcf4 library=cuda total="14.7 GiB" available="11.0 GiB"
time=2024-12-02T09:25:57.027Z level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-e2f46f5b501c2982b2c495a4694cb4e620aabfa2c37ebb23a90ffc8cce93854b gpu=GPU-cf31ce6b-6d8b-ba6f-ba51-6655c750dcf4 parallel=4 available=11863298048 required="7.9 GiB"
time=2024-12-02T09:25:57.244Z level=INFO source=server.go:105 msg="system memory" total="12.7 GiB" free="10.6 GiB" free_swap="0 B"
time=2024-12-02T09:25:57.245Z level=INFO source=memory.go:343 msg="offload to cuda" layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[11.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="7.9 GiB" memory.required.partial="7.9 GiB" memory.required.kv="896.0 MiB" memory.required.allocations="[7.9 GiB]" memory.weights.total="6.1 GiB" memory.weights.repeating="5.4 GiB" memory.weights.nonrepeating="751.5 MiB" memory.graph.full="424.0 MiB" memory.graph.partial="570.7 MiB"
time=2024-12-02T09:25:57.252Z level=INFO source=server.go:380 msg="starting llama server" cmd="/tmp/ollama4267948165/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-e2f46f5b501c2982b2c495a4694cb4e620aabfa2c37ebb23a90ffc8cce93854b --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --threads 1 --parallel 4 --port 45759"
time=2024-12-02T09:25:57.252Z level=INFO source=sched.go:449 msg="loaded runners" count=2
time=2024-12-02T09:25:57.252Z level=INFO source=server.go:559 msg="waiting for llama runner to start responding"
time=2024-12-02T09:25:57.253Z level=INFO source=server.go:593 msg="waiting for server to become available" status="llm server error"
time=2024-12-02T09:25:57.607Z level=INFO source=runner.go:939 msg="starting go runner"
time=2024-12-02T09:25:57.607Z level=INFO source=runner.go:940 msg=system info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | cgo(gcc)" threads=1
time=2024-12-02T09:25:57.607Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:45759"
llama_model_loader: loaded meta data with 30 key-value pairs and 255 tensors from /root/.ollama/models/blobs/sha256-e2f46f5b501c2982b2c495a4694cb4e620aabfa2c37ebb23a90ffc8cce93854b (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Llama 3.2 3B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Llama-3.2
llama_model_loader: - kv 5: general.size_label str = 3B
llama_model_loader: - kv 6: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv 7: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv 8: llama.block_count u32 = 28
llama_model_loader: - kv 9: llama.context_length u32 = 131072
llama_model_loader: - kv 10: llama.embedding_length u32 = 3072
llama_model_loader: - kv 11: llama.feed_forward_length u32 = 8192
llama_model_loader: - kv 12: llama.attention.head_count u32 = 24
llama_model_loader: - kv 13: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 14: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 15: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 16: llama.attention.key_length u32 = 128
llama_model_loader: - kv 17: llama.attention.value_length u32 = 128
llama_model_loader: - kv 18: general.file_type u32 = 1
llama_model_loader: - kv 19: llama.vocab_size u32 = 128256
llama_model_loader: - kv 20: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 21: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 22: tokenizer.ggml.pre str = llama-bpe
time=2024-12-02T09:25:57.756Z level=INFO source=server.go:593 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv 23: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 24: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 25: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 26: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 27: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 28: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 58 tensors
llama_model_loader: - type f16: 197 tensors
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 3072
llm_load_print_meta: n_layer = 28
llm_load_print_meta: n_head = 24
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 3
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 8192
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 3B
llm_load_print_meta: model ftype = F16
llm_load_print_meta: model params = 3.21 B
llm_load_print_meta: model size = 5.98 GiB (16.00 BPW)
llm_load_print_meta: general.name = Llama 3.2 3B Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: Tesla T4, compute capability 7.5, VMM: yes
llm_load_tensors: ggml ctx size = 0.24 MiB
llm_load_tensors: offloading 28 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 29/29 layers to GPU
llm_load_tensors: CPU buffer size = 751.50 MiB
llm_load_tensors: CUDA0 buffer size = 6128.17 MiB
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 896.00 MiB
llama_new_context_with_model: KV self size = 896.00 MiB, K (f16): 448.00 MiB, V (f16): 448.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.00 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 424.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 22.01 MiB
llama_new_context_with_model: graph nodes = 902
llama_new_context_with_model: graph splits = 2
time=2024-12-02T09:26:24.141Z level=INFO source=server.go:598 msg="llama runner started in 26.89 seconds"
----------------------
**_time=2024-12-02T09:26:24.289Z level=WARN source=runner.go:129 msg="truncating input prompt" limit=2048 prompt=17624 keep=5 new=2048_**
----------------------
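The highlighted warning shows a 17,624-token prompt being truncated to the default 2,048-token context window. One way to raise the limit is through a Modelfile; a minimal sketch, where the base model name and the 8192 value are illustrative assumptions, not taken from the report:

```
# Hypothetical Modelfile: model name and num_ctx value are illustrative
FROM llama3.2
PARAMETER num_ctx 8192
```

The `num_ctx` option can also be passed per request in the `options` object of `/api/generate`.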
llama_model_loader: loaded meta data with 30 key-value pairs and 255 tensors from /root/.ollama/models/blobs/sha256-e2f46f5b501c2982b2c495a4694cb4e620aabfa2c37ebb23a90ffc8cce93854b (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Llama 3.2 3B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Llama-3.2
llama_model_loader: - kv 5: general.size_label str = 3B
llama_model_loader: - kv 6: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv 7: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv 8: llama.block_count u32 = 28
llama_model_loader: - kv 9: llama.context_length u32 = 131072
llama_model_loader: - kv 10: llama.embedding_length u32 = 3072
llama_model_loader: - kv 11: llama.feed_forward_length u32 = 8192
llama_model_loader: - kv 12: llama.attention.head_count u32 = 24
llama_model_loader: - kv 13: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 14: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 15: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 16: llama.attention.key_length u32 = 128
llama_model_loader: - kv 17: llama.attention.value_length u32 = 128
llama_model_loader: - kv 18: general.file_type u32 = 1
llama_model_loader: - kv 19: llama.vocab_size u32 = 128256
llama_model_loader: - kv 20: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 21: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 22: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 23: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 24: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 25: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 26: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 27: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 28: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv 29: general.quantization_version u32 = 2
llama_model_loader: - type f32: 58 tensors
llama_model_loader: - type f16: 197 tensors
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = all F32
llm_load_print_meta: model params = 3.21 B
llm_load_print_meta: model size = 5.98 GiB (16.00 BPW)
llm_load_print_meta: general.name = Llama 3.2 3B Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llama_model_load: vocab only - skipping tensors
[GIN] 2024/12/02 - 09:26:42 | 200 | 46.031081065s | 127.0.0.1 | POST "/api/generate"
```
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.7
|
{
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7907/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3697
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3697/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3697/comments
|
https://api.github.com/repos/ollama/ollama/issues/3697/events
|
https://github.com/ollama/ollama/issues/3697
| 2,248,126,394
|
I_kwDOJ0Z1Ps6F_6-6
| 3,697
|
No llama.cpp acknowledgement
|
{
"login": "survirtual",
"id": 20385618,
"node_id": "MDQ6VXNlcjIwMzg1NjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/20385618?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/survirtual",
"html_url": "https://github.com/survirtual",
"followers_url": "https://api.github.com/users/survirtual/followers",
"following_url": "https://api.github.com/users/survirtual/following{/other_user}",
"gists_url": "https://api.github.com/users/survirtual/gists{/gist_id}",
"starred_url": "https://api.github.com/users/survirtual/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/survirtual/subscriptions",
"organizations_url": "https://api.github.com/users/survirtual/orgs",
"repos_url": "https://api.github.com/users/survirtual/repos",
"events_url": "https://api.github.com/users/survirtual/events{/privacy}",
"received_events_url": "https://api.github.com/users/survirtual/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 3
| 2024-04-17T12:04:14
| 2024-04-17T18:23:16
| 2024-04-17T17:49:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
This project is heavily dependent on [llama.cpp](https://github.com/ggerganov/llama.cpp), as seen [in this search](https://github.com/search?q=repo%3Aollama%2Follama%20llama.cpp&type=code), but the readme does not mention it. That omission creates unnecessary friction and distaste for the project in developer communities, and it also makes it harder to understand how the project works at a lower level.
### What did you expect to see?
Add an acknowledgement about llama.cpp. An example of what this could look like can be found in libraries such as Rust's [axum](https://github.com/tokio-rs/axum), which is built on top of [hyper](https://github.com/hyperium/hyper), where there are several mentions of hyper:
> ## Performance
>
> `axum` is a relatively thin layer on top of [`hyper`] and adds very little
> overhead. So `axum`'s performance is comparable to [`hyper`]. You can find
> benchmarks [here](https://github.com/programatik29/rust-web-benchmarks) and
> [here](https://web-frameworks-benchmark.netlify.app/result?l=rust).
And at the bottom, there are acknowledgement links:
> ...
> [`tower`]: https://crates.io/crates/tower
> [`hyper`]: https://crates.io/crates/hyper
> [`tower-http`]: https://crates.io/crates/tower-http
>...
Something similar can be done for Ollama, which completely alleviates the perception issues, along with helping users and developers get a deeper understanding of the package.
### Steps to reproduce
Go to https://github.com/ollama/ollama, search the readme for llama.cpp, feel a tiny sting of disappointment that such a good project doesn't have any acknowledgements :(
### Are there any recent changes that introduced the issue?
_No response_
### OS
_No response_
### Architecture
_No response_
### Platform
_No response_
### Ollama version
_No response_
### GPU
_No response_
### GPU info
_No response_
### CPU
_No response_
### Other software
_No response_
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3697/reactions",
"total_count": 35,
"+1": 35,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3697/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/6702
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/6702/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/6702/comments
|
https://api.github.com/repos/ollama/ollama/issues/6702/events
|
https://github.com/ollama/ollama/issues/6702
| 2,512,472,182
|
I_kwDOJ0Z1Ps6VwUh2
| 6,702
|
Problem Serving Custom LLAMA3 Using Google Cloud Run
|
{
"login": "Oluwafemi-Jegede",
"id": 39559350,
"node_id": "MDQ6VXNlcjM5NTU5MzUw",
"avatar_url": "https://avatars.githubusercontent.com/u/39559350?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Oluwafemi-Jegede",
"html_url": "https://github.com/Oluwafemi-Jegede",
"followers_url": "https://api.github.com/users/Oluwafemi-Jegede/followers",
"following_url": "https://api.github.com/users/Oluwafemi-Jegede/following{/other_user}",
"gists_url": "https://api.github.com/users/Oluwafemi-Jegede/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Oluwafemi-Jegede/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oluwafemi-Jegede/subscriptions",
"organizations_url": "https://api.github.com/users/Oluwafemi-Jegede/orgs",
"repos_url": "https://api.github.com/users/Oluwafemi-Jegede/repos",
"events_url": "https://api.github.com/users/Oluwafemi-Jegede/events{/privacy}",
"received_events_url": "https://api.github.com/users/Oluwafemi-Jegede/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 6677677816,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgVG-A",
"url": "https://api.github.com/repos/ollama/ollama/labels/docker",
"name": "docker",
"color": "0052CC",
"default": false,
"description": "Issues relating to using ollama in containers"
}
] |
closed
| false
| null |
[] | null | 16
| 2024-09-08T16:41:56
| 2024-11-06T00:37:44
| 2024-11-06T00:37:44
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I can run a custom LLAMA3 model locally using this docker config
```
FROM ollama/ollama:latest
COPY custom_llama.txt /App/custom_llama.txt
WORKDIR /App
RUN ollama serve & sleep 5 && ollama create ai-agent -f custom_llama.txt && ollama run ai-agent
EXPOSE 11434
```
However, when I deploy on GCP Cloud Run, I don't see any models running: `$URL/api/tags` returns `{"models":[]}`, even though the homepage says `ollama is running`.
FYI: Custom model is LLAMA3:8B
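A hedged sketch of the build stage, assuming the goal is to bake the model into an image layer and let the base image's entrypoint start the server; the interactive `ollama run` from the original RUN line is dropped, since a build has no TTY (paths and the model name simply mirror the report above):

```
# Sketch only; paths and model name mirror the report, not a verified fix.
FROM ollama/ollama:latest
COPY custom_llama.txt /App/custom_llama.txt
WORKDIR /App
# Start a temporary server, create the model, and let this layer capture /root/.ollama.
RUN ollama serve & sleep 5 && ollama create ai-agent -f custom_llama.txt
EXPOSE 11434
# The ollama/ollama base image's entrypoint already runs `ollama serve` at container start.
```

After deployment, `$URL/api/tags` can confirm whether the baked model is actually visible at runtime.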
### OS
Docker
### GPU
_No response_
### CPU
_No response_
### Ollama version
LLAMA3
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/6702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/6702/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3367
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3367/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3367/comments
|
https://api.github.com/repos/ollama/ollama/issues/3367/events
|
https://github.com/ollama/ollama/issues/3367
| 2,210,021,757
|
I_kwDOJ0Z1Ps6DukF9
| 3,367
|
Failed to load dynamic library on Windows when the user name and path have Chinese characters.
|
{
"login": "mili-tan",
"id": 24996957,
"node_id": "MDQ6VXNlcjI0OTk2OTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/24996957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mili-tan",
"html_url": "https://github.com/mili-tan",
"followers_url": "https://api.github.com/users/mili-tan/followers",
"following_url": "https://api.github.com/users/mili-tan/following{/other_user}",
"gists_url": "https://api.github.com/users/mili-tan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mili-tan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mili-tan/subscriptions",
"organizations_url": "https://api.github.com/users/mili-tan/orgs",
"repos_url": "https://api.github.com/users/mili-tan/repos",
"events_url": "https://api.github.com/users/mili-tan/events{/privacy}",
"received_events_url": "https://api.github.com/users/mili-tan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5
| 2024-03-27T06:59:27
| 2024-05-05T18:21:05
| 2024-05-05T18:21:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
Dynamic libraries fail to load on Windows when the user name and path contain Chinese characters.
After I switched to a user name without Chinese characters, the problem no longer occurred.
```
time=2024-03-27T14:21:24.451+08:00 level=INFO source=images.go:710 msg="total blobs: 3"
time=2024-03-27T14:21:24.476+08:00 level=INFO source=images.go:717 msg="total unused blobs removed: 0"
time=2024-03-27T14:21:24.477+08:00 level=INFO source=routes.go:1021 msg="Listening on 127.0.0.1:11434 (version 0.1.28)"
time=2024-03-27T14:21:24.477+08:00 level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
time=2024-03-27T14:21:24.640+08:00 level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [cpu cpu_avx2 cuda_v11.3 cpu_avx]"
time=2024-03-27T14:21:31.095+08:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
time=2024-03-27T14:21:31.095+08:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library nvml.dll"
time=2024-03-27T14:21:31.104+08:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [c:\\Windows\\System32\\nvml.dll C:\\WINDOWS\\system32\\nvml.dll]"
time=2024-03-27T14:21:31.119+08:00 level=INFO source=gpu.go:323 msg="Unable to load CUDA management library c:\\Windows\\System32\\nvml.dll: nvml vram init failure: 4"
time=2024-03-27T14:21:31.121+08:00 level=INFO source=gpu.go:323 msg="Unable to load CUDA management library C:\\WINDOWS\\system32\\nvml.dll: nvml vram init failure: 4"
time=2024-03-27T14:21:31.121+08:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library rocm_smi64.dll"
time=2024-03-27T14:21:31.131+08:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: []"
time=2024-03-27T14:21:31.131+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-27T14:21:31.131+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-03-27T14:21:31.131+08:00 level=INFO source=llm.go:77 msg="GPU not available, falling back to CPU"
time=2024-03-27T14:21:31.131+08:00 level=INFO source=dyn_ext_server.go:385 msg="Updating PATH to C:\\Users\\李\\AppData\\Local\\Temp\\ollama2276364636\\cpu_avx2;C:\\Program Files (x86)\\Common Files\\Oracle\\Java\\javapath;C:\\Python311\\Scripts\\;C:\\Python311\\;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\WINDOWS\\System32\\OpenSSH\\;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\NVIDIA Corporation\\NVIDIA NvDLISR;C:\\Program Files\\Microsoft SQL Server\\150\\Tools\\Binn\\;C:\\Program Files\\Microsoft SQL Server\\Client SDK\\ODBC\\170\\Tools\\Binn\\;C:\\Program Files\\dotnet\\;C:\\Program Files (x86)\\Microsoft SQL Server\\150\\Tools\\Binn\\;C:\\Program Files\\Microsoft SQL Server\\150\\DTS\\Binn\\;C:\\Program Files (x86)\\Windows Kits\\8.1\\Windows Performance Toolkit\\;C:\\Program Files\\Common Files\\Autodesk Shared\\;C:\\Program Files (x86)\\Microsoft SQL Server\\100\\Tools\\Binn\\;C:\\Program Files\\Microsoft SQL Server\\100\\Tools\\Binn\\;C:\\Program Files\\Microsoft SQL Server\\100\\DTS\\Binn\\;C:\\Program Files (x86)\\Microsoft SQL Server\\100\\Tools\\Binn\\VSShell\\Common7\\IDE\\;C:\\Program Files (x86)\\Microsoft Visual Studio 9.0\\Common7\\IDE\\PrivateAssemblies\\;C:\\Program Files (x86)\\Microsoft SQL Server\\100\\DTS\\Binn\\;C:\\Program Files\\Git\\cmd;C:\\Program Files\\Git\\bin;C:\\ProgramData\\chocolatey\\bin;C:\\maven\\apache-maven-3.9.1\\bin;C:\\Program Files\\nodejs\\;C:\\Program Files (x86)\\Java\\jdk-1.8\\bin;C:\\Program Files (x86)\\Java\\jdk-1.8\\jre\\bin;C:\\Program Files\\Java\\jdk1.8.0_321\\bin;C:\\Users\\李\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\李\\.dotnet\\tools;C:\\Users\\李\\AppData\\Local\\Programs\\Microsoft VS Code\\bin;C:\\Program Files\\JetBrains\\IntelliJ IDEA 2023.1.1\\bin;;C:\\Users\\李\\AppData\\Roaming\\npm;;C:\\Users\\李\\AppData\\Local\\Programs\\Ollama"
time=2024-03-27T14:21:31.131+08:00 level=WARN source=llm.go:162 msg="Failed to load dynamic library C:\\Users\\李\\AppData\\Local\\Temp\\ollama2276364636\\cpu_avx2\\ext_server.dll Unable to load dynamic library: Unable to load dynamic server library: \xd5Ҳ\xbb\xb5\xbdָ\xb6\xa8\xb5\xc4ģ\xbf顣\r\n"
[GIN] 2024/03/27 - 14:21:31 | 500 | 1.3342561s | 127.0.0.1 | POST "/api/chat"
```
### What did you expect to see?
When requesting `/api/chat`
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
_No response_
### OS
Windows
### Architecture
amd64
### Platform
_No response_
### Ollama version
_No response_
### GPU
Nvidia
### GPU info
_No response_
### CPU
Intel
### Other software
_No response_
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3367/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/3443
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3443/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3443/comments
|
https://api.github.com/repos/ollama/ollama/issues/3443/events
|
https://github.com/ollama/ollama/issues/3443
| 2,219,231,848
|
I_kwDOJ0Z1Ps6ERspo
| 3,443
|
May I know whether Ollama support DBRX model?
|
{
"login": "OPDEV001",
"id": 120762872,
"node_id": "U_kgDOBzKx-A",
"avatar_url": "https://avatars.githubusercontent.com/u/120762872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OPDEV001",
"html_url": "https://github.com/OPDEV001",
"followers_url": "https://api.github.com/users/OPDEV001/followers",
"following_url": "https://api.github.com/users/OPDEV001/following{/other_user}",
"gists_url": "https://api.github.com/users/OPDEV001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OPDEV001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OPDEV001/subscriptions",
"organizations_url": "https://api.github.com/users/OPDEV001/orgs",
"repos_url": "https://api.github.com/users/OPDEV001/repos",
"events_url": "https://api.github.com/users/OPDEV001/events{/privacy}",
"received_events_url": "https://api.github.com/users/OPDEV001/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-04-01T22:56:34
| 2024-10-23T18:11:32
| 2024-10-23T18:11:11
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What model would you like?
I checked https://ollama.com/library but could not find DBRX in the list.
Can I run the DBRX model on a local machine (CPU and GPU), e.g. `ollama run dbrx-xxx-yyyy`?
Thanks,
|
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3443/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4206
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4206/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4206/comments
|
https://api.github.com/repos/ollama/ollama/issues/4206/events
|
https://github.com/ollama/ollama/issues/4206
| 2,281,252,971
|
I_kwDOJ0Z1Ps6H-Shr
| 4,206
|
Llama3 generating Incorrect and repeated response for my custom modelfile (finetuned llama3 8b)
|
{
"login": "balaji-2k1",
"id": 112004377,
"node_id": "U_kgDOBq0NGQ",
"avatar_url": "https://avatars.githubusercontent.com/u/112004377?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/balaji-2k1",
"html_url": "https://github.com/balaji-2k1",
"followers_url": "https://api.github.com/users/balaji-2k1/followers",
"following_url": "https://api.github.com/users/balaji-2k1/following{/other_user}",
"gists_url": "https://api.github.com/users/balaji-2k1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/balaji-2k1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/balaji-2k1/subscriptions",
"organizations_url": "https://api.github.com/users/balaji-2k1/orgs",
"repos_url": "https://api.github.com/users/balaji-2k1/repos",
"events_url": "https://api.github.com/users/balaji-2k1/events{/privacy}",
"received_events_url": "https://api.github.com/users/balaji-2k1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 2
| 2024-05-06T16:19:05
| 2024-12-16T09:11:32
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I fine-tuned the Llama3 8b model on my custom dataset following the llama3 template, used the unsloth library for fine-tuning, and generated the GGUF file.
I got relevant results when using this model for inference, but when I created the Modelfile and ran it as an Ollama process, the responses were irrelevant and repetitive.
Below is the prompt I used during the inference in the jupyter notebook (It generated the expected response):
`prompt = """<|start_header_id|>system<|end_header_id|>You are `Dbot,`
Respond to user questions naturally.
<|eot_id|><|start_header_id|>user<|end_header_id|>
{}<|eot_id|>
<|start_header_id|>Dbot<|end_header_id|>
{}<|eot_id|>"""`
Below is my Modelfile
`FROM llama3_merged.gguf
TEMPLATE """<|start_header_id|>system<|end_header_id|>You are Dbot,
Respond to user questions naturally. >
<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>
<|start_header_id|>Dbot<|end_header_id|>
{{ .Response }}<|eot_id|>"""
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
PARAMETER stop "<|reserved_special_token"
`
I used the command below to run the Modelfile and infer with the model, which gave me irrelevant responses and did not stop.
`ollama run llama3:v1 "What is EoC"
EoC: Ecosystems and EoL: Ecosystems, EoP: Ecosystems, and EoT: Ecosystems.
The EoC: Ecosystems and EoL: Ecosystems, EoP: Ecosystems, and EoT: Ecosystems.
Ecosystems have a significant impact on the EoC: Ecosystems, EoL: Ecosystems, EoP: Ecosystems, and EoT: Ecosystems.
The EoL: Ecosystems, EoP: Ecosystems, and EoT: Ecosystems.
Ecosystems have a significant impact on the EoC: Ecosystems, EoL: Ecosystems, EoP: Ecosystems, and EoT: Ecosystems.
The EoL: Ecosystems, EoP: Ecosystems, and EoT: Ecosystems.
Ecosystems have a significant impact on the EoC: Ecosystems, EoL: Ecosystems, EoP: Ecosystems, and EoT: Ecosystems.
The EoP: Ecosystems, EoT: Ecosystems, and EoL: Ecosystems.
Ecosystems have a significant impact on the EoC: Ecosystems, EoL: Ecosystems, EoP: Ecosystems, and EoT: Ecosystems.
The EoT: Ecosystems, EoL: Ecosystems, EoP: Ecosystems, and EoE: Ecosystems.
Ecosystems have a significant impact on the EoC: Ecosystems, EoL: Ecosystems, EoP: Ecosystems, and EoT: Ecosystems.
The EoE: Ecosystems, EoL: Ecosystems, EoP: Ecosystems, and EoT: Ecosystems.
Ecosystems have a significant impact on the EoC: Ecosystems, EoL: Ecosystems, EoP: Ecosystems, and EoT: Ecosystems.
The EoT: Ecosystems, EoE: Ecosystems, and EoL: Ecosystems.
Ecosystems have a significant impact on the EoC: Ecos`
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.32
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4206/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4206/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/5814
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5814/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5814/comments
|
https://api.github.com/repos/ollama/ollama/issues/5814/events
|
https://github.com/ollama/ollama/issues/5814
| 2,420,953,479
|
I_kwDOJ0Z1Ps6QTNGH
| 5,814
|
Always output GGGGGGG when encountering problems that will not occur... .
|
{
"login": "enryteam",
"id": 20081090,
"node_id": "MDQ6VXNlcjIwMDgxMDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/20081090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enryteam",
"html_url": "https://github.com/enryteam",
"followers_url": "https://api.github.com/users/enryteam/followers",
"following_url": "https://api.github.com/users/enryteam/following{/other_user}",
"gists_url": "https://api.github.com/users/enryteam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enryteam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enryteam/subscriptions",
"organizations_url": "https://api.github.com/users/enryteam/orgs",
"repos_url": "https://api.github.com/users/enryteam/repos",
"events_url": "https://api.github.com/users/enryteam/events{/privacy}",
"received_events_url": "https://api.github.com/users/enryteam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 8
| 2024-07-20T15:54:18
| 2024-09-12T22:16:04
| 2024-09-12T22:16:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
https://ollama.com/library/glm4
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.2.7
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5814/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5814/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/5621
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/5621/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/5621/comments
|
https://api.github.com/repos/ollama/ollama/issues/5621/events
|
https://github.com/ollama/ollama/pull/5621
| 2,401,993,290
|
PR_kwDOJ0Z1Ps51CIIZ
| 5,621
|
llm: remove `/usr/local/cuda/compat` from linker path
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-07-11T00:52:31
| 2024-07-12T18:05:48
| 2024-07-11T03:01:52
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/5621",
"html_url": "https://github.com/ollama/ollama/pull/5621",
"diff_url": "https://github.com/ollama/ollama/pull/5621.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5621.patch",
"merged_at": "2024-07-11T03:01:52"
}
|
Fixes https://github.com/ollama/ollama/issues/5573
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/5621/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/5621/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/606
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/606/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/606/comments
|
https://api.github.com/repos/ollama/ollama/issues/606/events
|
https://github.com/ollama/ollama/pull/606
| 1,913,904,421
|
PR_kwDOJ0Z1Ps5bQKFc
| 606
|
ordered list of install locations
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2023-09-26T16:39:12
| 2023-09-29T18:30:48
| 2023-09-29T18:30:46
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/606",
"html_url": "https://github.com/ollama/ollama/pull/606",
"diff_url": "https://github.com/ollama/ollama/pull/606.diff",
"patch_url": "https://github.com/ollama/ollama/pull/606.patch",
"merged_at": "2023-09-29T18:30:46"
}
|
Select the install dir based on what's in the PATH: if `/usr/local/bin` is in the PATH, install there; otherwise `/usr/bin` or `/bin`, in that order.
|
{
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/606/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/606/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/219
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/219/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/219/comments
|
https://api.github.com/repos/ollama/ollama/issues/219/events
|
https://github.com/ollama/ollama/issues/219
| 1,822,670,559
|
I_kwDOJ0Z1Ps5so77f
| 219
|
App singleton lock behavior is incorrect
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5675428184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUkgpWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/app",
"name": "app",
"color": "000000",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null | 1
| 2023-07-26T15:28:54
| 2023-07-27T05:05:15
| 2023-07-27T05:05:15
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
If a user starts a new, different instance of `Ollama.app`, it should terminate the previous one and take the singleton lock. Currently it terminates itself instead.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/219/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/219/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/319
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/319/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/319/comments
|
https://api.github.com/repos/ollama/ollama/issues/319/events
|
https://github.com/ollama/ollama/pull/319
| 1,845,882,473
|
PR_kwDOJ0Z1Ps5XrTxG
| 319
|
RFC: optional generate header to not stream response
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2023-08-10T20:49:59
| 2023-10-20T16:44:04
| 2023-09-28T21:02:47
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/319",
"html_url": "https://github.com/ollama/ollama/pull/319",
"diff_url": "https://github.com/ollama/ollama/pull/319.diff",
"patch_url": "https://github.com/ollama/ollama/pull/319.patch",
"merged_at": null
}
|
Add an optional request header to the generate endpoint that returns the full response in one JSON body, rather than streaming:
```
curl -X POST -H "Content-Type: application/json" -H "X-Streamed: false" -d '{
"model": "llama2",
"prompt": "why is the sky blue?"
}' 'localhost:11434/api/generate'
```
The issue suggests setting the `Content-Type` header to `application/json` to indicate the result should not be streamed, but that's not quite right, since the content type indicates the type of content in the request, rather than the response.
We also can't use `Accept: application/json`: it indicates the expected response type, but clients would also send `Accept: application/json` for a streaming response, because the returned objects are JSON.
resolves #281
|
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/319/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/8544
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8544/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8544/comments
|
https://api.github.com/repos/ollama/ollama/issues/8544/events
|
https://github.com/ollama/ollama/issues/8544
| 2,805,760,863
|
I_kwDOJ0Z1Ps6nPINf
| 8,544
|
API parameter: 'reasoning_effort' (for DeepSeek-R1)
|
{
"login": "jonathanhecl",
"id": 1691623,
"node_id": "MDQ6VXNlcjE2OTE2MjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1691623?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonathanhecl",
"html_url": "https://github.com/jonathanhecl",
"followers_url": "https://api.github.com/users/jonathanhecl/followers",
"following_url": "https://api.github.com/users/jonathanhecl/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanhecl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jonathanhecl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanhecl/subscriptions",
"organizations_url": "https://api.github.com/users/jonathanhecl/orgs",
"repos_url": "https://api.github.com/users/jonathanhecl/repos",
"events_url": "https://api.github.com/users/jonathanhecl/events{/privacy}",
"received_events_url": "https://api.github.com/users/jonathanhecl/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null | 2
| 2025-01-23T02:35:05
| 2025-01-29T13:17:07
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
When will I be able to use `reasoning_effort` like on DeepSeek-R1? :)
Doc: https://api-docs.deepseek.com/guides/reasoning_model
| null |
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8544/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8544/timeline
| null | null | false
|
https://api.github.com/repos/ollama/ollama/issues/4703
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4703/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4703/comments
|
https://api.github.com/repos/ollama/ollama/issues/4703/events
|
https://github.com/ollama/ollama/issues/4703
| 2,323,181,782
|
I_kwDOJ0Z1Ps6KePDW
| 4,703
|
Could you please support deepseek v2 ?
|
{
"login": "netspym",
"id": 74223710,
"node_id": "MDQ6VXNlcjc0MjIzNzEw",
"avatar_url": "https://avatars.githubusercontent.com/u/74223710?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/netspym",
"html_url": "https://github.com/netspym",
"followers_url": "https://api.github.com/users/netspym/followers",
"following_url": "https://api.github.com/users/netspym/following{/other_user}",
"gists_url": "https://api.github.com/users/netspym/gists{/gist_id}",
"starred_url": "https://api.github.com/users/netspym/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/netspym/subscriptions",
"organizations_url": "https://api.github.com/users/netspym/orgs",
"repos_url": "https://api.github.com/users/netspym/repos",
"events_url": "https://api.github.com/users/netspym/events{/privacy}",
"received_events_url": "https://api.github.com/users/netspym/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-05-29T12:18:58
| 2024-06-11T22:12:00
| 2024-06-11T22:11:59
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
It is really a great model with excellent RAG support, great for server CPU inferencing with large memory. It has only 21B active parameters. It looks like llama.cpp is working on it; could you please also support it?
A lot of people are waiting for support for this model.
Many Thanks
Yuming
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4703/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4703/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4424
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4424/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4424/comments
|
https://api.github.com/repos/ollama/ollama/issues/4424/events
|
https://github.com/ollama/ollama/pull/4424
| 2,294,770,791
|
PR_kwDOJ0Z1Ps5vXTW6
| 4,424
|
Fixed the API endpoint /api/tags when the model list is empty.
|
{
"login": "machimachida",
"id": 55428929,
"node_id": "MDQ6VXNlcjU1NDI4OTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/55428929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/machimachida",
"html_url": "https://github.com/machimachida",
"followers_url": "https://api.github.com/users/machimachida/followers",
"following_url": "https://api.github.com/users/machimachida/following{/other_user}",
"gists_url": "https://api.github.com/users/machimachida/gists{/gist_id}",
"starred_url": "https://api.github.com/users/machimachida/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/machimachida/subscriptions",
"organizations_url": "https://api.github.com/users/machimachida/orgs",
"repos_url": "https://api.github.com/users/machimachida/repos",
"events_url": "https://api.github.com/users/machimachida/events{/privacy}",
"received_events_url": "https://api.github.com/users/machimachida/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-05-14T08:22:19
| 2024-05-14T18:18:10
| 2024-05-14T18:18:10
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4424",
"html_url": "https://github.com/ollama/ollama/pull/4424",
"diff_url": "https://github.com/ollama/ollama/pull/4424.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4424.patch",
"merged_at": "2024-05-14T18:18:10"
}
|
## Summary
Fixed an issue with the `/api/tags` endpoint where an empty model list was returning `{models: null}`. The endpoint now returns `{models: []}` when no models are available.
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4424/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/3578
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/3578/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/3578/comments
|
https://api.github.com/repos/ollama/ollama/issues/3578/events
|
https://github.com/ollama/ollama/issues/3578
| 2,235,755,367
|
I_kwDOJ0Z1Ps6FQutn
| 3,578
|
Can't connect to registry.ollama.ai "read: connection refused"
|
{
"login": "simonfrey",
"id": 24354822,
"node_id": "MDQ6VXNlcjI0MzU0ODIy",
"avatar_url": "https://avatars.githubusercontent.com/u/24354822?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonfrey",
"html_url": "https://github.com/simonfrey",
"followers_url": "https://api.github.com/users/simonfrey/followers",
"following_url": "https://api.github.com/users/simonfrey/following{/other_user}",
"gists_url": "https://api.github.com/users/simonfrey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simonfrey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonfrey/subscriptions",
"organizations_url": "https://api.github.com/users/simonfrey/orgs",
"repos_url": "https://api.github.com/users/simonfrey/repos",
"events_url": "https://api.github.com/users/simonfrey/events{/privacy}",
"received_events_url": "https://api.github.com/users/simonfrey/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null | 2
| 2024-04-10T14:34:08
| 2024-04-10T14:44:37
| 2024-04-10T14:44:37
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
I try to run `ollama run codegemma`, but at the `pulling manifest` stage I get the following error:
```
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/codegemma/manifests/latest": dial tcp: lookup registry.ollama.ai on 10.89.3.1:53: read udp 10.89.3.11:54907->10.89.3.1:53: read: connection refused
```
### What did you expect to see?
Running downloaded model
### Steps to reproduce
Enter `ollama run codegemma`
### Platform
In docker
### Docker-Compose file
```yaml
version: '3'
services:
ollama:
image: 'docker.io/ollama/ollama:0.1.31'
ports:
- 11434:11434
networks:
- ollama-network
volumes:
- ./ollama/:/root/.ollama
networks:
ollama-network:
driver: bridge
```
|
{
"login": "simonfrey",
"id": 24354822,
"node_id": "MDQ6VXNlcjI0MzU0ODIy",
"avatar_url": "https://avatars.githubusercontent.com/u/24354822?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonfrey",
"html_url": "https://github.com/simonfrey",
"followers_url": "https://api.github.com/users/simonfrey/followers",
"following_url": "https://api.github.com/users/simonfrey/following{/other_user}",
"gists_url": "https://api.github.com/users/simonfrey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simonfrey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonfrey/subscriptions",
"organizations_url": "https://api.github.com/users/simonfrey/orgs",
"repos_url": "https://api.github.com/users/simonfrey/repos",
"events_url": "https://api.github.com/users/simonfrey/events{/privacy}",
"received_events_url": "https://api.github.com/users/simonfrey/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/3578/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/3578/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/8632
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/8632/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/8632/comments
|
https://api.github.com/repos/ollama/ollama/issues/8632/events
|
https://github.com/ollama/ollama/issues/8632
| 2,815,629,446
|
I_kwDOJ0Z1Ps6n0xiG
| 8,632
|
Ollama unable to download/run deepseek-r1:7b, other models work
|
{
"login": "arjunivor",
"id": 123751821,
"node_id": "U_kgDOB2BNjQ",
"avatar_url": "https://avatars.githubusercontent.com/u/123751821?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arjunivor",
"html_url": "https://github.com/arjunivor",
"followers_url": "https://api.github.com/users/arjunivor/followers",
"following_url": "https://api.github.com/users/arjunivor/following{/other_user}",
"gists_url": "https://api.github.com/users/arjunivor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arjunivor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arjunivor/subscriptions",
"organizations_url": "https://api.github.com/users/arjunivor/orgs",
"repos_url": "https://api.github.com/users/arjunivor/repos",
"events_url": "https://api.github.com/users/arjunivor/events{/privacy}",
"received_events_url": "https://api.github.com/users/arjunivor/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] |
closed
| false
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 40
| 2025-01-28T13:09:40
| 2025-01-30T02:15:17
| 2025-01-30T00:07:01
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### What is the issue?
While trying to run `ollama run deepseek-r1:7b`, the download repeatedly fails at 6%. Llama 3.2 downloaded flawlessly, but every time I try to pull DeepSeek I get an error saying `error max retries exceeded: EOF`.
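
Since `ollama pull` resumes partial downloads, one stopgap until the underlying network issue is fixed is to retry in a loop. A minimal sketch with a generic POSIX-shell retry helper; the model name, attempt count, and delay in the usage line are placeholders:

```shell
#!/bin/sh
# retry: run a command until it succeeds, up to MAX_TRIES attempts
# (default 5), sleeping RETRY_DELAY seconds (default 0) between tries.
retry() {
  max="${MAX_TRIES:-5}"
  i=1
  while [ "$i" -le "$max" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
    sleep "${RETRY_DELAY:-0}"
  done
  return 1
}

# Usage sketch (assumes ollama is installed; each retry resumes the pull
# where the previous attempt left off):
# MAX_TRIES=20 RETRY_DELAY=5 retry ollama pull deepseek-r1:7b
```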
### OS
WSL2
### GPU
Nvidia
### CPU
AMD
### Ollama version
latest
|
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/8632/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/8632/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4016
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4016/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4016/comments
|
https://api.github.com/repos/ollama/ollama/issues/4016/events
|
https://github.com/ollama/ollama/issues/4016
| 2,267,978,945
|
I_kwDOJ0Z1Ps6HLpzB
| 4,016
|
Export Pulled Model
|
{
"login": "ChenTao98",
"id": 40379119,
"node_id": "MDQ6VXNlcjQwMzc5MTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/40379119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChenTao98",
"html_url": "https://github.com/ChenTao98",
"followers_url": "https://api.github.com/users/ChenTao98/followers",
"following_url": "https://api.github.com/users/ChenTao98/following{/other_user}",
"gists_url": "https://api.github.com/users/ChenTao98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChenTao98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChenTao98/subscriptions",
"organizations_url": "https://api.github.com/users/ChenTao98/orgs",
"repos_url": "https://api.github.com/users/ChenTao98/repos",
"events_url": "https://api.github.com/users/ChenTao98/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChenTao98/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 1
| 2024-04-29T01:54:43
| 2024-05-01T22:37:54
| 2024-05-01T22:37:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I run a service on an offline server, so I can't use the `pull` command directly.
How can I export a pulled model from an online computer (e.g. a Windows PC) and import it onto the offline server (Linux)?
Importing from GGUF or torch tensors sometimes doesn't run normally.
It would be better if we could directly download model files from [ollama.com](https://ollama.com/) and import them onto an offline server.
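
One approach that generally works for moving pulled models between machines is copying the `models` directory (blobs plus manifests) under the Ollama data directory. A hedged sketch; the default paths are assumptions (`~/.ollama` on Linux/macOS, `C:\Users\<user>\.ollama` on Windows):

```shell
#!/bin/sh
# export_models: pack the Ollama model store (blobs + manifests) into a
# tarball that can be moved to an offline machine and unpacked in place.
export_models() {
  home="${1:-$HOME/.ollama}"        # Ollama data directory (assumed default)
  out="${2:-ollama-models.tar.gz}"  # archive to create
  tar -czf "$out" -C "$home" models
}

# Usage sketch:
#   on the online machine:  export_models ~/.ollama /tmp/ollama-models.tar.gz
#   move the archive (USB stick, scp, ...), then on the offline server:
#   tar -xzf ollama-models.tar.gz -C ~/.ollama && ollama list
```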
|
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4016/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4016/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4116
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4116/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4116/comments
|
https://api.github.com/repos/ollama/ollama/issues/4116/events
|
https://github.com/ollama/ollama/pull/4116
| 2,276,818,468
|
PR_kwDOJ0Z1Ps5ubaCu
| 4,116
|
Update 'llama2' -> 'llama3' in most places
|
{
"login": "drnic",
"id": 108,
"node_id": "MDQ6VXNlcjEwOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drnic",
"html_url": "https://github.com/drnic",
"followers_url": "https://api.github.com/users/drnic/followers",
"following_url": "https://api.github.com/users/drnic/following{/other_user}",
"gists_url": "https://api.github.com/users/drnic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drnic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drnic/subscriptions",
"organizations_url": "https://api.github.com/users/drnic/orgs",
"repos_url": "https://api.github.com/users/drnic/repos",
"events_url": "https://api.github.com/users/drnic/events{/privacy}",
"received_events_url": "https://api.github.com/users/drnic/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-05-03T03:12:22
| 2024-05-03T19:25:04
| 2024-05-03T19:25:04
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/4116",
"html_url": "https://github.com/ollama/ollama/pull/4116",
"diff_url": "https://github.com/ollama/ollama/pull/4116.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4116.patch",
"merged_at": "2024-05-03T19:25:04"
}
| null |
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4116/reactions",
"total_count": 6,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4116/timeline
| null | null | true
|
https://api.github.com/repos/ollama/ollama/issues/7075
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/7075/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/7075/comments
|
https://api.github.com/repos/ollama/ollama/issues/7075/events
|
https://github.com/ollama/ollama/issues/7075
| 2,560,804,262
|
I_kwDOJ0Z1Ps6YosWm
| 7,075
|
Hallucination fix?
|
{
"login": "Lu-Yi-Fan",
"id": 88626655,
"node_id": "MDQ6VXNlcjg4NjI2NjU1",
"avatar_url": "https://avatars.githubusercontent.com/u/88626655?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lu-Yi-Fan",
"html_url": "https://github.com/Lu-Yi-Fan",
"followers_url": "https://api.github.com/users/Lu-Yi-Fan/followers",
"following_url": "https://api.github.com/users/Lu-Yi-Fan/following{/other_user}",
"gists_url": "https://api.github.com/users/Lu-Yi-Fan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lu-Yi-Fan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lu-Yi-Fan/subscriptions",
"organizations_url": "https://api.github.com/users/Lu-Yi-Fan/orgs",
"repos_url": "https://api.github.com/users/Lu-Yi-Fan/repos",
"events_url": "https://api.github.com/users/Lu-Yi-Fan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lu-Yi-Fan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 4
| 2024-10-02T06:59:17
| 2024-10-21T07:20:19
| 2024-10-21T07:20:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi, when I use the models (llama3:70b / llama3:latest) via Ollama, it seems to keep track of all conversations and queries. This causes hallucinations, and information appears across different channels, which shouldn't be the case. What could be a possible remedy? Would it be possible to instantiate the model without keeping track of the history? Thank you in advance.
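
For what it's worth, the REST API itself keeps no server-side history: each `/api/chat` request only sees the `messages` array you send, so separate channels stay isolated as long as each one submits only its own turns (in the interactive CLI, `/clear` resets the session context). A minimal single-turn request body, assuming the default endpoint `POST http://localhost:11434/api/chat` and an example model name:

```json
{
  "model": "llama3",
  "stream": false,
  "messages": [
    { "role": "user", "content": "What is the capital of France?" }
  ]
}
```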
|
{
"login": "Lu-Yi-Fan",
"id": 88626655,
"node_id": "MDQ6VXNlcjg4NjI2NjU1",
"avatar_url": "https://avatars.githubusercontent.com/u/88626655?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lu-Yi-Fan",
"html_url": "https://github.com/Lu-Yi-Fan",
"followers_url": "https://api.github.com/users/Lu-Yi-Fan/followers",
"following_url": "https://api.github.com/users/Lu-Yi-Fan/following{/other_user}",
"gists_url": "https://api.github.com/users/Lu-Yi-Fan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lu-Yi-Fan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lu-Yi-Fan/subscriptions",
"organizations_url": "https://api.github.com/users/Lu-Yi-Fan/orgs",
"repos_url": "https://api.github.com/users/Lu-Yi-Fan/repos",
"events_url": "https://api.github.com/users/Lu-Yi-Fan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lu-Yi-Fan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/7075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/7075/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/4701
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/4701/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/4701/comments
|
https://api.github.com/repos/ollama/ollama/issues/4701/events
|
https://github.com/ollama/ollama/issues/4701
| 2,322,997,438
|
I_kwDOJ0Z1Ps6KdiC-
| 4,701
|
Quick model updates with `ollama pull`
|
{
"login": "LaurentBonnaud",
"id": 2168323,
"node_id": "MDQ6VXNlcjIxNjgzMjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2168323?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LaurentBonnaud",
"html_url": "https://github.com/LaurentBonnaud",
"followers_url": "https://api.github.com/users/LaurentBonnaud/followers",
"following_url": "https://api.github.com/users/LaurentBonnaud/following{/other_user}",
"gists_url": "https://api.github.com/users/LaurentBonnaud/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LaurentBonnaud/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LaurentBonnaud/subscriptions",
"organizations_url": "https://api.github.com/users/LaurentBonnaud/orgs",
"repos_url": "https://api.github.com/users/LaurentBonnaud/repos",
"events_url": "https://api.github.com/users/LaurentBonnaud/events{/privacy}",
"received_events_url": "https://api.github.com/users/LaurentBonnaud/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null | 5
| 2024-05-29T10:45:31
| 2024-09-13T11:33:14
| 2024-09-12T23:28:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
I would like to be able to update models that I have previously downloaded.
When I do this, `ollama pull` performs two steps:
1. It checks whether the model is up to date and, if not, downloads the newest version
2. It verifies the sha256 digest
Unfortunately, step 2 is always performed, even when it is not needed, which is very slow (I have a lot of models).
Therefore, I propose the following behavior:
- `ollama pull` should do step 1 and then step 2 **only if** the model has been updated
- a new command `ollama check` would let the user verify the sha256 digest on demand
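
In the meantime, the integrity check can be approximated by hand: blobs under the model store are named after their sha256 digest (an observed on-disk convention, not a documented guarantee), so recomputing each file's hash and comparing it with its filename does roughly what step 2 does. A sketch assuming the default store path and GNU coreutils' `sha256sum`:

```shell
#!/bin/sh
# verify_blobs: recompute each blob's sha256 and compare it with the
# digest encoded in its filename; print any mismatches.
verify_blobs() {
  dir="${1:-$HOME/.ollama/models/blobs}"
  status=0
  for f in "$dir"/sha256-*; do
    [ -f "$f" ] || continue
    want="${f##*/sha256-}"
    got=$(sha256sum "$f" | cut -d' ' -f1)
    if [ "$got" != "$want" ]; then
      echo "MISMATCH: $f"
      status=1
    fi
  done
  return "$status"
}

# Usage sketch: verify_blobs ~/.ollama/models/blobs
```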
Thanks.
|
{
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/4701/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/4701/timeline
| null |
completed
| false
|
https://api.github.com/repos/ollama/ollama/issues/2313
|
https://api.github.com/repos/ollama/ollama
|
https://api.github.com/repos/ollama/ollama/issues/2313/labels{/name}
|
https://api.github.com/repos/ollama/ollama/issues/2313/comments
|
https://api.github.com/repos/ollama/ollama/issues/2313/events
|
https://github.com/ollama/ollama/pull/2313
| 2,113,096,710
|
PR_kwDOJ0Z1Ps5lu4PN
| 2,313
|
Feature - Add Wingman Extension
|
{
"login": "RussellCanfield",
"id": 17344904,
"node_id": "MDQ6VXNlcjE3MzQ0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/17344904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RussellCanfield",
"html_url": "https://github.com/RussellCanfield",
"followers_url": "https://api.github.com/users/RussellCanfield/followers",
"following_url": "https://api.github.com/users/RussellCanfield/following{/other_user}",
"gists_url": "https://api.github.com/users/RussellCanfield/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RussellCanfield/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RussellCanfield/subscriptions",
"organizations_url": "https://api.github.com/users/RussellCanfield/orgs",
"repos_url": "https://api.github.com/users/RussellCanfield/repos",
"events_url": "https://api.github.com/users/RussellCanfield/events{/privacy}",
"received_events_url": "https://api.github.com/users/RussellCanfield/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 0
| 2024-02-01T17:53:53
| 2024-02-01T19:16:25
| 2024-02-01T19:16:25
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | false
|
{
"url": "https://api.github.com/repos/ollama/ollama/pulls/2313",
"html_url": "https://github.com/ollama/ollama/pull/2313",
"diff_url": "https://github.com/ollama/ollama/pull/2313.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2313.patch",
"merged_at": "2024-02-01T19:16:25"
}
|
Add Wingman-AI VSCode extension to README
#2297
|
{
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/ollama/ollama/issues/2313/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/ollama/ollama/issues/2313/timeline
| null | null | true
|